
Terraform tries to recreate clusters previously using ConfigMap auth #3025

igorbrites opened this issue May 2, 2024 · 0 comments

Description

I'm trying to upgrade my clusters to v20; they all currently use the aws-auth ConfigMap. But now that bootstrap_cluster_creator_admin_permissions is hardcoded to false, Terraform says it needs to recreate the whole cluster, even if I set authentication_mode = "API_AND_CONFIG_MAP".
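
For reference, one way to check what the live cluster currently reports for its access configuration is the following; a minimal sketch assuming the AWS CLI is available (my-cluster is a placeholder for the real cluster name):

aws eks describe-cluster \
  --name my-cluster \
  --query 'cluster.accessConfig'

On older CLI/API versions the accessConfig field may not be present in the response.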

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. ✅ Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully is the best practice you are following!): rm -rf .terraform/
  2. ✅ Re-initialize the project root to pull down modules: terraform init
  3. ✅ Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: git@github.com:clowdhaus/terraform-aws-eks-migrate-v19-to-v20.git?ref=3f626cc493606881f38684fc366688c36571c5c5

  • Terraform version: 1.7.4

  • Provider version(s):

+ provider registry.terraform.io/cloudposse/utils v1.22.0
+ provider registry.terraform.io/gavinbunney/kubectl v1.14.0
+ provider registry.terraform.io/hashicorp/aws v5.47.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.4
+ provider registry.terraform.io/hashicorp/helm v2.13.1
+ provider registry.terraform.io/hashicorp/kubernetes v2.29.0
+ provider registry.terraform.io/hashicorp/random v3.6.1
+ provider registry.terraform.io/hashicorp/time v0.11.1
+ provider registry.terraform.io/hashicorp/tls v4.0.5

Reproduction Code [Required]

module "eks" {
  # source  = "terraform-aws-modules/eks/aws"
  # version = "~> 19.0"
  source  = "[email protected]:clowdhaus/terraform-aws-eks-migrate-v19-to-v20.git?ref=3f626cc493606881f38684fc366688c36571c5c5"

  cluster_name    = local.cluster_name
  cluster_version = var.cluster_version

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  create_kms_key = true
  cluster_encryption_config = {
    resources = ["secrets"]
  }

  vpc_id     = var.vpc_id
  subnet_ids = local.private_subnet_ids

  # (...)

  # aws-auth configmap
  authentication_mode                      = "API_AND_CONFIG_MAP"
  enable_cluster_creator_admin_permissions = true

  manage_aws_auth_configmap = true
  aws_auth_roles            = concat(local.eks_node_groups_iam_roles, var.aws_auth_roles)
  aws_auth_users            = var.aws_auth_users

  tags = module.labels.tags
}
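
For completeness, this is roughly where I want to end up once the aws-auth ConfigMap is gone. A hand-written sketch based on my reading of the v20 access_entries input, not something I have applied yet; the role ARN and names are placeholders:

access_entries = {
  admins = {
    principal_arn = "arn:aws:iam::111122223333:role/platform-admins"

    policy_associations = {
      admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          type = "cluster"
        }
      }
    }
  }
}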

Steps to reproduce the behavior:

  • Follow the upgrade steps from the migration repo;
  • Set authentication_mode = "API_AND_CONFIG_MAP";
  • Try setting enable_cluster_creator_admin_permissions to either true or false (it makes no difference);
  • Run terraform init -upgrade;
  • Run terraform plan (the full command sequence is repeated below).
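
The exact command sequence, per the checklist above (assuming state is stored remotely, so removing .terraform/ is safe):

rm -rf .terraform/
terraform init -upgrade
terraform plan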

Expected behavior

The plan should only add the API-related authentication configuration and leave the existing cluster intact.
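
Roughly what I expected the plan to show instead (a hand-written sketch of an in-place update, not actual output):

  # module.eks_cluster.module.eks.aws_eks_cluster.this[0] will be updated in-place
  ~ resource "aws_eks_cluster" "this" {
      + access_config {
          + authentication_mode = "API_AND_CONFIG_MAP"
        }
        # (unchanged attributes hidden)
    }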

Actual behavior

The provider tries to recreate the entire cluster instead.

Terminal Output Screenshot(s)

  # module.eks_cluster.module.eks.aws_eks_cluster.this[0] must be replaced
+/- resource "aws_eks_cluster" "this" {
      ~ arn                       = "<REDACTED>" -> (known after apply)
      ~ certificate_authority     = [
          - {
              - data = "<REDACTED>"
            },
        ] -> (known after apply)
      + cluster_id                = (known after apply)
      ~ created_at                = "<REDACTED>" -> (known after apply)
      ~ endpoint                  = "<REDACTED>" -> (known after apply)
      ~ id                        = "<REDACTED>" -> (known after apply)
      ~ identity                  = [
          - {
              - oidc = [
                  - {
                      - issuer = "<REDACTED>"
                    },
                ]
            },
        ] -> (known after apply)
        name                      = "<REDACTED>"
      ~ platform_version          = "eks.11" -> (known after apply)
      ~ status                    = "ACTIVE" -> (known after apply)
      ~ tags                      = {
          + "terraform-aws-modules" = "eks"
        }
      ~ tags_all                  = {
          + "terraform-aws-modules" = "eks"
            # (8 unchanged elements hidden)
        }
        # (3 unchanged attributes hidden)

      + access_config {
          + authentication_mode                         = "API_AND_CONFIG_MAP"
          + bootstrap_cluster_creator_admin_permissions = false # forces replacement
        }

    # (<REDACTED> unchanged blocks hidden)
  }

Additional context

I need to start using use_latest_ami_release_version for the node groups, which is why I'm trying to upgrade the module; a rough sketch of what I'm aiming for is below. Any help is much appreciated! 😄
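
For context, this is roughly the node group change I'm after; a sketch assuming the v20 eks_managed_node_groups input accepts use_latest_ami_release_version as described in the docs (the group name and sizes are placeholders):

eks_managed_node_groups = {
  default = {
    use_latest_ami_release_version = true

    min_size     = 1
    max_size     = 3
    desired_size = 2
  }
}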
