Creating an access entry fails if it already exists #2968
Comments
@cweiblen Are you able to reproduce this issue as well?
If you are migrating a cluster onto cluster access entries, you can't use enable_cluster_creator_admin_permissions = true, because that access entry already exists.
@bryantbiggs For the other issue, where the policy is never attached:
I see this in the plan:
Is this related to #2958?
I don't follow; what is the issue?
In the reproduction code, see my access entry for principal_arn = aws_iam_role.cluster_management_role.arn.
Migrating an existing cluster from 19.20 -> 20.2, I was not able to get it working using the module's access_entries.
I would be curious to see what you are doing differently. If an access entry already exists, it already exists - there isn't anything unique about the implementation that would allow you to get around that |
@bryantbiggs There are two issues that we see when using access_entries. I plan to attempt the same thing as @cweiblen mentioned: move it out of the eks module and add a separate access entry resource (see the sketch below).
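A minimal sketch of that standalone-resource approach, assuming the eks module and cluster_management_role names from the reproduction code (the policy ARN here is illustrative):

```hcl
# Access entry managed outside the eks module
resource "aws_eks_access_entry" "cluster_management" {
  cluster_name  = module.eks.cluster_name
  principal_arn = aws_iam_role.cluster_management_role.arn
  type          = "STANDARD"
}

# Policy association for the same principal, scoped to the whole cluster
resource "aws_eks_access_policy_association" "cluster_management" {
  cluster_name  = module.eks.cluster_name
  principal_arn = aws_iam_role.cluster_management_role.arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}
```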
We do not control this - this is the EKS API. It's stating that you can't have more than one entry for the same principal. This would be similar to trying to create two clusters with the same name in the same region - the API does not allow that; it has nothing to do with this module.
Do you have a reproduction? I'd love to see what's different about a standalone resource versus what's defined here. Here is what we have in our example that works as intended: terraform-aws-eks/examples/eks_managed_node_group/main.tf, lines 304 to 342 at 907f70c.
@bryantbiggs In my code I have an access entry of type 'cluster' as shown below. In your example, ex-two is of type cluster but has no 'policy_associations' section, only a policy_arn.
Can you post an example of ex-single of type cluster with a policy association/policy_arn?
Regarding:
This is a problem when we are doing an upgrade. The first time we run it, it works fine; the second time we run it (perhaps for an upgrade in another part of the code), it attempts to create the entry again. It should simply ignore the entry if it already exists. But as you say, it's the EKS API, so we need to log an issue there.
From the details you have provided, it's very hard to understand what you are doing and why you are encountering issues. I would suggest re-reading the upgrade guide. In short, there are two areas where access entries will already exist and that YOU do not need to re-add in code. Both of these scenarios apply when you have a cluster that was created with the module prior to v20: the admin entry for the identity that created the cluster (the cluster creator), and the entries for EKS managed node group and Fargate profile IAM roles, which EKS creates automatically during the migration.
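As a sketch of the first scenario (assuming, per the upgrade guide, that the creator's entry already exists on a migrated cluster; not a complete module block):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # ... cluster configuration ...

  # On a cluster migrated from v19, the cluster creator's admin access entry
  # already exists, so asking the module to create it again fails with
  # "already in use"; leave this at its default (false)
  enable_cluster_creator_admin_permissions = false
}
```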
and for the sake of completeness, here is an example, as requested, of a single entry with cluster scope as the module is currently written - it works without issue:

```hcl
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.8"
cluster_name = local.name
cluster_version = local.cluster_version
cluster_endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
control_plane_subnet_ids = module.vpc.intra_subnets
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
}
eks_managed_node_groups = {
# Default node group - as provided by AWS EKS
default_node_group = {
# By default, the module creates a launch template to ensure tags are propagated to instances, etc.,
# so we need to disable it to use the default template provided by the AWS EKS managed node group service
use_custom_launch_template = false
}
}
access_entries = {
# One access entry with a policy associated
ex-single = {
principal_arn = aws_iam_role.this["single"].arn
policy_associations = {
ex = {
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
access_scope = {
type = "cluster"
}
}
}
}
}
tags = local.tags
}
```

Describe the access entry:

```sh
aws eks describe-access-entry \
  --cluster-name ex-eks-managed-node-group \
  --principal-arn "arn:aws:iam::000000000000:role/ex-single" \
  --region eu-west-1
```

```json
{
"accessEntry": {
"clusterName": "ex-eks-managed-node-group",
"principalArn": "arn:aws:iam::000000000000:role/ex-single",
"kubernetesGroups": [],
"accessEntryArn": "arn:aws:eks:eu-west-1:000000000000:access-entry/ex-eks-managed-node-group/role/000000000000/ex-single/40c71997-3891-aa1c-0997-e0352c7ca25a",
"createdAt": "2024-03-12T11:01:05.685000-04:00",
"modifiedAt": "2024-03-12T11:01:05.685000-04:00",
"tags": {
"GithubRepo": "terraform-aws-eks",
"GithubOrg": "terraform-aws-modules",
"Example": "ex-eks-managed-node-group"
},
"username": "arn:aws:sts::000000000000:assumed-role/ex-single/{{SessionName}}",
"type": "STANDARD"
}
}
```

List the policies associated with this principal:

```sh
aws eks list-associated-access-policies \
  --cluster-name ex-eks-managed-node-group \
  --principal-arn "arn:aws:iam::000000000000:role/ex-single" \
  --region eu-west-1
```

```json
{
"associatedAccessPolicies": [
{
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2024-03-12T11:01:07.063000-04:00",
"modifiedAt": "2024-03-12T11:01:07.063000-04:00"
}
],
"clusterName": "ex-eks-managed-node-group",
"principalArn": "arn:aws:iam::000000000000:role/ex-single"
}
```
Somehow the policy does not get attached in my case, and in @cweiblen's case as well.
You have shared some code, yes, but it's all variables and values that are unknown to anyone but yourself. For now I am putting a pin in this thread because I am not seeing any issues with the module as it stands. If there is additional information that will highlight this issue, we can definitely take another look.
We faced the same issue that @deshruch mentioned. The exact error was 'The specified access entry resource is already in use on this cluster'. We had to manually intervene and either delete that entry or attach the policy.
We had to do the same thing that @cweiblen did to get around this: we had to create access entries using a standalone resource. Note that this was the case for a custom IAM role that we were migrating from the ConfigMap to an EKS access entry; if it is the node group role, the EKS module moves it automatically. We were also using the karpenter module, in which you need to explicitly set create_access_entry = false (the default is true) so that the karpenter module does not try to recreate the entry and throw the 'the specified access entry resource is already in use on this cluster' error. For user-defined/custom IAM roles, we had to add the access entry and policy association using standalone resources.
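A minimal sketch of that karpenter setting, assuming the karpenter submodule of this module (the version pin is illustrative):

```hcl
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 20.8"

  cluster_name = module.eks.cluster_name

  # The node IAM role's access entry already exists on the migrated cluster,
  # so skip creating a duplicate (which would fail with "already in use")
  create_access_entry = false
}
```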
In case anyone needs to import the existing access entry:

```sh
$ terraform import 'module.cluster_name.module.eks.aws_eks_access_entry.this["cluster_creator"]' cluster_name:principal_arn
$ terraform import 'module.cluster_name.module.eks.aws_eks_access_policy_association.this["cluster_creator_admin"]' cluster_name#principal_arn#policy_arn
```
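Alternatively, on Terraform 1.5+ the same imports can be written as import blocks. A sketch using the cluster and role from the describe-access-entry output above (the resource addresses are hypothetical and depend on your configuration; the IDs follow the cluster_name:principal_arn and cluster_name#principal_arn#policy_arn formats shown above):

```hcl
# Adopt the pre-existing access entry instead of failing to create it
import {
  to = aws_eks_access_entry.cluster_management
  id = "ex-eks-managed-node-group:arn:aws:iam::000000000000:role/ex-single"
}

# Adopt the pre-existing policy association for the same principal
import {
  to = aws_eks_access_policy_association.cluster_management
  id = "ex-eks-managed-node-group#arn:aws:iam::000000000000:role/ex-single#arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
}
```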
This issue has been automatically marked as stale because it has been open for 30 days.
This issue was automatically closed because it remained stale for 10 days.
Description
I am trying to create a new access entry. I am migrating from 19.20 -> 20.5.0, so I am getting rid of the config map entry and migrating to access entries. Creation of an access entry fails if it already exists; I have to go manually delete the entry for the role so that it can be created again.
See Actual Behaviour for a full error message
Also, for user-defined roles such as the cluster_management_role shown in the Terraform code, it sometimes fails to attach the policy. This results in failed deployments for us, since we are using this role for EKSTokenAuth.
Versions
Module version [Required]: 20.5.0
Terraform version: 1.5.7
Provider version(s): 5.38.0
Reproduction Code [Required]
Steps to reproduce the behavior:
terraform init
terraform apply
Expected behavior
The access entry should be created properly even if it already exists.
The policy should be attached correctly.
Actual behavior
The behaviour is very intermittent and unpredictable: it sometimes creates the entry and sometimes does not.
We see error messages such as: 'The specified access entry resource is already in use on this cluster'.
Actual behaviour when the cluster_management_role custom role access entry fails to attach the policy:
Plan: 18 to add, 2 to change, 13 to destroy.
Terminal Output Screenshot(s)
Additional context