kuberhealthy installation uses outdated cluster URL #207

Open
eickler opened this issue Nov 17, 2020 · 4 comments


eickler commented Nov 17, 2020

Summary

When creating a new EKS cluster from scratch with Terraform, kuberhealthy is installed using the cluster URL from the previous Terraform run.

Steps to reproduce the behavior

terraform apply
# Clean up ELB
terraform destroy
# Check if everything is cleaned up
terraform apply

Expected behavior

The cluster is created and kuberhealthy is installed.

Actual behavior

Error: Kubernetes cluster unreachable: Get https://<old url>.gr7.us-east-1.eks.amazonaws.com/version?timeout=32s: dial tcp: lookup <old url>.gr7.us-east-1.eks.amazonaws.com on [2a02:908:2:a::1]:53: no such host

  on .terraform/modules/eks-jx.health.jx-health/main.tf line 1, in resource "helm_release" "kuberhealthy":
   1: resource "helm_release" "kuberhealthy" {

where <old url> is the cluster URL from the previous run. Note that the kubeconfig has already been correctly updated to the new cluster by the time Terraform aborts with this error message.

There may be a missing dependency or a race condition. The Terraform log first shows

module.eks-jx.module.health.module.jx-health[0].helm_release.kuberhealthy: Creating...

and then much later

module.eks-jx.module.cluster.module.eks.local_file.kubeconfig[0]: Creating...
module.eks-jx.module.cluster.module.eks.local_file.kubeconfig[0]: Creation complete after 0s [id=294be563e46ce5a5d60a5d34b2737b7c55791cda]
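
If this ordering is the cause, an explicit dependency from the health module on the cluster module should fix it. A minimal sketch, assuming Terraform 0.13's module-level depends_on; the module names here are placeholders, not the actual ones from eks-jx:

# Hypothetical sketch: make the module that runs helm_release wait until
# the cluster module has finished writing the kubeconfig file.
module "health" {
  source = "./modules/health"

  # Module names are placeholders; the point is the explicit ordering.
  depends_on = [module.cluster]
}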

Terraform version

The output of terraform version is:

Terraform v0.13.5

Module version

1.11.0

Operating system

macOS


eickler commented Nov 17, 2020

The workaround is probably to remove the previous cluster's kubeconfig entry after terraform destroy. I will try it out next time and add a comment to the cleanup procedure.
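
For example, something along these lines after terraform destroy (the context, cluster, and user names are placeholders and depend on how the entry was written):

# Hypothetical cleanup of the stale kubeconfig entry; names are placeholders.
kubectl config delete-context <old-context-name>
kubectl config delete-cluster <old-cluster-name>
kubectl config unset users.<old-user-name>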


ahmetcetin commented Jan 10, 2021

@eickler, @ankitm123 I have exactly the same problem. Shall I delete the contents of the ~/.kube folder after terraform destroy?


eickler commented Jan 11, 2021

@ahmetcetin Yes, that worked for me. Sorry for the lack of updates.

ankitm123 commented

The latest version should fix this issue: the problem was that the helm provider was using outdated credentials. Can someone try it and confirm?
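
Presumably that means the helm provider is now fed fresh credentials from the cluster itself rather than a kubeconfig file on disk. A sketch of that pattern, with placeholder names (not necessarily how the module actually implements it):

# Illustrative only: the helm provider fetching a fresh EKS token via the
# AWS CLI instead of reading a possibly stale kubeconfig file.
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    }
  }
}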
