
Claims that are stuck waiting for an address when a pool has no free addresses should be re-reconciled when addresses become available #179

Closed
flawedmatrix opened this issue Aug 4, 2023 · 3 comments · Fixed by #253
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@flawedmatrix
Contributor

I recently hit an issue when creating multiple clusters against the same pool, having forgotten that the pool was too small for the number of machines I was creating. The observed behavior is that once the pool runs out of IPs, the new cluster never comes up because a node is stuck waiting for an IP address. Even deleting an unused cluster does not resolve the situation: I have to manually delete the stuck claim and its owner resources to get the deployment to progress.

Reproduction

  1. Create a pool with N addresses, and reserve all of them by creating N claims.
  2. Attempt to create another claim.
  3. This claim waits indefinitely for an IP address.
  4. Delete some claims to free up capacity in the pool.
  5. Observe that the N+1th claim is still unable to reserve an IP address.
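The steps above can be sketched as manifests. The kinds and fields below follow the Cluster API IPAM contract (`IPAddressClaim` with a `poolRef`) and this provider's `InClusterIPPool` type; the names, the address range, and the exact API versions are illustrative assumptions, so check them against your installed CRDs:

```yaml
# Assumed/illustrative: verify apiVersion values against your cluster.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: pool-a
spec:
  addresses:
    - 10.0.0.10-10.0.0.11   # N = 2 free addresses
  prefix: 24
  gateway: 10.0.0.1
---
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: claim-3             # the N+1th claim: stays Pending even after
                            # earlier claims are deleted
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: pool-a
```

With two claims already bound, `claim-3` never receives an address, and deleting one of the bound claims does not un-stick it.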

Proposed Solution

We should add a watch on IPAddressClaim deletions that triggers reconciles of the other IPAddressClaims referencing the same pool, so pending claims pick up freed addresses.
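With controller-runtime this would typically be a `Watches(...)` on IPAddressClaim events whose map function fans out to the pool's other pending claims. The sketch below models only that map function as self-contained Go; `Claim` and `Request` are simplified stand-ins for the real API type and `reconcile.Request`, and the function names are hypothetical:

```go
package main

import "fmt"

// Claim is a simplified stand-in for IPAddressClaim.
type Claim struct {
	Name    string
	Pool    string // name of the referenced pool
	Address string // empty while the claim is still waiting
}

// Request is a simplified stand-in for reconcile.Request.
type Request struct{ Name string }

// claimsToReenqueue mimics a handler.MapFunc: when one claim is
// deleted, enqueue every other claim that references the same pool
// and has not yet been assigned an address.
func claimsToReenqueue(deleted Claim, all []Claim) []Request {
	var reqs []Request
	for _, c := range all {
		if c.Name == deleted.Name || c.Pool != deleted.Pool || c.Address != "" {
			continue
		}
		reqs = append(reqs, Request{Name: c.Name})
	}
	return reqs
}

func main() {
	all := []Claim{
		{Name: "claim-1", Pool: "pool-a", Address: "10.0.0.1"},
		{Name: "claim-2", Pool: "pool-a", Address: ""}, // stuck waiting
		{Name: "claim-3", Pool: "pool-b", Address: ""}, // different pool
	}
	deleted := Claim{Name: "claim-1", Pool: "pool-a", Address: "10.0.0.1"}
	fmt.Println(claimsToReenqueue(deleted, all)) // prints [{claim-2}]
}
```

In the real controller the map function would list claims via the cached client, filtered by the deleted claim's `spec.poolRef`, and return one `reconcile.Request` per pending claim.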

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2024
@schrej
Member

schrej commented Jan 31, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 31, 2024
@k8s-triage-robot


/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 30, 2024