Panic when updating service annotations for EMLB-backed LoadBalancer service #477

Open
ctreatma opened this issue Nov 1, 2023 · 8 comments
Labels
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@ctreatma
Contributor

ctreatma commented Nov 1, 2023

During testing we observed the following panic, but we're not entirely sure how it was triggered. One potentially useful detail is that the service that caused the error had port: 8 instead of port: 80.

goroutine 180 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1d71560?, 0x23be630})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000f89ce0?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x6b
panic({0x1d71560?, 0x23be630?})
	/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/equinix/cloud-provider-equinix-metal/metal/loadbalancers/emlb.(*LB).reconcileService(0xc0004c9e30, {0x23e4fc8, 0xc00022b5e0}, 0xc000333b80, {0xc000324798, 0x1, 0x2c?}, {0xc000194ea0, 0x89})
	/workspace/metal/loadbalancers/emlb/emlb.go:93 +0x368
github.com/equinix/cloud-provider-equinix-metal/metal/loadbalancers/emlb.(*LB).AddService(0xc0009569a0?, {0x23e4fc8?, 0xc00022b5e0?}, {0x0?, 0x4fa00a?}, {0x2?, 0xc0009569a0?}, {0x0?, 0x0?}, {0x0, ...}, ...)
	/workspace/metal/loadbalancers/emlb/emlb.go:62 +0x4d
github.com/equinix/cloud-provider-equinix-metal/metal.(*loadBalancers).addService(0xc000550dc0, {0x23e4fc8, 0xc00022b5e0}, 0xc000333b80, {0xc000324798, 0x1, 0x1}, {0xc000194ea0, 0x89})
	/workspace/metal/loadbalancers.go:533 +0x1d14
github.com/equinix/cloud-provider-equinix-metal/metal.(*loadBalancers).EnsureLoadBalancer(0xc000550dc0, {0x23e4fc8, 0xc00022b5e0}, {0x20b8154, 0xa}, 0xc000542c80?, {0xc000324788, 0x1, 0x1})
	/workspace/metal/loadbalancers.go:192 +0x494
k8s.io/cloud-provider/controllers/service.(*Controller).ensureLoadBalancer(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, 0xc000542c80)
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:438 +0x154
k8s.io/cloud-provider/controllers/service.(*Controller).syncLoadBalancerIfNeeded(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, 0xc000542c80, {0xc000bba0a0, 0xb})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:393 +0x752
k8s.io/cloud-provider/controllers/service.(*Controller).processServiceCreateOrUpdate(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, 0xc000542c80, {0xc000bba0a0, 0xb})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:325 +0x13f
k8s.io/cloud-provider/controllers/service.(*Controller).syncService(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, {0xc000bba0a0, 0xb})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:828 +0x27e
k8s.io/cloud-provider/controllers/service.(*Controller).processNextServiceItem(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:283 +0x119
k8s.io/cloud-provider/controllers/service.(*Controller).serviceWorker(...)
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:250
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:190 +0x22
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:157 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x30?, {0x23c1b60, 0xc000a9e630}, 0x1, 0xc00027e3c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:158 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x410745?, 0x3b9aca00, 0x0, 0x1?, 0xc0004cdf70?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:135 +0x7f
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x23e4fc8, 0xc00022b5e0}, 0xc000e900b0, 0x3b9aca00?, 0x0?, 0x0?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:190 +0x93
k8s.io/apimachinery/pkg/util/wait.UntilWithContext({0x23e4fc8?, 0xc00022b5e0?}, 0x3ee00000000?, 0x404?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:101 +0x25
created by k8s.io/cloud-provider/controllers/service.(*Controller).Run in goroutine 362
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:237 +0x43b
panic: assignment to entry in nil map [recovered]
	panic: assignment to entry in nil map

goroutine 180 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000f89ce0?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xcd
panic({0x1d71560?, 0x23be630?})
	/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/equinix/cloud-provider-equinix-metal/metal/loadbalancers/emlb.(*LB).reconcileService(0xc0004c9e30, {0x23e4fc8, 0xc00022b5e0}, 0xc000333b80, {0xc000324798, 0x1, 0x2c?}, {0xc000194ea0, 0x89})
	/workspace/metal/loadbalancers/emlb/emlb.go:93 +0x368
github.com/equinix/cloud-provider-equinix-metal/metal/loadbalancers/emlb.(*LB).AddService(0xc0009569a0?, {0x23e4fc8?, 0xc00022b5e0?}, {0x0?, 0x4fa00a?}, {0x2?, 0xc0009569a0?}, {0x0?, 0x0?}, {0x0, ...}, ...)
	/workspace/metal/loadbalancers/emlb/emlb.go:62 +0x4d
github.com/equinix/cloud-provider-equinix-metal/metal.(*loadBalancers).addService(0xc000550dc0, {0x23e4fc8, 0xc00022b5e0}, 0xc000333b80, {0xc000324798, 0x1, 0x1}, {0xc000194ea0, 0x89})
	/workspace/metal/loadbalancers.go:533 +0x1d14
github.com/equinix/cloud-provider-equinix-metal/metal.(*loadBalancers).EnsureLoadBalancer(0xc000550dc0, {0x23e4fc8, 0xc00022b5e0}, {0x20b8154, 0xa}, 0xc000542c80?, {0xc000324788, 0x1, 0x1})
	/workspace/metal/loadbalancers.go:192 +0x494
k8s.io/cloud-provider/controllers/service.(*Controller).ensureLoadBalancer(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, 0xc000542c80)
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:438 +0x154
k8s.io/cloud-provider/controllers/service.(*Controller).syncLoadBalancerIfNeeded(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, 0xc000542c80, {0xc000bba0a0, 0xb})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:393 +0x752
k8s.io/cloud-provider/controllers/service.(*Controller).processServiceCreateOrUpdate(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, 0xc000542c80, {0xc000bba0a0, 0xb})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:325 +0x13f
k8s.io/cloud-provider/controllers/service.(*Controller).syncService(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0}, {0xc000bba0a0, 0xb})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:828 +0x27e
k8s.io/cloud-provider/controllers/service.(*Controller).processNextServiceItem(0xc0007eac30, {0x23e4fc8, 0xc00022b5e0})
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:283 +0x119
k8s.io/cloud-provider/controllers/service.(*Controller).serviceWorker(...)
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:250
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:190 +0x22
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:157 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x30?, {0x23c1b60, 0xc000a9e630}, 0x1, 0xc00027e3c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:158 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x410745?, 0x3b9aca00, 0x0, 0x1?, 0xc0004cdf70?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:135 +0x7f
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x23e4fc8, 0xc00022b5e0}, 0xc000e900b0, 0x3b9aca00?, 0x0?, 0x0?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:190 +0x93
k8s.io/apimachinery/pkg/util/wait.UntilWithContext({0x23e4fc8?, 0xc00022b5e0?}, 0x3ee00000000?, 0x404?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:101 +0x25
created by k8s.io/cloud-provider/controllers/service.(*Controller).Run in goroutine 362
	/go/pkg/mod/k8s.io/[email protected]/controllers/service/controller.go:237 +0x43b

The panic comes from this line: https://github.com/kubernetes-sigs/cloud-provider-equinix-metal/blob/main/metal/loadbalancers/emlb/emlb.go#L93

Based on the error message ("assignment to entry in nil map"), it sounds like svc.Annotations is nil when reconcileService writes to it. Is that expected in some cases? If so, we need to guard against it in the code.
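
For what it's worth, a Service created without any annotations decodes with a nil Annotations map, so the nil case does seem reachable. If that's what is happening, a guard along these lines would avoid the panic. This is only a hypothetical sketch of a fix, not the actual code at emlb.go:93; the helper name and the key/value are placeholders:

```go
package main

import v1 "k8s.io/api/core/v1"

// setServiceAnnotation is a hypothetical helper, not existing CPEM code.
// Assigning into a nil map panics with "assignment to entry in nil map",
// so initialize svc.Annotations before writing to it.
func setServiceAnnotation(svc *v1.Service, key, value string) {
	if svc.Annotations == nil {
		svc.Annotations = map[string]string{}
	}
	svc.Annotations[key] = value
}
```

The same guard would be needed anywhere the provider writes into svc.Annotations directly.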

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 31, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Mar 31, 2024
@cprivitere
Member

/reopen

@k8s-ci-robot
Contributor

@cprivitere: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot reopened this on May 14, 2024
@cprivitere
Member

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on May 14, 2024
@cprivitere
Member

/triage accepted

@k8s-ci-robot added the triage/accepted label on May 14, 2024