[BUG] panic with invalid hpa definition #1378

Open
fpicot opened this issue Nov 21, 2023 · 0 comments

Labels
bug Something isn't working

fpicot commented Nov 21, 2023

What did you do

  • How was the cluster created?
    The cluster was created with k3d 5.5.2 without any parameters: k3d cluster create

  • What did you do afterwards?
    I deployed a POC to scale RabbitMQ consumers based on queue size, using kube-prometheus-stack, prometheus-adapter, and rabbitmq-operator, all with default values.

The HPA that was created had an invalid spec:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rabbitmq-receiver-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rabbitmq-receiver
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Object
    object:
      target:
        type: Value
        averageValue: 5
      metric:
        name: rabbitmq_queue_messages_ready
      describedObject:
        kind: Service
        name: rabbitmqcluster

The correct target definition would be:

      target:
        type: Value
        value: 5
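
For reference, a full corrected manifest (simply the spec above with value substituted for averageValue; names and thresholds are the ones from the POC) would be:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rabbitmq-receiver-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rabbitmq-receiver
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Object
    object:
      target:
        type: Value
        value: 5          # was averageValue: 5, which leaves target.value unset
      metric:
        name: rabbitmq_queue_messages_ready
      describedObject:
        kind: Service
        name: rabbitmqcluster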

A few seconds after deploying the invalid HPA, the k3d-k3s-default-server-0 Docker container crashes with the following panic in its log:

E1121 15:36:05.088584      23 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 33467 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x4f323a0?, 0x9051ac0})
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc01fafe680?})
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x4f323a0, 0x9051ac0})
        /usr/local/go/src/runtime/panic.go:884 +0x213
k8s.io/apimachinery/pkg/api/resource.(*Quantity).ScaledValue(0xc012898de8?, 0x7b703c?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/api/resource/quantity.go:758 +0x14
k8s.io/apimachinery/pkg/api/resource.(*Quantity).MilliValue(...)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/api/resource/quantity.go:750
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeStatusForObjectMetric(0xc00a584900, 0x0?, 0x63e7ca24?, {{0xc0161156f0, 0x6}, 0xc01eb0b110, 0x0, 0x0, 0x0, 0x0}, ...)
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:538 +0xee
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeReplicasForMetric(0xc00a584900, {0x62f5280, 0xc000120000}, 0xc0195fee00, {{0xc0161156f0, 0x6}, 0xc01eb0b110, 0x0, 0x0, 0x0, ...}, ...)
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:458 +0x90e
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeReplicasForMetrics(0xc01ebcd280?, {0x62f5280, 0xc000120000}, 0xc0195fee00, 0xc000927540, {0xc01a03d3c0, 0x1, 0xb?})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:315 +0x2f2
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).reconcileAutoscaler(0xc00a584900, {0x62f5280, 0xc000120000}, 0xc00d954540, {0xc01ebcd280, 0x1b})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:835 +0xb9d
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).reconcileKey(0xc00a584900, {0x62f5280, 0xc000120000}, {0xc01ebcd280, 0x1b})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:532 +0x165
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).processNextWorkItem(0xc00a584900, {0x62f5280, 0xc000120000})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:272 +0x115
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).worker(0xc008231760?, {0x62f5280, 0xc000120000})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:259 +0x39
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:259 +0x25
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:226 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x62b2220, 0xc00a4a81b0}, 0x1, 0x0)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:227 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:204 +0x89
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x62f5280, 0xc000120000}, 0xc00b94a900, 0x0?, 0x0?, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:259 +0x99
k8s.io/apimachinery/pkg/util/wait.UntilWithContext({0x62f5280?, 0xc000120000?}, 0x0?, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:170 +0x2b
created by k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).Run
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:208 +0x226
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xfba0d4]

goroutine 33467 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc01fafe680?})
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xd7
panic({0x4f323a0, 0x9051ac0})
        /usr/local/go/src/runtime/panic.go:884 +0x213
k8s.io/apimachinery/pkg/api/resource.(*Quantity).ScaledValue(0xc012898de8?, 0x7b703c?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/api/resource/quantity.go:758 +0x14
k8s.io/apimachinery/pkg/api/resource.(*Quantity).MilliValue(...)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/api/resource/quantity.go:750
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeStatusForObjectMetric(0xc00a584900, 0x0?, 0x63e7ca24?, {{0xc0161156f0, 0x6}, 0xc01eb0b110, 0x0, 0x0, 0x0, 0x0}, ...)
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:538 +0xee
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeReplicasForMetric(0xc00a584900, {0x62f5280, 0xc000120000}, 0xc0195fee00, {{0xc0161156f0, 0x6}, 0xc01eb0b110, 0x0, 0x0, 0x0, ...}, ...)
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:458 +0x90e
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeReplicasForMetrics(0xc01ebcd280?, {0x62f5280, 0xc000120000}, 0xc0195fee00, 0xc000927540, {0xc01a03d3c0, 0x1, 0xb?})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:315 +0x2f2
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).reconcileAutoscaler(0xc00a584900, {0x62f5280, 0xc000120000}, 0xc00d954540, {0xc01ebcd280, 0x1b})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:835 +0xb9d
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).reconcileKey(0xc00a584900, {0x62f5280, 0xc000120000}, {0xc01ebcd280, 0x1b})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:532 +0x165
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).processNextWorkItem(0xc00a584900, {0x62f5280, 0xc000120000})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:272 +0x115
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).worker(0xc008231760?, {0x62f5280, 0xc000120000})
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:259 +0x39
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:259 +0x25
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:226 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x62b2220, 0xc00a4a81b0}, 0x1, 0x0)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:227 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:204 +0x89
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x62f5280, 0xc000120000}, 0xc00b94a900, 0x0?, 0x0?, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:259 +0x99
k8s.io/apimachinery/pkg/util/wait.UntilWithContext({0x62f5280?, 0xc000120000?}, 0x0?, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/wait/backoff.go:170 +0x2b
created by k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).Run
        /go/pkg/mod/github.com/k3s-io/[email protected]/pkg/controller/podautoscaler/horizontal.go:208 +0x226
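
The trace shows the crash in resource.(*Quantity).MilliValue, called from computeStatusForObjectMetric, which suggests that with type: Value but only averageValue set, the target's value quantity stays nil and the controller dereferences it without a nil check. A minimal standalone Go sketch (using only the public apimachinery API, not the controller code) that reproduces the same nil pointer dereference:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// spec.metrics[0].object.target.value is never set in the invalid HPA,
	// so the controller presumably ends up with a nil quantity like this one.
	var value *resource.Quantity

	// MilliValue calls ScaledValue, which dereferences the nil receiver and
	// panics with "invalid memory address or nil pointer dereference",
	// matching the trace above.
	fmt.Println(value.MilliValue())
}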

With a correct HPA definition, the expected behavior is observed: the deployment is scaled up when the queue length rises above the threshold.

What did you expect to happen

I would have expected a validation error when applying the invalid object, rather than a crash of the server.

Which OS & Architecture

arch: x86_64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: btrfs
infoname: XXXX
name: docker
os: EndeavourOS
ostype: linux
version: 24.0.7

Which version of k3d

k3d version v5.5.2
k3s version v1.27.4-k3s1 (default)

Which version of docker

Client:
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.21.3
 Git commit:        afdd53b4e3
 Built:             Sun Oct 29 15:42:02 2023
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.21.3
  Git commit:       311b9ff0aa
  Built:            Sun Oct 29 15:42:02 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.9
  GitCommit:        4f03e100cb967922bec7459a78d16ccbac9bb81d.m
 runc:
  Version:          1.1.10
  GitCommit:        
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Client:
 Version:    24.0.7
 Context:    default
 Debug Mode: false

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 3
 Server Version: 24.0.7
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Using metacopy: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 4f03e100cb967922bec7459a78d16ccbac9bb81d.m
 runc version: 
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.6.2-arch1-1
 Operating System: EndeavourOS
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 15.31GiB
 Name: XXXX
 ID: 8d2ba7a7-239f-4e2d-b012-c1f781d99d16
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false