What did you do that produced an error?
I created the node-exporter with the default configuration, only adding tolerations for all of my node groups.
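For context, tolerations for the node-exporter DaemonSet are usually supplied through Helm values. The fragment below is a sketch assuming the kube-prometheus-stack chart layout; the taint key and value are illustrative placeholders, not this cluster's actual nodegroup taints:

```yaml
# Hypothetical values.yaml fragment -- the taint key below is a
# placeholder, not one of this cluster's real nodegroup taints.
prometheus-node-exporter:
  tolerations:
    - key: "example.com/nodegroup"   # placeholder taint key
      operator: "Exists"
      effect: "NoSchedule"
    # Or tolerate every taint, so the exporter runs on all nodes:
    - operator: "Exists"
```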
What did you expect to see?
I expected to see one node-exporter pod per node, and a pod only gets spun up in case a new node comes up.
What did you see instead?
I see node-exporter pods being created over and over, far more than one per node.
daemonset output:
k get daemonset -n monitoring -o wide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
prometheus-prometheus-node-exporter 12 12 12 12 12 kubernetes.io/os=linux 138m node-exporter quay.io/prometheus/node-exporter:v1.7.0 app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=prometheus-node-exporter
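To check whether extra exporter pods are piling up on the same nodes, the output of `kubectl get pods -n monitoring -o json` can be grouped by node. A minimal sketch; the pod and node names below are made up, and real output contains many more fields:

```python
import json
from collections import Counter

# Hypothetical excerpt of `kubectl get pods -n monitoring -o json`.
pods_json = """
{"items": [
  {"metadata": {"name": "prometheus-prometheus-node-exporter-aaaaa"},
   "spec": {"nodeName": "node-a"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "prometheus-prometheus-node-exporter-bbbbb"},
   "spec": {"nodeName": "node-a"}, "status": {"phase": "Pending"}},
  {"metadata": {"name": "prometheus-prometheus-node-exporter-ccccc"},
   "spec": {"nodeName": "node-b"}, "status": {"phase": "Running"}}
]}
"""

pods = json.loads(pods_json)["items"]
# Count exporter pods per node; more than one per node indicates the problem.
per_node = Counter(p["spec"].get("nodeName", "<unscheduled>") for p in pods)
for node, count in per_node.items():
    flag = "  <-- more than one exporter pod" if count > 1 else ""
    print(f"{node}: {count}{flag}")
```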
k describe daemonset prometheus-prometheus-node-exporter -n monitoring
Name: prometheus-prometheus-node-exporter
Selector: app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=prometheus-node-exporter
Node-Selector: kubernetes.io/os=linux
Labels: app.kubernetes.io/component=metrics
app.kubernetes.io/instance=prometheus
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=prometheus-node-exporter
app.kubernetes.io/part-of=prometheus-node-exporter
app.kubernetes.io/version=1.7.0
helm.sh/chart=prometheus-node-exporter-4.24.0
Annotations: deprecated.daemonset.template.generation: 1
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: monitoring
policies.kyverno.io/last-applied-patches:
autogen-tolerations-node-selectors.monitoring-tolerations-node-selectors.kyverno.io: added
/spec/template/spec/tolerations/2
Desired Number of Nodes Scheduled: 12
Current Number of Nodes Scheduled: 12
Number of Nodes Scheduled with Up-to-date Pods: 12
Number of Nodes Scheduled with Available Pods: 12
Number of Nodes Misscheduled: 0
Pods Status: 12 Running / 63 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app.kubernetes.io/component=metrics
app.kubernetes.io/instance=prometheus
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=prometheus-node-exporter
app.kubernetes.io/part-of=prometheus-node-exporter
app.kubernetes.io/version=1.7.0
helm.sh/chart=prometheus-node-exporter-4.24.0
Annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: true
Service Account: prometheus-prometheus-node-exporter
Containers:
node-exporter:
Image: quay.io/prometheus/node-exporter:v1.7.0
Port: 9100/TCP
Host Port: 0/TCP
Args:
--path.procfs=/host/proc
--path.sysfs=/host/sys
--path.rootfs=/host/root
--path.udev.data=/host/root/run/udev/data
--web.listen-address=[$(HOST_IP)]:9100
Liveness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOST_IP: 0.0.0.0
Mounts:
/host/proc from proc (ro)
/host/root from root (ro)
/host/sys from sys (ro)
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
root:
Type: HostPath (bare host directory volume)
Path: /
HostPathType:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 56m (x13 over 139m) daemonset-controller (combined from similar events): Created pod: prometheus-prometheus-node-exporter-jjkrl
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-twrkg
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-m5cc5
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-6t5ld
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-sdcb9
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-gmhlw
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-96b9z
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-98qnm
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-dwlbh
Normal SuccessfulCreate 56m daemonset-controller Created pod: prometheus-prometheus-node-exporter-5rjrt
Events from pods in Pending state
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 57m (x2 over 57m) default-scheduler 0/13 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 12 node(s) didn't have free ports for the requested pod ports. preemption: 0/13 nodes are available: 1 Preemption is not helpful for scheduling, 12 No preemption victims found for incoming pod..
Warning FailedScheduling 51m (x4 over 57m) default-scheduler 0/13 nodes are available: 13 node(s) didn't have free ports for the requested pod ports. preemption: 0/13 nodes are available: 13 No preemption victims found for incoming pod..
Warning FailedScheduling 46m default-scheduler 0/13 nodes are available: 1 node(s) were unschedulable, 12 node(s) didn't have free ports for the requested pod ports. preemption: 0/13 nodes are available: 1 Preemption is not helpful for scheduling, 12 No preemption victims found for incoming pod..
Warning FailedScheduling 86s (x13 over 45m) default-scheduler 0/12 nodes are available: 12 node(s) didn't have free ports for the requested pod ports. preemption: 0/12 nodes are available: 12 No preemption victims found for incoming pod..
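The "didn't have free ports" errors above are a host-port conflict: each node already runs one exporter bound to its host port (9100 in this report), so any additional exporter pod assigned to that node cannot bind the same port. A small local sketch of the same failure, using an arbitrary loopback port rather than the real 9100:

```python
import socket

# First socket stands in for the exporter pod already running on a node.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # grab any free port (stand-in for 9100)
port = first.getsockname()[1]

# Second socket stands in for a duplicate pod landing on the same node.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same host port, same "node"
    conflict = False
except OSError:                       # EADDRINUSE: the port is already taken
    conflict = True
finally:
    second.close()

print(conflict)  # → True
first.close()
```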
Host operating system: output of uname -a
AL2_x86_64 and AL2_x86_64_GPU
node_exporter version: output of node_exporter --version
The node-exporter container has the following image: quay.io/prometheus/node-exporter:v1.7.0
node_exporter command line flags
From the container's Args: --path.procfs=/host/proc --path.sysfs=/host/sys --path.rootfs=/host/root --path.udev.data=/host/root/run/udev/data --web.listen-address=[$(HOST_IP)]:9100
node_exporter log output
Are you running node_exporter in Docker?
v25.8.0