
How can I adjust the resources of the config-reloader container in Alertmanager? #2333

Open
JokerDevops opened this issue Jan 17, 2024 · 4 comments

@JokerDevops

How can I adjust the resources of the config-reloader container in Alertmanager? Here is my manifest:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.26.0
  name: main
  namespace: monitoring
spec:
  image: quay.io/prometheus/alertmanager:v0.26.0
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 0.26.0
  replicas: 3
  resources:
    limits:
      cpu: 100m
      memory: 100Mi
    requests:
      cpu: 40m
      memory: 100Mi
  secrets: []
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: alertmanager-main
  version: 0.26.0

The config-reloader container always ends up with 10m CPU and 50Mi memory, no matter what I set:

...
      - args:
        - --listen-address=:8080
        - --reload-url=http://localhost:9093/-/reload
        - --config-file=/etc/alertmanager/config/alertmanager.yaml.gz
        - --config-envsubst-file=/etc/alertmanager/config_out/alertmanager.env.yaml
        - --watched-dir=/etc/alertmanager/config
        command:
        - /bin/prometheus-config-reloader
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: SHARD
          value: "-1"
        image: quay.io/prometheus-operator/prometheus-config-reloader:v0.70.0
        imagePullPolicy: IfNotPresent
        name: config-reloader
        ports:
        - containerPort: 8080
          name: reloader-web
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 50Mi
...
@simonpasquier
Contributor

simonpasquier commented Jan 18, 2024

You need to use the --config-reloader-cpu-* and --config-reloader-memory-* args of the Prometheus operator.
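
For illustration, a minimal sketch of what that could look like on the operator's Deployment (the deployment name and namespace are assumptions here; the four --config-reloader-* flags are the ones named above, and the values mirror the ones requested in the issue):

# Hypothetical fragment of the prometheus-operator Deployment
# (e.g. deployment/prometheus-operator in the monitoring namespace).
      containers:
      - name: prometheus-operator
        image: quay.io/prometheus-operator/prometheus-operator:v0.70.0
        args:
        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.70.0
        - --config-reloader-cpu-request=40m
        - --config-reloader-cpu-limit=100m
        - --config-reloader-memory-request=100Mi
        - --config-reloader-memory-limit=100Mi

Note that because these flags live on the operator, they apply to the config-reloader sidecar of every Prometheus and Alertmanager the operator manages, not just this one; after the operator restarts, it should rewrite the generated alertmanager-main StatefulSet with the new values.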


This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.

@github-actions github-actions bot added the stale label Mar 19, 2024
@joshbranham

> You need to use the --config-reloader-cpu-* and --config-reloader-memory-* args of the Prometheus operator.

Does this only work after a certain version? I am running v0.67.1 and have the args set on my prometheus-operator deployment, but our Alertmanager config-reloader container is not honoring the values. I skimmed the operator changelog and found no mention of a bugfix or anything related to these args, afaict.

  - args:
    - --kubelet-service=kube-system/kubelet
    - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1
    - --config-reloader-cpu-limit=20m
    - --config-reloader-memory-limit=50Mi
    - --config-reloader-cpu-request=20m
    - --config-reloader-memory-request=50Mi
    image: quay.io/prometheus-operator/prometheus-operator:v0.67.1
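
For reference, once an operator version that honors these flags for Alertmanager is running, the generated StatefulSet should render the sidecar with the values from the args above, along the lines of this sketch:

# Expected config-reloader fragment in the generated alertmanager
# StatefulSet once the flags take effect (values taken from the args above).
      - name: config-reloader
        resources:
          limits:
            cpu: 20m
            memory: 50Mi
          requests:
            cpu: 20m
            memory: 50Mi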

@joshbranham

Nevermind, found the bugfix! prometheus-operator/prometheus-operator#5971

@github-actions github-actions bot removed the stale label May 11, 2024