With the default Helm chart configuration, the console does not display any logs #366

Open
An0nymous0 opened this issue Dec 28, 2023 · 1 comment

An0nymous0 commented Dec 28, 2023

Helm chart version

0.32.0
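
For reference, the chart version actually deployed can be cross-checked with something like:

helm list -n platform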

Steps

  1. helm values.yaml overrides:
global:
    storageClass: cbs-hp
clickhouse:
    persistence:
        storageClass: cbs-ssd
        size: 500Gi
    zookeeper:
        persistence:
            size: 10Gi
            dataLogDir:
                size: 10Gi
queryService:
    persistence:
        size: 10Gi
alertmanager:
    persistence:
        size: 10Gi
  2. k get pods -n platform
NAME                                                READY   STATUS      RESTARTS       AGE
chi-signoz-clickhouse-cluster-0-0-0                 1/1     Running     0              82m
signoz-alertmanager-0                               1/1     Running     0              105m
signoz-clickhouse-operator-76dc957b9-fq7wk          2/2     Running     2 (105m ago)   105m
signoz-frontend-776959dcdb-ggdss                    1/1     Running     0              105m
signoz-k8s-infra-otel-agent-7rldk                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-9lh6w                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-c5br9                   1/1     Running     0              53m
signoz-k8s-infra-otel-agent-dmfjt                   1/1     Running     0              24m
signoz-k8s-infra-otel-agent-gjkkb                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-mqpqn                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-q5gvf                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-slr6s                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-tl78r                   1/1     Running     0              105m
signoz-k8s-infra-otel-agent-zv7zn                   1/1     Running     0              105m
signoz-k8s-infra-otel-deployment-54948c9657-zc2pb   1/1     Running     0              105m
signoz-otel-collector-5fd986f685-946r6              1/1     Running     0              105m
signoz-otel-collector-metrics-55fc78dd8-tkbrm       1/1     Running     0              105m
signoz-query-service-0                              1/1     Running     0              105m
signoz-schema-migrator-x4zs2                        0/1     Completed   0              105m
signoz-zookeeper-0                                  1/1     Running     0              105m
  3. k logs -f -n platform signoz-k8s-infra-otel-agent-c5br9
2023-12-28T10:31:02.441Z        info    service@v0.88.0/telemetry.go:84 Setting up own telemetry...
2023-12-28T10:31:02.441Z        info    service@v0.88.0/telemetry.go:201        Serving Prometheus metrics      {"address": "0.0.0.0:8888", "level": "Basic"}
2023-12-28T10:31:02.442Z        info    kube/client.go:107      k8s filtering   {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics/internal", "labelSelector": "", "fieldSelector": "spec.nodeName=172.18.150.147"}
2023-12-28T10:31:02.443Z        warn    filesystemscraper/factory.go:60 No `root_path` config set when running in docker environment, will report container filesystem stats. See https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver#collecting-host-metrics-from-inside-a-container-linux-only   {"kind": "receiver", "name": "hostmetrics", "data_type": "metrics"}
2023-12-28T10:31:02.443Z        info    kube/client.go:107      k8s filtering   {"kind": "processor", "name": "k8sattributes", "pipeline": "logs", "labelSelector": "", "fieldSelector": "spec.nodeName=172.18.150.147"}
2023-12-28T10:31:02.443Z        info    kube/client.go:107      k8s filtering   {"kind": "processor", "name": "k8sattributes", "pipeline": "traces", "labelSelector": "", "fieldSelector": "spec.nodeName=172.18.150.147"}
2023-12-28T10:31:02.452Z        info    kube/client.go:107      k8s filtering   {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "labelSelector": "", "fieldSelector": "spec.nodeName=172.18.150.147"}
2023-12-28T10:31:02.453Z        info    service@v0.88.0/service.go:143  Starting otelcol-contrib...     {"Version": "0.88.0", "NumCPU": 8}
2023-12-28T10:31:02.453Z        info    extensions/extensions.go:33     Starting extensions...
2023-12-28T10:31:02.453Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "health_check"}
2023-12-28T10:31:02.453Z        info    healthcheckextension@v0.88.0/healthcheckextension.go:35 Starting health_check extension     {"kind": "extension", "name": "health_check", "config": {"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"ResponseHeaders":null,"Path":"/","ResponseBody":null,"CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2023-12-28T10:31:02.453Z        warn    internal@v0.88.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "extension", "name": "health_check", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-12-28T10:31:02.453Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "health_check"}
2023-12-28T10:31:02.453Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "zpages"}
2023-12-28T10:31:02.453Z        info    zpagesextension@v0.88.0/zpagesextension.go:53   Registered zPages span processor on tracer provider {"kind": "extension", "name": "zpages"}
2023-12-28T10:31:02.454Z        info    zpagesextension@v0.88.0/zpagesextension.go:63   Registered Host's zPages   {"kind": "extension", "name": "zpages"}
2023-12-28T10:31:02.454Z        info    zpagesextension@v0.88.0/zpagesextension.go:75   Starting zPages extension  {"kind": "extension", "name": "zpages", "config": {"TCPAddr":{"Endpoint":"localhost:55679"}}}
2023-12-28T10:31:02.454Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "zpages"}
2023-12-28T10:31:02.454Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "pprof"}
2023-12-28T10:31:02.454Z        info    pprofextension@v0.88.0/pprofextension.go:60     Starting net/http/pprof server      {"kind": "extension", "name": "pprof", "config": {"TCPAddr":{"Endpoint":"localhost:1777"},"BlockProfileFraction":0,"MutexProfileFraction":0,"SaveToFile":""}}
2023-12-28T10:31:02.454Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "pprof"}
2023-12-28T10:31:02.455Z        warn    k8sattributesprocessor@v0.88.0/processor.go:54  k8s.pod.start_time value will be changed to use RFC3339 format in v0.83.0. see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24016 for more information. enable feature-gate k8sattr.rfc3339 to opt into this change.  {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics/internal"}
2023-12-28T10:31:02.455Z        info    internal/resourcedetection.go:125       began detecting resource information    {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics/internal"}
2023-12-28T10:31:02.455Z        info    internal/resourcedetection.go:139       detected resource information   {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics/internal", "resource": {"host.name":"signoz-k8s-infra-otel-agent-c5br9","os.type":"linux"}}
2023-12-28T10:31:02.455Z        info    internal/resourcedetection.go:125       began detecting resource information    {"kind": "processor", "name": "resourcedetection/internal", "pipeline": "metrics/internal"}
2023-12-28T10:31:02.456Z        info    internal/resourcedetection.go:139       detected resource information   {"kind": "processor", "name": "resourcedetection/internal", "pipeline": "metrics/internal", "resource": {"k8s.cluster.name":"","k8s.pod.ip":"10.1.0.94","k8s.pod.uid":"56d1664c-53af-4a1b-bbce-89512ce9bf9a","signoz.component":"otel-agent"}}
2023-12-28T10:31:02.456Z        warn    k8sattributesprocessor@v0.88.0/processor.go:54  k8s.pod.start_time value will be changed to use RFC3339 format in v0.83.0. see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24016 for more information. enable feature-gate k8sattr.rfc3339 to opt into this change.  {"kind": "processor", "name": "k8sattributes", "pipeline": "logs"}
2023-12-28T10:31:02.456Z        warn    internal@v0.88.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-12-28T10:31:02.457Z        info    otlpreceiver@v0.88.0/otlp.go:83 Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
2023-12-28T10:31:02.457Z        warn    internal@v0.88.0/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks    {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-12-28T10:31:02.457Z        info    otlpreceiver@v0.88.0/otlp.go:101        Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4318"}
2023-12-28T10:31:02.457Z        warn    k8sattributesprocessor@v0.88.0/processor.go:54  k8s.pod.start_time value will be changed to use RFC3339 format in v0.83.0. see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24016 for more information. enable feature-gate k8sattr.rfc3339 to opt into this change.  {"kind": "processor", "name": "k8sattributes", "pipeline": "traces"}
2023-12-28T10:31:02.457Z        info    adapter/receiver.go:45  Starting stanza receiver        {"kind": "receiver", "name": "filelog/k8s", "data_type": "logs"}
2023-12-28T10:31:02.457Z        warn    fileconsumer/file.go:61 finding files: no files match the configured criteria       {"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "component": "fileconsumer"}
2023-12-28T10:31:02.457Z        warn    k8sattributesprocessor@v0.88.0/processor.go:54  k8s.pod.start_time value will be changed to use RFC3339 format in v0.83.0. see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24016 for more information. enable feature-gate k8sattr.rfc3339 to opt into this change.  {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics"}
2023-12-28T10:31:02.457Z        info    healthcheck/handler.go:132      Health Check state change       {"kind": "extension", "name": "health_check", "status": "ready"}
2023-12-28T10:31:02.457Z        info    service@v0.88.0/service.go:169  Everything is ready. Begin running and processing data.
  4. Visiting the console, no logs were captured.
[screenshot: SigNoz console showing no logs]
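
Note: the warn line in the agent log above, "finding files: no files match the configured criteria" from the filelog/k8s receiver, means the agent found nothing under its configured include path when it started. A quick check from one of the agent pods, assuming the chart's default hostPath mount layout:

k exec -n platform signoz-k8s-infra-otel-agent-c5br9 -- ls /var/log/pods

If pod logs live under a different path on the nodes, the include pattern can be overridden in values.yaml (same key as in the full values below):

k8s-infra:
  presets:
    logsCollection:
      include:
        - /var/log/pods/*/*/*.log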

Other

Full YAML values
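
(The values below are presumably the fully merged release values, roughly what a command like the following prints; release name signoz and namespace platform are inferred from the pod names above.)

helm get values signoz -n platform --all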

alertmanager:
  additionalPeers: []
  affinity: {}
  command: []
  configmapReload:
    enabled: false
    image:
      pullPolicy: IfNotPresent
      repository: jimmidyson/configmap-reload
      tag: v0.5.0
    name: configmap-reload
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
  dnsConfig: {}
  extraArgs: {}
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: signoz/alertmanager
    tag: 0.23.4
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: alertmanager.domain.com
      paths:
      - path: /
        pathType: ImplementationSpecific
        port: 9093
    tls: []
  initContainers:
    init:
      command:
        delay: 5
        doneMessage: query-service ready, starting alertmanager now
        endpoint: /api/v1/health?live=1
        waitMessage: waiting for query-service
      enabled: true
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
  livenessProbe:
    httpGet:
      path: /
      port: http
  name: alertmanager
  nodeSelector: {}
  persistence:
    accessModes:
    - ReadWriteOnce
    enabled: true
    existingClaim: ""
    size: 10Gi
    storageClass: null
  podAnnotations: {}
  podDisruptionBudget: {}
  podLabels: {}
  podSecurityContext:
    fsGroup: 65534
  priorityClassName: ""
  readinessProbe:
    httpGet:
      path: /
      port: http
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
  securityContext:
    runAsGroup: 65534
    runAsNonRoot: true
    runAsUser: 65534
  service:
    annotations: {}
    clusterPort: 9094
    nodePort: null
    port: 9093
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: null
  statefulSet:
    annotations:
      helm.sh/hook-weight: "4"
  tolerations: []
  topologySpreadConstraints: []
cert-manager:
  email: null
  enabled: false
  ingressClassName: nginx
  installCRDs: false
  letsencrypt: null
clickhouse:
  affinity: {}
  allowedNetworkIps:
  - 10.0.0.0/8
  - 100.64.0.0/10
  - 172.16.0.0/12
  - 192.0.0.0/24
  - 198.18.0.0/15
  - 192.168.0.0/16
  annotations: {}
  clickhouseOperator:
    affinity: {}
    configs:
      confdFiles: null
    env: []
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: altinity/clickhouse-operator
      tag: 0.21.2
    imagePullSecrets: []
    logger:
      console: 1
      count: 10
      level: information
      size: 1000M
    metricsExporter:
      env: []
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: altinity/metrics-exporter
        tag: 0.21.2
      name: metrics-exporter
      service:
        annotations: {}
        port: 8888
        type: ClusterIP
    name: operator
    nodeSelector: {}
    partLog:
      flushInterval: 7500
      ttl: 30
    podAnnotations:
      signoz.io/port: "8888"
      signoz.io/scrape: "true"
    podSecurityContext: {}
    priorityClassName: ""
    queryLog:
      flushInterval: 7500
      ttl: 30
    secret:
      create: true
      password: clickhouse_operator_password
      username: clickhouse_operator
    serviceAccount:
      annotations: {}
      create: true
      name: null
    tolerations: []
    topologySpreadConstraints: []
    traceLog:
      flushInterval: 7500
      ttl: 30
    version: 0.21.2
  cluster: cluster
  coldStorage:
    accessKey: <access_key_id>
    defaultKeepFreeSpaceBytes: "10485760"
    enabled: false
    endpoint: https://<bucket-name>.s3-<region>.amazonaws.com/data/
    role:
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::******:role/*****
      enabled: false
    secretAccess: <secret_access_key>
    type: s3
  database: signoz_metrics
  defaultProfiles:
    default/allow_experimental_window_functions: "1"
    default/allow_nondeterministic_mutations: "1"
  defaultSettings:
    format_schema_path: /etc/clickhouse-server/config.d/
    user_defined_executable_functions_config: /etc/clickhouse-server/functions/custom-functions.xml
    user_scripts_path: /var/lib/clickhouse/user_scripts/
  enabled: true
  externalZookeeper: {}
  files: {}
  fullnameOverride: ""
  global:
    cloud: other
    clusterDomain: cluster.local
    clusterName: ""
    imagePullSecrets: []
    imageRegistry: null
    storageClass: null
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: clickhouse/clickhouse-server
    tag: 23.11.1-alpine
  imagePullSecrets: []
  initContainers:
    enabled: true
    init:
      command:
      - /bin/sh
      - -c
      - |
        set -e
        until curl -s -o /dev/null http://signoz-clickhouse:8123/
        do sleep 1
        done
      enabled: false
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
    udf:
      enabled: true
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: alpine
        tag: 3.18.2
  installCustomStorageClass: false
  layout:
    replicasCount: 1
    shardsCount: 1
  name: clickhouse
  nameOverride: ""
  namespace: ""
  nodeSelector: {}
  password: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
  persistence:
    accessModes:
    - ReadWriteOnce
    enabled: true
    existingClaim: ""
    size: 500Gi
    storageClass: cbs-ssd
  podAnnotations:
    signoz.io/path: /metrics
    signoz.io/port: "9363"
    signoz.io/scrape: "true"
  podDistribution: []
  priorityClassName: ""
  profiles: {}
  replicasCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  secure: false
  securityContext:
    enabled: true
    fsGroup: 101
    runAsGroup: 101
    runAsUser: 101
  service:
    annotations: {}
    httpPort: 8123
    tcpPort: 9000
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: null
  settings:
    prometheus/endpoint: /metrics
    prometheus/port: 9363
  shardsCount: 1
  tolerations: []
  traceDatabase: signoz_traces
  user: admin
  verify: false
  zookeeper:
    affinity: {}
    args: []
    auth:
      client:
        clientPassword: ""
        clientUser: ""
        enabled: false
        existingSecret: ""
        serverPasswords: ""
        serverUsers: ""
      quorum:
        enabled: false
        existingSecret: ""
        learnerPassword: ""
        learnerUser: ""
        serverPasswords: ""
        serverUsers: ""
    autopurge:
      purgeInterval: 1
      snapRetainCount: 3
    clusterDomain: cluster.local
    command:
    - /scripts/setup.sh
    common:
      exampleValue: common-chart
      global:
        cloud: other
        clusterDomain: cluster.local
        clusterName: ""
        imagePullSecrets: []
        imageRegistry: null
        storageClass: null
    commonAnnotations: {}
    commonLabels: {}
    configuration: ""
    containerPorts:
      client: 2181
      election: 3888
      follower: 2888
      tls: 3181
    containerSecurityContext:
      allowPrivilegeEscalation: false
      enabled: true
      runAsNonRoot: true
      runAsUser: 1001
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    dataLogDir: ""
    diagnosticMode:
      args:
      - infinity
      command:
      - sleep
      enabled: false
    enabled: true
    existingConfigmap: ""
    extraDeploy: []
    extraEnvVars: []
    extraEnvVarsCM: ""
    extraEnvVarsSecret: ""
    extraVolumeMounts: []
    extraVolumes: []
    fourlwCommandsWhitelist: srvr, mntr, ruok
    fullnameOverride: ""
    global:
      cloud: other
      clusterDomain: cluster.local
      clusterName: ""
      imagePullSecrets: []
      imageRegistry: null
      storageClass: null
    heapSize: 1024
    hostAliases: []
    image:
      debug: false
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: null
      repository: bitnami/zookeeper
      tag: 3.7.1
    initContainers: []
    initLimit: 10
    jvmFlags: ""
    kubeVersion: ""
    lifecycleHooks: {}
    listenOnAllIPs: false
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      probeCommandTimeout: 2
      successThreshold: 1
      timeoutSeconds: 5
    logLevel: ERROR
    maxClientCnxns: 60
    maxSessionTimeout: 40000
    metrics:
      containerPort: 9141
      enabled: false
      prometheusRule:
        additionalLabels: {}
        enabled: false
        namespace: ""
        rules: []
      service:
        annotations:
          prometheus.io/path: /metrics
          prometheus.io/port: '{{ .Values.metrics.service.port }}'
          prometheus.io/scrape: "true"
        port: 9141
        type: ClusterIP
      serviceMonitor:
        additionalLabels: {}
        enabled: false
        honorLabels: false
        interval: ""
        jobLabel: ""
        metricRelabelings: []
        namespace: ""
        relabelings: []
        scrapeTimeout: ""
        selector: {}
    minServerId: 1
    nameOverride: ""
    namespaceOverride: ""
    networkPolicy:
      allowExternal: true
      enabled: false
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    pdb:
      create: false
      maxUnavailable: 1
      minAvailable: ""
    persistence:
      accessModes:
      - ReadWriteOnce
      annotations: {}
      dataLogDir:
        existingClaim: ""
        selector: {}
        size: 10Gi
      enabled: true
      existingClaim: ""
      labels: {}
      selector: {}
      size: 10Gi
      storageClass: null
    podAffinityPreset: ""
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podManagementPolicy: Parallel
    podSecurityContext:
      enabled: true
      fsGroup: 1001
    preAllocSize: 65536
    priorityClassName: ""
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      probeCommandTimeout: 2
      successThreshold: 1
      timeoutSeconds: 5
    replicaCount: 1
    resources:
      limits: {}
      requests:
        cpu: 100m
        memory: 256Mi
    schedulerName: ""
    service:
      annotations: {}
      clusterIP: ""
      disableBaseClientPort: false
      externalTrafficPolicy: Cluster
      extraPorts: []
      headless:
        annotations: {}
        publishNotReadyAddresses: true
        servicenameOverride: ""
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      nodePorts:
        client: ""
        tls: ""
      ports:
        client: 2181
        election: 3888
        follower: 2888
        tls: 3181
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    serviceAccount:
      annotations: {}
      automountServiceAccountToken: true
      create: false
      name: ""
    sidecars: []
    snapCount: 100000
    startupProbe:
      enabled: false
      failureThreshold: 15
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    syncLimit: 5
    tickTime: 2000
    tls:
      client:
        auth: none
        autoGenerated: false
        enabled: false
        existingSecret: ""
        existingSecretKeystoreKey: ""
        existingSecretTruststoreKey: ""
        keystorePassword: ""
        keystorePath: /opt/bitnami/zookeeper/config/certs/client/zookeeper.keystore.jks
        passwordsSecretKeystoreKey: ""
        passwordsSecretName: ""
        passwordsSecretTruststoreKey: ""
        truststorePassword: ""
        truststorePath: /opt/bitnami/zookeeper/config/certs/client/zookeeper.truststore.jks
      quorum:
        auth: none
        autoGenerated: false
        enabled: false
        existingSecret: ""
        existingSecretKeystoreKey: ""
        existingSecretTruststoreKey: ""
        keystorePassword: ""
        keystorePath: /opt/bitnami/zookeeper/config/certs/quorum/zookeeper.keystore.jks
        passwordsSecretKeystoreKey: ""
        passwordsSecretName: ""
        passwordsSecretTruststoreKey: ""
        truststorePassword: ""
        truststorePath: /opt/bitnami/zookeeper/config/certs/quorum/zookeeper.truststore.jks
      resources:
        limits: {}
        requests: {}
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      rollingUpdate: {}
      type: RollingUpdate
    volumePermissions:
      containerSecurityContext:
        enabled: true
        runAsUser: 0
      enabled: false
      image:
        digest: ""
        pullPolicy: IfNotPresent
        pullSecrets: []
        registry: docker.io
        repository: bitnami/bitnami-shell
        tag: 11-debian-11-r118
      resources:
        limits: {}
        requests: {}
clusterName: ""
externalClickhouse:
  cluster: cluster
  database: signoz_metrics
  existingSecret: null
  existingSecretPasswordKey: null
  host: null
  httpPort: 8123
  password: ""
  secure: false
  tcpPort: 9000
  traceDatabase: signoz_traces
  user: ""
  verify: false
frontend:
  affinity: {}
  annotations:
    helm.sh/hook-weight: "5"
  autoscaling:
    autoscalingTemplate: []
    behavior: {}
    enabled: false
    keda:
      cooldownPeriod: "300"
      enabled: false
      maxReplicaCount: "5"
      minReplicaCount: "1"
      pollingInterval: "30"
      triggers:
      - metadata:
          type: Utilization
          value: "80"
        type: memory
      - metadata:
          type: Utilization
          value: "80"
        type: cpu
    maxReplicas: 11
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  configVars: {}
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: signoz/frontend
    tag: 0.36.0
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: frontend.domain.com
      paths:
      - path: /
        pathType: ImplementationSpecific
        port: 3301
    tls: []
  initContainers:
    init:
      command:
        delay: 5
        doneMessage: query-service ready, starting frontend now
        endpoint: /api/v1/health?live=1
        waitMessage: waiting for query-service
      enabled: true
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
  name: frontend
  nginxExtraConfig: |
    client_max_body_size 24M;
    large_client_header_buffers 8 16k;
  nodeSelector: {}
  podSecurityContext: {}
  priorityClassName: ""
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
  securityContext: {}
  service:
    annotations: {}
    port: 3301
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: null
  tolerations: []
  topologySpreadConstraints: []
fullnameOverride: ""
global:
  cloud: other
  clusterDomain: cluster.local
  clusterName: ""
  imagePullSecrets: []
  imageRegistry: null
  storageClass: cbs-hp
imagePullSecrets: []
ingress-nginx:
  enabled: false
k8s-infra:
  clusterName: ""
  enabled: true
  fullnameOverride: ""
  global:
    cloud: other
    clusterDomain: cluster.local
    clusterName: ""
    imagePullSecrets: []
    imageRegistry: null
    storageClass: null
  insecureSkipVerify: false
  nameOverride: ""
  namespace: ""
  otelAgent:
    additionalEnvs: {}
    affinity: {}
    annotations: {}
    clusterRole:
      annotations: {}
      clusterRoleBinding:
        annotations: {}
        name: ""
      create: true
      name: ""
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        - nodes
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - apps
        resources:
        - replicasets
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - extensions
        resources:
        - replicasets
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - ""
        resources:
        - nodes
        - endpoints
        verbs:
        - list
        - watch
      - apiGroups:
        - batch
        resources:
        - jobs
        verbs:
        - list
        - watch
      - apiGroups:
        - ""
        resources:
        - nodes/proxy
        verbs:
        - get
      - apiGroups:
        - ""
        resources:
        - nodes/stats
        - configmaps
        - events
        verbs:
        - create
        - get
      - apiGroups:
        - ""
        resourceNames:
        - otel-container-insight-clusterleader
        resources:
        - configmaps
        verbs:
        - get
        - update
    command:
      extraArgs: []
      name: /otelcol-contrib
    config:
      exporters: {}
      extensions:
        health_check:
          endpoint: 0.0.0.0:13133
        pprof:
          endpoint: localhost:1777
        zpages:
          endpoint: localhost:55679
      processors:
        batch:
          send_batch_size: 10000
          timeout: 200ms
        memory_limiter: null
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
              max_recv_msg_size_mib: 4
            http:
              endpoint: 0.0.0.0:4318
      service:
        extensions:
        - health_check
        - zpages
        - pprof
        pipelines:
          logs:
            exporters: []
            processors:
            - batch
            receivers:
            - otlp
          metrics:
            exporters: []
            processors:
            - batch
            receivers:
            - otlp
          metrics/internal:
            exporters: []
            processors:
            - batch
            receivers: []
          traces:
            exporters: []
            processors:
            - batch
            receivers:
            - otlp
        telemetry:
          metrics:
            address: 0.0.0.0:8888
    configMap:
      create: true
    customLivenessProbe: {}
    customReadinessProbe: {}
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: otel/opentelemetry-collector-contrib
      tag: 0.88.0
    imagePullSecrets: []
    ingress:
      annotations: {}
      className: ""
      enabled: false
      hosts:
      - host: otel-agent.domain.com
        paths:
        - path: /
          pathType: ImplementationSpecific
          port: 4317
      tls: []
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 10
      path: /
      periodSeconds: 10
      port: 13133
      successThreshold: 1
      timeoutSeconds: 5
    minReadySeconds: 5
    name: otel-agent
    nodeSelector: {}
    podAnnotations:
      signoz.io/path: /metrics
      signoz.io/port: "8888"
      signoz.io/scrape: "true"
    podSecurityContext: {}
    ports:
      health-check:
        containerPort: 13133
        enabled: true
        hostPort: 13133
        nodePort: ""
        protocol: TCP
        servicePort: 13133
      metrics:
        containerPort: 8888
        enabled: true
        hostPort: 8888
        nodePort: ""
        protocol: TCP
        servicePort: 8888
      otlp:
        containerPort: 4317
        enabled: true
        hostPort: 4317
        nodePort: ""
        protocol: TCP
        servicePort: 4317
      otlp-http:
        containerPort: 4318
        enabled: true
        hostPort: 4318
        nodePort: ""
        protocol: TCP
        servicePort: 4318
      pprof:
        containerPort: 1777
        enabled: false
        hostPort: 1777
        nodePort: ""
        protocol: TCP
        servicePort: 1777
      zipkin:
        containerPort: 9411
        enabled: false
        hostPort: 9411
        nodePort: ""
        protocol: TCP
        servicePort: 9411
      zpages:
        containerPort: 55679
        enabled: false
        hostPort: 55679
        nodePort: ""
        protocol: TCP
        servicePort: 55679
    priorityClassName: ""
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 10
      path: /
      periodSeconds: 10
      port: 13133
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    securityContext: {}
    service:
      annotations: {}
      type: ClusterIP
    serviceAccount:
      annotations: {}
      create: true
      name: null
    tolerations: []
  otelCollectorEndpoint: null
  otelDeployment:
    additionalEnvs: {}
    affinity: {}
    annotations: {}
    clusterRole:
      annotations: {}
      clusterRoleBinding:
        annotations: {}
        name: ""
      create: true
      name: ""
      rules:
      - apiGroups:
        - ""
        resources:
        - events
        - namespaces
        - namespaces/status
        - nodes
        - nodes/spec
        - pods
        - pods/status
        - replicationcontrollers
        - replicationcontrollers/status
        - resourcequotas
        - services
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - apps
        resources:
        - daemonsets
        - deployments
        - replicasets
        - statefulsets
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - extensions
        resources:
        - daemonsets
        - deployments
        - replicasets
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - batch
        resources:
        - jobs
        - cronjobs
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - autoscaling
        resources:
        - horizontalpodautoscalers
        verbs:
        - get
        - list
        - watch
    command:
      extraArgs: []
      name: /otelcol-contrib
    config:
      exporters: {}
      extensions:
        health_check:
          endpoint: 0.0.0.0:13133
        pprof:
          endpoint: localhost:1777
        zpages:
          endpoint: localhost:55679
      processors:
        batch:
          send_batch_size: 10000
          timeout: 1s
        memory_limiter: null
      receivers: {}
      service:
        extensions:
        - health_check
        - zpages
        - pprof
        pipelines:
          metrics/internal:
            exporters: []
            processors:
            - batch
            receivers: []
        telemetry:
          metrics:
            address: 0.0.0.0:8888
    configMap:
      create: true
    customLivenessProbe: {}
    customReadinessProbe: {}
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: otel/opentelemetry-collector-contrib
      tag: 0.88.0
    imagePullSecrets: []
    ingress:
      annotations: {}
      className: ""
      enabled: false
      hosts:
      - host: otel-deployment.domain.com
        paths:
        - path: /
          pathType: ImplementationSpecific
          port: 13133
      tls: []
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 10
      path: /
      periodSeconds: 10
      port: 13133
      successThreshold: 1
      timeoutSeconds: 5
    minReadySeconds: 5
    name: otel-deployment
    nodeSelector: {}
    podAnnotations:
      signoz.io/path: /metrics
      signoz.io/port: "8888"
      signoz.io/scrape: "true"
    podSecurityContext: {}
    ports:
      health-check:
        containerPort: 13133
        enabled: true
        nodePort: ""
        protocol: TCP
        servicePort: 13133
      metrics:
        containerPort: 8888
        enabled: false
        nodePort: ""
        protocol: TCP
        servicePort: 8888
      pprof:
        containerPort: 1777
        enabled: false
        nodePort: ""
        protocol: TCP
        servicePort: 1777
      zpages:
        containerPort: 55679
        enabled: false
        nodePort: ""
        protocol: TCP
        servicePort: 55679
    priorityClassName: ""
    progressDeadlineSeconds: 120
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 10
      path: /
      periodSeconds: 10
      port: 13133
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    securityContext: {}
    service:
      annotations: {}
      type: ClusterIP
    serviceAccount:
      annotations: {}
      create: true
      name: null
    tolerations: []
  otelInsecure: true
  otelTlsSecrets:
    ca: ""
    certificate: |
      <INCLUDE_CERTIFICATE_HERE>
    enabled: false
    existingSecretName: null
    key: |
      <INCLUDE_PRIVATE_KEY_HERE>
    path: /secrets
  presets:
    clusterMetrics:
      allocatableTypesToReport:
      - cpu
      - memory
      collectionInterval: 30s
      enabled: true
      nodeConditionsToReport:
      - Ready
      - MemoryPressure
    hostMetrics:
      collectionInterval: 30s
      enabled: true
      scrapers:
        cpu: {}
        disk: {}
        filesystem: {}
        load: {}
        memory: {}
        network: {}
    kubeletMetrics:
      authType: serviceAccount
      collectionInterval: 30s
      enabled: true
      endpoint: ${K8S_NODE_NAME}:10250
      extraMetadataLabels:
      - container.id
      - k8s.volume.type
      insecureSkipVerify: true
      metricGroups:
      - container
      - pod
      - node
      - volume
    kubernetesAttributes:
      enabled: true
      extractMetadatas:
      - k8s.namespace.name
      - k8s.pod.name
      - k8s.pod.uid
      - k8s.pod.start_time
      - k8s.deployment.name
      - k8s.node.name
      filter:
        node_from_env_var: K8S_NODE_NAME
      passthrough: false
      podAssociation:
      - sources:
        - from: resource_attribute
          name: k8s.pod.ip
      - sources:
        - from: resource_attribute
          name: k8s.pod.uid
      - sources:
        - from: connection
    loggingExporter:
      enabled: false
      samplingInitial: 2
      samplingThereafter: 500
      verbosity: basic
    logsCollection:
      blacklist:
        additionalExclude: []
        containers: []
        enabled: true
        namespaces:
        - kube-system
        pods:
        - hotrod
        - locust
        signozLogs: false
      enabled: true
      include:
      - /var/log/pods/*/*/*.log
      includeFileName: false
      includeFilePath: true
      operators:
      - id: get-format
        routes:
        - expr: body matches "^\\{"
          output: parser-docker
        - expr: body matches "^[^ Z]+ "
          output: parser-crio
        - expr: body matches "^[^ Z]+Z"
          output: parser-containerd
        type: router
      - id: parser-crio
        output: extract_metadata_from_filepath
        regex: ^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
        timestamp:
          layout: "2006-01-02T15:04:05.000000000-07:00"
          layout_type: gotime
          parse_from: attributes.time
        type: regex_parser
      - id: parser-containerd
        output: extract_metadata_from_filepath
        regex: ^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
        timestamp:
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          parse_from: attributes.time
        type: regex_parser
      - id: parser-docker
        output: extract_metadata_from_filepath
        timestamp:
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          parse_from: attributes.time
        type: json_parser
      - id: extract_metadata_from_filepath
        output: add_cluster_name
        parse_from: attributes["log.file.path"]
        regex: ^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]+)\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$
        type: regex_parser
      - field: resource["k8s.cluster.name"]
        id: add_cluster_name
        output: move_stream
        type: add
        value: EXPR(env("K8S_CLUSTER_NAME"))
      - from: attributes.stream
        id: move_stream
        output: move_container_name
        to: attributes["log.iostream"]
        type: move
      - from: attributes.container_name
        id: move_container_name
        output: move_namespace
        to: resource["k8s.container.name"]
        type: move
      - from: attributes.namespace
        id: move_namespace
        output: move_pod_name
        to: resource["k8s.namespace.name"]
        type: move
      - from: attributes.pod_name
        id: move_pod_name
        output: move_restart_count
        to: resource["k8s.pod.name"]
        type: move
      - from: attributes.restart_count
        id: move_restart_count
        output: move_uid
        to: resource["k8s.container.restart_count"]
        type: move
      - from: attributes.uid
        id: move_uid
        output: move_log
        to: resource["k8s.pod.uid"]
        type: move
      - from: attributes.log
        id: move_log
        to: body
        type: move
      startAt: beginning
      whitelist:
        additionalInclude: []
        containers: []
        enabled: false
        namespaces: []
        pods: []
        signozLogs: true
    otlpExporter:
      enabled: true
    resourceDetection:
      detectors:
      - system
      enabled: true
      envResourceAttributes: ""
      override: true
      systemHostnameSources:
      - dns
      - os
      timeout: 2s
    resourceDetectionInternal:
      enabled: true
      override: true
      timeout: 2s
  signozApiKey: ""
keycloak:
  auth:
    adminPassword: adminpass123
    adminUser: admin
  enabled: false
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    enabled: false
    hostname: keycloak.domain.com
    ingressClassName: nginx
    pathType: Prefix
    selfSigned: false
    servicePort: http
    tls: true
  postgresql:
    auth:
      database: bitnami_keycloak
      existingSecret: ""
      password: bn_keycloak@123
      postgresPassword: pgadminpass123
      username: bn_keycloak
  service:
    type: ClusterIP
minio:
  drivesPerNode: 1
  enabled: false
  persistence:
    VolumeName: ""
    accessMode: ReadWriteOnce
    annotations: {}
    enabled: true
    existingClaim: ""
    size: 50Gi
    storageClass: ""
  pools: 1
  replicas: 1
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  rootPassword: rootpass123
  rootUser: rootuser
nameOverride: ""
otelCollector:
  additionalEnvs: {}
  affinity: {}
  annotations:
    helm.sh/hook-weight: "3"
  autoscaling:
    autoscalingTemplate: []
    behavior: {}
    enabled: false
    keda:
      cooldownPeriod: "300"
      enabled: false
      maxReplicaCount: "5"
      minReplicaCount: "1"
      pollingInterval: "30"
      triggers:
      - metadata:
          type: Utilization
          value: "80"
        type: memory
      - metadata:
          type: Utilization
          value: "80"
        type: cpu
    maxReplicas: 11
    minReplicas: 1
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  clusterRole:
    annotations: {}
    clusterRoleBinding:
      annotations: {}
      name: ""
    create: true
    name: ""
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - namespaces
      - nodes
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - apps
      resources:
      - replicasets
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - extensions
      resources:
      - replicasets
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - batch
      resources:
      - jobs
      verbs:
      - get
      - list
      - watch
  command:
    extraArgs:
    - --feature-gates=-pkg.translator.prometheus.NormalizeName
    name: /signoz-collector
  config:
    exporters:
      clickhouselogsexporter:
        dsn: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
        timeout: 10s
      clickhousemetricswrite:
        endpoint: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
        resource_to_telemetry_conversion:
          enabled: true
      clickhousetraces:
        datasource: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_TRACE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
        low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
      prometheus:
        endpoint: 0.0.0.0:8889
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: localhost:1777
      zpages:
        endpoint: localhost:55679
    processors:
      batch:
        send_batch_size: 50000
        timeout: 1s
      k8sattributes:
        extract:
          metadata:
          - k8s.namespace.name
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.pod.start_time
          - k8s.deployment.name
          - k8s.node.name
        filter:
          node_from_env_var: K8S_NODE_NAME
        passthrough: false
        pod_association:
        - sources:
          - from: resource_attribute
            name: k8s.pod.ip
        - sources:
          - from: resource_attribute
            name: k8s.pod.uid
        - sources:
          - from: connection
      memory_limiter: null
      resourcedetection:
        detectors:
        - env
        - system
        system:
          hostname_sources:
          - dns
          - os
        timeout: 2s
      signozspanmetrics/cumulative:
        dimensions:
        - default: default
          name: service.namespace
        - default: default
          name: deployment.environment
        - name: signoz.collector.id
        dimensions_cache_size: 100000
        latency_histogram_buckets:
        - 100us
        - 1ms
        - 2ms
        - 6ms
        - 10ms
        - 50ms
        - 100ms
        - 250ms
        - 500ms
        - 1000ms
        - 1400ms
        - 2000ms
        - 5s
        - 10s
        - 20s
        - 40s
        - 60s
        metrics_exporter: clickhousemetricswrite
      signozspanmetrics/delta:
        aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
        dimensions:
        - default: default
          name: service.namespace
        - default: default
          name: deployment.environment
        - name: signoz.collector.id
        dimensions_cache_size: 100000
        latency_histogram_buckets:
        - 100us
        - 1ms
        - 2ms
        - 6ms
        - 10ms
        - 50ms
        - 100ms
        - 250ms
        - 500ms
        - 1000ms
        - 1400ms
        - 2000ms
        - 5s
        - 10s
        - 20s
        - 40s
        - 60s
        metrics_exporter: clickhousemetricswrite
    receivers:
      hostmetrics:
        collection_interval: 30s
        scrapers:
          cpu: {}
          disk: {}
          filesystem: {}
          load: {}
          memory: {}
          network: {}
      httplogreceiver/heroku:
        endpoint: 0.0.0.0:8081
        source: heroku
      httplogreceiver/json:
        endpoint: 0.0.0.0:8082
        source: json
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
            max_recv_msg_size_mib: 16
          http:
            endpoint: 0.0.0.0:4318
      otlp/spanmetrics:
        protocols:
          grpc:
            endpoint: localhost:12345
    service:
      extensions:
      - health_check
      - zpages
      - pprof
      pipelines:
        logs:
          exporters:
          - clickhouselogsexporter
          processors:
          - batch
          receivers:
          - otlp
          - httplogreceiver/heroku
          - httplogreceiver/json
        metrics:
          exporters:
          - clickhousemetricswrite
          processors:
          - batch
          receivers:
          - otlp
        metrics/internal:
          exporters:
          - clickhousemetricswrite
          processors:
          - resourcedetection
          - k8sattributes
          - batch
          receivers:
          - hostmetrics
        traces:
          exporters:
          - clickhousetraces
          processors:
          - signozspanmetrics/cumulative
          - signozspanmetrics/delta
          - batch
          receivers:
          - otlp
          - jaeger
      telemetry:
        metrics:
          address: 0.0.0.0:8888
  configMap:
    create: true
  customLivenessProbe: {}
  customReadinessProbe: {}
  extraVolumeMounts: []
  extraVolumes: []
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: signoz/signoz-otel-collector
    tag: 0.88.4
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: otelcollector.domain.com
      paths:
      - path: /
        pathType: ImplementationSpecific
        port: 4318
    tls: []
  initContainers:
    init:
      command:
        delay: 5
        doneMessage: clickhouse ready, starting otel collector now
        endpoint: /ping
        waitMessage: waiting for clickhouseDB
      enabled: false
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    path: /
    periodSeconds: 10
    port: 13133
    successThreshold: 1
    timeoutSeconds: 5
  lowCardinalityExceptionGrouping: false
  minReadySeconds: 5
  name: otel-collector
  nodeSelector: {}
  podAnnotations:
    signoz.io/port: "8888"
    signoz.io/scrape: "true"
  podLabels: {}
  podSecurityContext: {}
  ports:
    jaeger-compact:
      containerPort: 6831
      enabled: false
      nodePort: ""
      protocol: UDP
      servicePort: 6831
    jaeger-grpc:
      containerPort: 14250
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 14250
    jaeger-thrift:
      containerPort: 14268
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 14268
    logsheroku:
      containerPort: 8081
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 8081
    logsjson:
      containerPort: 8082
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 8082
    metrics:
      containerPort: 8888
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 8888
    otlp:
      containerPort: 4317
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 4317
    otlp-http:
      containerPort: 4318
      enabled: true
      nodePort: ""
      protocol: TCP
      servicePort: 4318
    pprof:
      containerPort: 1777
      enabled: false
      nodePort: ""
      protocol: TCP
      servicePort: 1777
    prometheus:
      containerPort: 8889
      enabled: false
      nodePort: ""
      protocol: TCP
      servicePort: 8889
    zipkin:
      containerPort: 9411
      enabled: false
      nodePort: ""
      protocol: TCP
      servicePort: 9411
    zpages:
      containerPort: 55679
      enabled: false
      nodePort: ""
      protocol: TCP
      servicePort: 55679
  priorityClassName: ""
  progressDeadlineSeconds: 120
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    path: /
    periodSeconds: 10
    port: 13133
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  securityContext: {}
  service:
    annotations: {}
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: null
  tolerations: []
  topologySpreadConstraints:
  - labelSelector:
      matchLabels:
        app.kubernetes.io/component: otel-collector
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
otelCollectorMetrics:
  additionalEnvs: {}
  affinity: {}
  annotations:
    helm.sh/hook-weight: "3"
  clusterRole:
    annotations: {}
    clusterRoleBinding:
      annotations: {}
      name: ""
    create: true
    name: ""
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - namespaces
      - nodes
      verbs:
      - get
      - watch
      - list
    - apiGroups:
      - batch
      resources:
      - jobs
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - apps
      resources:
      - replicasets
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - extensions
      resources:
      - replicasets
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes
      - nodes/proxy
      - services
      - endpoints
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - extensions
      resources:
      - ingresses
      verbs:
      - get
      - list
      - watch
    - nonResourceURLs:
      - /metrics
      verbs:
      - get
  command:
    extraArgs:
    - --feature-gates=-pkg.translator.prometheus.NormalizeName
    name: /signoz-collector
  config:
    exporters:
      clickhousemetricswrite:
        endpoint: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
      clickhousemetricswrite/hostmetrics:
        endpoint: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
        resource_to_telemetry_conversion:
          enabled: true
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: localhost:1777
      zpages:
        endpoint: localhost:55679
    processors:
      batch:
        send_batch_size: 10000
        timeout: 1s
      k8sattributes/hostmetrics:
        extract:
          metadata:
          - k8s.namespace.name
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.pod.start_time
          - k8s.deployment.name
          - k8s.node.name
        filter:
          node_from_env_var: K8S_NODE_NAME
        passthrough: false
        pod_association:
        - sources:
          - from: resource_attribute
            name: k8s.pod.ip
        - sources:
          - from: resource_attribute
            name: k8s.pod.uid
        - sources:
          - from: connection
      memory_limiter: null
      resourcedetection:
        detectors:
        - env
        - system
        system:
          hostname_sources:
          - dns
          - os
        timeout: 2s
    receivers:
      hostmetrics:
        collection_interval: 30s
        scrapers:
          cpu: {}
          disk: {}
          filesystem: {}
          load: {}
          memory: {}
          network: {}
      prometheus:
        config:
          scrape_configs:
          - job_name: generic-collector
            kubernetes_sd_configs:
            - role: pod
            relabel_configs:
            - action: keep
              regex: true
              source_labels:
              - __meta_kubernetes_pod_annotation_signoz_io_scrape
            - action: replace
              regex: (.+)
              source_labels:
              - __meta_kubernetes_pod_annotation_signoz_io_path
              target_label: __metrics_path__
            - action: replace
              separator: ':'
              source_labels:
              - __meta_kubernetes_pod_ip
              - __meta_kubernetes_pod_annotation_signoz_io_port
              target_label: __address__
            - replacement: generic-collector
              target_label: job_name
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_name
              target_label: signoz_k8s_name
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_instance
              target_label: signoz_k8s_instance
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_component
              target_label: signoz_k8s_component
            - action: replace
              source_labels:
              - __meta_kubernetes_namespace
              target_label: k8s_namespace_name
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_name
              target_label: k8s_pod_name
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_uid
              target_label: k8s_pod_uid
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_container_name
              target_label: k8s_container_name
            - action: drop
              regex: (.+)-init
              source_labels:
              - __meta_kubernetes_pod_container_name
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_node_name
              target_label: k8s_node_name
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_ready
              target_label: k8s_pod_ready
            - action: replace
              source_labels:
              - __meta_kubernetes_pod_phase
              target_label: k8s_pod_phase
            scrape_interval: 60s
    service:
      extensions:
      - health_check
      - zpages
      - pprof
      pipelines:
        metrics:
          exporters:
          - clickhousemetricswrite
          processors:
          - batch
          receivers:
          - prometheus
        metrics/hostmetrics:
          exporters:
          - clickhousemetricswrite/hostmetrics
          processors:
          - resourcedetection
          - k8sattributes/hostmetrics
          - batch
          receivers:
          - hostmetrics
      telemetry:
        metrics:
          address: 0.0.0.0:8888
  configMap:
    create: true
  customLivenessProbe: {}
  customReadinessProbe: {}
  extraVolumeMounts: []
  extraVolumes: []
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: signoz/signoz-otel-collector
    tag: 0.88.4
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: otelcollector-metrics.domain.com
      paths:
      - path: /
        pathType: ImplementationSpecific
        port: 13133
    tls: []
  initContainers:
    init:
      command:
        delay: 5
        doneMessage: clickhouse ready, starting otel collector metrics now
        endpoint: /ping
        waitMessage: waiting for clickhouseDB
      enabled: false
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    path: /
    periodSeconds: 10
    port: 13133
    successThreshold: 1
    timeoutSeconds: 5
  minReadySeconds: 5
  name: otel-collector-metrics
  nodeSelector: {}
  podAnnotations:
    signoz.io/port: "8888"
    signoz.io/scrape: "true"
  podSecurityContext: {}
  ports:
    health-check:
      containerPort: 13133
      enabled: true
      protocol: TCP
      servicePort: 13133
    metrics:
      containerPort: 8888
      enabled: false
      protocol: TCP
      servicePort: 8888
    pprof:
      containerPort: 1777
      enabled: false
      protocol: TCP
      servicePort: 1777
    zpages:
      containerPort: 55679
      enabled: false
      protocol: TCP
      servicePort: 55679
  priorityClassName: ""
  progressDeadlineSeconds: 120
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    path: /
    periodSeconds: 10
    port: 13133
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
  securityContext: {}
  service:
    annotations: {}
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: null
  tolerations: []
  topologySpreadConstraints: []
queryService:
  additionalArgs: []
  additionalEnvs: {}
  affinity: {}
  annotations:
    helm.sh/hook-weight: "2"
  configVars:
    deploymentType: kubernetes-helm
    goDebug: netdns=go
    storage: clickhouse
    telemetryEnabled: true
  customLivenessProbe: {}
  customReadinessProbe: {}
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: signoz/query-service
    tag: 0.36.0
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: query-service.domain.com
      paths:
      - path: /
        pathType: ImplementationSpecific
        port: 8080
    tls: []
  initContainers:
    init:
      command:
        delay: 5
        doneMessage: clickhouse ready, starting query service now
        endpoint: /ping
        waitMessage: waiting for clickhouseDB
      enabled: true
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
    migration:
      command:
      - sh
      - -c
      - |
        echo "Running migration"
        sleep 10  # Replace with actual migration command
        echo "Migration completed"
      enabled: false
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    path: /api/v1/health
    periodSeconds: 10
    port: http
    successThreshold: 1
    timeoutSeconds: 5
  name: query-service
  nodeSelector: {}
  persistence:
    accessModes:
    - ReadWriteOnce
    enabled: true
    existingClaim: ""
    size: 10Gi
    storageClass: null
  podSecurityContext: {}
  priorityClassName: ""
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    path: /api/v1/health?live=1
    periodSeconds: 10
    port: http
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
  securityContext: {}
  service:
    annotations: {}
    internalNodePort: null
    internalPort: 8085
    nodePort: null
    opampPort: 4320
    port: 8080
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: null
  tolerations: []
  topologySpreadConstraints: []
schemaMigrator:
  annotations:
    helm.sh/hook: post-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"
  args: {}
  enabled: true
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: signoz/signoz-schema-migrator
    tag: 0.88.4
  initContainers:
    init:
      command:
        delay: 5
        doneMessage: clickhouse ready, starting schema migrator now
        endpoint: /ping
        waitMessage: waiting for clickhouseDB
      enabled: true
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: busybox
        tag: 1.35
      resources: {}
    wait:
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: groundnuty/k8s-wait-for
        tag: v2.0
  name: schema-migrator
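As an aside, the generic-collector Prometheus job in the rendered config above only scrapes pods that opt in through the signoz.io annotations. A minimal sketch of the metadata such a scrape target would carry (the port and path values here are examples, not chart defaults):

    metadata:
      annotations:
        signoz.io/scrape: "true"
        signoz.io/port: "8888"     # example: the port serving the metrics endpoint
        signoz.io/path: "/metrics" # example: optional, relabeling only overrides the path when set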

[screenshot: the ClickHouse log table]
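To confirm whether any log rows are reaching ClickHouse at all, a quick count can be run against the logs table (a minimal check, assuming the default signoz_logs.logs table; add --user/--password if your ClickHouse auth requires them):

    k exec -n platform chi-signoz-clickhouse-cluster-0-0-0 -- \
      clickhouse-client --query "SELECT count() FROM signoz_logs.logs"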

[screenshot: the cluster host directory]
@An0nymous0 (Author) commented:

k describe pods -n platform signoz-k8s-infra-otel-agent-sr2b4

Volumes:
  otel-agent-config-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      signoz-k8s-infra-otel-agent
    Optional:  false
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:  
  kube-api-access-bgt48:

The only difference I found is that my cluster uses containerd, so its root directory is not /var/lib/docker/containers (the path the agent mounts above).
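If that is the cause, one possible workaround is to mount the actual runtime log root into the agent so the symlinks under /var/log/containers resolve. A rough values.yaml sketch, assuming the k8s-infra sub-chart exposes the same extraVolumes/extraVolumeMounts keys as the collector sections above, and using /data/containerd purely as a placeholder for the real containerd root:

    k8s-infra:
      otelAgent:
        extraVolumes:
          - name: containerd-root
            hostPath:
              path: /data/containerd      # placeholder: replace with the actual containerd root on the node
        extraVolumeMounts:
          - name: containerd-root
            mountPath: /data/containerd   # same path inside the container so host-side log symlinks resolve
            readOnly: true

Keeping the mountPath identical to the host path matters because the filelog receiver follows symlinks that embed the host-side location.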
