
Grafana not reachable after setup #2417

Open
nandokawka opened this issue May 8, 2024 · 0 comments

What happened?

I followed the installation guide, but the Grafana pod is not reachable after installation:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  13m                default-scheduler  Successfully assigned monitoring/grafana-79cc6f7f6b-psdtb to lsk3smaster03
  Normal   Pulled     13m                kubelet            Container image "grafana/grafana:10.4.1" already present on machine
  Normal   Created    13m                kubelet            Created container grafana
  Normal   Started    13m                kubelet            Started container grafana
  Warning  Unhealthy  13m (x3 over 13m)  kubelet            Readiness probe failed: Get "http://10.42.1.103:3000/api/health": dial tcp 10.42.1.103:3000: connect: connection refused
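
For reference, one way to test the health endpoint directly, bypassing the readiness probe (pod name taken from the events above; these commands are only a sketch, not output from my cluster):

kubectl -n monitoring port-forward pod/grafana-79cc6f7f6b-psdtb 3000:3000
# in a second terminal
curl http://localhost:3000/api/health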

The logs show the following:

k logs -n monitoring grafana-79cc6f7f6b-psdtb | grep error
logger=provisioning.plugins t=2024-05-08T15:06:53.689927211Z level=error msg="Failed to read plugin provisioning files from directory" path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
logger=provisioning.notifiers t=2024-05-08T15:06:53.689977304Z level=error msg="Can't read alert notification provisioning files from directory" path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
logger=provisioning.alerting t=2024-05-08T15:06:53.690017989Z level=error msg="can't read alerting provisioning files from directory" path=/etc/grafana/provisioning/alerting error="open /etc/grafana/provisioning/alerting: no such file or directory"
logger=folder-service t=2024-05-08T15:06:53.98012215Z level=warn msg="failed to parse user ID" namespaceID=user userID=0 error="identifier is not initialized"
logger=sqlstore.transactions t=2024-05-08T15:06:54.973313353Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
logger=sqlstore.transactions t=2024-05-08T15:06:55.273191247Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
logger=sqlstore.transactions t=2024-05-08T15:06:55.284252344Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
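
The repeated "database is locked" messages come from Grafana's embedded SQLite database. A quick way to check whether more than one Grafana replica is running against the same storage (deployment and namespace names assumed from the logs above) is:

kubectl -n monitoring get deploy grafana -o jsonpath='{.spec.replicas}'
kubectl -n monitoring get deploy grafana -o jsonpath='{.spec.template.spec.volumes}'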

Did you expect to see something different?

I expected to be able to access the Grafana dashboard.

How to reproduce it (as minimally and precisely as possible):

Followed the quick start guide.

Environment

k3s cluster with 3 Nodes on RPi 5

  • Prometheus Operator version:

    quay.io/prometheus-operator/prometheus-operator:v0.73.1

  • Kubernetes version information:

    Client Version: v1.28.6+k3s2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.28.6+k3s2

  • Kubernetes cluster kind:

    Followed https://docs.k3s.io/quick-start

  • Manifests:

    Insert manifests relevant to the issue

  • Prometheus Operator Logs:

    Insert Prometheus Operator logs relevant to the issue here

  • Prometheus Logs:

    Insert Prometheus logs relevant to the issue here

Anything else we need to know?:
