I am testing whether Prometheus could replace our Carbon TSDB.
So far it seems adequate, but I'm not familiar with Prometheus queries and have run into a strange problem.
First, I set the scrape interval to one second in the Grafana data source settings, because that is the shortest interval we use.
Second, when querying with last_over_time(foo[$__interval]) I get inconsistent data points.
TL;DR: The problem is that each data point is followed by a duplicate data point one second after it, as you can see in the screenshot. If I set a longer scrape interval, the duplicates move further apart.
The top panel is the query from Carbon: groupByNodes(removeEmptySeries(foo.bar.info.http_server_requests.*.GET.200.SUCCESS.None.value_mean), 'average', 2, 4, 5)
And the bottom one is from Mimir: avg(last_over_time(graphite_untagged{__n000__="foo", __n001__="bar", __n002__="info", __n003__="http_server_requests", __n005__="GET", __n006__="200", __n007__="SUCCESS", __n008__="None", __n009__="value_mean"}[$__interval])) by(__n002__, __n004__, __n005__)
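If it helps, here is my (possibly wrong) mental model of what is happening, as a small Python sketch: if the last_over_time window is wider than the query step, the same scraped sample is returned at two adjacent evaluation timestamps, which would explain the duplicated points. The sample timestamps and the 2s window below are made up for illustration, not taken from my actual setup:

```python
# Hypothetical scrapes every 5 seconds: timestamp -> value
samples = {0: 10.0, 5: 11.0, 10: 12.0}

def last_over_time(ts, window):
    """Return the most recent sample in (ts - window, ts], or None.

    This mimics how a PromQL range query evaluates
    last_over_time(foo[window]) at a single step timestamp.
    """
    in_range = [t for t in samples if ts - window < t <= ts]
    return samples[max(in_range)] if in_range else None

# Evaluate with step = 1s and window = 2s: every underlying sample
# shows up at two consecutive evaluation timestamps one second apart,
# which looks exactly like the duplicated points in my screenshot.
points = {ts: last_over_time(ts, 2) for ts in range(0, 12)}
```

With these numbers, the sample scraped at t=0 is returned at both t=0 and t=1, the one at t=5 at both t=5 and t=6, and so on.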
Any hints on how to solve this problem?
These metrics come from carbon-relay-ng through graphite-proxy-writes to Grafana Mimir, in case that matters.
I also opened this question in the Grafana Mimir discussions. I will update the answer here if it gets solved. If this question doesn't fit here, please let me know and I'll close it.