
Include PodSpec.nodeName as a label in goldpinger_peers_response_time_s_* metrics #40

Open
dannyk81 opened this issue Dec 31, 2018 · 10 comments
Labels
enhancement New feature or request

Comments

@dannyk81
Contributor

dannyk81 commented Dec 31, 2018

Hey guys!

Happy New Year!!! 🎉

Super cool project, we've recently deployed this in our clusters and it's proving to be super useful.

I was wondering if it would be possible to add a nodeName label to these (goldpinger_peers_response_time_s_*) metrics, so it would be easier to identify the target nodes being probed.

The node name should be available in PodSpec.nodeName.
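For illustration, the request would turn exposition samples like the first line below into the second. The label name `host_name` and all the values here are hypothetical; `host_ip`, `pod_ip`, and `call_type` are the labels the metrics carry today:

```
# today: only IP-based labels identify the probed node
goldpinger_peers_response_time_s_sum{call_type="ping",host_ip="10.0.1.12",pod_ip="10.2.3.4"} 0.42
# proposed: an extra label populated from PodSpec.nodeName
goldpinger_peers_response_time_s_sum{call_type="ping",host_ip="10.0.1.12",host_name="worker-node-1",pod_ip="10.2.3.4"} 0.42
```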

@seeker89
Contributor

seeker89 commented Jan 2, 2019

> Super cool project, we've recently deployed this in our clusters and it's proving to be super useful.

Glad we could help!

I don't see why not - it would put some extra strain on Prometheus with the extra time series, but it would indeed make for better graphs and alerts.

👍

@akhy
Contributor

akhy commented Jan 4, 2019

👍

Looking at the source, it seems the code only cares about the HostIP of the ping targets (at least in the Prometheus metrics). I believe it would be more useful when debugging issues if we could somehow give more context about the target nodes.

In the monitoring dashboard/alerting, instead of this

"node A and B are reporting that some other unknown nodes are down"

this would be better

"node X and Y are reported down by 2 nodes"
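If the proposed node-name label existed, an alert along these lines could name the probed node directly and count how many peers see it as slow. This is a sketch only: `host_name` is the hypothetical label discussed in this issue, and the 100 ms / 2-peer thresholds are arbitrary examples:

```yaml
- alert: GoldpingerTargetNodeSlow
  # group by the *target's* proposed host_name label, so the alert
  # names the node being probed instead of just the reporters
  expr: |
    count by (host_name) (
        rate(goldpinger_peers_response_time_s_sum{call_type="ping"}[5m])
      / rate(goldpinger_peers_response_time_s_count{call_type="ping"}[5m])
      > 0.1
    ) >= 2
  for: 5m
  annotations:
    summary: Node {{ $labels.host_name }} is reported slow by at least 2 peers
```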

@seeker89 seeker89 added the enhancement New feature or request label Feb 20, 2019
@dannyk81
Contributor Author

Hey @seeker89 😄

Do you think this is something you would consider implementing in the near future? We particularly feel this is missing for alerting, where we know some node(s) are unhealthy but can't pinpoint which from the metrics.

@seeker89
Contributor

Hey @dannyk81 !

Now that we've merged #53, I think we can add some more context to the metrics.

So, just to make sure I understand your use case: what extra labels on which metrics do you need?

@dannyk81
Contributor Author

dannyk81 commented Mar 17, 2019

Great news!

I'll try and summarize our current experience:

  1. goldpinger_peers_response_time_* - these metrics have the pod_ip and host_ip labels of the pod being probed, but not the host_name.

  2. goldpinger_nodes_health_total - this metric is useful, but currently very difficult to use for alerting and troubleshooting: it provides a way to alert when nodes are misbehaving, but the challenge is identifying which node that is. Right now we have an alert firing in one of our clusters, and we can't deduce from the metrics which node is actually misbehaving; we just know that one node is unhealthy (this relates to @akhy's comment, I believe).

/edit: I think it would also be great if the host name could be used in Goldpinger's UI and logs instead of the host IP; it would make troubleshooting much easier.
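To make point 2 concrete: an alert on this metric can say *that* something is unhealthy, but the firing series carries no label naming the node. A sketch, assuming the metric exposes a status label (exact label names may differ):

```yaml
# fires when any goldpinger instance counts at least one unhealthy node,
# but the resulting labels contain nothing identifying *which* node it is
- alert: GoldpingerSomeNodeUnhealthy
  expr: max(goldpinger_nodes_health_total{status="unhealthy"}) > 0
  for: 5m
```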

@dannyk81
Contributor Author

wdyt @seeker89?

I tried going through the code yesterday to see if I could work out how to implement this, but to be honest I'm getting a bit lost... it seems like the pod IPs (and host IPs) are used as primary identifiers almost everywhere and passed around in ad-hoc maps. I suppose we'd need to define a struct for that and use it throughout the code?

Generally, I think it would be great if we could substitute pod IPs with pod names and host IPs with host names (probably keeping the IP data as well, but making it secondary), because having just the IP addresses displayed is rather confusing.

It would be (at least from my point of view) so much more human-friendly if the metrics and the UI (graph, heatmap, etc.) referenced the pod names and host (node) names instead of plain IPs.

Sorry if I'm going out of scope here.

@seeker89
Contributor

Some good points here. I'll have a look when I get some free bandwidth. 👍

@dannyk81
Contributor Author

dannyk81 commented Aug 9, 2019

@seeker89 just curious if you had a chance to look into this? 🙏

@seeker89
Contributor

Sorry, not yet. I did set some time for goldpinger next week, might be able to get into this.

@ifelsefi

An example latency check that works well!

    - alert: goldpinger-node-latency
      expr: |
        sum (rate(goldpinger_peers_response_time_s_sum{call_type="ping"}[1m]))
        by (goldpinger_instance, host_ip, pod) > 0.040
      for: 1m
      annotations:
        description: |
          Goldpinger pod {{ $labels.pod }} on node {{ $labels.goldpinger_instance }} cannot reach remote node at IP {{ $labels.host_ip }} in less than 40 ms!
        summary: Node {{ $labels.host_ip }} likely fubar. Overlay network latency should be less than 40 ms!
      labels:
        severity: critical
