This repository has been archived by the owner on Apr 19, 2024. It is now read-only.
Peer list update bug in K8s cluster #189
Labels: help wanted
Comments
The problematic pod keeps trying to get rate limits from the shut-down peer, according to the log:
I don't run a k8s cluster, so I really don't have a way to test this. I rely on the community to provide support for k8s.
FYI, this isn't limited to k8s. We run on ECS and see something similar, and these logs seem to coincide with our deployments.
I'll need to do some research on my end to determine whether it's a bug in our service or in this library.
We ran a three-replica gubernator setup in our K8s cluster. When one pod was shut down gracefully by K8s, another pod (not all of them, just one) kept reporting

in the log. Apparently, it didn't update its peer list accordingly. What could be the cause of this problem?