[BUG] Getting frequent restarts for prometheus-msteams #267

Comments
Can you try increasing the CPU limits?
Sure, we can do that, but our understanding is that prometheus-msteams is a really lightweight Go web server that only makes API calls, so it shouldn't be consuming that much. We even started with the default values mentioned here and then increased them to the values mentioned above. So we want to understand whether there are any memory leaks that might lead to this issue. Regards,
The resource consumption depends on the load and your setup. In general it should be low, but as I said, that depends on your specific setup; the default values from the chart are only a suggestion. If the OOMs are now gone with the increased memory, then I don't see evidence of a massive memory leak. The probe failures may be caused by the low CPU limit, as I described above.
We had a similar problem with prometheus-msteams pods restarting occasionally. We made adjustments to the container resources and haven't had an issue since, overriding the default resources in our values.yml file.
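The exact override was not preserved in this thread; below is a minimal sketch of what a container resources override in a Helm `values.yml` looks like. The numbers are illustrative placeholders, not the values the poster actually used:

```yaml
# Hypothetical resources override for the prometheus-msteams chart.
# Tune these to your own load; the chart defaults are only a suggestion.
resources:
  requests:
    cpu: 50m        # guaranteed CPU share
    memory: 64Mi    # guaranteed memory
  limits:
    cpu: 200m       # hard cap; too low a value can throttle liveness probes
    memory: 128Mi   # exceeding this gets the container OOM-killed
```

Note that a CPU limit is a throttle (the container is slowed, which can make liveness/readiness probes time out), while a memory limit is a hard kill threshold (exceeding it produces the OOMKilled restarts described in this issue).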
We are seeing prometheus-msteams being restarted at times, and it is showing OOM errors.
We have increased CPU and memory to the values below as well; the pods are still getting restarted.
Interestingly, we don't see the pod hitting its limits anywhere.
Here are some of the event logs:
Version
1.5.0
Expected behavior
A stable prometheus-msteams
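To confirm whether the restarts really are OOM kills, one way (a sketch, assuming kubectl access, a `monitoring` namespace, and an `app=prometheus-msteams` label — adjust all three to your deployment) is to read the container's last termination reason:

```shell
# Print each pod's last termination reason; "OOMKilled" means the kernel
# killed the container for exceeding its memory limit.
kubectl -n monitoring get pods -l app=prometheus-msteams \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}{end}'

# Cross-check with the recorded events and restart count.
kubectl -n monitoring describe pods -l app=prometheus-msteams
```

If the reason is not `OOMKilled` but the events show liveness-probe failures, that points at the CPU limit throttling the container rather than a memory leak, matching the maintainer's suggestion above.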