[scalability] a script to automate threshold estimations #2077
Comments
I don't think we should be changing the values that often.
Right, there would be some maintenance cost, which makes me hesitant as well. The idea is that currently, if there is a failure, it is not clear to me how much to bump the thresholds, and collecting the data manually takes time. All in all, I'm OK to park it for now and see how it plays out, and invest only if bumping becomes too much of a chore.
What would you like to be added:
A script to automate the estimation for scalability thresholds, based on the experiment here: #2066 (comment).
In order to make the estimations more reliable, we should only use the main branch. For that, it would be good to set up a periodic build of the main branch for perf tests.
Note: Once the script is added and we have data for the main branch, use it to set new values.
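A minimal sketch of what such an estimation could look like, assuming per-build measurements (e.g. p99 latencies in ms) have already been collected from main-branch perf runs. The function name, the mean-plus-stddev heuristic, and the `safety_factor` parameter are illustrative assumptions, not the agreed design:

```python
import statistics

def estimate_threshold(samples, safety_factor=3.0):
    """Suggest a threshold as mean + safety_factor * stddev over
    historical main-branch measurements (hypothetical heuristic)."""
    if len(samples) < 2:
        raise ValueError("need at least two historical samples")
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return mean + safety_factor * stdev

# Example: p99 latencies (ms) from recent main-branch perf runs.
latencies = [412.0, 405.5, 420.1, 398.7, 415.3]
print(round(estimate_threshold(latencies), 1))
```

A real script would additionally fetch the per-build data from the periodic-build results before feeding it into an estimator like this.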
Why is this needed:
There is no easy and reliable way to set the thresholds, which leads to flakes and unnecessary chores.
Having an easy way to collect the data per build will also enable us to see visually if there is a drop in performance on the main branch that isn't picked up by the current automation.
Also, we could use it to confirm our wins when we optimize performance, by easily visualizing the improvement against historical main-branch builds.