[scalability] a script to automate threshold estimations #2077

Open · mimowo opened this issue Apr 26, 2024 · 3 comments

Labels: kind/feature (Categorizes issue or PR as related to a new feature.)

mimowo (Contributor) commented Apr 26, 2024

What would you like to be added:

A script to automate the estimation of scalability thresholds, based on the experiment here: #2066 (comment).

In order to make the estimations more reliable, we should only use the main branch. For that, it would be good to set up a periodic build of the main branch for the perf tests.

Note: Once the script is added, and we have data for the main branch, use it to set new values.
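
A minimal sketch of what such a script could look like, assuming the perf tests emit one JSON file per build mapping metric names to duration samples. The file layout, the metric names, and the mean-plus-margin rule are all illustrative assumptions, not the actual Kueue perf-test output:

```python
#!/usr/bin/env python3
"""Estimate scalability thresholds from per-build perf-test results.

Assumes each build drops a JSON file shaped like (hypothetical):
  {"wlCreationTime": [1.2, 1.3, ...], "admissionTime": [0.4, ...]}
"""
import json
import statistics
import sys
from pathlib import Path


def load_samples(results_dir: Path) -> dict[str, list[float]]:
    """Merge samples for each metric across all build result files."""
    merged: dict[str, list[float]] = {}
    for path in sorted(results_dir.glob("*.json")):
        for metric, values in json.loads(path.read_text()).items():
            merged.setdefault(metric, []).extend(values)
    return merged


def estimate_threshold(values: list[float], k: float = 3.0) -> float:
    """Illustrative rule: threshold = mean + k standard deviations."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values) if len(values) > 1 else 0.0
    return mean + k * stdev


if __name__ == "__main__":
    # Usage: estimate_thresholds.py <dir with per-build JSON results>
    for metric, values in sorted(load_samples(Path(sys.argv[1])).items()):
        print(f"{metric}: threshold ~= {estimate_threshold(values):.2f} (n={len(values)})")
```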

Why is this needed:

There is no easy and reliable way to set the thresholds; this leads to flakes and unnecessary chores.

Having an easy way to collect the data per build will also enable us to see visually if there is a drop in performance on the main branch, which isn't picked up by the current automation.

Also, we could use it to confirm our wins when we optimize performance, by easily visualizing the improvement against historical main builds.
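
For the visualization side, a sketch along these lines could plot one metric across historical main builds, so a regression or an improvement shows up as a visible step. The CSV history file, its columns, and the use of matplotlib are assumptions for illustration:

```python
import csv

import matplotlib.pyplot as plt

# Hypothetical history file: one row per main build, e.g.
#   build,wlCreationTime
#   1042,1.21
builds, durations = [], []
with open("main_history.csv") as f:
    for row in csv.DictReader(f):
        builds.append(int(row["build"]))
        durations.append(float(row["wlCreationTime"]))

plt.plot(builds, durations, marker="o")
plt.xlabel("main build number")
plt.ylabel("wlCreationTime (s)")
plt.title("Perf-test metric across historical main builds")
plt.savefig("wlCreationTime_trend.png")
```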

mimowo added the kind/feature label on Apr 26, 2024
mimowo (Contributor, Author) commented Apr 26, 2024

/cc @alculquicondor @trasc

alculquicondor (Contributor) commented:
I don't think we should be changing the values that often.
I prefer we don't have to maintain yet another script.

mimowo (Contributor, Author) commented Apr 26, 2024

Right, there would be some maintenance cost, which makes me hesitant as well.

The idea is that currently, when there is a failure, it is not clear to me how much to bump the thresholds, and collecting the data manually takes time.

All in all, I'm OK to park it for now and see how it plays out, and invest only if bumping becomes too much of a chore.
