Tracking success rate of benchmarked functions #235
Comments
This would be great! Is there currently a way to omit failed tests from the timing statistics?
I have a use case for tracking the performance and success rate of non-deterministic functions.
The following function serves to outline the scenario:
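A minimal stand-in for such a function (the name and failure rate below are made up for illustration, not taken from the original example) might look like:

```python
import random


def flaky_operation():
    """A non-deterministic operation: usually fast, but fails some of the time."""
    if random.random() < 0.2:
        raise RuntimeError("transient failure")
    return sum(range(1000))
```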
I have played around and arrived at the following result:
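Roughly, the test side of the idea can be sketched with the documented `benchmark` fixture and `extra_info`; this is only an approximation, not the actual patch, and it reuses the illustrative `flaky_operation` from above:

```python
def test_flaky_operation(benchmark):
    outcomes = []

    def run():
        try:
            result = flaky_operation()
            outcomes.append(True)
            return result
        except RuntimeError:
            outcomes.append(False)

    benchmark(run)
    # Record the observed success rate next to the timing data; extra_info
    # ends up in the JSON output rather than the terminal table.
    benchmark.extra_info["succ"] = sum(outcomes) / len(outcomes)
```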
To get the new column `succ` actually displayed, I had to also:
- add `succ` to `pytest_benchmark.utils.ALLOWED_COLUMNS`, and
- patch `pytest_benchmark.table.display` so it shows `succ`.

(How exactly to achieve those two things is left as an exercise for the reader.)
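For concreteness, a rough and unverified sketch of the first step, assuming `ALLOWED_COLUMNS` is a plain list of column names, could live in `conftest.py`:

```python
# conftest.py -- unverified sketch: let "succ" pass the column-name check.
# Assumes pytest_benchmark.utils.ALLOWED_COLUMNS is a plain list of strings.
from pytest_benchmark import utils

if "succ" not in utils.ALLOWED_COLUMNS:
    utils.ALLOWED_COLUMNS.append("succ")

# The second step -- teaching pytest_benchmark.table.display to actually
# render the new column -- still requires patching the display code itself.
```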
While this does work, I am unsure if my solution could be upstreamed easily.
How should I do it if I want my solution to be merged into `pytest-benchmark`?

Alternate and related approaches:
- An option for `benchmark.pedantic` that makes it continue on exceptions and exposes the list of exceptions caught per round (like `[None, None, RuntimeError, None, RuntimeError]`).
- Changing `benchmark.pedantic`'s return type to a list of all results, then setting up the benchmarked function so that it catches relevant exceptions and returns whatever I want (a rough sketch with today's API follows this list).
- Displaying `extra_info` keys in the terminal table.
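As a sketch of the second idea with today's API (the round count is arbitrary and `flaky_operation` is the illustrative function from above): the benchmarked callable swallows the relevant exception and returns an outcome instead, so no round aborts the run.

```python
def checked():
    # Catch the "expected" failure and turn it into a return value.
    try:
        return ("ok", flaky_operation())
    except RuntimeError as exc:
        return ("failed", exc)


def test_flaky_pedantic(benchmark):
    result = benchmark.pedantic(checked, rounds=20, iterations=1)
    # Today only a single return value comes back here; the proposal above
    # would have benchmark.pedantic return a list with one entry per round.
    assert result[0] in ("ok", "failed")
```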