docs: Capturing QA processes, results and artifacts #9020
While I would not list all automated test runs here either, it would still be good to capture what automation is in place, and when (or for which version/commit) it was put into place. Of course, it would also be good to note tests that had to be disabled, and why.
As discussed with @josef-widder earlier this week and with @marbar3778 and @cmwaters yesterday, it would be helpful, both to the team and users, to have an easy way to know how different aspects of Tendermint are tested, what tests have been run, and what the results of those tests have been (especially for large-scale testnet executions, and for model-based testing).
Some of the work in #8786 touches on this, and this document for Interchain Security is a good example of the structure of the sort of document aimed for here.
The rough overall structure of such a document for Tendermint would be: