Monitor Debt: Which metrics? #236
Comments
That's a HUGE topic, potentially worth writing an article about... Additionally, I suggest correlating the various metrics.
Probably nitpicking: MTTR is not how long it takes to roll out a new release to production, but rather how long it takes to make the system behave correctly again. (The nitpick is that you may do a new release, but this does not fix the problem.) But I totally agree with Alex. Since availability is defined as A = MTBF / (MTBF + MTTR), these three variables correlate and are worth monitoring.
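The availability relation quoted here is easy to sanity-check with a quick sketch (the MTBF/MTTR numbers below are made up purely for illustration):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative numbers: a failure every 30 days on average,
# two hours to make the system behave correctly again.
a = availability(mtbf_hours=30 * 24, mttr_hours=2)
print(f"{a:.4%}")  # roughly 99.72% availability
```

Note that halving MTTR moves availability almost as much as doubling MTBF, which is why all three numbers are worth monitoring together.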
The number of times you excuse yourself for flaws in the system when you introduce a new developer.
Interesting point of view on this subject from an interview with Robert Martin:
So a good measure for technical debt could be the time required to implement some set of function points. If this number remains stable, all is fine. If it goes up, that might indicate technical debt.
The problem with this kind of metric, however, is that management is very likely to misuse those numbers to accuse developers of a lack of productivity or commitment.
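The trend described above could be tracked with a trivial sketch; the sample history and the per-function-point figures below are entirely made up for illustration:

```python
# Hypothetical history: hours needed to implement one function point,
# sampled once per release. A rising trend suggests accumulating debt.
hours_per_function_point = [8.0, 8.5, 9.5, 11.0, 13.5]

def trend(samples: list[float]) -> float:
    """Average change between consecutive samples."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

print(trend(hours_per_function_point))  # positive => cost per feature is rising
```

The absolute numbers matter less than the direction: a stable or falling cost per function point is the "all is fine" case, a rising one is the warning sign.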
There are several aspects to this, like measures of
Technical debt sort of tries to address the first three of the above while remaining management friendly. Besides technical debt, there is a plethora of metrics to choose from, while only a handful still see widespread use, like cyclomatic complexity. CC is the only software metric I know of that has been argued in court wrt. software quality (in the class action over Toyota's cruise-control system that killed several people, if I remember correctly).

Efficacy is a difficult topic, as the costs incurred by bad quality are hard to pin down exactly. Jones, and as far as I remember also Fenton, argued for an approach based on defect removal efficiency or some derivative thereof.

Coverage is an interesting one, as there are still a lot of different opinions on best practices in the industry. General rules of thumb like "do at least 80% coverage and you are fine" are still popular, despite the fact that Skynet could sit in the remaining 20% and wait for its turn to destroy humankind.
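For reference, the cyclomatic complexity mentioned above comes from McCabe's formula M = E - N + 2P over a control-flow graph (edges, nodes, connected components). A minimal sketch, with a made-up if/else graph as the example:

```python
def cyclomatic_complexity(edges: int, nodes: int, parts: int = 1) -> int:
    """McCabe's cyclomatic complexity M = E - N + 2P."""
    return edges - nodes + 2 * parts

# A function containing a single if/else: the graph has a decision node,
# a then-node, an else-node, and an exit node (4 nodes, 4 edges).
print(cyclomatic_complexity(edges=4, nodes=4))  # 2, i.e. one decision + 1
```

In practice you would not count edges by hand; tools derive the graph from source, and for structured code M conveniently equals the number of decision points plus one.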
I haven't bought the book yet, but it looks like a good read on this topic:
I read the book; it's well worth it! Great ideas.
I would like to brainstorm with you a list of the metrics one should monitor in order to keep an eye on the (technical) debt of a software system. In my point of view, there are different metrics for different phases of the software lifecycle, so let's just start collecting and sort it out afterwards.
I'll get started with three of them off the top of my head: