This is a fantastic idea, very well explained. I'm actually going to implement a form of this for my team.
There are a couple of possible improvements, depending on team needs:
Origin Usefulness
For some teams, the origin may not be particularly useful. It certainly can take a good deal of work to track down in some cases ("Wait, this bug got into prod, was it from coding or design stage, or perhaps a bug fix?"). The point is, wherever the bug came from, it didn't get caught until prod.
Now, it may be useful to figure out where one bug, or even a swarm of bugs, originated, so we can figure out what part of the workflow sprung a leak. However, as a rule, it might not always be worth tracking down at the time of reporting. Given a halfway decent workflow and VCS, you can always figure out origin later.
That said, some teams might need the full origin/detected scheme. More power to 'em.
Weight
On the scorecard, one might get a more useful summary metric by assigning different weights to the various "Detected" stages.
For example, catching in the "Requirements" or "Design" stage should only have a multiplier of x0 [explained in a moment]. It indicates that the development team was able to anticipate the issue before coding even began.
By contrast, catching in "Production" should have a multiplier of, say, x3.
This might make more practical sense given an example. Let's look at two (simplified, no origin) scorecards.
[Scorecard tables for PROJECT 1 and PROJECT 2]
Both have a TOTAL of 10. Sure, we can tell that Project 2 is having a harder time because of all the bugs getting to Prod (4!!) But imagine if we applied the weight multipliers.
[Weighted scorecard tables for PROJECT 1 and PROJECT 2]
To boil that down...
PROJECT 1: 10/10
PROJECT 2: 10/16
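To make the arithmetic concrete, here is a minimal sketch in Python. The per-stage bug counts are hypothetical, chosen only to be consistent with the totals above (both projects report 10 bugs, Project 2 has 4 in Prod, and the weighted totals come out to 10 and 16):

```python
# Weight multipliers per "Detected" stage, as proposed above.
WEIGHTS = {"Req": 0, "Des": 0, "Code": 1, "SQA": 2, "Prod": 3}

def weighted_total(counts):
    """Sum of (bugs detected in stage) x (stage multiplier)."""
    return sum(WEIGHTS[stage] * n for stage, n in counts.items())

# Hypothetical per-stage distributions, invented only to reproduce
# the totals in the example (raw total 10 for both; weighted 10 vs 16).
project_1 = {"Req": 2, "Des": 1, "Code": 4, "SQA": 3, "Prod": 0}
project_2 = {"Req": 2, "Des": 1, "Code": 2, "SQA": 1, "Prod": 4}

for name, counts in [("PROJECT 1", project_1), ("PROJECT 2", project_2)]:
    raw = sum(counts.values())
    print(f"{name}: {raw}/{weighted_total(counts)}")
# PROJECT 1: 10/10
# PROJECT 2: 10/16
```

Any distribution with the same stage counts yields the same two-number summary, which is the point: the pair alone flags Project 2 without reading the full scorecard.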
Yow! Project 2 is obviously having serious issues - its bugs are being caught far later than Project 1. We have the scorecard handy for the analysis, but we don't need to read it through to know the overall health. We now have a quick two-number metric that summarizes the critical information.
So, why the 0-multiplier in practice? Easy - what would a score of "10/1" tell you? Simply that, of the 10 bugs caught in the project, only one survived past the design phase.
BTW, you could clearly expand this to fit the origin/detected format, for the teams that need that.
I like the idea of introducing the weights, but the origin would still be needed. Let me explain. From my experience managing teams, I also look for ways to praise my team. For example, if a design bug is caught in the development phase rather than in the design phase, it needs significant rework, but still less rework than if the bug were caught in the SQA or Prod phase. Do you see the idea? I'm fine with bugs as long as they are found early enough in the development lifecycle. Finding a requirements- or design-level issue late in the game may warrant a lot of rework, or lead to the failure of the product or a feature.
For large products, origin helps identify the right team (it could be the design team or any other team) to fine-tune in order to uplift product quality.
We would also need to see how to expand the weights to cover origin in terms of level of effort; it usually takes longer and is harder to fix a design- or requirements-level bug once it is sitting in Prod.
Ahh, I see the inherent difference between our teams! I can agree that origins would be crucial for yours, since you have different Design, Coding, and SQA teams. By contrast, at my company, one team covers the entire process.
I can also see how origin might be useful to my team in limited situations, but in general, we don't have the time to figure out origin on every bug. That aside, I'll make room in my bug tracker for an optional origin field.
As to weights, if I'm understanding this, the longer a bug lives, the higher the score it needs, yes? If so, it's actually pretty simple, although I can't really set up the table here.
Using the same multipliers...

| | Req | Des | Code | SQA | Prod |
| --- | --- | --- | --- | --- | --- |
| MULT | x0 | x0 | x1 | x2 | x3 |

If a bug is caught in Prod (3), with origin at Code (1), then SCORE = CAUGHT - ORIGIN = 3 - 1 = 2.
Similarly, if a bug is caught in Prod (3) with origin at Design (0), then 3 - 0 = 3.
Like that?
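That caught-minus-origin rule is simple enough to sketch directly. A minimal version in Python, reusing the detection multipliers above (the function name here is just illustrative):

```python
# Stage weights, reused from the detection multipliers above.
WEIGHTS = {"Req": 0, "Des": 0, "Code": 1, "SQA": 2, "Prod": 3}

def bug_score(caught, origin):
    """SCORE = CAUGHT - ORIGIN: the further a bug travels past
    its origin stage before being caught, the higher its score."""
    return WEIGHTS[caught] - WEIGHTS[origin]

print(bug_score("Prod", "Code"))  # 3 - 1 = 2
print(bug_score("Prod", "Des"))   # 3 - 0 = 3
```

Note that a bug caught in the same stage it originated scores 0, which matches the idea that bugs found early (or immediately) cost nothing on the scorecard.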
Ahh, what elegant math!
Yes, the longer a bug lives, the higher the score.
I don't know if I ever told you, but I absorbed this idea into Quantified Task Management: standards.mousepawmedia.com/qtm.html
I also talked about it here recently: legacycode.rocks/podcast-1/episode...