
How can you measure quality?

How do you measure quality? It’s a question that not only software companies struggle with; many quality professionals have a hard time answering it well. It’s tempting to just ask, “How many bugs do we have in production?” That’s a good metric for quality, right?

It. Is. Not.

Metrics tend to be tricky in general. Experience says that too much time spent focusing on a metric makes the metric itself the goal, when its true purpose is to indicate how progress toward a goal is going. On the flip side, if there isn’t a goal associated with a metric, it quickly fades into obscurity and serves no purpose.

The latter is exactly why ‘bugs in production’ is a bad measure of quality. There is no realistic goal that can be tied to this metric. Should the number be zero? Is there a software company out there that has ever managed that? I doubt it.

What about “fewer” production bugs? That’s barely better, because there are so many ways to game it. Tell the QA team to log fewer issues and boom: problem solved. Unless, of course, your QA team is being evaluated on how many issues they log…

All of this is to say that the metric “bugs in production” doesn’t help because it doesn’t represent work toward a goal of higher quality. Every team I’ve ever worked with has willingly and knowingly shipped products with bugs at some point. The reasons are as varied as the teams themselves, and more often than not, I’m the one giving a thumbs up to releasing known bugs.

If your only quality metric is bugs in production, then every release represents a failure of quality, plain and simple. It’s unrealistic to expect to deliver 100% bug-free code, so you will always get a failing quality grade.

So how do we do better?

What kinds of metrics show that a team is producing quality work? The best answer I’ve come up with is actionable metrics. Actionable here means that each metric has a realistic goal attached to it, and that the team is willing to take action to work toward that goal.

The power behind actionable metrics is immense. When team members have a goal and a way to measure progress toward it, buy-in becomes easy. So what are some actionable metrics?

Some of the metrics that I have found to be highly effective are:

Mean time to resolution of critical issues
Customer-reported bugs (measured weekly)
Issues related to recently released features

Each of these metrics showcases a different aspect of the product and of the team working on it.

Mean time to resolution of critical issues

Tracking MTTR gives a lot of insight into a team’s responsiveness and its ability to handle problems as they come up. Each critical issue a team encounters will (hopefully) be unique; some will be simple fixes, and some will be immensely complex. Each one, however, has the potential to make a major impact on customers’ experience and requires immediate attention.

This metric helps the team learn the value of tracking down the underlying issue as quickly as possible and finding the fastest way to reverse the impact on users; sometimes that’s a fix, and sometimes that’s a rollback. A team’s demonstrated ability to respond to major issues and outages boosts the whole company’s confidence in the engineers’ ability to solve customer problems.
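
To make this concrete, here’s a minimal Python sketch of how MTTR could be computed from incident records. The timestamps and the shape of the data are hypothetical; in practice they would come from whatever incident tracker your team uses.

```python
from datetime import datetime, timedelta

# Hypothetical critical-issue records: (opened, resolved) timestamp pairs.
# In practice these would be pulled from your incident tracker.
incidents = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 11, 30)),
    (datetime(2023, 5, 3, 14, 0), datetime(2023, 5, 4, 8, 0)),
    (datetime(2023, 5, 7, 22, 15), datetime(2023, 5, 7, 23, 45)),
]

def mean_time_to_resolution(incidents):
    """Average time from report to resolution across critical issues."""
    total = sum((resolved - opened for opened, resolved in incidents), timedelta())
    return total / len(incidents)

print(f"MTTR: {mean_time_to_resolution(incidents)}")
```

Note that it doesn’t matter whether an incident ended in a fix or a rollback; the clock stops when the impact on users stops.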

Weekly customer-reported bugs

It seems obvious that this metric should be tracked. The question to ask is: what action is tied to it? This is the metric that should drive improvement initiatives. You can research what features customers want all day, but reported bugs show what customers actually use.

When joined with other metrics, knowing how many bugs customers are reporting can paint a picture of how the product’s quality is being perceived:

If customer bug reports jumped one week, was there a correlated spike in weekly users? If so, there’s probably no reason to be alarmed. Did bug reports spike following a major feature release? Then it looks like our customers are trying to use the new feature, which is great. (Not the bugs, but we’ll talk about those next.)
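
As a rough sketch of how that correlation check might look, the snippet below buckets hypothetical bug reports by ISO week and normalizes them against (equally hypothetical) weekly user counts. A spike in reports that tracks a spike in users is much less alarming than one that doesn’t.

```python
from collections import Counter
from datetime import date

# Hypothetical customer bug reports, one date per report filed.
reports = [date(2023, 5, 1), date(2023, 5, 2), date(2023, 5, 9),
           date(2023, 5, 10), date(2023, 5, 11)]

# Hypothetical weekly active users, keyed by (ISO year, ISO week).
weekly_users = {(2023, 18): 1200, (2023, 19): 2400}

# Bucket the reports by (ISO year, ISO week).
reports_per_week = Counter(d.isocalendar()[:2] for d in reports)

for week, count in sorted(reports_per_week.items()):
    users = weekly_users.get(week, 0)
    rate = count / users if users else float("nan")
    print(f"week {week}: {count} reports, {users} users, {rate:.4f} reports/user")
```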

Issues related to recently released features

It’s fairly common for a handful of issues to crop up after a feature release. Earlier I mentioned that sometimes bugs are released on purpose. More often than not, though, customers are using the software in a way nobody imagined it would be used, and yes, I’m admitting that as a quality professional. Why, then, should this be tracked, and what action can be taken?

Tracking post-release issues helps development teams discover the holes in their process or assumptions. A team may discover that it’s spending as much time fixing a feature after release as it spent developing the feature in the first place. That knowledge empowers teams to better estimate the time needed to complete features, to challenge their assumptions, and, in the end, to release a better product. That is, after all, what quality assurance is all about.
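
As one illustration of the kind of check this enables, here’s a small sketch that ties issues filed within a window after a release back to that feature and compares fix effort to development effort. The feature name, effort figures, and 30-day window are all assumptions for the example, not a prescription.

```python
from datetime import date, timedelta

# Hypothetical feature release, with the development effort spent on it.
release = {"feature": "bulk-export", "released": date(2023, 5, 1), "dev_days": 10}

# Hypothetical issues filed afterward: (date filed, days spent fixing).
issues = [(date(2023, 5, 3), 2), (date(2023, 5, 12), 4), (date(2023, 7, 1), 1)]

WINDOW = timedelta(days=30)  # only count issues filed soon after the release

# Fix effort for issues that landed inside the post-release window.
related = [fix_days for filed, fix_days in issues
           if release["released"] <= filed <= release["released"] + WINDOW]

total_fix = sum(related)
ratio = total_fix / release["dev_days"]
print(f"{release['feature']}: {len(related)} post-release issues, "
      f"{total_fix} fix days vs {release['dev_days']} dev days (ratio {ratio:.2f})")
```

If that ratio creeps toward 1.0, the team spent as long patching the feature as building it, which is exactly the signal worth acting on.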
