“The product failed spectacularly. But the requirements coverage was 100%, the dashboard was green, and the manager got a bonus. So really, who’s the failure here?”
🛏️ The Comfort Blanket of Coverage
Coverage metrics are the selfie of software quality. They show the angle you choose, not the messy reality behind it. Cropped, filtered, perfect for a report. Meaningless when the product hits the road.
Managers love selfies. A green dashboard tells a good story. And good stories pay bonuses.
Coverage metrics are the selfie of software quality (Gemini generated image)
🔄 How It Usually Works
The routine is painfully familiar. Requirements are written, handed to testers, and traced to test cases. Coverage is measured. Someone announces: “We’re at 100%!” 🎉
But here’s the truth: coverage only means we tested what we said we’d test. It doesn’t mean we tested the right things.
Requirements describe intended functionality. They rarely cover what can go wrong, the nasty edge cases, or real-world usage.
It’s like a smiling beach selfie 🏖️. Looks great. Doesn’t show the jellyfish sting or the sunburn ☀️ brewing.
💸 The Core Problem: Economics
Testing follows the same economic rules as everything else: the maximum claim of quality at the minimum cost. Procurement often treats it like buying paperclips: cheapest bid wins.
So testing gets outsourced. Contracts demand “100% requirements coverage”. The result? Testers who may lack domain knowledge, never touch the real product, and operate with limited context.
Coverage is delivered, but only in the narrowest sense.
It's like hiring a food critic 🍽️ who's never tasted the dish, but can confirm the menu was spelled correctly.
🏢 The Organizational Trap
Here's the darker twist. In big companies, management bonuses are tied to metrics. Green coverage ✅ + low cost 💰 = hero manager. Real quality doesn't factor in.
By the time issues surface, the manager has been promoted 🚀, or updated their LinkedIn. Root cause analysis becomes corporate archaeology 🏺: digging through layers of PowerPoint and polished deniability.
The Organizational Trap (Gemini generated image)
📐 Enter ISO/IEC 25010
If coverage is a flawed, self-serving metric, what should we measure instead?
The answer isn't a single metric, but a complete framework. This is where ISO/IEC 25010 comes in, defining eight quality characteristics of software products:
- Functional suitability ⚙️
- Performance efficiency ⚡
- Compatibility 🔗
- Usability 👤
- Reliability 🔄
- Security 🔒
- Maintainability 🛠️
- Portability 🌍
Coverage usually touches one: functional suitability. The rest? Invisible.
A requirements selfie shows “Functional Suitability” smiling 😃. It says nothing about the other seven, lurking just outside the frame.
- Usability? Only if the “user” is a requirements engineer with infinite patience.
- Security? Left as an exercise for the customer.
- Reliability? Tested in production, apparently.
Selfies don’t show the full picture. Neither do coverage metrics.
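The gap between the two numbers is easy to make concrete. Here is a minimal sketch, using entirely hypothetical requirement and test-case names, that computes requirements coverage alongside ISO 25010 characteristic coverage. When every test traces to a requirement but every requirement only describes intended functionality, the first number hits 100% while the second barely moves:

```python
# Hypothetical traceability data: which requirement each test case traces to,
# and which ISO 25010 characteristic it actually exercises.
# All REQ-*/TC-* names are illustrative, not from any real project.

ISO_25010 = [
    "functional suitability", "performance efficiency", "compatibility",
    "usability", "reliability", "security", "maintainability", "portability",
]

requirements = ["REQ-1", "REQ-2", "REQ-3"]

test_cases = {
    "TC-1": {"requirement": "REQ-1", "characteristic": "functional suitability"},
    "TC-2": {"requirement": "REQ-2", "characteristic": "functional suitability"},
    "TC-3": {"requirement": "REQ-3", "characteristic": "functional suitability"},
}

# Requirements coverage: share of requirements traced to at least one test.
covered_reqs = {tc["requirement"] for tc in test_cases.values()}
req_coverage = len(covered_reqs & set(requirements)) / len(requirements)

# Characteristic coverage: share of ISO 25010 characteristics exercised.
covered_chars = {tc["characteristic"] for tc in test_cases.values()}
char_coverage = len(covered_chars) / len(ISO_25010)

print(f"Requirements coverage: {req_coverage:.0%}")  # 100%
print(f"ISO 25010 coverage:    {char_coverage:.0%}")  # 12%
```

Both metrics come from the same test suite; only the denominator changes. That is the whole trick of the coverage selfie: pick the denominator that makes the dashboard green.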
🧑‍🔬 The Role of Testers
If testers only validate what’s written down, they become clerks of compliance 📋—not engineers of quality.
Breaking the cycle means giving testers:
- Knowledge: ISO 25010, ISTQB, ISO 26262, cybersecurity basics, etc.
- Exposure: Real hardware, real systems, real users—not just PDFs 📑 and Jira tickets 🗂️
Otherwise, it’s like asking a film critic 🎬 to review a movie based on the trailer alone.
🤝 The Contractor’s Dilemma
For contractors, the challenge is honesty. The lowest bid doesn’t buy good testing—it buys selfies at scale. The highest bid doesn’t guarantee substance, either.
The real question is: What level of quality are you actually buying?
Procurement misses this. It optimizes for coverage percentage because it's easy to measure. But quality isn't a commodity like office furniture: when a cheap chair fails, you replace the chair; when cheap testing fails, the whole product breaks 🪑💥.
🧭 Closing Thought
Coverage tells you where you’ve been.
ISO 25010 asks whether you’ve been anywhere worth going.
Users don’t care about your coverage selfies 🤳.
They care whether the product actually works.
So the next time someone boasts about 100% requirements coverage, ask them:
👉 “Which of the eight quality characteristics from ISO 25010 are we not covering?”
Coverage tells you where you've been. ISO 25010 asks whether you've been anywhere worth going. (Gemini generated image)
🔖 If you found this perspective helpful, follow me for more insights on software quality, testing strategies, and ASPICE in practice.