Why do we do QA?
It's a simple enough question, and most folks would say the answer is self-explanatory: we do it because we want to assure the quality of some feature or product. To be sure, that is a perfectly fine answer in most contexts, but in posing the question I want us to interrogate it further. "Assure" whom of what "quality"? And it isn't just what QA means that matters here, but why we do it.
Recently I've been thinking about this question a bit more as my team works through an upgrade and various refactors of our codebase. In the process we have updated or created lots of automated unit tests, but we have also delved deep into manual testing. Our application size, balanced against our team size, is such that the whole team can do a full manual test over the course of a sprint, but this isn't really efficient, and it raises the question: was all that manual testing necessary? This is why I asked myself "why do we do QA," and the answer I came to is this: QA is best understood as a means to arrive at your team's Confidence Quotient. Put another way, given your team's risk tolerances, are you confident enough to deploy?
QA vs CQ: what's the difference and why it matters
We've all been there: you develop a new feature, QA it, release it, and before too long someone reports a bug. It could be a trivial visual glitch or a serious security flaw. In retrospect we might even be able to point out how obvious the issue was and kick ourselves for not noticing it. So, does that mean we didn't do QA? Obviously the feature had an issue, so we didn't truly "assure" its quality. What we did do is build up our confidence in the release, a.k.a. our Confidence Quotient.
I don't think there is a north-star formula that will guide your team to this decision. As with everything else in Agile development, it is a contextual judgment that relies on deep knowledge of your team and product. However, there are factors that should be taken into account:
- Impact: Is this a feature that is used regularly, is a driver of revenue, or a major reducer of costs?
- Time to Fix: If a major bug was found, is it possible to resolve easily? This has to take into account the code complexity, human resources, and the rollout pipeline.
- Quality of Automated Tests: Are there automated unit, integration, and/or UI tests that are both broad and deep? This is also a measure of history: how many past bugs escaped test automation?
- Manual Test Depth: Are the manual tests purely UI checks, or are they functional tests that exercise deeper behavior, such as writes to the database?
- Cost to Delay: Very similar to Impact but explicitly focused on what happens if the feature isn't rolled out.
With these five factors in mind, consider how many manual QA tests you must run to feel confident to deploy, not how many are needed to achieve a mythical 100% confidence on all features.
Can you isolate some aspects as more impactful, costing more time to fix, or lacking automated coverage? Are there correlated items where you'd feel confident that if part ABC is good, then part DEF is also good without needing further checking?
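To make that weighing concrete, here is a minimal sketch of one way to score feature areas against the five factors and rank where manual testing buys the most confidence. Everything in it is illustrative: the `FeatureArea` fields, the 1-to-5 scales, and the weighting inside `risk_score` are assumptions you would tune to your own team's risk tolerances, not a standard formula.

```python
# Hypothetical sketch: score each feature area against the five factors
# to decide where manual QA effort is worth spending. All names, scales,
# and weights are illustrative assumptions, not an established metric.

from dataclasses import dataclass

@dataclass
class FeatureArea:
    name: str
    impact: int              # 1 (low) .. 5 (drives revenue / heavy use)
    time_to_fix: int         # 1 (quick patch) .. 5 (slow, painful rollout)
    automated_coverage: int  # 1 (weak) .. 5 (broad and deep)
    manual_depth: int        # 1 (UI-only) .. 5 (full functional checks)
    cost_to_delay: int       # 1 (cheap to hold back) .. 5 (costly to delay)

    def risk_score(self) -> float:
        # Exposure (impact + time to fix) raises risk; existing coverage
        # lowers it; a high cost to delay nudges toward shipping with
        # less manual testing. The 0.5 weight is an arbitrary example.
        exposure = self.impact + self.time_to_fix
        coverage = self.automated_coverage + self.manual_depth
        return exposure - coverage - 0.5 * self.cost_to_delay

areas = [
    FeatureArea("checkout", impact=5, time_to_fix=4,
                automated_coverage=3, manual_depth=2, cost_to_delay=4),
    FeatureArea("settings page", impact=2, time_to_fix=2,
                automated_coverage=4, manual_depth=3, cost_to_delay=1),
]

# Spend manual testing on the riskiest areas first; low-scoring areas
# may not need a full manual pass this sprint.
for area in sorted(areas, key=lambda a: a.risk_score(), reverse=True):
    print(f"{area.name}: risk {area.risk_score():.1f}")
```

The point of a sketch like this isn't the numbers themselves; it's that writing the factors down per feature area forces the conversation about which correlated parts can be vouched for together and which genuinely need their own manual pass.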