
Dillon



Why do we do QA?

It's a simple enough question, and most folks would say it's self-explanatory: we do it because we want to assure the quality of some feature or product. To be sure, that's a perfectly fine answer in most contexts, but in posing the question I want us to interrogate it further. "Assure" whom, of what "quality"? And it's not just what QA means that matters here, but why we do it.

Recently I've been thinking about this question a bit more as my team works through an upgrade and various refactors of our codebase. In the process we have updated or created lots of automated unit tests, but we have also delved deep into manual testing. Our application size, balanced against our team size, is such that everyone together can complete a full manual test over the course of a sprint, but this isn't really efficient, and it raises the question: was all that manual testing necessary? That's why I asked myself "why do we do QA?", and the answer I've come to is this: QA is best understood as a means of arriving at your team's Confidence Quotient. Put another way, given your team's risk tolerances, are you confident enough to deploy?

QA vs CQ: what's the difference and why it matters

We've all been there: you develop a new feature, QA it, release it, and before too long someone reports a bug. It could be a trivial visual glitch or a serious security error. In retrospect we might even be able to point out how obvious the issue was and kick ourselves for not noticing it. So, does that mean we didn't do QA? Obviously the feature had an issue, so we didn't truly "assure" its quality. What we did do is build up our confidence in the release, a.k.a. our Confidence Quotient.

I don't think there is a north-star formula that will guide this decision for your team. As with everything else in Agile development, it's a contextual judgment that relies on deep knowledge of your team and product. However, there are factors that should be taken into account in the decision making:

  • Impact: Is this a feature that is used regularly, is a driver of revenue, or a major reducer of costs?
  • Time to Fix: If a major bug was found, is it possible to resolve easily? This has to take into account the code complexity, human resources, and the rollout pipeline.
  • Quality of Automated Tests: Are there automated unit, integration, and/or UI tests that are both broad and deep? This is also a measure of history: how many past bugs escaped test automation?
  • Manual Test Depth: Are the manual tests surface-level UI checks, or functional tests that exercise deeper layers such as the database?
  • Cost to Delay: Very similar to Impact but explicitly focused on what happens if the feature isn't rolled out.

With these five factors in mind, consider how many manual QA tests you must run to feel confident enough to deploy, not how many are needed to achieve a mythical 100% confidence across every feature.

Can you isolate some aspects as higher impact, slower to fix, or lacking automated coverage? Are there correlated areas where you'd feel confident that if part ABC is good, then part DEF is also good without needing further checking?
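To make that concrete, here is a minimal sketch of what turning these factors into a Confidence Quotient check might look like. The factor names, weights, and threshold below are illustrative assumptions, not a formula from this article; the value is in the team agreeing on the inputs and the bar, not in the specific numbers.

```python
# Illustrative sketch only: the weights and threshold are assumptions a team
# would tune for itself, not prescribed values.

# Each factor is scored 0.0 (high risk) to 1.0 (low risk) by the team.
FACTOR_WEIGHTS = {
    "impact": 0.30,              # how critical the feature is to users/revenue
    "time_to_fix": 0.20,         # how quickly a bad release could be repaired
    "automated_coverage": 0.25,  # breadth and depth of unit/integration/UI automation
    "manual_depth": 0.15,        # how functional (vs. surface-level) manual tests are
    "cost_to_delay": 0.10,       # pressure to ship now rather than keep testing
}

def confidence_quotient(scores: dict[str, float]) -> float:
    """Weighted average of the team's factor scores, between 0.0 and 1.0."""
    return sum(FACTOR_WEIGHTS[name] * scores[name] for name in FACTOR_WEIGHTS)

def ready_to_deploy(scores: dict[str, float], risk_tolerance: float = 0.75) -> bool:
    """True if the team's confidence meets its agreed risk tolerance."""
    return confidence_quotient(scores) >= risk_tolerance

# Example: strong automation but shallow manual testing on a high-impact feature.
example_scores = {
    "impact": 0.4,
    "time_to_fix": 0.8,
    "automated_coverage": 0.9,
    "manual_depth": 0.5,
    "cost_to_delay": 0.7,
}
print(round(confidence_quotient(example_scores), 2))  # 0.65
print(ready_to_deploy(example_scores))                # False: more targeted testing needed
```

The number itself isn't the point; the conversation about which factors are weak, and whether more manual testing would actually move them, is.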

Top comments (1)

Phil Ashby • Edited

Excellent summary, thank you! If I may, I'd like to add a couple of thoughts:

  • Tests often fall into two useful categories: internal (typically unit, integration) that provide confidence within the team; and external (end-to-end, UI, customer acceptance) that provide confidence to the business. Personally I would focus on external tests, using the feedback from those (eg: fragile areas that suffer more failures over time, how failures couple across services / APIs) to direct the attention of the internal testing, eg: retiring internal tests that never fail or that apply to reliable external services (reducing the cost & cognitive load of maintenance), and adding new tests where services are fragile or over-coupled, until they stabilise.

  • Often a team cannot directly assess the business impact of their services, especially in large organisations, so it can be valuable to introduce consumer contract tests as the minimum set of external tests: the other teams who consume a service provide the tests (for them these are internal tests of an API they depend on), and the service provider team ensure those tests always pass before going to production. This couples teams through minimal interfaces, and shares responsibility appropriately. Note also that the provider team never have to second-guess what matters, particularly when they have several consuming teams, all of whom provide tests for what matters to them, all of which must pass. Indeed the provider team could descope everything they provide that is not covered by a contract test, minimizing unnecessary work (a rough sketch follows below).
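To make that concrete, here is a rough sketch of what a consumer-provided contract test might look like, written as a plain pytest check rather than with a dedicated contract-testing tool such as Pact. The endpoint, fields, and PROVIDER_URL environment variable are hypothetical; the idea is simply that the consuming team asserts only what they actually depend on, and the provider team runs the test in their pipeline before every release.

```python
# Hypothetical consumer contract test: written by the consuming team, run by
# the provider team against a deployed (or locally running) instance of the
# service before release. The endpoint and fields are illustrative only.
import os

import requests

PROVIDER_URL = os.environ.get("PROVIDER_URL", "http://localhost:8080")

def test_order_lookup_contract():
    resp = requests.get(f"{PROVIDER_URL}/orders/123", timeout=5)

    # The consumer depends on a successful JSON response...
    assert resp.status_code == 200
    body = resp.json()

    # ...and on these specific fields and their types. Anything else the
    # provider returns is outside the contract and free to change.
    assert isinstance(body["id"], str)
    assert isinstance(body["status"], str)
    assert isinstance(body["total_cents"], int)
```

Anything the provider ships that no consumer asserts on is, as noted above, a candidate for descoping.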

Hope this was interesting 🙏