Gregory Paciga

Originally published at gerg.dev

Three ways that “Manual Testing” is a waste*

* when done at the wrong time, and depending on how you define “manual testing”.

Callum Akehurst-Ryan had a post recently that broke down much better ways of thinking about manual testing, but I’m using the term here the way it’s used at my current company and presumably many others: scripted tests that have to be run manually to confirm they pass before a feature can be considered done. But that’s not the whole context: let’s first consider how manual and automated testing relate to each other.

Context: models for when automation happens

I noticed last year, while conducting dozens of interviews for test automation specialists (we call them “QA Engineers”; you might call them SDETs), that people seemed to fall pretty cleanly into one of two camps:

  1. You have to manually test something first so that you know it works before you can automate it. Typically, people in this camp will only tackle test automation after a feature is done and “stable”, which usually means that the tests are automated one sprint after development and manual testing are done.
  2. Automation (and indeed, testing as a whole) should be treated as part of development itself, so a feature cannot be considered done until the automated tests for it are implemented.

Colloquially, people refer to these as “N+1 automation” and “in-sprint automation”, respectively. It’s certainly possible for the first mode to take place entirely within one sprint, but it’s usually unrealistic to simply condense the timeline that much. (People against “in-sprint” often mistakenly think that is the only difference and therefore throw it out as impossible for just that reason.) The difference really comes down to the order in which things occur:


N+1 automation: a feature is considered done after all manual tests pass, and only then are automated tests implemented, without fear that the feature will change. Usually, this happens in a new sprint.

In-sprint automation: manual tests cover what automation can’t, and the feature isn’t considered done until both are complete. The steps don’t have to happen serially, as long as all three (development, automated tests, and manual tests) are part of the definition of done.

The reason that the second model looks to some like doing two sprints’ worth of work in one is that N+1 automation is inherently wasteful.

Three wastes of manually testing before automating

(You’re now free to drag me in the comments for putting the onus of waste entirely on manually testing in the post title.)

1. The automated tests will always be out of date

Consider what happens when all manual tests pass, so the feature is marked done and deployed one afternoon. That night, your automated tests run against this new code. The next morning, when you come in to review the results, what do you find?

Any automated tests that depended on the way your code behaved before this new change will have failed. Your tests are out of date. Can you rely on the results of your tests to tell you anything about the health of the system now? Are you sure that the tests that passed should have passed with the new code? These automated tests aren’t any good for regression anymore. They may be better than nothing, but the results are a lot harder to interpret.
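To make that concrete, here’s a minimal sketch in Python. The `price_with_discount` function and the discount rule are hypothetical stand-ins for whatever your feature does, not anything from a real codebase:

```python
# price.py -- hypothetical feature code. Last sprint, orders over $100 got
# a 10% discount; this sprint's change (manually tested and marked "done")
# bumped the discount to 15%.
def price_with_discount(subtotal: float) -> float:
    if subtotal > 100:
        return round(subtotal * 0.85, 2)  # the new 15% rule
    return subtotal


# test_price.py -- automated a sprint ago, against the old 10% rule.
def test_discount_applied():
    # Expected 108.00 under the old rule; the code now correctly returns
    # 102.00. The product works as intended, but the test fails anyway.
    assert price_with_discount(120.00) == 108.00
```

The red result tells you nothing about the health of the system; it only tells you that the test’s expectations are a sprint out of date.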

Since it may take a sprint of work to update all those tests based on a sprint’s worth of dev work, you can expect your tests to be broken for a full sprint on average. Then, of course, the cycle repeats because in that time a new batch of dev work is done.

Congratulations, you’re now spending the rest of your life updating “broken” tests and will never expect to see your automated test suite pass in full again. I’m not exaggerating here — I’ve talked to people who literally expect 20-50% of their test suite to be failing at all times.

2. You’re doing something manually that could have been automated

Since you’re comfortable with calling the feature done after manually testing it, you must have manually tested every scenario that was important to you. These are likely going to be the first things that are worth automating, so you don’t have to manually test them again. But then, why didn’t you just automate it in the first place? If you claim that writing and running a manual test is faster than automating it (a dubious claim to begin with), how many times did you repeat that manual test during the development sprint when something changed? How much extra overhead is it going to be to figure out which manual tests have to be repeated after automation is in place and which don’t?

You’re testing something with automation that you already tested manually. You’re wasting time testing it again. You’ll get the benefit of being able to re-run it or include it in regressions going forward, but you’d have had that from automating it up-front as well. You’ve only just delayed that benefit by a full sprint and gotten nothing in return.
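For illustration, sticking with the same hypothetical discount feature: the scenarios you checked by hand could have been captured as a parametrized test during development, re-runnable every time something changed instead of re-clicked:

```python
import pytest

from price import price_with_discount  # the hypothetical feature above


# Every scenario you'd otherwise re-test by hand after each change,
# captured once and re-run for free from then on.
@pytest.mark.parametrize("subtotal, expected", [
    (50.00, 50.00),    # below the threshold: no discount
    (100.00, 100.00),  # at the boundary: still no discount
    (120.00, 102.00),  # above the threshold: 15% off
])
def test_discount_scenarios(subtotal, expected):
    assert price_with_discount(subtotal) == expected
```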

3. Everything is more expensive

By the time you get around to automating a feature from last sprint, your developers have moved on to something else. When you inevitably need help, need some behaviour clarified, or even find a problem that manual tests missed, the context switch required for the rest of your team is much larger than if that story was still part of the current sprint. People have to drop work they committed to in the current sprint to remember what happened last sprint. Often the “completed” feature has already gone to production and the cost of fixing new issues has also gone way up. Something that might have taken a 5-minute back-and-forth with a developer to fix while the feature was in progress will now require a lot more effort. Easy but low-priority fixes will be dropped as a result, and your overall quality suffers.

One answer to this may just be that you should have caught it while manually testing. And that may be true, but you didn’t, so now what? Automated and manual tests have different strengths and will always raise different bugs. The value you get from automation is now much lower because even if you find all the same bugs as if you had automated earlier, the cost to do anything about them is higher.

Automate what should be automated

In my experience, these forms of waste far outweigh any benefit you get from only automating “stable” features. Instead of updating and re-running automated tests when code changes, you’re performing the same manual tests multiple times. Instead of getting the confidence of a reliably passing test suite, your tests are always failing and require extra analysis time as a result. Instead of working with developers and improving quality, automation becomes a drag against the team’s dev productivity.

By automating tests first, as part of the development work, manual test effort can focus on the sorts of tests that humans are good at. Automate 100% of the tests that should be automated, as Alan Page might say, rather than wasting time pretending that you’re a better robot than a robot is. You’ll get more value from both your automated tests and your manual efforts that way.

If you’re worried that you won’t have eyes on every path if some are automated up-front, remember that most automation still requires stepping through the application and observing its behaviour while you write the code to execute those same interactions. A good automator will also watch each new test and make sure it can fail, so writing automation gives you manual—or rather, human—coverage too.
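One cheap way to do that “can it fail” check, again sketched against the hypothetical test from earlier: break the expectation (or the code) on purpose once, watch the test go red, then revert.

```python
def test_discount_applied():
    result = price_with_discount(120.00)
    # Authoring-time sanity check (revert before committing): temporarily
    # change the expected value to something wrong, e.g. 999.00, run the
    # suite, and confirm this test actually fails. A test that can never
    # fail isn't testing anything.
    assert result == 102.00
```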

The change in sequence alone may not be enough to get you all the way to in-sprint automation, but you should see it as much more attainable without the extra waste holding you back.
