roesslerj

Test Automation is not Automated Testing

Testing as a craft is a highly complex endeavour, an interactive cognitive process. Humans are able to evaluate hundreds of problem patterns, some of which can only be specified in purely subjective terms. Many others are complex, ambiguous, and volatile. Therefore, we can only automate a very narrow spectrum of testing, such as searching for technical bugs (e.g. crashes).

More importantly, testing is not only about finding bugs. As the Testing Manifesto from Growing Agile summarises vividly and to the point, testing is about understanding the product and the problem(s) it tries to solve, and about finding areas where the product or the underlying process can be improved. It is about preventing bugs rather than finding them, and about building the best possible system by iteratively questioning each and every aspect and underlying assumption, rather than breaking the system. A good tester is a highly skilled professional, constantly communicating with customers, stakeholders, and developers. So talking about automated testing is absurd to the point of being comical.

Test automation, on the other hand, is the automated execution of predefined tests. A test in this context is a sequence of predefined actions interspersed with evaluations, which James Bach calls checks. These checks are manually defined algorithmic decision rules that are evaluated at specific, predefined observation points of a software product. And herein lies the problem. If, for instance, you define an automated test of a website, you might define a check that verifies a specific text (e.g. the headline) is shown on that website. When executing that test, this is exactly what is checked, and only this. So if your website looks like the one shown in the picture, your test still passes, making you think everything is ok.

[Image: broken website]

A human, on the other hand, recognises at a single glance that something has gone awry.
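To make the limitation concrete, here is a minimal sketch of such a check using Selenium in Python. The URL and the expected headline text are illustrative assumptions, not taken from the article:

```python
# A minimal sketch of an automated "check" (the URL and the expected
# headline are placeholder assumptions for illustration).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    # The check evaluates exactly one observation point: the headline text.
    headline = driver.find_element(By.TAG_NAME, "h1")
    # This assertion passes even if the page layout is completely broken,
    # as long as the headline text itself is still rendered.
    assert headline.text == "Example Domain"
finally:
    driver.quit()
```

The assertion inspects a single, predefined observation point. Everything that is not explicitly checked, such as layout, images, or styling, can break without failing the test.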

But if test automation is so limited, why do we do it in the first place? Because we have to; there is simply no other way. Development work adds up, testing effort doesn't. Each iteration and release adds new features to the software (or so it should). These need to be tested, manually. But new features also usually cause changes in the software that can break existing functionality. So existing functionality has to be tested, too. Ideally, you even want existing functionality to be tested continuously, so that you quickly recognise when changes break existing functionality and need rework. But even if you only test before releases, in a team with a fixed number of developers and testers, the testers are bound to fall behind over time. This is why, at some point, testing has to be automated.

[Image: the work of developers adds up; the work of testers does not]

Considering all of its shortcomings, we are lucky that testing existing functionality isn't really testing. As we said before, real testing means questioning each and every aspect and underlying assumption of the product. Existing functionality has already endured that sort of testing. Although it might be necessary to re-evaluate assumptions that were considered valid at the time of testing, this is typically not necessary before every release, and certainly not continuously. Testing existing functionality is not really testing. It is called regression testing, and although it sounds similar, regression testing is to testing as "pet" is to "carpet": not at all related. The goal of regression testing is merely to recheck that existing functionality still works as it did at the time of the actual testing. So regression testing is about controlling changes in the behaviour of the software. In that regard, it has more to do with version control than with testing. In fact, one could say that regression testing is the missing link between controlling changes to the static properties of the software (configuration and code) and controlling changes to its dynamic properties (the look and behaviour). Automated tests simply pin those dynamic properties down and transform them into a static artefact (e.g. a test script), which in turn can be governed by current version control systems.
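One way to picture this "pinning down" is golden-master (snapshot) testing. The sketch below is an illustration under assumed names (the snapshot path and the check function are hypothetical), not the author's tooling:

```python
# A minimal golden-master (snapshot) sketch: the snapshot file is committed
# to version control, so a change in the software's dynamic behaviour shows
# up as a diff in the repository.
import json
from pathlib import Path

SNAPSHOT = Path("snapshots/report.json")  # hypothetical artefact path

def check_against_snapshot(observed: dict) -> None:
    if not SNAPSHOT.exists():
        # First run: pin the observed dynamic behaviour down as a static artefact.
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(json.dumps(observed, indent=2, sort_keys=True))
        return
    recorded = json.loads(SNAPSHOT.read_text())
    # Any deviation from the recorded behaviour is flagged, much like a
    # change to configuration or code would be flagged by version control.
    assert observed == recorded, "behaviour changed relative to the recorded snapshot"
```

On the first run, `check_against_snapshot({"total": 42})` records the behaviour; every subsequent run rechecks it. An intentional change is accepted by updating the committed snapshot, just like committing a code change.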

This sort of testing (I’d rather call it checking) can be automated. And it should be automated for several reasons:

  1. In the long run, it is cheaper to automate it.
  2. It can be done continuously, giving you faster feedback on whether a change has broken existing functionality.
  3. As the software grows, your testers will not be able to perform it to the full extent necessary anymore, because development adds up—testing doesn’t.
  4. It is a trivial task, yet so repetitive that it is boring and exhausting; it insults the intelligence and abilities of any decent tester and keeps them from their actual work.
  5. Worse yet, testing the same functionality over and over again makes testers routine-blinded and makes them lose their ability to question assumptions and spot potential improvements.

Test automation is an important part of overall quality control, but since it is not really testing, the term “automated testing” is very misleading and should be avoided. This also emphasises that test automation and manual testing complement each other; they do not replace each other.

Many people have tried to make this point in different ways (e.g. this is also the quintessence of the discussion about testing vs. checking, started by James Bach and Michael Bolton). But the emotionally loaded discussions (because it is about people's self-image and their jobs) often split discussants into two broad camps: those who think test automation is “snake oil” and should be used sparingly and with caution, and those who think it is a silver bullet and the solution to all of our quality problems. Test automation is an indispensable tool of today's quality assurance, but like every tool, it can also be misused.

TL;DR: Testing is a sophisticated task that requires a broad set of skills and, with the means currently available, cannot be automated. What can (and should) be automated is regression testing. This is what we usually refer to when we say test automation. Regression testing is not testing, but merely rechecking existing functionality. So regression testing is more like version control of the dynamic properties of the software.

Top comments (3)

Erich Zimmerman

While I tend to agree that automated testing is in fact "checking," I think there is an element of getting hung up on words here. Glenford Myers' original book and the ISTQB certification people (whom we may or may not value) describe software testing as an action whose purpose is to find software defects.

If you broaden testing to more a scientific definition: executing a hypothesis regarding the expected behavior of the software, then regression testing is the act of verifying whether older hypotheses are still valid. It's only regression if we assume that all prior hypotheses are true, but regression can also tell you that former expectations are no longer valid because of recent changes.

roesslerj

I agree, but I see this seldom executed like that in practice...

Jesse Phillips

While a broader definition can be used to incorporate all of this "testing," I think that the distinction Bach is making is an important one.