DEV Community

Saša Zejnilović

5 things to watch out for in automated regression tests

What are regression tests? (in short)

You design regression tests to detect issues that might come as side-effects of implementing other features you've already tested.

The problems

The biggest problems facing regression tests are:

1. Changes in output formats

The most common problem. The changes may be so minor that a manual tester would barely notice them. Automated tests, however, are sensitive and brittle, unable to differentiate between improvements and bugs. The whole suite might have to be updated just because we changed metadata from some Map[String, String] to Map[String, Any] to accommodate all formats.
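One common mitigation is to assert only on the fields a test actually cares about, instead of comparing the whole serialized output. A minimal sketch in Python (the helper and the field names are hypothetical, not from any particular framework):

```python
def assert_fields(actual: dict, expected: dict) -> None:
    """Compare only the keys named in `expected`; ignore everything else.

    Extra keys, or a metadata value changing type (str-only to arbitrary),
    no longer break the test as long as the asserted fields still match.
    """
    for key, value in expected.items():
        assert key in actual, f"missing field: {key}"
        assert actual[key] == value, f"{key}: {actual[key]!r} != {value!r}"

# A response whose metadata format changed does not break the assertion:
response = {"status": "ok", "items": 3, "metadata": {"retries": 2}}
assert_fields(response, {"status": "ok", "items": 3})
```

The trade-off is that genuinely wrong changes outside the asserted fields also go unnoticed, so the asserted set has to be chosen deliberately.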

2. Designed-in assumptions about the test environment

Test suites may break when moved to a different environment or when the configuration changes, because they rarely have full control over the environment they run in.
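One way to reduce these built-in assumptions is to resolve environment-specific settings at runtime rather than hardcoding them into the tests. A small sketch in Python; the variable names (`TEST_BASE_URL`, `TEST_TIMEOUT_SECONDS`) are made up for illustration:

```python
import os

# Environment-specific settings come from the environment, with sane
# local defaults, instead of being baked into every test file.
BASE_URL = os.environ.get("TEST_BASE_URL", "http://localhost:8080")
TIMEOUT = float(os.environ.get("TEST_TIMEOUT_SECONDS", "5"))

def api_url(path: str) -> str:
    """Build a request URL against whatever environment the suite runs in."""
    return f"{BASE_URL.rstrip('/')}/{path.lstrip('/')}"

# The same test code now runs unchanged against local, staging, or CI.
print(api_url("/health"))
```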

3. Errors in maintenance

The people maintaining automated tests make mistakes while repairing them, introducing bugs into the test suites themselves. Regression test suites then develop regression bugs of their own, which may only show up after some time.
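A classic form of this mistake is "repairing" a failing test by pasting the current actual output into the expectation without checking it. A contrived Python illustration (the function and numbers are invented):

```python
def vat_inclusive(price: float) -> float:
    # A bug slipped into the product: the rate was accidentally
    # changed from 1.20 to 1.25.
    return round(price * 1.25, 2)

# The test started failing after the change above. The maintainer "fixed"
# it by copying the new actual value into the expectation, so the suite
# now protects the bug instead of catching it:
assert vat_inclusive(100.0) == 125.0  # was 120.0 before the "repair"
```

This is why test repairs deserve the same scrutiny as product changes: a green suite only means the tests agree with the code, not that either is correct.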

4. Changing operators

Test suites may require skilled, knowledgeable people to run and maintain them, and people change positions and jobs. If person X disables some test and then is let go, person Y just keeps running the suite, unaware there might be a problem.
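One defence is to make disabled tests loud rather than silent: require every skip to carry an owner and a tracking ticket, and fail the build if the pile grows too large. A hypothetical sketch in Python (the registry shape and the budget are assumptions, not any real tool's API):

```python
# Every disabled test must carry an owner and a ticket, so the knowledge
# survives when the person who disabled it leaves.
DISABLED_TESTS = {
    "test_checkout_flow": {"owner": "x@example.com", "ticket": "QA-1234"},
}

MAX_DISABLED = 10  # budget: fail the build if skips pile up past this

def audit_disabled(disabled: dict, limit: int) -> None:
    """Fail loudly when disabled tests are untracked or over budget."""
    assert len(disabled) <= limit, (
        f"{len(disabled)} disabled tests exceed the budget of {limit}"
    )
    for name, info in disabled.items():
        assert info.get("ticket"), f"{name} is disabled without a ticket"
        assert info.get("owner"), f"{name} is disabled without an owner"

audit_disabled(DISABLED_TESTS, MAX_DISABLED)
```

Run as part of CI, this turns a quietly disabled test into a visible, owned piece of work.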

5. Not treating your tests like any other codebase

Very often, and not only in regression testing, people treat their tests the way they treat their documentation: as an afterthought. Tests should be treated like any other codebase, with standards and design principles applied. You stop just shy of writing tests for your tests; in practice, the two should work in unison. You can view your test code as testing your product code and vice versa.
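Applying ordinary design principles to test code often means as little as replacing copy-pasted test data with one shared factory, so a schema change is fixed in one place instead of hundreds. A small Python sketch with invented domain names:

```python
# One shared factory instead of copy-pasted literals in every test file.
def make_order(total: int = 100, currency: str = "EUR") -> dict:
    """Build a valid order; when the schema changes, only this changes."""
    return {"total": total, "currency": currency, "status": "new"}

def apply_discount(order: dict, percent: int) -> dict:
    """Product code under test (stand-in for illustration)."""
    return {**order, "total": order["total"] * (100 - percent) // 100}

def test_discount_applied():
    order = make_order(total=200)
    assert apply_discount(order, 10)["total"] == 180

test_discount_applied()
```

The same DRY, naming, and review standards that keep product code maintainable keep the regression suite maintainable too.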


In conclusion, people tend to invest heavily in regression automation. Sadly, they often find that the tests stop working sooner rather than later: they fall out of sync with the product, demand constant repair, and no longer help find bugs. Testers respond by updating the tests or just adding new ones, ending up with 5,000-6,000 test cases that nobody understands anymore, while everyone just prays they are OK. I know a company where two full-time SDETs were needed just to patch the suite every day and add more tests, with hundreds of tests disabled because nobody had time for them.

Of all of these, uncontrolled maintenance cost is probably the most common outcome. It often leads companies to "forget" the regression suite, and regression testing altogether, rather than repair it.
