DEV Community

ajditto

Pitfalls of QA automation

It’s time to talk about QA automation. At this point it’s more than an industry buzzword; it’s something every company insists it needs. At my level of experience, nearly every professional connection I’ve made has talked to me about automation, whether it’s my own efforts in automation, my current company’s focus on automated QA, or just which test frameworks I’m familiar with.

Yet even with all of this buzz around QA automation and the promises it makes, I’ve seen very few companies that have found a way to rely on their automated tests, and that’s assuming they’ve managed to write any at all.

Why?

The ideas behind automated testing are great, and when it works it’s amazing. More often than not, however, I’ve seen companies that have big dreams and hopes for automation but falter when it comes time to execute on the strategy and absorb the cost of actually automating their testing. Here are some common pitfalls I’ve seen.

It is not a silver bullet

When a company first introduces automated testing, the conversation most often starts with how great it would be to automate boring, expensive regression testing. It’s easy to understand why: the promise of automated testing is that it can do the boring stuff quickly and with fewer errors, and since manual regression testing is expensive and boring, it would be a win for everyone. The problems pile up fast, though, when you actually try to implement this magical fix-all.

Right off the bat there are some big expenses that come into play. Do you hire a senior QA automation engineer who can hit the ground running? That’s a large added expense, but experience usually is. Or do you push your existing team to learn what they need for automation? If you go that route, expect a major drop in your team’s output while they get up to speed. On top of that, learning by doing is a path paved with mistakes, and while that’s not a bad thing, it is, again, going to be expensive.

Then, once you’ve chosen a way forward, tests need to be written, reviewed, and run. Writing is the part pretty much everyone understands, so we’ll skip it for now.

Reviewing tests is a cost that’s less often considered. As a friend of mine likes to say: “Who tests the tests? Who tests the tests that test the tests?” This is, unfortunately, only half as silly as it sounds. Tests need to accurately reflect the desired behavior, which sounds obvious until you find out a test is failing because it relies on an API call that isn’t supported (this is not a made-up scenario; I’ve seen it happen).
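To make “who tests the tests” concrete, here’s a minimal, hypothetical sketch (the function, its bug, and its test are all invented for illustration): a test that “passes” only because its author copied the implementation’s wrong math into the assertion, so a green suite proves nothing.

```python
def apply_discount(price, percent):
    # Bug: divides by 10 instead of 100, so a 10% "discount" wipes out the price.
    return price - price * percent / 10

def test_apply_discount():
    # The test author copied the implementation's math instead of stating
    # the intended behavior, so this assertion passes despite the bug.
    assert apply_discount(100, 10) == 100 - 100 * 10 / 10

test_apply_discount()  # passes, yet apply_discount(100, 10) returns 0, not 90
```

A reviewer comparing the assertion against the intent (10% off 100 should be 90) would catch this; the test runner alone never will. That’s the review cost.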

Then tests have to run. Again, it may seem obvious, but I’ve personally written automated tests only to be told there was no possible way to run them in the current pipeline, and that doing so would require an infrastructure rewrite.

So no, automated testing is not a silver bullet; it’s an expensive endeavor, one that is usually underestimated to the detriment of the team.

Even if all of that gets properly accounted for, another common error is starting in the wrong place.

Ignoring the testing pyramid

For those unfamiliar with the testing pyramid, the idea is that, like a pyramid, each layer gets smaller as you go up. The base (the largest portion) is unit tests; the next layer up is integration tests; and the smallest section, at the top, is UI or end-to-end tests.
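A rough sketch of what lives at each layer (everything here is invented for illustration):

```python
# Base of the pyramid: a unit test. One pure function, no I/O, microseconds to run.
def normalize_email(raw):
    return raw.strip().lower()

assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

# Middle: an integration test. Two components wired together -- the normalizer
# feeding a tiny in-memory repository standing in for a real database.
class UserRepo:
    def __init__(self):
        self._users = set()

    def add(self, email):
        self._users.add(normalize_email(email))

    def exists(self, email):
        return normalize_email(email) in self._users

repo = UserRepo()
repo.add("Alice@Example.com")
assert repo.exists(" alice@example.COM ")

# Tip: a UI/end-to-end test would drive a real browser against a running app.
# It's orders of magnitude slower and more environment-dependent, which is
# exactly why the pyramid keeps that layer the smallest.
```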

Following the logic of the pyramid, it would seem folly to invest in UI testing until a solid base of unit and integration tests has been built. Unfortunately, many companies that decide to automate their QA make the mistake of jumping with both feet into the UI test automation pool without building a foundation to stand on.

It’s an understandable mistake to make. UI tests are flashy and can be shown off at demo day. Modern UI testing frameworks are relatively easy to pick up and learn, meaning manual QA professionals get an easier introduction to automation and its potential power. What every engineer quickly learns, though, is that those easy-to-write tests are flaky and fail often. Which leads to the next point.
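Much of that flakiness comes down to timing: the test inspects the page before it’s ready. Here is a pure-Python sketch of the usual fix, polling with a timeout, which is the idea behind the explicit waits UI frameworks provide (e.g. Selenium’s `WebDriverWait`); the page load below is simulated, and `wait_until` is a hypothetical helper.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate an asynchronously loading page: the element "appears" after 0.2s.
start = time.monotonic()
element_present = lambda: time.monotonic() - start > 0.2

naive_check = element_present()              # checks immediately: flaky failure
patient_check = wait_until(element_present)  # polls until ready: passes
```

The naive check fails purely because it ran too early; with a bounded poll, the same condition passes reliably without resorting to fixed `sleep()` calls.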

Tests as a standard

To illustrate my point, allow me to share another experience. When I was working as an automated QA engineer, my team realized that one of our API endpoints needed a total refactor. It was an endpoint that handled far too many things, and over the years it had become so complicated that nobody felt safe making any changes to the code.

As a team we cataloged every action the endpoint handled, including errors. Then I started writing automated tests for each and every one of those actions. We agreed that once the tests were written, we could begin the effort of refactoring the endpoint, knowing that if all the tests passed on the other side, we could release with confidence.
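That approach can be sketched in miniature (the endpoint, actions, and handlers below are invented stand-ins, not the real system): pin down the legacy behavior action by action, then refactor against that safety net.

```python
def legacy_handle(action, payload):
    # The tangled legacy endpoint: every action crammed into one function.
    if action == "create":
        return {"status": 201, "id": payload["name"]}
    elif action == "delete":
        return {"status": 204}
    else:
        return {"status": 400, "error": "unknown action"}

def refactored_handle(action, payload):
    # The refactor: a dispatch table replaces the if/elif chain.
    handlers = {
        "create": lambda p: {"status": 201, "id": p["name"]},
        "delete": lambda p: {"status": 204},
    }
    handler = handlers.get(action)
    return handler(payload) if handler else {"status": 400, "error": "unknown action"}

# Characterization tests: one case per cataloged action, error paths included.
CASES = [("create", {"name": "widget"}), ("delete", {}), ("bogus", {})]
for action, payload in CASES:
    assert refactored_handle(action, payload) == legacy_handle(action, payload)
```

If every case passes, the refactor preserved behavior; if one fails, the refactor is wrong, not the test.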

The main takeaway from this scenario is that when tests failed, nobody tried to blame the tests. They became the standard, and anything that didn’t pass the standard was incorrect.

Implementing automated tests that instill that level of confidence is very difficult, but it is supremely important if automated tests are actually going to speed up development. You can’t go fast if, every time a test breaks, everybody has to stop and figure out whether it’s the test that’s broken or the code. In the end, that’s just manual testing with extra steps, and wasn’t the point of test automation to remove the need for manual testing?

What are your thoughts? Share them below!
