Mark Walsh

All About Automated Tests - Part 1 - Excuses

TL;WR

The year is 2021, and yet somehow there is still an ungodly number of people not(!) writing tests for production code and, even worse, trying to excuse themselves for it. This article will attempt to dispel some common excuses I've heard in the wild. I am labelling these as excuses because I believe automated tests to be a hard requirement for production-grade code.

Excuse 1 - 🗣️ "Tests will make X story/feature/defect longer to complete"

In terms of implementation-to-deployment time, this statement is probably true...however, writing software isn't like making burger patties; the product lifecycle doesn't finish when it's consumed - software is never "complete". For all code, you take on a debt as soon as that story/feature/defect is implemented - if you have untested production code, you not only have to maintain that debt but also incur the debt of protecting that code from ever breaking its own or other related functionality.

For argument's sake, take a simple scenario: an estimated dispatch date endpoint which accepts a product identifier and generates an estimated dispatch date. Let's also pretend there's quite a bit of logic around several remote data sources - perhaps you query a distribution API and some supplier APIs, and then you perform some complex date-related calculations. Code like this will commonly require changes - perhaps business logic changes, or perhaps there are other suppliers to add later on.
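
To make that concrete, here's a minimal sketch of how that scenario could be made testable. All the names (SupplierApi, estimateDispatchDate and so on) are invented for illustration, and the tests use Jest syntax - it's a sketch of the approach, not a definitive implementation:

```typescript
import { it, expect } from "@jest/globals";

// Hypothetical supplier lookup, hidden behind an interface so the
// remote call can be stubbed out in tests.
interface SupplierApi {
  leadTimeDays(productId: string): Promise<number>;
}

// The date logic under test: dispatch is driven by the slowest supplier.
async function estimateDispatchDate(
  productId: string,
  orderDate: Date,
  suppliers: SupplierApi[]
): Promise<Date> {
  const leadTimes = await Promise.all(
    suppliers.map((s) => s.leadTimeDays(productId))
  );
  const dispatch = new Date(orderDate);
  dispatch.setUTCDate(dispatch.getUTCDate() + Math.max(...leadTimes));
  return dispatch;
}

// With the remote calls stubbed, the business logic tests are cheap to
// write now and cheap to extend when the next supplier is added.
it("uses the slowest supplier's lead time", async () => {
  const fast: SupplierApi = { leadTimeDays: async () => 2 };
  const slow: SupplierApi = { leadTimeDays: async () => 7 };

  const dispatch = await estimateDispatchDate(
    "product-42",
    new Date("2021-06-01"),
    [fast, slow]
  );

  expect(dispatch).toEqual(new Date("2021-06-08"));
});
```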

The cumulative effort of maintenance - e.g. bug fixes or changes to business logic - will cost you more than if you had just written tests originally, and it will also take you longer to make future changes. Obviously there are also the added benefits of the tests accurately describing the intent of the code and preventing you from breaking related functionality, but that's a whole other article's worth of testing benefits.

Excuse 2 - 🗣️ "It's only a small change, we don't need a test for this"

If you are making changes to production code which will be executed, you should absolutely have automated tests for it - especially if you are making a Super Small Fix™ (and we all know how that typically goes). A Super Small Fix™ will surely only require a Super Small Test™, so there shouldn't be a problem, should there? Ironically, in systems with few or no tests, you typically find fixes being deployed more regularly, which leads to yet more dangerous Super Small Fixes™, and so it snowballs and snowballs until you find yourself dealing with an ever-growing pile of bugs (a typical sign of this is having meetings specifically about bugs) instead of delivering value to your stakeholders.

I've seen it countless times in systems with few or no tests: a fix is deployed, then another fix to patch over a regression, then another fix to patch something else which broke because of the previous fix. The very least you can do is construct a test to signal the intent of the change and protect it from regression.
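
As a sketch of what that "very least" can look like (the discount rule here is invented): suppose the Super Small Fix™ was a one-line boundary change. The matching Super Small Test™ pins the fix so the next small fix can't silently undo it:

```typescript
import { it, expect } from "@jest/globals";

// Hypothetical one-line fix: the VIP discount should kick in at exactly
// 100, not only above it (>= instead of >).
function applyDiscount(total: number, isVip: boolean): number {
  return isVip && total >= 100 ? total * 0.9 : total;
}

// The Super Small Test™: documents the intent of the fix and guards the
// boundary against the next regression.
it("applies the VIP discount at the 100 boundary", () => {
  expect(applyDiscount(100, true)).toBe(90);
});

it("leaves non-VIP totals untouched", () => {
  expect(applyDiscount(100, false)).toBe(100);
});
```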

Excuse 3 - 🗣️ "We only need to test the critical areas of the system"

I do think that, if you are adding automated tests to an existing production system, the critical areas should be addressed first. However, a customer's interaction with your software doesn't just fall under the remit of what you consider critical. I am of the opinion that every executed piece of code which modifies state is a candidate for at least some sort of test, even internal tools. Basically - anything you are executing which could impact your customers or stakeholders should be tested.

When it comes to introducing tests to untested codebases, I tend to adopt a touch-and-test approach: you add tests to that which you touch, and only that. I have previously made the mistake of spending a lot of time writing tests for code which I just ended up replacing wholesale with fully tested code. To remain pragmatic and productive, you have to assume, unless told otherwise, that the current code is correct, and work your tests in around the change.
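
In practice, touch-and-test often starts with a characterisation test: pin what the code does today, then build your change on top of that safety net. A minimal sketch (the names below are invented):

```typescript
import { it, expect } from "@jest/globals";

// Legacy code you are about to touch; assumed correct unless told otherwise.
function formatOrderReference(orderId: number, region: string): string {
  return `${region.toUpperCase()}-${orderId.toString().padStart(6, "0")}`;
}

// Step 1: capture the current behaviour without judging it. If this test
// ever fails, your change broke behaviour you didn't mean to touch.
it("keeps the existing order reference format", () => {
  expect(formatOrderReference(42, "uk")).toBe("UK-000042");
});

// Step 2: only now add tests for the behaviour you were asked to change.
```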

Excuse 4 - 🗣️ "It's not our job, we have testers/QA people"

Typically this is uttered by what I like to call "coaster developers" (a breed which really deserves its own article). Ensuring the correctness of the application is the responsibility of the entire development team, which includes testers and product. Everyone should pull their weight: product in terms of identifying criteria and backlog refinement, testers with their manual/automated tests, and developers by writing tests for their own code.

It's a sad reality that there are many developers with slanty-responsibility-shoulders in the industry who will perform acrobatic gymnastics in order to abstain from any responsibility bar "coding". This self-pigeonholing is what separates mediocre developers from truly great ones. Bashing out code as if you are boxing orders for Amazon will only get you so far in your career, and will only enhance the product to a point. Learn to appreciate that you and your stakeholders all share a common goal.

Excuse 5 - 🗣️ "Frontend code doesn't need to be tested"

This one mostly relates to Single Page Applications. If you've said it whilst writing or maintaining an SPA, congratulations - you've just created the world's first SPA without any logic (and in that case, you probably don't need an SPA). That being said, even if you have somehow managed to create an SPA without any logic, you still have to test against visual regression bugs (tools like Percy can help with this).

If there's logic in place which is important to your customers and stakeholders (whether they know about it or not), it should be tested.
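
As a sketch of what that can look like (the basket rule is invented): pull the logic out of the component and it becomes trivially testable, with no rendering required:

```typescript
import { it, expect } from "@jest/globals";

interface BasketItem {
  price: number;
  quantity: number;
}

// Hypothetical frontend rule: the free-delivery banner shows once the
// basket total reaches 50. Extracted from the component, it's plain logic.
function qualifiesForFreeDelivery(items: BasketItem[]): boolean {
  const total = items.reduce((sum, i) => sum + i.price * i.quantity, 0);
  return total >= 50;
}

it("offers free delivery exactly at the 50 threshold", () => {
  expect(qualifiesForFreeDelivery([{ price: 25, quantity: 2 }])).toBe(true);
  expect(qualifiesForFreeDelivery([{ price: 25, quantity: 1 }])).toBe(false);
});
```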

Excuse 6 - 🗣️ "That means we will have to write tests for everything we do! I can't be bothered"

This is a fallacy. I can think of four perfectly reasonable scenarios off the top of my head (and there are probably more) in which I don't think tests are required:

  1. Non-production code e.g. Spikes/Research Stories/Proof of Concepts/Anything you will not deploy to production
  2. Internal tools (which don't mutate state and which you don't base business decisions on) - I am thinking of monitoring dashboards and any internally surfaced logs
  3. In general, code covering language/framework-specific functionality - e.g. you wouldn't write a test to ensure that slice (JavaScript) actually returns the expected sub-array or that a List<string> (C#) enumerates correctly (see the sketch after this list)
  4. Dependencies which have their own tests - I wouldn't unit test a NuGet package I'd installed if it had its own suite of tests providing proper coverage
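
To illustrate the line drawn in point 3 (the audit-log example is invented): you don't re-test the language, but you do test your own logic that happens to use it:

```typescript
import { it, expect } from "@jest/globals";

// Adds no value - this would only prove the JavaScript runtime works:
//   expect([1, 2, 3].slice(0, 2)).toEqual([1, 2]);

// Worth testing - your own rule, even though slice does the heavy lifting:
function latestAuditEntries(entries: string[], limit: number): string[] {
  return entries.slice(-limit).reverse();
}

it("returns the newest entries first, capped at the limit", () => {
  expect(latestAuditEntries(["a", "b", "c", "d"], 2)).toEqual(["d", "c"]);
});
```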

Don't test things that don't add value to stakeholders.

Excuse 7 - 🗣️ "Customers don't care if we write tests or not, they just care about the product"

It's true that customers would never usually say they care about whether you write tests - and they should never need to be exposed to that information. They do, however, care about the quality and stability of the product they are using - and without tests, every change you make is compromising both.

Summary

These are just a selection of the most common excuses I've heard from various people along my journey. There are many others I've heard along the way, some of them too ridiculous to even note. Please note that I've specifically not mentioned the types of tests - this is intentional, so as not to provoke any discussions around the usefulness of unit, integration and acceptance tests.
