
Beekey Cheung

Posted on • Originally published at blog.professorbeekums.com

Why Are Tests Not Written?

It is hard to argue against writing tests. They save a huge amount of time relative to the effort it takes to write them. The increased quality of the product also makes a good impression on users. Yet all too often, tests are not written. Even for advocates of automated testing like myself, there’s always a temptation to skip writing tests “just this once” or to decide “this feature doesn’t need tests”.

Why is this? With such clear rewards, why do excuses come so easily?

There are probably too many possible answers for one blog post, but I think I have a handle on some of the more common ones.

The most obvious one is that tests don’t contribute directly to working software. Users don’t care whether you have tests or not. They care that your product works. While tests help that goal in the long run, if you can see your product working, it is easy to say it is done.

There are process hacks that try to make writing tests part of the “definition of done”. Yet when push comes to shove and a deadline has to be met, the first thing to get cut is the tests. Deadlines are for getting working software into the hands of users, who, once again, don’t care if you have tests.

And we like to say “oh we’ll write the tests after the release.” Problem is, many features also get cut to make deadlines. What gets worked on after the release? Features. Not tests. The tests just keep getting pushed.

Another problem is that there is a big delay in gratification when it comes to automated testing. It isn’t enough to just write a test and have it catch a bug later. There needs to be a critical mass of tests for you to feel comfortable enough to not need to manually test every possible regression. If you only have a handful of tests when there are dozens of cases, then the tests don’t provide that confidence. You still need to do a lot of manual testing and the tests have saved you very little time. Having a few tests is actually pretty close to having no tests. Unfortunately, it takes time to build enough tests to hit a tipping point where you can trust your test suite.

Along similar lines, the hardest part about writing tests is starting. Once someone has established the initial test suite with the patterns used for writing tests, adding another test is fairly painless. Creating all of that in the first place requires a large amount of effort. It takes time to figure out what testing strategy will actually provide a high degree of confidence. What should be in unit tests? What should be in integration tests? Should we have separate front end and back end integration tests? Do we go with a fuller testing suite using Selenium? How do we go about seeding the database for those kinds of tests?
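To make that last question concrete, here is a minimal sketch in Python with pytest (the table, data, and fixture names are my own illustration, not from any real project) of why the first test carries most of the cost: a shared fixture answers the “how do we seed data?” question once, and every test after it is only a few lines.

```python
import sqlite3
import pytest


@pytest.fixture
def seeded_db():
    # The one-time pattern: build an in-memory database and seed it
    # with known rows. This is the part that takes effort to establish.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany(
        "INSERT INTO users (email) VALUES (?)",
        [("alice@example.com",), ("bob@example.com",)],
    )
    conn.commit()
    yield conn
    conn.close()


def test_lookup_existing_user(seeded_db):
    # Once the fixture exists, each new regression test is this small.
    row = seeded_db.execute(
        "SELECT email FROM users WHERE email = ?", ("alice@example.com",)
    ).fetchone()
    assert row == ("alice@example.com",)


def test_unknown_user_returns_nothing(seeded_db):
    row = seeded_db.execute(
        "SELECT email FROM users WHERE email = ?", ("carol@example.com",)
    ).fetchone()
    assert row is None
```

Run it with `pytest`, and adding the next regression test is mostly a matter of copying one of those small functions.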

All of those questions are much easier to resolve with someone who has answered them before. But not every team has access to someone with this knowledge. And even with this knowledge, all software systems have their own quirks and require some tweaking to testing strategies. Figuring that out takes a bunch of time upfront and increases the delay in gratification.

There isn’t an easy answer to overcoming these problems. At some point it comes down to a matter of will: you have to want tests badly enough to make time for them. Maleega’s initial MVP was built with zero tests, as many MVPs probably are. As much as I wanted to keep building features, I knew I had to bite the bullet at some point and take a couple of weeks to get tests in place. Having seen the benefits at previous companies definitely helped overcome the curse of delayed gratification.

Top comments (1)

Beekey Cheung

I'm embarrassed to say that I was like that in the past.