Testing is one of those topics that gets talked about a lot at conferences. There are books, podcasts, and blog posts: a myriad of sources from which you can learn about it. Yet people still seem to struggle. In this article I am going to tackle the problems we face as software developers when testing our software.
There are companies where people in charge claim that there is no money for writing tests and that customers pay for features, not for tests.
I think this belief comes from a misunderstanding of the value that tests bring to the business. Sure, no one is going to pay for lines of test code. But no one is going to pay for lines of production code either.
A business pays software developers to build products that provide value to its customers. How that value is delivered shouldn't be dictated to individual developers.
I assume the business expects not only to add new features now, but also that the time spent adding features to the system won't grow over time. If so, you, as a software developer, need to take measures to ensure that the software you create is easy to change. The perfect tool for that is an automated test suite, which decreases the chance of introducing bugs while developing new features.
I also assume the business expects that the software bringing it value won't break. When it breaks, the business loses money. Tests decrease the chance of that happening.
Very often, when you hear that there is no money for tests, you will also hear that there is no time for writing them. I think this is a very similar issue, which likewise stems from a lack of understanding of the value tests provide.
So, what can you do?
It’s not that easy. If you’re working on a greenfield project with other people who want to write tests too, great; but very often you are building new features on top of existing software. My advice here is to start writing tests in new modules that you own, and don’t let untested code into those modules. I wouldn’t spend a lot of energy writing tests for already existing software, because it’s frequently written in a way that makes testing painful. If you want to learn more about testable code, I highly recommend Miško Hevery's talk on the subject.
But what happens if someone explicitly forbids you to write tests?
I believe that if someone forces you to do things against your conscience, then working at such a company isn’t worth your time. You could also try to convince your superiors that tests bring value, but in my experience that drains a lot of energy and is not very effective at the end of the day.
Say you quit your previous job and arrived at a new one, where everyone knows that tests bring value. You don’t need to convince anyone anymore that software needs to be tested. The problem is that people complain that the test suite takes half an hour to execute, and whenever you change a single line of implementation, 350 tests break. We have two different problems to tackle here.
Tests are for developers. They give you certainty that things work the way you think they do. Thanks to tests, you are able to execute any piece of code in the code base and examine what it does. Tests should be blazingly fast! Some tests are fast by nature, like unit tests, and some are quite slow. End-to-end tests are slow because setting up all the dependencies, like databases or web servers, is slow as well.
Very often when the test suite is slow, it's because there is no balance between slow tests and fast tests. You should have a lot of fast tests, and very few slow tests.
Slow tests should be there in the test suite just to make sure that you put all of the bricks together in a way that works. You don’t need to test all 86 edge cases with slow tests. You should use fast unit tests for that!
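To make that split concrete, here is a minimal Python sketch. The `parse_age` function and its rules are my own invented example, not from the article: the point is that all the edge cases are covered by fast, in-process unit tests, while a single slow end-to-end test would only need to confirm the function is wired into the rest of the system.

```python
# Hypothetical example: a small validation function with several edge cases.
def parse_age(value: str) -> int:
    """Parse an age string, rejecting blanks, non-numbers and out-of-range values."""
    value = value.strip()
    if not value.lstrip("-").isdigit():
        raise ValueError(f"not a number: {value!r}")
    age = int(value)
    if age < 0 or age > 150:
        raise ValueError(f"out of range: {age}")
    return age

# Fast unit tests: every edge case checked in milliseconds, no database needed.
assert parse_age("42") == 42
assert parse_age(" 7 ") == 7
for bad in ["", "abc", "-1", "200"]:
    try:
        parse_age(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass
```

One slow test exercising the real HTTP endpoint that calls `parse_age` would then be enough to prove the wiring works; the edge cases stay out of it.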
If you want to fight slow tests in your team, I have some advice. Make sure that people understand that adding more slow tests to the system is counterproductive. If your test suite is already long, chances are it is getting longer every week. Slow tests mean frustrated developers, long CI/CD pipelines and wasted time.
Getting people to understand that they shouldn’t increase the suite’s runtime is already a big win. To make them aware, create a metric that shows how the suite’s execution time on the pipeline changes over time. Put that metric on a dashboard where everyone can see it. Also, be active in code review and advocate for more fast tests and fewer slow ones.
Apart from not making the situation worse, you can try to make it better, but this can be tricky because it involves touching old stuff, and most probably your team has a ton of features to deliver. My advice here is to clean a little bit every time you touch old pieces of the code base. Little change here and there will pay off in the long run. Your colleagues will be grateful.
If changing a small part of production code breaks a lot of tests, chances are your tests are testing the implementation, whereas they should be testing the behavior of the system.
Write tests that are focused on behavior. Instead of checking whether a method was called or that your repository changed state in a certain way, exercise your application only through its public API.
Let me give you a CRUD example. How can we test that deleting an object was successful? We can add an object to the underlying repository to set the stage, then invoke the `delete(object)` function and verify that the repository is empty. That verifies our implementation, no doubt. But what happens when the underlying repository changes? It certainly shouldn’t break the tests. Your tests should tell you whether you switched implementations correctly; if instead they break, something went terribly wrong.
Testing the behavior here means that you first add the object, then `delete()` it, and at the end you try to retrieve it and find that it’s not there. This is your unit of behavior. This is what you want to test.
A good test verifies that a changed implementation still works.
A bad test breaks when you change the implementation.
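The contrast can be sketched in a few lines of Python. The `ObjectStore` class and its dict-backed storage are hypothetical stand-ins for whatever repository you actually use; the point is which assertion survives swapping that storage out.

```python
# Hypothetical sketch: an in-memory store behind a small public API.
class ObjectStore:
    def __init__(self):
        self._repo = {}          # implementation detail; may change tomorrow

    def add(self, obj_id, obj):
        self._repo[obj_id] = obj

    def get(self, obj_id):
        return self._repo.get(obj_id)

    def delete(self, obj_id):
        self._repo.pop(obj_id, None)

def test_delete_implementation():
    # Implementation-coupled: breaks if the dict is swapped for a database.
    store = ObjectStore()
    store.add(1, "invoice")
    store.delete(1)
    assert store._repo == {}     # peeks at internals -- brittle

def test_delete_behavior():
    # Behavioral: add, delete, then retrieve through the public API only.
    store = ObjectStore()
    store.add(1, "invoice")
    store.delete(1)
    assert store.get(1) is None  # survives any change of storage backend

test_delete_implementation()
test_delete_behavior()
```

Both tests pass today, but only the second one keeps passing when `_repo` becomes a SQL table or a remote service; the first would fail despite the behavior being correct.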
Did any of the situations I described above sound familiar? Do you want to improve it? Act on it today!
- Educate others on the value of tests,
- write tests for new modules,
- don't let the test suite's runtime grow over time,
- practice writing good behavioral tests.
Make your test suite solid and don't be afraid to touch that dusty old function anymore!