Love it or hate it, Test-Driven Development (TDD) is a very popular software development method used within the industry. Here I share my experiences with TDD and some neat ideas I learned along the way.
Introduction
I was aware of TDD, mostly from people talking about it negatively, claiming that TDD does not leave space for design. The popular software developer and streamer The Primeagen talked on his podcast about trying out TDD for his game and finding it difficult to integrate his test-driven module into the game he was developing. On the other hand, well-known software developers like Kent Beck have written extensively about TDD's positive effects on the software development process.
With this in mind, I decided to give TDD a go for my day-to-day coding tasks. At first I found it frustrating, feeling the friction of writing tests before the code itself. But once I got through the initial phase and started to see my tests pass, it gave me a great feeling of accomplishment and satisfaction. I noticed that because of the initial friction, I did end up spending more time on the task than I normally would have, but that extra time was reflected in higher-quality tests. Not only did the tests cover more edge cases, they were also easier to read. When I used to write tests after writing the code, I found myself rushing them to get my code deployed sooner. This led to lower-quality tests that did not cover many edge cases and were difficult for other developers to read and understand. Funnily enough, in some rare cases the tests were so badly written that they would pass when run, even when the module they were testing failed! I caught those scenarios accidentally while updating the module weeks later. This could have been avoided if I had used TDD sooner.
Applying TDD has been beneficial for me, and it has led me to discover ideas, practices, tricks, and tips for writing more maintainable code that complement TDD. In this post I would like to share those with you.
Tests as Documentation
The well-known Manifesto for Agile Software Development is composed of four values, one of them being "working software over comprehensive documentation". The idea is that we are in the business of creating software, not comprehensive documentation, so working software should take center stage. Documentation only works if it is constantly updated to reflect the source of truth, and maintaining it is labor intensive: keeping documentation up to date requires developers to revise it every time they change their code. On the other hand, documenting what your software does is valuable, because the more your software grows, and the more people get involved in maintaining and building it, the stronger the need for some kind of documentation becomes.
Some developers may claim that "well-written code is self-documenting", but this kind of idealism is unrealistic. Tech debt builds up quickly, and naming software modules well is very easy to get wrong. Fortunately, I found that well-written tests make for great documentation. Tests are supposed to convey how a code module behaves, which makes them great documentation, provided they are written with a good structure and descriptive test case names. Additionally, writing tests as if they were documentation results in more maintainable and readable test code. To achieve this, I found the following tips useful:
1. Make use of nested test cases (if your framework allows it)
Many testing frameworks allow for nesting your test cases. This makes it easier to write permutations of test scenarios without repeating yourself. It also lets you organize your setup code so each test case stays lean and descriptive. Here is an example written with the Jest framework for a user deletion use case:
describe('Given a user with the admin role', () => {
  beforeEach(() => {
    UserFactory.create({ id: 123, role: 'admin' })
  })

  it('should not be able to get deleted', async () => {
    const useCase = new UserDeletionUseCase()
    await expect(useCase.execute({ id: 123 })).rejects.toThrow()
  })
})
2. Keep your test cases focused on behavior rather than implementation
Test cases that describe how a module should work from the point of view of a consumer are the easiest to read and understand. When you describe your test cases with a focus on implementation, they are easily understandable only to the person who wrote the module. With this in mind, I try to write my test cases as if my product owner were the one reading them.
Keeping tests focused on functionality has additional advantages that I will describe below.
Gray Box Testing
Black box testing is a method of testing where you evaluate the functionality of a module without knowledge of its internal structure or implementation. White box testing is the opposite: the internal structure is known and used to derive test cases. While black box testing can only be done with high-level tests such as end-to-end tests or certain integration tests, I believe that unit tests should be approached with a gray box method. This means we mock as little as we need to, and we avoid asserting on a module's internal workings. By doing this, we focus on the function of the module rather than its implementation.
Tests that focus too much on the implementation are brittle and harder to read for anyone who did not write the module. One of the biggest advantages of having tests is that they are supposed to make refactoring easier, by reassuring the developer that the refactoring does not change behavior in an unexpected way. With brittle tests, that advantage is lost: the developer refactoring the code is forced to change 20 different test cases, and it becomes difficult to distinguish whether a failing test signals an unexpected change in behavior or is simply too coupled to a specific implementation. Focusing on behavior makes tests more robust to small changes and makes test failures more meaningful.
London Style: Outside-In Testing
Described in many places as the "London style" of TDD, I like to write my tests from the outside in. This means I start writing tests for the higher-level modules, mocking interfaces for modules that don't exist yet, and work my way down to the lower-level modules. For this to work I need to define my interfaces up front, which adds to the friction, but it prevents me from building the wrong interfaces by forcing a consumer-first view. If I start the other way around, I risk running into the same issue The Primeagen complained about: writing interfaces that are convenient for testing but don't integrate well with other modules.
My website: https://www.fuchter.dev