Originally posted on my personal blog.
Overall, I think an overly strong push from TDD enthusiasts does more harm than good. It's great when you are excited about good coding practices, but when you start forcing them on others, it can have the opposite effect. There is even a psychological term for this - reactance:
Reactances can occur when someone is heavily pressured to accept a certain view or attitude. Reactance can cause the person to adopt or strengthen a view or attitude that is contrary to what was intended, and also increases resistance to persuasion.
I think this partly explains why developers are so ready to express and share opinions on why TDD is not working for them.
TDD is a specific technique, and it is naive to think it can work for everyone. Nobody should feel bad or unprofessional for not using TDD. But maybe TDD is the right technique for you, and only a series of coincidences led you to conclude that it is too hard and impractical. Here I will share a few ideas that might help you rethink your TDD experience. Some are about applying the rules in alternative ways, and some are about improving the testing experience even if you are not using TDD.
The idea that TDD must be practiced rigidly comes from tutorials and books, where the rules are laid out strictly. The first law of TDD states:
You are not allowed to write any production code unless it is to make a failing unit test pass.
From such strong wording, it might seem that following the rules to the letter is a must. While all three laws describe the TDD process accurately, and some developers do follow them strictly, they should be treated as material for initial learning.
Learn the rules, THEN break them
For TDD, this means that once you grasp how it works, you can apply it wherever you feel it helps. Initially, the strict process teaches you how to write testable code. But later you might find it more comfortable to build the structure first and only then write TDD tests for the logic.
It's a common misconception that TDD == unit tests. Even the name is "Test Driven Development", not "Unit Test Driven Development". Unit tests are what most TDD examples and tutorials use, and TDD does work really well with pure, unit-testable functions. But in reality, the technique works with any kind of test.
For example, I have written a lot of SQL queries using TDD. Sometimes I am not sure what SQL I need to write, so I create a test that inserts a few database rows and expects some result. Then I can try out different SQL queries until it works. The same goes for API tests. Say you need to fix a bug where the JSON output is in the wrong format. Instead of testing manually with Postman, you can write an API test case and then start fixing the code.
The great benefit of TDD is that it actually can make you faster. Here is how.
The process of writing new functionality or fixing existing code can be oversimplified into two steps:
- Create/Update code
- Check if it works as intended
TDD optimizes the speed of the "check if it works as intended" step. You do not need to wait for the app to rebuild, open a browser, create test data, and check whether the code behaves as expected. With TDD, you see the result almost instantly.
However, if test execution is slow, it breaks the flow. Some say a test suite should run in 10 seconds or less, and I partly agree, because TDD felt best for me when the test suite was fast. Still, I would not say it's bad to run only part of a big test suite in order to be faster. But before resorting to that technique, there are other ways to improve speed.
First, it is essential to find out why the tests are slow. Integration tests might be slow because of redundant data initialization, or because the database is rebuilt on every run. Each case is unique. The same goes for unit tests; it depends on the stack and tools you are using. Learn their limitations and best practices, and see if you can improve the speed in any way.
And if there is no other way to improve speed (integration tests, for example, are naturally slower than unit tests), you can apply a technique that keeps test runs fast: run a different number of tests at different frequencies:
- Execute a single test most often, when you do small code changes
- After a certain time, or if you think you have made a large change, run all tests for a particular function/method
- After a certain time, run multiple related tests (might be all tests for the same class/module)
- Finally, run the whole test suite
This is not an exact science, and you should find what works best for you. But try not to spend too much thought on which tests you need to run.
Tooling is closely related to the previous point about speed. To be efficient at TDD, you have to use comfortable tools (the same applies to writing tests without TDD).
As an example, here are two qualities of a testing setup that I think are really important:
- How easy is it to execute all tests vs a single test? In the best case, you can run the tests you need with a single button click or keyboard shortcut. Going to the console and typing a regex to match the test you want to run is just tedious.
- How clear are the assertion errors? You should not need to decipher why a test failed. A failure message like "Expected true to be false", when you were expecting a certain item in an array, is no good.
The list could go on, but the main goal is to minimize thinking about routine actions, which leaves more room for thinking about the actual problem you need to solve. If the testing infrastructure gets in the way and makes you think too much about how to start or write a test, it will be really hard to be efficient at TDD, and tests will become an annoying chore no matter whether you write them before or after the code.
Writing tests for simple logic, especially if mocking is involved, can feel a bit awkward. You need to write setup code, initialize mock objects, add assertions for method calls, and so on. And in the end, the actual code is just an "if" statement and a few function calls, which can feel underwhelming.
I am not saying you should not use TDD for simple cases; while learning, it certainly helps. But applying it only to simple cases might not reveal its full potential. When I started using TDD, it seemed that I could apply it only to new code, and only to certain functions. But once I started using it when fixing bugs or adding functionality to existing code, it really helped me understand how to apply it in different situations.
TDD feels best when you have complex logic to implement. You may not even know how to implement it yet, but TDD lets you chip away at the requirements until you arrive at a fully working solution. It also gives you a lot of confidence to refactor along the way.
Good tests are clear and easy to understand, and with TDD it should be simple to think of the next test upfront. However, if you are trying to cover too much internal behavior, writing the test upfront becomes hard. That happens a lot, for example, when heavy stubbing and mocking is involved. These kinds of tests are hard to write first because they rely on assertions like "this method should be called 3 times with these arguments".
Another example I remember is writing React tests with Enzyme. I sometimes ended up with tests that checked internal state, which child components were rendered, and so on. When I started using React Testing Library, TDD became much easier. You do not have access to internal details, which forces you to think about component testing differently. You no longer care which child components are rendered or what props are passed to them. Instead, you write tests that expect certain text to be rendered, and such tests are definitely easier to write upfront.