Automated testing is essential for building software in a sustainable way. No one wants to manually test their entire application on every change; that would be slow and a waste of time.
On the other hand, covering an application with every possible kind of automated test also has a significant cost. If not done right, tests can make an application's architecture rigid and slow to change - the exact things we aim to improve with automated testing in the first place.
It's an important part of our job as software developers to make calls about what needs to be tested and to what degree.
Over the course of my career, my views on testing have changed: from writing tests simply because it's a "best practice" to a more pragmatic approach to when, and which types of, tests to write.
In this article, I'd like to share my thoughts on software testing, with the hope that they will be useful to someone on the same journey of writing better software.
This is not meant to be a definitive testing guide; it's meant to be a starting point for a more critical view of testing practices.
Context
The context in which an application is built and used is very important when thinking about testing. Are you an early-stage startup or a big fintech company? Are you working on a new, experimental way to display content to users, or are you adding a feature to an existing product that is going to be rolled out to millions of users?
Kent Beck has an excellent talk, "3X: Explore, Expand, Extract", which I highly recommend. The gist of it is this - your approach to testing, and to software quality in general, should depend on what stage of product maturity you are at. If you are just starting out, exploring, and experimenting, then high automated test coverage is not needed. You might even go for a while without any automated tests at all, until you know what works and what needs to be solidified.
Once you know which parts of your product are going to stay, you should improve test coverage and code quality in those areas.
TDD
Test-Driven Development is a bit like a religion. There are people who believe in its core values, and there are fundamentalists who preach the "one true way" of doing it.
The most popular understanding of TDD seems to be that code should have 100% test coverage at any cost, and that a "unit" is a single class (in languages that have classes).
This approach leads to hundreds of useless tests which mostly exercise mocks, just for the sake of coverage.
It's hard to ask for the removal of such tests because no one wants to be that guy who is against automated testing ;)
When I first read DHH's post about TDD, I thought the guy was crazy - wasn't TDD meant to solve all of our software quality problems? Now I get it.
There is a great talk by Ian Cooper titled "TDD, Where Did It All Go Wrong" which touches on the problems with the traditional understanding of TDD and how it differs from the original idea.
Here are the key takeaways:
- "Unit" is not a single class, it's a module with public API.
- Avoid mocks.
- Test behaviour, not implementation.
This approach avoids so many problems. Since we are testing the public API of a module, there is no need for mocks. Imagine you test a `Calculator` module which can add and multiply numbers. It uses `Adder` and `Multiplier` classes underneath. But we don't care. All we care about is whether it can successfully add and multiply numbers, so we just test that. The `Calculator` module is free to change its implementation, remove the `Adder` and `Multiplier` dependencies, and perform calculations using a third-party `Math` library instead. Not a single test would have to be changed, and we would still be confident that `Calculator` works correctly.

Compare this to the traditional approach:

- Write a test for `Calculator` using `Adder` and `Multiplier` mocks. Make sure they are called with the right arguments.
- Write a test for `Adder`.
- Write a test for `Multiplier`.

Now we have 3 brittle tests, of which only 2 test the actual logic. If we need to switch to a third-party `Math` library, we'd have to rewrite the `Calculator` test and remove the `Adder` and `Multiplier` ones. That would slow us down and, more importantly, when you have to change tests during a refactoring, how can you be sure you can still rely on the modified tests?

It has been a joy working on projects which embrace the "unit is a module" and "no mocks" approaches. Some say that not testing classes in isolation leads to tight coupling between classes. In my experience, it's not a problem at all. When developers are free to painlessly refactor and move stuff around, it usually leads to better code organisation over time. How often have you reconsidered moving a dependency because of the burden of adjusting unit tests?
Slow and fast tests
Given that everybody seems to have their own definition of what an "integration test" is, I prefer an alternative classification: grouping tests into "fast" and "slow".
"Fast" tests are those which don't talk to slow IO (no matter how many classes are involved) and "slow" are those which do. Database and network calls are usually slow. Writing and reading small files to an SSD drive is usually pretty fast. Not doing any IO is even faster :)
The way to avoid IO calls during tests is to replace the parts of the code that perform IO with stubs or "alternates". There are multiple ways to do this, and functional programming particularly shines at separating IO from the rest of the code, but the main idea is the same: provide an interface for external interactions and two implementations - one for testing, which responds with pre-canned values, and the real one, which talks to the external world (and needs to be tested separately), as in the sketch below.
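Here is a minimal sketch of that pattern, assuming a hypothetical `UserStore` interface and a psycopg2-style database connection; all the names here are made up for illustration.

```python
from abc import ABC, abstractmethod


class UserStore(ABC):
    """The interface for external interactions."""

    @abstractmethod
    def save(self, user_id, name): ...

    @abstractmethod
    def find(self, user_id): ...


class InMemoryUserStore(UserStore):
    """The testing alternate: responds with pre-canned values, no IO."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)


class PostgresUserStore(UserStore):
    """The real implementation: talks to the database (tested separately)."""

    def __init__(self, connection):  # e.g. a psycopg2 connection
        self._conn = connection

    def save(self, user_id, name):
        with self._conn.cursor() as cur:
            cur.execute("INSERT INTO users (id, name) VALUES (%s, %s)",
                        (user_id, name))

    def find(self, user_id):
        with self._conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
            return row[0] if row else None


def greeting(store, user_id):
    """Logic that depends only on the interface, not on Postgres."""
    name = store.find(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"


# A "fast" test: exercises the logic through the in-memory alternate.
def test_greeting_for_known_user():
    store = InMemoryUserStore()
    store.save("42", "Ada")
    assert greeting(store, "42") == "Hello, Ada!"
```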
Again, the context of your application is very important here and I would argue there is no universal "best practice" approach.
Imagine you have an application which takes a JSON payload and puts it into a Postgres database. What could go wrong here?
- The JSON codec might be misconfigured.
- Extracting data from the JSON payload and putting it into the database might be broken.
- The database schema might be different from the one the code expects.
There is no logic to be tested here, so the test which would give you the most bang for your buck is one which accepts the JSON payload as a string and writes it to the database.
There is nothing wrong with not having a "unit" test for this if it gives you no value; you can introduce one later, when there is actual logic in your JSON->DB pipeline.
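For illustration, here is a hedged sketch of that one high-value "slow" test, assuming a hypothetical `ingest_payload` function, a `users` table, and a pytest fixture `pg_connection` that hands out a psycopg2-style connection to a disposable test database.

```python
import json


def ingest_payload(raw, conn):
    """The whole pipeline under test: decode JSON, extract fields, insert."""
    data = json.loads(raw)
    with conn.cursor() as cur:
        cur.execute("INSERT INTO users (id, name) VALUES (%s, %s)",
                    (data["id"], data["name"]))


def test_payload_round_trips(pg_connection):
    # One test covers all three failure modes above: codec configuration,
    # data extraction, and schema drift.
    ingest_payload('{"id": "42", "name": "Ada"}', pg_connection)
    with pg_connection.cursor() as cur:
        cur.execute("SELECT name FROM users WHERE id = %s", ("42",))
        assert cur.fetchone() == ("Ada",)
```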
Conclusion
There is no single prescribed model for software testing. No two applications are the same, and no two teams are the same. It is important to step back and evaluate your testing strategy.
If your current testing approach doesn't feel right, maybe it's because what is considered a "best practice" is not right for your application. There is no point in striving for 100% unit test coverage in an application which is mostly a conduit between an HTTP request and a DB.
When you do write unit tests, avoiding mocks and testing behaviour instead of implementation goes a long way towards striking a balance between software quality and development speed.