Cover image by Brett Jordan on Unsplash.
Yesterday I wrote about one reason why 100% code coverage is worth aiming for. You can read that post here:
Do you aim for 80% code coverage? Let me guess which 80% you choose...
Daniel Irvine 🏳️🌈 ・ Feb 10 '20
Today I want to discuss another reason why. And this one is even more important than yesterday’s. Here it is:
Possessing the ability to achieve 100% code coverage is an important milestone on your journey to being an expert developer.
Think of 100% coverage as a skill
Coverage is a skill, just like being able to code in JavaScript, TypeScript or Python, and just like being able to use a framework like React, Vue or Django.
If you think achieving 100% coverage is hard, perhaps it’s because you’ve never done it!
In just the same way that React would be hard if you'd never written a React app, 100% coverage is hard to achieve if you've never done it.
Now answer this question for yourself:
How many times in your career have you achieved 100% coverage?
If the answer is zero, then what excuse have you been using?
Here’s two:
- code coverage is a useless metric anyway
- code coverage is too expensive / time-intensive for web applications, and only suited when software failure would be catastrophic
“But code coverage is a useless metric!”
I understand why you’re saying that. You think it’s useless because it’s possible to write terrible tests and still achieve 100% coverage. I agree with this.
It is a useless metric, if that's all you're using it for. Here's a post that does a good job of explaining why code coverage is a relatively useless metric.
But ironically enough, this is exactly why it’s a useful skill to practice.
One, because full coverage is easy enough to do on its own, but it's hard to do well.
Two, because we have relatively few developer testing goals that can help us get better at testing.
(We use the term developer testing to distinguish testing practices that are useful for developers from QA testing practices.)
So the milestone is actually in three parts:
- Can you achieve 100% coverage?
- Can you achieve 100% coverage by being honest? Without tests that are designed only to increase coverage, like explicit testing of getters/setters?
- Can you achieve 100% coverage without overtesting? (You want just enough tests that you get full coverage without having overlapping execution and without creating brittle tests.)
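To make the honesty part concrete, here is a minimal sketch (the `apply_discount` function and both tests are hypothetical, not from the post): each test executes every line of the function, so either one alone yields 100% coverage, but only the second one can actually fail.

```python
def apply_discount(price, percent):
    """Return price reduced by percent, clamped so it never goes below zero."""
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)

# "Dishonest" coverage: this runs every line (coverage reads 100%)
# but asserts nothing, so it passes even if the maths is wrong.
def test_apply_discount_runs():
    apply_discount(100.0, 20)

# Honest coverage: the same lines execute, but the behaviour is
# pinned down, including the clamping edge case.
def test_apply_discount_values():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(10.0, 150) == 0.0  # clamped, never negative

test_apply_discount_runs()
test_apply_discount_values()
```

A coverage report cannot tell these two tests apart; only a reader (or a mutation-testing tool) can.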
“100% code coverage isn’t worth bothering about for non-critical software, like web applications”
Again, I can understand why you’re saying this. Web applications, for the most part, aren’t of critical importance. Unlike, say, medical appliances or rocket ships.
When I hear the above, what I think is "we don't know how to achieve full coverage without drastically reducing productivity."
Which again, is totally understandable. Testing is hard.
But there are many, many experienced developers who are capable of achieving full coverage at speed. They can do that because they were motivated enough to get good at testing, and they took the time to learn how to do it well.
I’m sold. How can I learn how to do this?
- Start using TDD. You can learn from books like my React TDD book.
- Ask experienced testers to review your tests. Feel free to send PRs my way, I’ll happily look at them!
- Use side projects to learn, so you’re not putting your paid employment at risk when you’re figuring out how to make things work. Carve out some time in your day to learn.
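As a taste of what the TDD loop looks like, here is a minimal red-green sketch (the `slugify` function and test are hypothetical): the test is written before the code and fails, then the simplest implementation makes it pass. Full coverage falls out as a side effect, because no line exists that a test didn't demand.

```python
# Red: this test is written first, and fails,
# because slugify doesn't exist yet.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Green: the simplest implementation that makes it pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# (Refactor would come next, with the test as a safety net.)
test_slugify_lowercases_and_hyphenates()
```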
Once you know how to achieve coverage and achieve it well, code coverage becomes far less important...
Personally, I very rarely measure code coverage. My TDD workflow means I'm at 100%. That's not meant to sound conceited; at some point in my career, getting to 100% coverage was an important goal. But now that I know how to do it, I'm working towards other goals.
As I said above, developer testing suffers from having no clear ways of improving, and we have no objective ways of measuring our testing performance.
There are many milestones on the road to becoming an expert developer, like being able to refactor mercilessly, use TDD, and apply the four rules of simple design.
100% coverage is a great first milestone.
Comments (19)
Great points here! Difficult to achieve, but worth trying for 100% code coverage to become a much better developer!
On your last post, I commented that I don't care about getting 100% coverage in web apps, and I stand by what I said :)
It's not a question of being capable of getting to 100%. It's a question of testing the code you write, and tests bringing you confidence.
Sometimes, you test things that are already covered in another package (e.g. your models' `__str__` method in Django; Django should, and does, test that, not you).
Personally, I wouldn't bother. If your preference is to go ahead and cover such cases to get 100%, that's cool. No need to act smug about it, though, IMO.
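(For illustration, this is roughly the kind of test being described, sketched as plain Python rather than a real Django model; the `Author` class and test are hypothetical. The method under test is one obviously-correct line, so the test exists mainly to move the coverage number.)

```python
class Author:
    """Stand-in for a Django model with a trivial __str__."""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

# A test written mostly to lift the coverage number:
def test_author_str():
    assert str(Author("Ada Lovelace")) == "Ada Lovelace"

test_author_str()
```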
You've probably seen this image before, but this is what 100% code coverage can look like:
It even passes all tests.
This is what will happen when you make high code coverage a strict requirement for developers.
`Error: Expected an instance of W, got M instead.`
According to the posted dashboard the assertion held up, so it must have been a W which was received.
I hadn’t seen this, thank you for sharing :)
I see what you did here. Controversial thoughts always catch people's attention. But seriously, tests are also code: additional code to maintain. More tests give more safety in moving forward, but they also slow moving forward. Yes, really.
Some say tests let you move forward faster, as you invest at the beginning and it pays off later. Yeah, yeah. I've seen that "pay off" during the removal of tested code modules, or when requirements changed and the tests were thrown in the trash. I see static code analysis and static type systems as the way to go; tests are useful in some amount, but never too much, and never code coverage as a metric. Never.
I reached 100% coverage on a ~1500 LOC project (a language, actually) a while back, but I did not bother too much since I tried GitHub Actions.
It helped get better at testing, for sure, but also at coding.
Nice Article, thanks.
EDIT: it is ~15000 LOC, not 1500. my bad
I would say that 100% coverage is useless because you're chasing the wrong metric. Covering all branches doesn't mean you're covering all use cases. We should be chasing case coverage instead. The problem is, of course, there's no way to measure that reliably and automatically.
Additionally, there are parts of many applications (especially web apps, but not exclusively) which are inherently integration points and should not be unit tested at all, so 100% coverage is actually bad there, because you're unit testing integration, which is not only against the idea of unit testing but requires a ton more work. Testing these pieces of code requires mocks and other stand-ins which are almost never needed in our unit tests.
Having said all of that, we do in fact aim for 100% coverage of the non-integration files. But again, that's only because it's measurable and case coverage really isn't.
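The branch-versus-case distinction can be sketched like this (the `bonus` function and its rule are hypothetical): two tests cover both branches, so line and branch coverage both read 100%, yet a whole class of inputs was never considered.

```python
def bonus(sales, is_manager):
    """Hypothetical rule: managers get 10% of sales, others 5%."""
    if is_manager:
        return sales * 0.10
    return sales * 0.05

# Both branches executed, so line and branch coverage read 100%:
assert bonus(1000, True) == 100.0
assert bonus(1000, False) == 50.0

# But no test asked what *should* happen for negative sales.
# That's a missing use case the coverage metric can't see:
# as written, the function happily returns a negative bonus.
assert bonus(-1000, False) == -50.0
```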
I have no problem with 100% test coverage in some cases. I have 100% coverage in a library I wrote, as it's easy to test, I wrote all of the code, and it adds value to test everything.
That's the key thing to me - how much value does adding more tests add?
If I am using a framework like Django, testing that Django isn't broken doesn't add value to me, as it has its own test suite.
Congrats - Django isn't broken. Completely useless test, IMO.
You should refactor tests just like you refactor code. Don't get sucked into the sunk cost fallacy that just because you wrote some tests at some point and later realized they don't bring value, they are untouchable because you spent x hours writing them.
If they are harder to maintain than the value they bring, rewrite or even delete them. If you're confident with your code, even with the coverage dropping a bit, that's fine with me.
Basically, be pragmatic, not dogmatic, when it comes to testing (and most other disciplines).
Thanks for posting, interesting read 👍. Here's one more perspective from @localheinz that I enjoyed.
Take this post for example:
The Myth of Code Coverage
Matt Eland ・ Nov 9 '19
He has a cool example showing 100% code coverage does not mean good or correct code is being tested:
martinfowler.com/bliki/TestCoverag...
"If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. At the most absurd level you have AssertionFreeTesting. But even without that you get lots of tests looking for things that rarely go wrong distracting you from testing the things that really matter.
Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing."
I agree with Martin Fowler and respectfully disagree with your take - I think it's important to avoid being dogmatic about these things.
Saying you need "X" to be an "expert" developer is an unhelpful hot-take. Software development projects, as with many things in life, are rarely black-and-white.
Kyle, I agree with Martin Fowler on this too. I’m not dogmatic about it either (despite what you might think from my writing 🤣).
My point with this post is that the skill of being able to achieve 100% coverage is a great skill to possess as a developer.
Not that I’d always need to achieve it on every project.
I too am suspicious of 100% coverage. That’s my point about honesty above. Being able to achieve 100% coverage without cheating is difficult.
One thing I’ve learned from writing about code coverage is that it’s hard to get across the message that I’m trying to. It’s unusual for writers to frame code coverage as a learning/growth tool. I’ll keep trying!