Achieving 100% code coverage will make you a better developer. Seriously.

Daniel Irvine 🏳️‍🌈 on February 11, 2020

Cover image by Brett Jordan on Unsplash. Yesterday I wrote about one reason why 100% code coverage is worth aiming for. You can read that post her...
Michiel Hendriks • Edited

You've probably seen this image before, but this is what 100% code coverage can look like:

It even passes all tests.

This is what will happen when you make high code coverage a strict requirement for developers.

Amin

Error: Expected an instance of W, got M instead.

Michiel Hendriks

According to the posted dashboard, the assertion held up, so it must have been a W that was received.

Daniel Irvine 🏳️‍🌈

I hadn’t seen this, thank you for sharing :)

Patryk

"But there are many, many experienced developers who are capable of achieving full coverage at speed."

On your last post, I commented that I don't care about getting 100% coverage in web apps, and I stand by what I said :)

It's not a question of being capable of getting to 100%. It's a question of testing the code you write, and tests bringing you confidence.

Sometimes, you test things that are already covered in another package (e.g. your models' __str__ method in Django - Django should (and does) test that, not you.)

Personally, I wouldn't bother. If your preference is to go ahead and cover such cases to get 100%, that's cool. No need to act smug about it, though, IMO.

Patryk

"[The] author made a point that too many new developers are too obsessed with 100% test coverage and that it can end up eating up too much development time. The author's point was that you should test the most important parts and be happy with maybe 80% test coverage in order to save some time."

I have no problem with 100% test coverage in some cases. I have 100% coverage in a library I wrote, as it's easy to test, I wrote all of the code, and it adds value to test everything.

That's the key thing to me - how much value does adding more tests add?

If I am using a framework like Django, testing that Django isn't broken doesn't add value to me, as it has its own test suite.

from foo.models import Foo

def test_foo_name_matches_bar():
    my_foo = Foo(name="Bar")
    assert my_foo.name == "Bar"

Congrats - Django isn't broken. Completely useless test, IMO.
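
For contrast, a test of logic you wrote yourself does add value. A minimal sketch, assuming a hypothetical display_name method on the same Foo model:

from foo.models import Foo

def test_foo_display_name_prefixes_the_name():
    my_foo = Foo(name="Bar")
    # display_name is our own code, not Django's, so this assertion
    # actually protects something we could break
    assert my_foo.display_name() == "Foo: Bar"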

"Also, another issue with 100% coverage is that when one thing changes in your code, you have to re-evaluate your tests, make changes, and maybe add more, etc."

You should refactor tests just like you refactor code. Don't get sucked into the sunk cost fallacy that just because you wrote some tests at some point and later realize they don't bring value, they are untouchable because you spent x hours writing them.

If they are harder to maintain than the value they bring, re-write or even delete them. If you're confident with your code, even with the coverage dropping a bit, that's fine with me.


Basically, be pragmatic, not dogmatic, when it comes to testing (and most other disciplines).

Kyle Stephens

martinfowler.com/bliki/TestCoverag...

"If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. At the most absurd level you have AssertionFreeTesting. But even without that you get lots of tests looking for things that rarely go wrong distracting you from testing the things that really matter.

Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing."
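
To make "AssertionFreeTesting" concrete, here is a minimal sketch in Python (apply_discount and its test are hypothetical names): every line of the function is executed, so coverage reads 100%, yet nothing is ever asserted.

def apply_discount(price, percentage):
    if percentage < 0 or percentage > 100:
        raise ValueError("percentage must be between 0 and 100")
    return price * (1 - percentage / 100)

def test_apply_discount_runs():
    apply_discount(100, 25)        # happy path executed, result ignored
    try:
        apply_discount(100, 200)   # error branch executed, exception swallowed
    except ValueError:
        pass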

Kyle Stephens • Edited

I agree with Martin Fowler and respectfully disagree with your take - I think it's important to avoid being dogmatic about these things.

Saying you need "X" to be an "expert" developer is an unhelpful hot-take. Software development projects, as with many things in life, are rarely black-and-white.

Daniel Irvine 🏳️‍🌈 • Edited

Kyle, I agree with Martin Fowler on this too. I’m not dogmatic about it either (despite what you might think from my writing 🤣).

My point with this post is that the skill of being able to achieve 100% coverage is a great skill to possess as a developer.

Not that I’d always need to achieve it on every project.

I too am suspicious of 100% coverage. That’s my point about honesty above. Being able to achieve 100% coverage without cheating is difficult.

One thing I’ve learned from writing about code coverage is that it’s hard to get across the message that I’m trying to. It’s unusual for writers to frame code coverage as a learning/growth tool. I’ll keep trying!

Jérémie Astor

I reached 100% coverage on a ~1500 LOC project (a programming language, actually) a while back, but I haven't bothered much since I started using GitHub Actions.
It helped me get better at testing, for sure, but also at coding.

Nice article, thanks.

Jérémie Astor

EDIT: it is ~15000 LOC, not 1500. My bad.

Keff

Take this post for example:

He has a cool example showing that 100% code coverage does not mean good or correct code is being tested:


Here, we call GetAnswerString with 2 and 2. This method should give us back “The answer is 4”.
Unfortunately, the developer didn’t really do addition and the method always returns “The answer is 42”.

Unfortunately, the unit tests are just built to ensure that the string starts with the expected prefix, so the actual value isn’t tested.

As a result, we have 100% passing tests and a blatantly incorrect method.

Just because a line is executed by a test, doesn’t mean that the line is correct or accurately tested.
@integerman
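
Roughly what that example looks like, translated into Python (the names follow the quoted post): the prefix-only assertion passes and every line is covered, even though the arithmetic is wrong.

def get_answer_string(a, b):
    # bug: the addition never happens
    return "The answer is 42"

def test_get_answer_string():
    # only the prefix is checked, so the wrong value slips through
    assert get_answer_string(2, 2).startswith("The answer is")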

Jeremy Forsythe

I would say that 100% coverage is useless because you're chasing the wrong metric. Covering all branches doesn't mean you're covering all use cases. We should be chasing case coverage instead. The problem is, of course, there's no way to measure that reliably and automatically.
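
As a small illustration of that gap (the shipping_cost function is hypothetical): both branches are executed, so branch coverage is 100%, yet cases like a zero or negative weight are never exercised.

def shipping_cost(weight_kg):
    if weight_kg <= 1:
        return 5
    return 5 + (weight_kg - 1) * 2

def test_shipping_cost():
    assert shipping_cost(0.5) == 5   # first branch
    assert shipping_cost(3) == 9     # second branch
    # nothing here says what shipping_cost(0) or shipping_cost(-2)
    # should do, so those cases remain effectively untested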

Additionally, there are parts of many applications (especially web apps, but not exclusively) which are inherently integration points and should not be unit tested at all, so 100% coverage is actually bad there because you're unit testing integration, which is not only against the idea of unit testing but also requires a ton more work. Testing these pieces of code requires mocks and other stand-ins that are almost never required in our unit tests.

Having said all of that, we do in fact aim for 100% coverage of the non-integration files. But again, that's only because it's measurable and case coverage really isn't.

Pragmatic Maciej • Edited

I see what you did here. Controversial thoughts always catch people's attention. But seriously, tests are also code: additional code to maintain. More tests give more safety in moving forward, but they also slow moving forward, yes really.

Some say tests allow you to move forward faster, as you invest at the beginning and it pays off later. Yeah, yeah - I've seen this pay off when tested code modules were removed, or when requirements changed and the tests were thrown in the trash. I see static code analysis and static type systems as the way to go; tests are useful in some amount, but never too much, and never code coverage as a metric, never.

Dwayne Charrington

It seems like you're very passionate about testing and programming, Daniel. I don't want to seem like an asshole; however, my twelve years of experience in the industry as a developer have shown time and time again that 100% code coverage is not worth striving for.

In the early days of my career, I used to think you had to aim for 100% code coverage. That is what developers used to be told to aim for. But once you reach a certain number (approximately 70-80%), anything beyond that becomes increasingly difficult to achieve (the last 10% takes 90% of the time).

I think you and everyone else would agree that any tests are better than no tests, and that aiming for an achievable, realistic target of, say, 70% is better than not focusing on tests at all. The issue with aiming for 100% coverage is that developers will inevitably start to take shortcuts as they become fatigued with striving for it (which is quite difficult to do).

Instead of writing quality tests, developers will start writing tests for the sake of bumping up numbers in the coverage report. As an industry, we do not do enough to educate developers on how to write good tests; we just tell them to do it and leave them to their own devices (at least in the front-end space).

On the web, the ecosystem is composed of packages from npm. It is not uncommon for a modern application to have dozens of these Node packages, each with its own dependencies. If you are aiming for 100%, it is unavoidable that you will be writing tests around these dependencies, mocking them to get your tests to work and, inevitably, testing packages that already have tests of their own.
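
The same trap, sketched in Python since that's the language used elsewhere in this thread (fetch_username is a hypothetical wrapper; unittest.mock and requests are real libraries): the test mostly verifies the mock it just configured, not the application.

from unittest import mock

import requests

def fetch_username(user_id):
    # thin wrapper around a third-party package
    response = requests.get(f"https://example.com/users/{user_id}")
    return response.json()["name"]

def test_fetch_username():
    with mock.patch("requests.get") as fake_get:
        fake_get.return_value.json.return_value = {"name": "Ada"}
        # 100% coverage of fetch_username, but all we've really
        # checked is the stub we set up two lines above
        assert fetch_username(42) == "Ada"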

Not to push my own agenda or advertise my own articles, but I wrote a comprehensive guide to unit testing on the front end not too long ago. In my article, I have a section on writing good unit tests, advocating that developers not just write tests but also spot bad code and refactor it. Testing bad code doesn't make it any better or safer, but bad code without tests is even harder to refactor.

Daniel Irvine 🏳️‍🌈

Not sure if you’re just trolling or not, but with my FIFTEEN years of experience in the industry, I’m curious why you’d assume I’m not experienced in the topic I’m writing about 😉

It’s not that I disagree with what you’re saying, but I think you missed my point: as someone who has spent years teaching TDD and software design, I know how important it is to have clear goals that people can aspire to. With good TDD practice, 100% code coverage is straightforward and can even lead to quicker software development.

I enjoyed your article about unit testing. I’d be interested to hear what you think of my book, Mastering React Test-Driven Development. Let me know your feedback if you do read it.

Daan Wilmer

Yes, but also no. Or maybe a "Yes, but...".

To quote Marilyn Strathern's phrasing of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." (source)

The goal is to have well-tested software, and code coverage (in one of its many shapes) can be used as a metric for that. Or better: a lack of code coverage is a smell indicating that software isn't well tested. However, just because you haven't detected a smell doesn't mean there isn't a flaw.

So sure, aim for 100% test coverage, but be careful to only use good tests (there are plenty of tests that cover a line but don't test it), and don't think that 100% means "job done". It's just that one of the check boxes has been ticked.

Side note 1: code coverage comes in many different shapes and sizes, with varying demands and usefulness.
Side note 2: I'm not entirely convinced that 100% should always be the threshold value. It depends on the kind of code and environment you're in, and should depend on your needs and those of the company or the customer. I would suggest starting with a 100% goal and seeing where that is or isn't feasible and where it is or isn't helpful. Maybe you should have different targets for different kinds of code, as testing everything upfront might cost more than fixing some things afterward.
In the same vein, doing a project once in uni in the waterfall style, UMLing all the things, made me a better developer, because I now have those tools in my toolkit. However, I wouldn't recommend anyone do this in a production environment unless there is a strict need to design and document everything upfront (although I'm not sure even aviation would require this nowadays).

Domenico Solazzo

Great points here! 100% code coverage is difficult to achieve, but worth trying for in order to become a much better developer!

Aleksi Kauppila

Thanks for posting, interesting read 👍. Here's one more perspective from @localheinz that I enjoyed.