DEV Community 👩‍💻👨‍💻

Philip Weinke


Approaching 100% code coverage in a healthy way

One question that always comes to my mind in discussions about code coverage is: what does the percentage refer to, anyway? This is probably a stupid question. It seems to be common sense that it's the whole codebase. But does it really have to be?

What is the value of code coverage, again? I like how Martin Fowler puts it in his article on this topic:

Well it helps you find which bits of your code aren't being tested.

Finding these bits is easier when there are only a few of them. By that I mean: few in the coverage report. Your codebase may have a lot of untested bits, and that's okay. There are things you shouldn't test anyway¹. To get rid of the noise, you just have to follow a simple process:

For every untested bit, you have to decide: is this testworthy? If it is, write tests (or at least acknowledge that you should write tests for it). If not, exclude it from the coverage analysis.

To some people this is cheating, but I'd rather call it focusing. I don't need to be reminded of code that I'll never write tests for. Working this way allows you to reach 100% without writing tests that provide no real value. But even if you don't reach 100%, it leaves you with a better picture, since you actively decided whether something should be tested or not.

  1. Personally, I do not see much value in testing stuff like getters/setters, generated code or simple data mapping. You, your project and/or team may have other standards and that's fine. 

Top comments (4)

Philippe Bourgau

Hi. Thanks for this interesting idea. It makes a lot of sense for legacy codebases!

I remember doing something similar in a previous team. We used to mark sections of the code that were very performance-sensitive with a @Critical annotation (any kind of in-code meta-information would do). This way, we knew we should apply readability-first coding conventions across the whole codebase, except in these few sections, where we had other coding conventions targeted at maximum performance.

Do you think there is a way to mark a piece of code as @NoNeedToTest directly from within the code? That way, when a developer changes the code, they would see whether or not they should keep the annotation. It might be easy to forget to re-add a piece of code to the coverage analysis when it is changed.


Philip Weinke

I work on a lot of PHP projects and I really like the way PHPUnit handles this. You can exclude code from the coverage analysis with annotations. You basically flag classes, methods or functions with @codeCoverageIgnore and can even wrap blocks of code with @codeCoverageIgnoreStart and @codeCoverageIgnoreEnd. That way, developers directly see whether the code they are working on is covered.
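To make that concrete, here is a small sketch (the Money class and its methods are made up for illustration; the annotations themselves are PHPUnit's documented syntax):

```php
<?php

class Money
{
    private $amountInCents;

    public function __construct(int $amountInCents)
    {
        $this->amountInCents = $amountInCents;
    }

    /**
     * A trivial getter we decided is not testworthy,
     * so the whole method is excluded from the report.
     *
     * @codeCoverageIgnore
     */
    public function amountInCents(): int
    {
        return $this->amountInCents;
    }

    public function add(Money $other): self
    {
        // Only this debug branch is excluded, not the method.
        // @codeCoverageIgnoreStart
        if (getenv('MONEY_DEBUG') !== false) {
            error_log('adding ' . $other->amountInCents);
        }
        // @codeCoverageIgnoreEnd

        return new self($this->amountInCents + $other->amountInCents);
    }
}
```

With this in place, the coverage report only flags add() if it lacks tests; the getter and the debug branch no longer show up as noise, and anyone editing the file sees the exclusion right next to the code.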

I know that Istanbul and JaCoCo have similar annotations. If your language/tools don't, you're left with conventions and that often doesn't work so well. Humans are humans, after all.

Philip Weinke

Thanks for sharing your view. I do TDD as well, and being strict on that is definitely a good thing. Different teams have different standards and I'm not here to judge. If you are getting confidence from testing getters and setters that's a perfect reason to do so.
