On one hundred percent code coverage

The mindset around testing has changed over the years as the software industry has evolved. Even though the practice is not universally adopted, testing has become an integral part of the application development cycle. As such, code coverage became a popular subject among developers, and discussions have arisen in an attempt to reach a consensus on it.

This post aims to give that discussion a push and share what I think about code coverage, what I see teams doing, and what I find effective or not.

Test driven development

Test driven development (TDD) has been adopted by developers to achieve high quality software, to keep it evolving over time, and to remove the fear of change. However, the three-stage flow that Kent Beck and Martin Fowler describe (red, green, refactor) is not what I see in the projects I have worked on.

Most of those projects were using some form of ITL (interactive test last) or were not testing at all; testing was handed off to a QA (quality assurance) professional. In this scenario, the team saw no value in keeping the tests up to date, or even in writing them. That is an entire discussion that I will not go into here, but it is likely what leads to misleading metrics, as management often tries to force developers to reach 100% coverage just for the sake of it, or because they see it as a lever to force some kind of behavior on developers.

James Carr's catalog of testing anti-patterns names this one "The Liar", which Dave Farley uses as a baseline to discuss the subject in one of his videos; he also mentions the coverage goal being misleading in his video about Behavior Driven Development.

TDD is a safety net that lets developers keep improving the code, a way to communicate intention, and also a culture to follow. As Dave Farley says in his video: practice TDD and avoid the liar trap.
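
The three-stage flow mentioned above can be sketched in a few lines. The `fizzbuzz` function and its test below are hypothetical examples I made up for illustration, not code from any project or video mentioned in this post:

```python
# A minimal sketch of the red-green-refactor cycle.

# RED: start with a failing test that states the intended behavior.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(2) == "2"

# GREEN: write just enough production code to make the test pass.
def fizzbuzz(n: int) -> str:
    result = ""
    if n % 3 == 0:
        result += "Fizz"
    if n % 5 == 0:
        result += "Buzz"
    return result or str(n)

# REFACTOR: with the test green, restructure freely; the test is the
# safety net that catches any regression along the way.
test_fizzbuzz()
```

The point is the order: the test exists before the code, so it genuinely pins behavior down instead of merely executing lines.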

Quality gates

Quality gates are used to enforce some minimum rules during the software development life cycle. Among different rules we can list:

  • Code linting
  • Test suite
  • Security checks
  • Performance budget

Many would argue that code coverage has its place among these gates, and I agree. We could use code coverage as a gate that keeps code out of production: if the code base has less than X percent coverage, the release process fails. Beyond being a quality gate, though, coverage should be an indication of the test suite's health.
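
As a sketch of such a gate, assuming a Python project tested with pytest and the pytest-cov plugin (the 80 percent threshold and the package name `myapp` are placeholders, not a recommendation):

```shell
# Fail the CI step (non-zero exit code) when total coverage
# drops below the threshold; myapp is a placeholder package name.
pytest --cov=myapp --cov-fail-under=80
```

Whatever the tool, the mechanism is the same: the gate turns the coverage number into a pass/fail signal in the pipeline rather than a target developers chase manually.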

The team should trust the test suite, and the coverage should reflect the health of that suite. In many cases, though, that is not what happens: the common approach is to target X percent coverage no matter what.

Avoiding wrong metrics

My experience tells me that for many developers tests are an obligation, something they have to do. They do not write tests because they want to be proud of the work they do, or because they want to give the next developer (the one who will maintain the code in the future) a hint about what the code was built to support or not.

As such, managers try to enforce the idea that forcing developers to reach X percent coverage will increase the quality of the code base. Emily Bache recorded a three-part video series going through the famous Gilded Rose kata, in which the goal is to refactor the given code. Around minute 15:59 of the first video she depicts an issue in the tests she had: even though she changed a critical part of the production code, the tests were still passing. The code had one hundred percent coverage, yet it was not giving the desired feedback.
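
That failure mode is easy to reproduce outside the kata. Below is a minimal sketch of "The Liar" in Python; the `discount` function and its tests are hypothetical examples, not Emily Bache's code:

```python
def discount(price: float, percent: float) -> float:
    # Imagine a "critical change" later breaks this formula.
    return price * (1 - percent / 100)

def test_discount_the_liar():
    # Executes every line of discount(), so coverage reports 100%...
    discount(100.0, 10.0)
    # ...but asserts nothing, so any behavior change still passes.

def test_discount_honest():
    # An honest test pins the behavior down and fails if it changes.
    assert discount(100.0, 10.0) == 90.0

test_discount_the_liar()
test_discount_honest()
```

Both tests contribute identically to the coverage number; only the second one would catch the broken production code.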

Of course, that was a kata, and she beautifully depicts the problem as she goes through the code. Still, I see developers being proud of themselves because they have X percent coverage.

This video series alone shows how useless a target of X percent coverage is as a metric. Code coverage should be a side effect of the quality of the test suite, a suite that software developers can rely on and that makes tests a safety net for continuous improvement of the code.
