
re: Why code coverage is not a reliable metric


I totally agree with what you've written in the conclusion. Code coverage "is" a reliable metric, but it does not show the quality of the tests; it only shows that the code was covered by "some" tests.

We are working with legacy code, with really bad test coverage and bad tests overall. What we did was actually remove some of the old tests and present the coverage drop to our stakeholders with the comment that "this amount of tests did nothing to verify the correctness of the implementation".

 

Indeed, it doesn't show the quality.
Code coverage tools only tell you that there are unit tests covering SOME scenarios of a particular piece of code, and a single unit test is rarely enough for that code.
That's precisely the problem: code coverage tools cannot tell you how "good" a unit test is. For that you need human judgment or a very strong artificial intelligence.
But tools that could do that would probably cost more than companies would be willing to invest.
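
To make that concrete, here is a minimal hypothetical sketch (the apply_discount function and its test are invented for illustration, not taken from the post) of a test that gives a coverage tool 100% line coverage while verifying nothing:

```python
# Hypothetical example: full line coverage, zero verification.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price - price * percent / 100  # a bug here would go unnoticed


def test_apply_discount():
    # This executes every line of apply_discount, so a coverage tool
    # reports 100% coverage, yet nothing about the result is asserted.
    apply_discount(100.0, 20.0)
```

Coverage counts the lines executed, not the expectations checked.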

 

Yup, that's true. I was actually planning on developing a tool that would indicate how "good" the tests are. It's not a silver bullet, but it should at least tell you things like:

  • the stub you've declared isn't used anywhere
  • the result you receive is never verified
  • an object was modified without proper assertions, etc.

...and the list goes on, but I found it to be technically too challenging to do as a pet project :D.
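
For what it's worth, a crude version of one of those checks can be approximated statically. Below is a rough Python sketch (the name tests_without_assertions is made up for illustration) that flags test functions containing no assert statements; a real tool would also need to recognize framework-specific assertion helpers like mock verifications:

```python
# Rough sketch: statically flag test functions with no assert statements.
import ast


def tests_without_assertions(source: str) -> list[str]:
    """Return the names of test_* functions with no assert statements."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            if not any(isinstance(n, ast.Assert) for n in ast.walk(node)):
                flagged.append(node.name)
    return flagged


sample = '''
def test_checks_result():
    assert 1 + 1 == 2

def test_only_calls_code():
    sum([1, 2, 3])  # exercises code, verifies nothing
'''

print(tests_without_assertions(sample))  # ['test_only_calls_code']
```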

That sounds cool.
Please consider sharing if you ever develop it.
