re: Why code coverage is not a reliable metric

re: Hi, nice article. Do you think using PACT will help. We do not want separate Backend testing and relay on our Unit and Integration tests. We thin...
 

Hello,

I'm not familiar with PACT; I just did some quick research on it a few minutes ago.
If you say you're working on an API and using PACT to test it makes your life easier, I say go for it. :)

"We do not want separate Backend testing and relay on our Unit and Integration tests. We think that is more than sufficient"
Not sure if I understood correctly, but are you saying that you don't have any kind of tests on your backend? (unit, integration, system, acceptance, etc.)
If you don't, it's probably a good idea to have some that test at least basic functionality.
To see whether writing any tests for your backend would be worth it, just look at your list of resolved bugs and make a rough estimate of how many could have been avoided
if you had tests (of any kind).
But if you feel that things are OK this way and few bugs have been found in your backend by QA after rigorous testing, then it's probably not worth investing the time
to write any tests for this.

"We are relying on Code coverage (at the line, method and Class level) and with Sonar cube thing we are like 90%"
Yeah, well the point I was trying to make in this article is that code coverage is not a very reliable metric. :))
That doesn't mean that your unit tests are useless or that you should ignore your coverage.
It simply means that having 90% coverage does not necessarily make your application almost bulletproof.
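To illustrate the point with a small hypothetical sketch (the function and test here are invented for illustration, not taken from the discussion): a test can execute 100% of the lines in a function, so the coverage report looks perfect, while its assertion is too weak to catch an obvious bug.

```python
def apply_discount(price, percent):
    """Apply a percentage discount to a price.

    Bug: the discount is ADDED instead of subtracted.
    """
    return price + price * percent / 100  # should be '-', not '+'


def test_apply_discount():
    # This single call executes every line of apply_discount,
    # so line coverage for it is 100%...
    result = apply_discount(100, 10)
    # ...but the assertion is too weak to notice the sign bug:
    # it passes for both 90 (correct) and 110 (buggy).
    assert result > 0


test_apply_discount()  # passes; coverage says 100%; the bug ships
```

A stronger assertion like `assert result == 90` would fail immediately, which is exactly why the quality of assertions matters more than the coverage percentage.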

"An experienced developer in our team has shared this linked suggesting that we are doing it right in terms of keeping the quality of the code to optimal"
I've looked at this, and it looks OK in theory. However, from my experience, the best solutions are tailored ones.
I think you should try the suggestions in that article and see if it works for you (or what part of it works for you).

"Can you suggest to me, are we doing it the best way?"
Well, as I mentioned above, the best solutions are the ones that are adapted to every team's needs (or project's requirements).
Without being involved in your project/team, it's difficult for me to say if it's the best. That's because "the best" depends on so many factors.
As I mentioned earlier, the link that your colleague shared sounds good to me, but the only way to know for sure is to try it.
If you say that there are no constraints on the budget or resources, you can probably afford to experiment and see what works for you best. ;)

Let me know if this helps! ;)

 

Hi, thanks for replying!

Regarding - "Not sure if I understood correctly, but are you saying that you don't have any kind of tests on your backend?"
...yes, we do have unit and integration tests, but it's just that. We are not sure if those tests are sufficient, and we are not getting any strong opinion about doing or ignoring backend testing. How can we find out whether it is necessary? I read what you said about the defect matrix: "just look at your list of resolved bugs and make a rough estimate of how many could have been avoided
if you had tests (of any kind)." We do not log many; we fix them as soon as we find them, or we add them as a story in our backlog for tracking.

about - "But if you feel that things ok this way and few bugs have been found in your backend by QA after rigorous testing, then it's probably not worth investing the time to write any tests for this." We are not doing manual testing at all, and the backend has never been exposed to any sort of testing except unit and integration (mostly using mocks). This is the first time I am observing this way of programming without much testing. I'm ready to learn from mistakes, but not at the cost of leaving open ends, corner cases, vulnerabilities, etc. So I want to discuss this as much as possible in forums like this one to gain perspective :)

I would advise doing at least some manual testing once in a while.
From my experience, simulations and real-life situations (i.e. mocks vs. actual manual testing) are not always equal.
When mocking, you're basically making assumptions (which are usually favourable to your expectations) that might not hold in real-life scenarios.
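A small hypothetical sketch of what I mean (the client and function names here are invented for illustration): the mock bakes in a favourable assumption, that the profile payload always contains a `"name"` field, so the unit test passes, while the real service could legitimately return a payload without it and crash the same code.

```python
from unittest import mock


def greet_user(client):
    """Build a greeting from the user profile returned by `client`."""
    profile = client.get_profile()
    return "Hello, " + profile["name"]  # assumes 'name' is always present


def test_greet_user_with_mock():
    client = mock.Mock()
    # Favourable assumption baked into the mock: 'name' always exists.
    client.get_profile.return_value = {"name": "Ada"}
    assert greet_user(client) == "Hello, Ada"


test_greet_user_with_mock()  # passes

# Against the real service, a response like {} would raise KeyError,
# and {"name": None} would raise TypeError -- cases the mocked test
# never exercises, and that manual or end-to-end testing would surface.
```

This is why mock-only suites can be green while the integrated system misbehaves: the mock only ever answers the way you told it to.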
