I accidentally reached 100% coverage. Before, I had 90%+ and felt pretty confident. The coverage reports gave me useful insights into my own codebase, and I understood deeply what was going on and where.
The last statement was a bit more difficult to test, and I was pretty confident that it worked. So I wrote the unit test only to learn how to test this kind of stuff… and I found out that there was a bug.
I always thought it was about testing all lines of code. However, a decent coverage reporter also complains about missed else paths, so it actually includes code that isn't even there.
A more accurate definition might be that all branches of the code are executed during the test run. E.g. for every if condition, there have to be at least two unit tests (one for the if clause, one for the else clause).
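To illustrate, here is a minimal sketch (the `shipping_cost` function is hypothetical, not from the original post). Line coverage is satisfied by a single test that enters the if clause, but a branch reporter still flags the untaken path, even when there is no explicit `else` written in the code:

```python
def shipping_cost(total):
    """Free shipping for orders of 100 or more, flat fee otherwise."""
    if total >= 100:
        return 0
    return 5  # the implicit "else" path


# One test alone would execute every line of shipping_cost for one input,
# yet branch coverage requires both outcomes of the condition:
def test_free_shipping():
    # covers the branch where the condition is true
    assert shipping_cost(150) == 0


def test_flat_fee():
    # covers the fall-through branch where the condition is false
    assert shipping_cost(40) == 5
```

With a branch-aware tool (e.g. `coverage.py` run with `--branch`), dropping either test leaves a partial branch in the report even though all lines were hit.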
The good thing is that you don't need unit tests for 100% coverage. When you have a complex integration test that covers all cases, you might get most of your code tested.
The most common counterargument is that the metric itself is useless, because some tests are more important than others. I agree with that: some parts of the code are crucial and should be tested more than others (these parts of the codebase I usually have at least 100% covered).
The point is that the number itself isn't important. Test coverage is great because it helps me understand how the code is executed. It allows me to discover edge cases I'm not aware of.
The goal isn't to have 100% coverage, but rather to understand 100% of the code and be aware of what's going on there. Full code coverage is just a side effect of that approach.
Link to original post.