"Test Coverage" actually has several meanings and they are quite different en.wikipedia.org/wiki/Code_coverage.
My IDE reports lines covered, which on its own is a near-meaningless measure. I can write a "test" that invokes a large amount of code without asserting anything, and my coverage will look extensive (because so many lines of code were executed).
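Here's a minimal sketch of that trap (the `parse_order` function and its test are hypothetical, just for illustration). The test executes every line of the function, so a line-coverage tool reports it as fully covered, yet it would still pass if the function returned complete garbage:

```python
def parse_order(raw):
    # Hypothetical example function: parses "item,qty" into an order dict.
    parts = raw.split(",")
    order = {"item": parts[0], "qty": int(parts[1])}
    order["total"] = order["qty"] * 5
    return order

def test_parse_order_runs():
    # Every line of parse_order executes, so line coverage is 100%...
    # ...but with no assertion, this "test" verifies nothing about the result.
    parse_order("widget,3")
```

A coverage report can't tell the difference between this and a test that actually checks `order["total"]`.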
The folks working in safety-critical circles have tools that measure the different kinds of coverage described in the Wikipedia article. I'm in web dev, so I've never taken things to that level. But if your project needs that kind of rigor, the tools do exist.
Steve McConnell has some nice material on testing in Code Complete (both editions) that shows how to write tests that systematically exercise more paths through your code. It's probably overkill in many circumstances, but it demonstrates how a function with "100% coverage" can be far from 100% tested.
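To make the line-coverage-versus-path-coverage gap concrete, here's a small made-up example (the `settle` function and its tests are mine, not McConnell's). Two tests execute every single line, giving 100% line coverage, yet the one path that actually crashes is never exercised:

```python
def settle(discounted, vip):
    price = 100
    if discounted:
        price = 0           # harmless on its own...
    if vip:
        return 100 / price  # ...but divides by zero when both flags are true
    return price

def test_discounted():
    assert settle(True, False) == 0

def test_vip():
    assert settle(False, True) == 1.0

# Together these two tests execute every line of settle(),
# but the path settle(True, True) raises ZeroDivisionError and is never run.
```

There are four paths through the two `if` statements; the two tests cover all the lines while touching only two of the paths.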
Lots of edge cases don't actually matter, or aren't even reachable from within your program, because of some validation or restriction applied above the code in question.
For example, I can imagine writing a function that does not behave correctly if it is passed a null. But when you look at the actual code, the place that calls it makes it impossible to call that function with null.
So in theory, you could argue that you have a defect, and you could probably write a unit test that demonstrates it. But in practice it doesn't matter, and you're wasting valuable time working on this valid yet irrelevant defect.
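A sketch of what I mean (the names here are hypothetical): in isolation, `format_name` clearly blows up on `None`, but the only call site guards against that, so the defect is unreachable from within the program:

```python
class User:
    def __init__(self, name):
        self.name = name

def format_name(user):
    # In isolation this has a "defect": passing None raises AttributeError.
    return user.name.title()

def handle_request(session):
    # The only call site. The guard above means format_name()
    # can never actually receive None from within this program.
    user = session.get("user")
    if user is None:
        return "please log in"
    return format_name(user)
```

You could write `format_name(None)` as a failing unit test, and it would genuinely fail; the question is whether that failure is worth your time given the guard.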
You could argue that you don't know how that function will be called (or how the calling code could be modified) in the future. And that's true, but of all the priorities calling for my attention, is this the best place to invest my time? Every team has to decide for themselves what their priorities are for their project.
How does your team handle these issues?