Code coverage is a measure of test badness: if you have low coverage, you know your tests are probably bad. If you have high coverage, you haven't found any obvious badness, but the tests could still be bad.
The same goes for testing in general. Testing is there to find bugs. If you don't find any bugs, that doesn't mean you don't have bugs; but if you do find one, you've detected "badness".
Are you familiar with double-entry bookkeeping? I'll leave a link to the wiki page below. You can trust double-entry bookkeeping because it proves the money is all accounted for, in the same way you can prove 2 + 3 = 5 because 5 - 3 = 2.
If you wrote your tests correctly, you could trust them the same way. It doesn't prove that the code does what it should (only the customer can say that), but it at least proves that the code does what the implementer thinks it should.
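A tiny sketch of that "double-entry" idea (in Python, since the thread names no language; the function names are mine): verify a result with its inverse operation, the way 5 - 3 = 2 confirms that 2 + 3 = 5. The test and the code account for the same fact from two directions.

```python
def add(a, b):
    """The code under test."""
    return a + b

def checked_add(a, b):
    # Double-entry style check: confirm the sum via the inverse
    # operation, e.g. trust 2 + 3 = 5 because 5 - 3 = 2.
    total = add(a, b)
    assert total - b == a, "subtraction does not recover the first operand"
    assert total - a == b, "subtraction does not recover the second operand"
    return total

print(checked_add(2, 3))  # prints 5
```

This doesn't prove `add` does what the customer wants, only that the result is consistent with its inverse, which is exactly the kind of internal bookkeeping the analogy describes.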
How do you even trust your tests? Don't you need another set of tests to test the POJO tests?
Edit: Link to wiki = en.wikipedia.org/wiki/Double-entry...