Last week I attended Dan North’s workshop “Testing Faster”. Dan North is the originator of the term Behavior Driven Development (BDD). The whole wo...
The main problem that tests have is that they're usually written by programmers.
Most of the weird bugs I've encountered are edge cases that customers trigger. Of course it's good to cover the bug then and create a test case for that issue, but as you said, 'coverage' is misleading in that case.
That is why, in TDD, you write your test before implementing the production code. That way you make sure you don't just look at what the method does and write a test for that outcome. The result is a more independent approach.
But I see your point: separate testers may detect errors better than the developer who implemented the production code. On the other hand, they may also just assert the outcome of the production code method.
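A minimal sketch of what that looks like in practice (the function name and the discount rule are made-up examples): the test is written first and encodes the requirement, so it can't just mirror whatever the implementation happens to return.

```python
# Hypothetical TDD example: the test exists before the implementation,
# so it can only state the expected behaviour, not the current output.
def test_discount_is_capped_at_50_percent():
    assert apply_discount(price=100, percent=80) == 50


# Written afterwards, just enough to make the test pass.
def apply_discount(price, percent):
    return price * (1 - min(percent, 50) / 100)
```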
I realized that the best tester is the client: they always find a way to break the code...
A week ago I wrote some code to encrypt files using OpenSSL. In order to create them, I need two files and a password. My function creates two new files and uses them to create a final one. I checked everything, all the weird validations and "what if..." cases, and asked the whole Support team what the common user does. More than 4 hours of testing. A colleague with more experience with the users also tested my code. Apparently, everything was fine.
Well, the code only stayed in production for 24 hours... One client found a way to make it crash. Two hours trying to figure out why aaannnddd, finally, we found it: he had manually added the extension because (in his own words) "it doesn't have one" (Windows doesn't show it). The only case we didn't consider, because the user usually NEVER touches those files (once every 5 years) and is even less likely to modify them.
So I conclude that it doesn't matter if you write tons of test cases, the user always finds the case you never considered... Of course, write the cases that catch the most common errors. The weirdest ones, let the user find them.
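For what it's worth, once the user has found it, that edge case is easy to pin down as a regression test. A minimal sketch, assuming a hypothetical helper that appends the extension (the real code was OpenSSL-based, so the names and the ".enc" extension here are made up):

```python
import pytest

# Hypothetical regression test for the bug described above: the user manually
# appended the extension because Windows hid it, so the input file already
# ends in ".enc" before our code touches it.
def normalize_extension(filename, extension=".enc"):
    # Only append the extension if it is not already there (case-insensitive).
    if filename.lower().endswith(extension):
        return filename
    return filename + extension


@pytest.mark.parametrize("user_input", ["backup", "backup.enc", "BACKUP.ENC"])
def test_extension_is_not_duplicated(user_input):
    result = normalize_extension(user_input)
    assert result.lower().endswith(".enc")
    assert not result.lower().endswith(".enc.enc")
```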
As Aaron said below (above? :P), customers are 'clever'. You need to take into consideration all of the weird things they might do, including renaming files to match extension requirements, and that means either
a) it's way too time-consuming to write tests for all of those cases, and from a business perspective it might not be feasible cost-wise, or
b) you will most likely miss something.
Imho the best thing is to treat all user input as junk all of the time, and constantly sanitize it and compare it with what you actually need.
Also remember that the web is 'typeless', so user input is always tricky to validate.
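A minimal sketch of that "treat input as junk" approach in Python (the field names and limits are made up): coerce and validate against the narrow shape you actually need, and reject everything else.

```python
import re

# Only names matching this narrow pattern are accepted; everything else is junk.
ALLOWED_NAME = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def parse_upload_request(raw: dict) -> tuple[str, int]:
    # Coerce to the types we need instead of trusting whatever the web sent.
    name = str(raw.get("name", "")).strip()
    if not ALLOWED_NAME.match(name):
        raise ValueError("invalid name")
    try:
        size = int(raw.get("size", ""))
    except (TypeError, ValueError):
        raise ValueError("size must be an integer")
    if not 0 < size <= 10_000_000:
        raise ValueError("size out of range")
    return name, size
```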
What about legacy code?
Don't you think that in this case, 100% coverage is pretty great?
Yes, having a test coverage of 100% is always great. However, everything we do costs time and money. If I had unlimited resources I would probably also try to write tests for every possible case in a legacy system ;-)
And what if there was a tool that creates coverage for legacy code automatically?
Do you know such a tool?
Hehe, if you find such a tool and that tool creates only useful tests then let me know ;-)
Will do :)
So what's your recommendation? Testing only the main components?
It always depends ;-) If tests help you to build your software then do TDD. If you want to decide where to start writing tests for a legacy system then you might ask your stakeholders what’s most important for them and start with the components which are most likely to break and which would cause the biggest damage if they broke.
Thanks a lot!
You're making good and interesting points :)
:)
There are tools like IntelliTest msdn.microsoft.com/en-us/library/d...
Jessica Kerr (in a very interesting talk) mentions a tool called QuickCheck which allows you to run property-based testing to find cases where your program might fail: youtu.be/X25xOhntr6s?t=20m28s
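QuickCheck itself is a Haskell library, but the same idea exists elsewhere; for example, Python's hypothesis library offers comparable property-based testing. A minimal sketch: you state a property and let the tool search for a counterexample instead of hand-picking inputs.

```python
from hypothesis import given, strategies as st


@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    # Sorting an already sorted list should not change it.
    assert sorted(sorted(xs)) == sorted(xs)


@given(st.text())
def test_utf8_round_trip(s):
    # Encoding and then decoding any string should give the original back.
    assert s.encode("utf-8").decode("utf-8") == s
```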
Thanks for sharing Raúl! Do you use such tools? Do they work well for you?
Not that I have ever seen 100% coverage, but if your testing suite is comprehensive, then uncovered code is dead code; if it really served some purpose, it would be covered by a specification from one of the stakeholders, wouldn't it?
Risk-based testing is essentially the same idea. Consider where a bug would hit you most often or where it would deal the most damage; that's where you have to test. Testing is always a spot check, never a full proof: a good test suite will, in the best case, detect the presence of a bug, but it will never be able to show the absence of any bug. Code coverage is nice (and comparatively easy to measure), but it should not be the primary metric to strive for. In my experience, pushing code coverage higher than 70% (provided that the existing test cases really are meaningful) is hardly ever worth it. Better to spend your time on documentation.
My main issue with this reasoning is that if it's that unimportant that it works correctly, then why build it in the first place, or waste time maintaining it? Even more time, since there would be no tests to indicate what's working, what may be broken, how it may be broken, etc.
I think in many cases building a component with low test coverage is still more useful for many users than nothing at all.
Code coverage doesn't seem to help with side effects.
You get the side effect executed and covered, but that doesn't imply the side effect is asserted; sometimes you can't even look for it (see the sketch below).
But it does help you remember why code was written in the first place, and it forces you to review your own code.
Just aim for 100% of responsibilities tested, or 100% of features tested, not 100% of code covered.
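To illustrate the side-effect point (all names are made up): both tests below execute save_user, so a coverage report looks identical for them, but only the second one asserts that the side effect actually happened.

```python
class FakeRepository:
    def __init__(self):
        self.saved = []

    def insert(self, user):
        self.saved.append(user)


def save_user(repo, user):
    repo.insert(user)   # the side effect
    return True         # incidental return value


def test_covered_but_not_asserted():
    repo = FakeRepository()
    assert save_user(repo, "alice") is True   # lines covered, side effect ignored


def test_side_effect_is_asserted():
    repo = FakeRepository()
    save_user(repo, "alice")
    assert repo.saved == ["alice"]            # the behaviour we actually care about
```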
I assume this is NOT about unit testing. If that's the case, it looks good. Otherwise, this whole thing should be reconsidered or forgotten. It is already suspicious that it talks about testing without considering the different types of tests.