This article originally appeared on IG's blog
It is funny how things turn around. For fifteen years I have been preaching TDD, or at least for developers to write some unit tests. However, in recent times I have found myself saying more often, "Why did you write that test?" instead of, "You should write a test."
What is going on?
While walking around the office, I was asked by a developer to help him with some unit tests. It seems that he had trouble using Mockito to test the following piece of code:
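The snippet is not reproduced here, but from the conversation that follows it was plain glue code with a single collaborator. As a rough idea of the kind of code under discussion, here is a hypothetical sketch — all names are invented stand-ins, not the original code:

```java
// Hypothetical stand-ins for the real types; the original snippet is not shown.
interface AccountService {
    Account findAccount(String id);
}

record Account(String id, String name) {}
record AccountDto(String id, String name) {}

// Plain glue code: no conditionals, no loops, no transformations.
class AccountDtoMapper {
    private final AccountService accountService;

    AccountDtoMapper(AccountService accountService) {
        this.accountService = accountService;
    }

    AccountDto toDto(String accountId) {
        Account account = accountService.findAccount(accountId);
        return new AccountDto(account.id(), account.name());
    }
}
```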
I think he was very surprised with my response: "You don't need to test that."
"But I have to!" he said. "How do I know then if the code works?!"
"The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code."
"But without a test, anybody can come, make a change and break the code!"
"Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it."
"But what if you had to write the test?"
"In that case, this is how I would test it:"
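The author's actual test is not shown; in the spirit of his point, a Mockito-free version might look like the following, with a one-line lambda stub in place of a mock. Names are hypothetical, and plain `main`-method asserts are used so the sketch stands alone; in a real codebase this would be a JUnit test:

```java
// Hypothetical stand-ins mirroring the glue code under test (names invented).
interface AccountService {
    Account findAccount(String id);
}

record Account(String id, String name) {}
record AccountDto(String id, String name) {}

class AccountDtoMapper {
    private final AccountService accountService;

    AccountDtoMapper(AccountService accountService) {
        this.accountService = accountService;
    }

    AccountDto toDto(String accountId) {
        Account account = accountService.findAccount(accountId);
        return new AccountDto(account.id(), account.name());
    }
}

class AccountDtoMapperTest {
    public static void main(String[] args) {
        // A one-line lambda stub does everything a Mockito mock would do here.
        AccountService stub = id -> new Account(id, "Alice");
        AccountDto dto = new AccountDtoMapper(stub).toDto("42");
        assert dto.equals(new AccountDto("42", "Alice")) : "mapper should copy fields verbatim";
        System.out.println("ok");
    }
}
```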
"But you are not using Mockito!"
"So what? Mockito is not helping you. Quite the opposite: it is getting in your way and it is not going to make the test more readable or simpler."
"But we decided to use Mockito for all the tests!"
Me: "…"
The next time I bumped into him, he proudly stated that he had managed to write the test with Mockito. I understand the mental satisfaction of getting it working, but nonetheless it made me sad.

Another example
I got pulled in by a developer all excited about the high code coverage of one of their new applications and their newfound love for BDD. Looking around the code, we found the following Cucumber test:
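The original feature file is not reproduced here; a hypothetical sketch of the shape it took — a full Gherkin scenario wrapped around what turns out to be a one-line lookup — might be:

```gherkin
Feature: Error message mapping
  Scenario: Map an error code to its message
    Given an error message mapper
    When I look up the code "TIMEOUT"
    Then the message should be "The request timed out"
```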
If you have used Cucumber before, you will not be surprised by the amount of supporting code that it needs:
And all of that to test:
Yes, a simple map lookup.
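The class itself is not shown in the article; something along these lines (hypothetical names and messages) captures the idea — a static map plus a one-line lookup:

```java
import java.util.Map;

// Hypothetical sketch of the class under test: a static map and one lookup.
class ErrorMessageMapper {
    private static final Map<String, String> MESSAGES = Map.of(
            "TIMEOUT", "The request timed out",
            "NOT_FOUND", "The resource was not found");

    String messageFor(String code) {
        return MESSAGES.getOrDefault(code, "Unknown error");
    }
}
```

A single plain assertion would cover this completely; Cucumber adds a feature file, step-definition classes, and runner configuration on top of one line of logic.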
I had enough of a rapport with the developer to say bluntly, "That is a big waste of time."
"But my boss expects me to write tests for all classes," he replied.
"At the expense of?"
"Expense?"
"Anyway, those tests have nothing to do with BDD."
"I know, but we decided to use Cucumber for all tests."
Me: "…"
I understand the mental satisfaction of bending the tools to your will, but nonetheless it made me sad.
Where is the tragedy?
The tragedy is that two bright developers (both of whom I would take to a team interview) are wasting time writing those kinds of tests, tests that are pointless, and that will need to be maintained by future generations of IG developers.
The tragedy is that instead of using the correct tool for the job, we decide to keep plugging away with the wrong ones, for no particular good reason.
The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.
Instead, we just mechanically apply it without too much thought, which usually means that we end up with at best mediocre results, losing most of the benefits but paying all (or even more) of the cost. In my experience writing good unit tests is hard work.
So is 100% code coverage worth pursuing?
Yes, everybody should achieve it … in one project. I am of the opinion that you have to go to the extreme to know what the limit is.
We already have plenty of experience with one extreme: projects that have zero unit tests, so we know the pain of working on those. What we usually lack is experience with the other extreme: projects where 100% code coverage is enforced and everything is TDD.
Unit testing (especially the test-first approach) is a very good practice, but we should learn which tests are useful and which ones are counterproductive.
But remember: nothing is free, and nothing is a silver bullet. Stop and think.
Latest comments (42)
The problem with this is that it's subjective. As an organization grows, and as people of different levels and personal preferences come and go, subjectivity in these best practices will inevitably lead to lower software quality.
The only reason not to test something is if it's hard or tooling is missing. Those cases require case-by-case analysis and thorough code review.
Hi Alexander,
Thanks a lot for the comment.
I am going to argue that "there are no conditionals, no loops, no transformations" is quite objective. This rule can be followed by any developer of any level. There is no room for interpretation.
On the other hand, I find "don't test something if it's hard" very subjective, as what a developer finds hard to do depends a lot on their experience.
That said, I think you are right that a big chunk of what we do as developers involves a high degree of subjectivity: good architecture and design, code quality, simple code, easy to understand, obvious code, engineering vs craftsmanship, dynamic vs static...
I think this is why a lot of people say that what we do is “art” and not engineering. What I think is a masterpiece, you can think is a big pile of crap, and neither of us can provide objective enough reasons to convince the other.
I think the only reason not to automate a test is if it brings no value, or negative value. The value of a test is the cost of writing and maintaining it versus its benefit: the time saved by catching bugs. Unfortunately, as you point out, that value is very subjective.
I am curious about the “tooling is missing” statement. Do you mean UI tests?
Cheers,
Dan
I believe that 100% test coverage is not a good idea in applications. However, I find it very very valuable in libraries. I have a few personal recommendations:
Avoid white-box testing. It makes refactoring harder. We should be able to achieve 100% test coverage using only the public API of the library.
Every reported bug should be reproduced with a unit test before it is fixed. These unit tests are more important than achieving 100% coverage.
Wow, this is my first time seeing Cucumber used for unit testing. That is ...interesting to say the least.
I think the real problem is this DTOMapper. It sounds like a really sad architecture/framework. It should never have been created in the first place; then no one would have had the idea of testing it.
I know: if you follow the "industry standard" of having an anemic domain model with JavaBeans, you quickly get these abominations. Your choice.
Don't get me started with the new industry standard: annotate-all-the-things.
See youtube.com/watch?v=-6zT60l5hDc for a good presentation about it. Maybe you are familiar ;).
Cheers,
Dan
Part of the tragedy is that code coverage doesn't necessarily relate to actual product quality. There is no reason to believe that increasing the coverage number will actually help root out more bugs and keep the product stable.
Driving quality through tests requires specific targeting and deduction. Some code will need way more than 100% coverage, and other bits will need virtually none.
Testing also has the potential problem that people rely on tests too much while refactoring code. They assume that since they have tests they can make random changes and will be safe.
Writing a good test suite is very hard. As you said, it requires thinking ;).
When doing TDD, I think it's important to treat your tests as seriously as you treat your code. That means also applying the KISS and DRY principles when writing them. When I'm writing tests for a class, I'll usually start with a test that I can instantiate the thing; but once I have written the tests for what the class actually does, I'll prune off all the pointless tests that served as progress markers during the TDD process rather than as actual, meaningful tests.
That is a very mature and, in my experience, unusual practice. Congrats!
Thank you for this article. I can now go back and remove some tests which I felt at the back of my mind were a waste of time! :-)
That said, quite a bit of learning about architecture (is that too big a word?) does come from writing unit tests, but only if we know what we are testing.
Really good article explaining that you have to be pragmatic when doing anything, instead of blindly following best practices.
Thanks a lot for the feedback. Really appreciate it.
A post that really resonates. If one wants to be called a software ENGINEER, one must start thinking about business tradeoffs and invest time where it's most valuable.
An article that has been really enlightening for me in the past:
blog.stevensanderson.com/2009/11/0...
Hi Raptis,
Thanks a lot for the link, it is indeed very good.
Cheers,
Dan
In my opinion, a bright developer should know which pieces of code are worth testing and which are not. Even if they are required to meet a coverage number, I would have expected a much better response from those devs. Were they very junior developers? Maybe not much exposed to unit testing before?
Hi Israel,
I don't think writing good tests is easy, no matter how bright you are. Writing good tests is as difficult as writing good production code.
I see a big cargo cult around testing in the Java world (not sure about other communities), which is difficult to see from within, and more difficult to go against, especially if your team lead and your teammates are part of it.
Thanks for the comments!
Dan