This post originally appeared on jhall.io.
I wrote this post in October of 2015 as I was adjusting to life without unit tests at a new job. I recently stumbled upon it in my Drafts, and edited it down to a single point for publication.
In October of last year I took a new job at a company with a large group of programmers, and an old (15+ years) code base. The new company doesn't generally do unit tests, for various historical reasons, none of which make much sense upon close examination.
Legacy code is simply code without tests.
-- Michael Feathers, Working Effectively with Legacy Code
During my on-boarding process, I found this to be especially painful as I had previously come to see unit tests as a form of code documentation. And there are few things as painful as learning a legacy code base without any documentation!
Early on I came across some Perl code that looked like this:
sub foo {
    my ($a,$b) = @_;
    die "You must provide \$a!" if !defined $a;
    return (undef,"You must provide \$b!") if !defined $b;
    # ... do something interesting
}
Two different conventions for handling errors in a single function. Was this intentional? There are three ways to find out:
1. Look up all the callers
I could look at all of the callers of the foo() function to see how they expect to handle errors. What if they all expect an error as a second return value? Does that mean I can change the first error condition to match this pattern? Maybe. But maybe not. Maybe !defined $a is grave enough to warrant a die. The best I can hope to accomplish with this method is an educated guess, no matter how well the rest of the code is written.
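For concreteness, here is a sketch of what each convention demands of a caller. The foo() body mirrors the snippet above; the success return value is an assumption, added only to make the example self-contained:

```perl
use strict;
use warnings;

# Hypothetical reconstruction of foo() from the snippet above;
# the success case ("ok") is assumed.
sub foo {
    my ($a, $b) = @_;
    die "You must provide \$a!" if !defined $a;
    return (undef, "You must provide \$b!") if !defined $b;
    return ("ok", undef);
}

# Convention 1: a missing $a dies, so the caller must guard with eval.
my $result = eval { foo(undef, 2) };
warn "caught: $@" if $@;

# Convention 2: a missing $b comes back as a second return value,
# so the caller must remember to check it.
my ($value, $error) = foo(1, undef);
warn "error: $error" if defined $error;
```

A caller that only knows about one convention will silently mishandle the other, which is exactly why the inconsistency matters.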
2. Ask the original author
Sometimes there's more than one. In fact, in a case like this, it's probably a safe bet that there is more than one. Odds are one author added the die, and another added the return. Each may simply have followed personal preference without noticing that another convention was already in place. So which one should I use? Unless someone has an express memory of deliberately supporting both error-handling methods, the best I can hope for here is a reconstruction of the function's history, from which I can make an educated decision about whether both error methods are needed.
3. Read the unit tests
If a well-designed unit test were to accompany this code, it would be immediately obvious to me that both error-handling methods were necessary, because the unit test would cover both cases. Now, one might argue that this doesn't prove anything, except that somebody created two error-handling cases and tested them. There may not be any callers using both conventions. That's true, as far as it goes. But it does add a lot of weight to the design decision. For someone to make that decision, and then assert its correctness by writing a unit test, says the decision was well considered. By contrast, the code alone, without a test, simply says to me "Someone decided to die on error without considering the implications, or whether another convention was already in use."
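A minimal sketch of what such a test might look like, using Test::More. The foo() body is reconstructed from the snippet above, and the success return value is assumed:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical reconstruction of foo(); the success value is assumed.
sub foo {
    my ($a, $b) = @_;
    die "You must provide \$a!" if !defined $a;
    return (undef, "You must provide \$b!") if !defined $b;
    return ("something interesting", undef);
}

# The die convention: a missing $a must throw.
eval { foo(undef, 1) };
like($@, qr/provide \$a/, 'missing $a dies');

# The return convention: a missing $b must come back as a second value.
my ($value, $error) = foo(1, undef);
like($error, qr/provide \$b/, 'missing $b returns an error');

done_testing();
```

With these two assertions in place, both conventions are documented as deliberate, and anyone changing one of them will get a failing test as a prompt to reconsider.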
Further, if my assumption about the existing code is correct, a unit test would actually do much more than document the two error-handling cases: it would likely prevent the situation from arising in the first place! Suppose Author A creates the two-return error case. Some time later, Author B comes along and adds the die. When he goes to write a test for it, he'll notice Author A's tests for the two-return-value convention, and will switch to that same convention (unless, of course, there's a good reason for both, in which case his new test will document this).
One could make a case that simply adding a comment explaining the need for two types of error handling would document the decision as well, and would be less work than writing unit tests. That might be true, but comments tend to fall out of sync with code in a way that unit tests cannot.