I can’t solve this one, and I think I need the help of the DEV Community. So, a developer was responding to a code review comment I made and they s...
Never really called this "defensive programming" – or fear-based programming. A good programmer/engineer/architect/etc. is forward-thinking and seeks to author solutions that neither paint themselves nor their successors nor their customer into a corner. If the incremental cost of adding anticipatory code-paths is low and otherwise obvious, it makes little sense to take a shortcut now that will bone you, your successors or your customers in the future (see "technical debt").
Totally agree with this statement, except for when you code for scenarios that might never exist. We had a severe problem of simple tasks taking days longer because every engineer was trying to fix every edge case. Honestly I think a more proactive approach is making your code clean and tested so it's easy to expand later. So the real gotcha here is: how much is too much "defensive programming"?
Yeah. It's definitely a balance. And it's a "balance" that comes both from general experience and knowledge of your particular customers' or target userbase's proclivities.
One thing that helps with tempering a tendency towards sacrificing adequate delivery-speed on the altar of future-proofing is realizing that "future-proofing" is often a stand-in for "idiot-proofing" ...and that there will always be a better idiot out there. =)
Yes indeed. It’s all about knowing the right balance between future proofing and what is needed today by the user. Just out of curiosity, did you find this “exhaustiveness checking” approach to be within the balance? I.e. would you use it every time you write a switch case statement or an if statement that checks all cases? Because I use it every time but I am curious what others think.
I think we need to do our due diligence but if you're working on an app and you know the user base will be quite small (internal software for companies or marketing sites, even niche software) then a lot of the scenarios engineers dream up just won't happen and if they do then you'll have the extra cash to address them. Just make your code clean, so the next engineer hates you a little less 😉
I don’t see new cases as being something “engineers dream up.” Happens every day. That’s why the never type says “with what I know now, the code will never experience a different case at runtime.” Seems like an easy thing to apply every time regardless of the size of the user base. Because the user base will grow. And when the user base grows, so will the requested functionality.
I definitely do, but without solid samples it's just us bike shedding 😂
I would say it depends what you are programming and who for. If you're writing software for something where failure will cost someone's life (eg air traffic control, railway switchers, radiation therapy treatment for cancer), I would argue be as defensive and thorough as possible to avoid unexpected conditions.
That’s great to hear. I also feel that way. What have you done to persuade the less-experienced members of your team that the incremental cost is low?
I find that I can explain it once. I can remind them again in a code review. But if they ignore it the third time... that means that the developer really doesn't want to code like (as you put it) "a good programmer." It's always sad for me to watch when a developer doesn't want to take the extra step to protect the customer from surprise bugs, but part of my growth as a senior dev has been learning to "advise" not "force" good behavior. I must admit that I'm finding it challenging. I'll take any and all advice! :)
Usually it's a dialogue, and it depends on the personalities involved. My approach is generally to frame things as questions, usually something along the lines of "that's a good approach to the direct problem, but how would you extend this to meet [some future requirement]?" The other thing that helps is to remember that showing usually helps more than just telling.
Ultimately, whether you're advising or explicitly mandating a change, you're exercising a degree of force (inasmuch as you're causing someone to do something they wouldn't have otherwise done). The degree of force appropriate to a given situation will depend on the person you're interacting with and the importance and visibility of the deliverable.
The other thing to bear in mind when evaluating the force applied is that, at the end of the day, when you sign off on a PR, your name is now on that code, too. Whoever looks at that commit history can rightly interpret that you were ok with the state of things. Generally, I'm all for letting people do things how they see fit. However, I have to feel comfortable staking my reputation and my employer's reputation on a given chunk of code.
Ooh I’m definitely gonna use this line of yours:
Thank you so much for your response. I think choosing my battles is the hardest part of being a lead/senior dev. It’s great to get feedback from others. :)
I've had devs in code reviews that are grateful for the advice, but others that thank you for the input but know better. Sometimes, if the impact is minimal and the bug reversible, I've let code go through that isn't great so that the person learns from it. I wouldn't recommend that for every problem though!!!
Yea, absolutely. As you pointed out, it's really tough to draw the line between code that needs to be corrected and code that can slide. But I think that the fact that devs like us try to think about the distinction at all makes us better servant leaders. So congrats on being a thoughtful dev! :)
Thanks and congrats to you too!
Defensive programming is a must if you wish to obtain high-quality software. The main thing it does is stop bugs from propagating far through the code.
Your example is a complex one, because it depends a lot on the language. There are numerous simpler examples that can demonstrate the value of defensive coding. That is, just because there are situations where it may not be warranted, doesn't mean it isn't warranted as a whole.
Checking enums via if's is generally the wrong thing to do. If you intended to cover all cases then some kind of switch is expected -- in which case many languages will catch new conditions being added which aren't covered. If the language does not support a switch statement, then having a catch-all final case is the only sane option. This is because a series of if-else conveys a different meaning than a switch statement, but you actually intended to have a switch statement.
A lot of people avoid switch statements since it’s too hard to enforce good behavior like avoiding fall through. But I get your point.
Not all languages are broken like classic C. Even C/C++ compilers can provide warnings on fall-through and missing cases now. Using those languages without warnings turned on, and without heeding all warnings, is crazy.
C# requires full coverage I believe.
Two points of correction/clarification:
1) the solution I provided in the article works for switch statements too
2) the solution I provided gives you feedback at compile time. C# would only help you out at runtime. See here: stackoverflow.com/a/20759116/706768
Oh, I didn't mean to imply there's something wrong with your approach, only that there are simpler cases to convince people of the need for defensive programming. Your solution is fine for the languages you're using.
Hmm, I wonder what language it was, if not C#? I recall one of them would produce an error (and of course C++ with the warning enabled).
Point taken. As for your question, I’m not sure which language besides TS supports checking exhaustiveness. Btw, I had copied the gist link incorrectly in the solution part of the article. You can now see the use of the never type. Woopsie!
@edA-qa, in the .Net world are you thinking of F# maybe?
I'm probably a minority, but I really hate YAGNI. It is based on the assumption that the cost of doing things when the need arises is the same as doing them now. In practice, this assumption tends to be false, because by the time you'll actually need to change that code, two things almost always accumulate:
These two factors make it significantly harder to implement the feature when you need it. You are not trading X hours of work now for X hours of work later with a probability of not needing it - you are trading X hours of work now for αX hours of work later, and suddenly it's no longer a matter of best-practice dogmas - it's a matter of risk management, and the estimated probability of not needing it matters.
Even if after considering everything you decide not to do it now, you should at least put some minimal effort into making it easier to do in the future - like creating stubs and writing more comments explaining things that may be hard to recall later.
Good point. I also try to call out YAGNI whenever possible. But do you think that avoiding fallthrough in a switch case (as shown in the article) is YAGNI? I’m just curious.
YAGNI is flexible enough to support both. You can say it's YAGNI because supporting more than the current 3 states for a traffic light is not a feature you currently need, or you can say it isn't YAGNI because ensuring the code does not break when you change it is a feature you need.
If you treat YAGNI as a holy dogma then it makes sense - as with all sacred scriptures - to force the definitions to match your opinion. I prefer to treat YAGNI - or any other best practice - not as a rule that cannot be broken (but can be twisted) but as a rule of thumb. Not "you shall not do this!" followed by a thunder noise, but "you should take this into consideration."
So, in this case, YAGNI applies - you should consider the probability that you won't need to support more options. But that's just one factor you need to consider - you should also consider the probability you will need it (1 - ρ), the cost of doing it now, the cost of doing it later, and the potential bugs of both cases. And with everything considered - it's pretty clear that avoiding the fallthrough is simply not worth it.
Thank you for responding. :) I’m a bit confused by this point though:
Because it takes about 15 seconds to add an else/default case that throws an error. And if you have an assertUnreachable function around, then it takes the same time to get compile-time feedback.
I totally 100% respect your thoughts on measuring the cost— after all, modern software development and Agile is all about trade offs. But if you just make it a rule of thumb to never fall through... you’ve avoided a whole class of bugs and it only took you 15 seconds per switch statement.
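To make that concrete, here's a minimal sketch of the kind of helper I mean, assuming the article's TrafficLight union (names are illustrative):

```typescript
type TrafficLight = "red" | "yellow" | "green";

// The `never` parameter is the whole trick: this call only type-checks
// when the compiler can prove the value is unreachable.
function assertUnreachable(value: never): never {
  throw new Error(`Unexpected value: ${value}`);
}

function respondToTrafficLight(light: TrafficLight): "stop" | "slow" | "go" {
  switch (light) {
    case "red":
      return "stop";
    case "yellow":
      return "slow";
    case "green":
      return "go";
    default:
      // If someone adds a fourth member to TrafficLight and forgets to
      // handle it here, this line stops compiling.
      return assertUnreachable(light);
  }
}
```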
As for the use of the word YAGNI, another commenter provided this wonderful quote that has helped clarify my thoughts. Martin Fowler says:
And after using the never pattern (described in my article) for a few months now, I can tell you that it makes refactoring so much easier. And to your point, maybe the refactoring is never required, but I do feel a lot happier in the present when I take the time to pay it forward to future me.
Sorry, that was probably a miscommunication. Since the regular meaning of "fallthrough" (a missing `break;` statement in a `switch` clause that causes execution to fall through from one case to another) does not apply here, I interpreted "avoiding fallthrough in a switch case (as shown in the article) is YAGNI" as referring to the original code - the one where not modifying it "respects YAGNI". Of course, the option that does not potentially cause a lethal accident when new options are added is preferable...
A "rule of thumb" does not mean "never do this" or "always do that". What it means is "always consider this". You still need to apply your own judgment.
Best practices are treated too much like holy scriptures. A set of rules, set in stone, that everyone can quote, and whether or not they know the origin of a rule - they assume that it came from God and must never be broken. But for a law to always fit reality it has to be very elaborate, and these best practices usually try to be short and catchy proverbs. So wise sages (like Martin Fowler here) add more interpretations and clauses to make them fit real-life cases.
I really disagree with this approach. Developers like Martin Fowler simply apply their own judgment to the rule for everyone to use, but I think every developer should be capable of thinking for themselves and using their own judgment. You don't need to find some sage to quote to support your judgment - you can provide your own reasoning. Even if you don't have your biography and achievements listed in Wikipedia.
Of course, if your favorite sage published an article or wrote a blog post with well-built arguments, there is no shame in linking it. The point is that you should rely on the logic of the arguments - be they your own or from external sources - and not the holy wisdom of the arguer.
Yup absolutely. I only quoted him because I thought he expressed a nice sentiment succinctly, and I think it's important to cite people for their contributions. As for my own thoughts: I have yet to find a good, safe reason to have a default case that handles more than one state (I thought that was called the fallthrough case, but my bad). I think (without anyone else telling me) that it's better to throw an error (or better yet, create a compiler error like I show in the article) when a not-yet-discovered case is found in the default.
I was hoping that someone would provide a reason to avoid the “rule of thumb.” I like discovering when ideas are not absolutes. The fun is in the gray area. But until someone presents a compelling reason for a non-never default case, I’ll continue to make it a correction on code reviews that are submitted to me. Defense it is.
There is one case I can think of where you want a `default` clause that does not throw an error - handling keycodes (in no particular language). Adding ~100 more `case` clauses for all the other key codes is too much, and you wouldn't want this to fail compilation just because someone updated `KeyCodes` to support some more keys.
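Something along these lines (a sketch in TypeScript; the enum values and handlers are just illustrative, not the original snippet):

```typescript
enum KeyCode {
  Enter = 13,
  Escape = 27,
  ArrowUp = 38,
  ArrowDown = 40,
  // ...a real KeyCode enum would list ~100 more keys
}

function handleKey(key: KeyCode): void {
  switch (key) {
    case KeyCode.Enter:
      console.log("submit the form");
      break;
    case KeyCode.Escape:
      console.log("close the dialog");
      break;
    default:
      // Deliberately non-exhaustive: every other key is ignored, and
      // growing the KeyCode enum should not break compilation here.
      break;
  }
}
```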
Whadya know, an exception to the rule. Bravo! 👏
But yea, I think exhaustively checking every case in the KeyCode enum would be a waste of time and would be way too verbose. I gotta be honest, I wasn't expecting someone to come up with something that made me think it was wise to avoid the never assertion, but you did. :) I guess that's the beauty of seeking feedback.
That's why my best practice is to never blindly follow best practices to the letter and always apply your own judgment.
I can't see the point in the example you provided since a simple dictionary is gonna do the whole if's thing:
And voila, defensive coding achieved!
I’m not sure I see how a dictionary would solve this. Could you provide a code example so I could see and understand?
Sure, I'm using Python generally but this could apply to any lang:
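Roughly like this (sketched here in TypeScript/JS syntax rather than Python, since the idea is the same in any language; the names are illustrative):

```typescript
// A plain lookup table replaces the chain of if/else if branches.
const actions: { [signal: string]: string } = {
  red: "stop",
  yellow: "slow",
  green: "go",
};

function respondToTrafficLight(signal: string): string {
  const action = actions[signal];
  if (action === undefined) {
    // An unknown signal fails loudly instead of silently becoming "go".
    throw new Error(`Unknown signal: ${signal}`);
  }
  return action;
}
```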
As a general rule of thumb, whenever I see many if's, I just think directly: "Can I use a dictionary here instead?"
The problem with that dictionary solution is that it relies on throwing an error at runtime. There's no information provided to the compiler to catch it sooner. Which means that you'll only find out if you forgot a case (or a key in the dictionary) if you write a unit test that tests for exhaustiveness. But rather than write a unit test, wouldn't it be better to find out at compilation that a mistake was made?
So I would recommend that you give the article another read and then try out the samples I provided in the article in a REPL. The advantages probably aren't apparent without seeing how quickly the error pops up in a TypeScript playground. You'll find that they allow you to discover bugs much faster (since you don't have to run the code).
Ah, now I get your point...
You're trying to get things checked at compile time, but I'm not sure if the compiler is really suited to check the app logic (the compiler's job is to check syntax and such).
Not really, I trust my tests more than the compiler, cuz the app logic might break one day (one way or another), and the beauty of tests is to get that logic checked every time you go into the building stage.
@yaser
I completely disagree here, ideally the compiler would check all your app logic too, the more you can get checked by the compiler, the less tests you need to maintain. Languages like Haskell are popular just because the compiler can help you a lot.
Tests can never show the absence of bugs, only the presence. Having a type system cut the possible inputs down to (in this case) a finite amount of values is far more valuable than testing the 4 values you mention in your unit tests.
@Cubicle
Not with TypeScript: the following code will throw a compiler error if you change the `TrafficLight` type without adding something to the object:
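A minimal sketch of what I mean (the exact original snippet may have differed; the types follow the article's example):

```typescript
type TrafficLight = "red" | "yellow" | "green";

// Record<TrafficLight, ...> forces every member of the union to appear
// as a key, so adding a new member breaks compilation until it's handled.
const responses: Record<TrafficLight, "stop" | "slow" | "go"> = {
  red: "stop",
  yellow: "slow",
  green: "go",
};

const respondToTrafficLight = (light: TrafficLight) => responses[light];
```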
Great work Yan. Yes, your type would require each key to be present. It's another thing I love about TypeScript. There are so many ways to express concepts. :)
I agree with you in this TypeScript scenario.
But, as for Python (or any similar lang), I'm not sure this would be the case, since this even goes against the Python motto: "let the exceptions fly and catch them later".
So, the defensive programming model might differ from one language to another.
Interesting. I tried to find an article about Python and “letting the exceptions fly” but I couldn’t find anything.
One should always choose the best tool for the job. Sometimes that might be throwing/catching an error, and other times it might mean preventing it with the type system. Why limit yourself to one tool?
“When all you have is a hammer, every problem starts to look like a nail.”
I think the last time I heard about it was in a video or so, but the correct idiom is "Easier to ask for forgiveness than permission"
This video explains it in a nice way: youtube.com/watch?v=x3v9zMX1s4s
And this article summarizes things: devblogs.microsoft.com/python/idio...
I can totally relate after I saw how TypeScript goes (I never used it before, just the old normal JS).
I have to agree that if there's ANY chance that a stray value is going to get into your function you should throw an exception (if your type system forbids it then you wouldn't need it.. but....).
Now about your specific example. If/else ifs/.. are a bit of a code smell in the OO era. That is why people propose the dictionary solution as a table based solution to the problem. Or one could use an OO based solution where each type is an object that responds with a method that indicates the desired behavior.
Doing it that way the OO hierarchy (or interface) would force you into doing it right every time.
But barring the specific example, defensiveness is essential. Especially in the new security-conscious world.
If someone throws YAGNI at you, throw the 5 C's of programming:
Clear
Concise
Correct
Complete
and the most important these days: C-secure
You mention that if statements are a code smell... but what does it smell of?
Like the code in the article does its job and it communicates its intention clearly.
They are an OO code smell. In OO, things should be solved by dynamic dispatch and not by successive ifs, since all the rules about a type should be in a class, rather than dispersed all over the program.
A second best option is to use tables (as others suggested) but that's just a fancy if/elseif/...
From a purist OO perspective the solution is to make methods in each type that would solve your problem.
So instead of:
if (type == 'red') {
  doSomething1();
} else if (type == 'blue') {
  doSomething2();
} else if ...
You can just write:
type.doSomething()
Where doSomething() is overloaded for each type.
Now you may use a single if/else if/... statement or table in one place to convert your string into an object (a factory method), but that's in only one place in the program.
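A rough sketch of what that looks like (class and method names are just illustrative):

```typescript
interface Signal {
  respond(): string;
}

class Red implements Signal {
  respond() {
    return "stop";
  }
}

class Green implements Signal {
  respond() {
    return "go";
  }
}

// The one and only place that maps raw strings to objects.
function signalFromString(raw: string): Signal {
  switch (raw) {
    case "red":
      return new Red();
    case "green":
      return new Green();
    default:
      throw new Error(`Unknown signal: ${raw}`);
  }
}

// Callers never branch on the type again; dynamic dispatch does the work.
const action = signalFromString("red").respond(); // "stop"
```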
If you can’t tell me why it smells (as in you can’t tell me what type of bug will occur based off of what you’re seeing) then you might be blindly following doctrine.
And as far as “the OO era” that you mentioned, many of us are moving to more functional concepts like splitting data from logic. I do that not because I want to follow the functional programming doctrine but because I found code to be more testable that way. I’ve also found that I was able to utilize composition much easier when I started to throw away the idea of encapsulation.
Consider checking out this incredible article: medium.com/@cscalfani/goodbye-obje...
The specific bug is that by spreading your type logic all over the program, if you need to update it, you need to find all those if statements to update them. So your code is more error prone and less maintainable. Its also a violation of the DRY principle. Specifically if you want to add a new type, and forget to update one of your many if statements you'll have a bug. Or if you want to change the behavior of a type, and forget to update it in one of the many if statements.
I did mention "OO era" in case, because I knew the functional style would be mentioned. Note however that with a functional style, you shouldn't necessarily throw away encapsulation. Encapsulation is modeled in your code module. You could do the exact same thing in a functional style without proliferating your code with if statements.
The functional equivalent for this is multimethods. (though I'm not sure the language you are using supports that construct)
See clojure.org/reference/multimethods for example
You might be missing the point when you say this. Based off what you said, I feel that I might not have explained myself well: I'd like to clarify that the assertNever function tells me if a new type was added, and it tells me if one was removed.
So I don’t need to “find those if statements” because the compiler will inform me.
That is a work around.
But no, the compiler won't tell you. An exception will tell you at runtime. Assuming you tested correctly, you may find this before it hits production. But it's hard to argue that this is better than just using better coding practices.
if/elseif/... is a code smell. A bad practice if it can be avoided. In this case it can be avoided.
Try compiling this code. The compiler will in fact tell you if you’re missing a case that’s described in the discriminated union.
Of course, that only works if you recompile all your libraries and dependencies. The OO solution would work even if you only recompiled the class.
If you want to write "defensively", you have to ask yourself: what is the target I want to defend against? The problem with defensive code is that, depending on what your targets are, the workload increases.
Regarding your traffic light example, I would argue that there is no need to implement the "never" strategy. Simply write tests - which you would do anyway - and before it goes to production, the corresponding test would fail, you would detect the booboo and fix it. No harm was done.
But I agree, it is hard to assess the risk for errors correctly.
Hi Thomas. While I appreciate your response, I needed to share that at the time you commented, I had a copy paste error and wasn’t showing the actual solution in the final part of the article. So if you get the chance, you can see how the never type is used and how it helps me to not have to write a test at all.
I think it’s much nicer to not have to write a test for this kind of thing.
That being said, I’m thankful that there are people out there like yourself who want to write automated tests at all. Me too! Let’s keep spreading the test-writing love. :)
Happens ;)
To put it in another way:
Write tests to observe changing behaviour of your application.
If your code has to change due to an error, write a test covering that changing behaviour.
Do not try to cover each and every cornercase.
For me there is a distinction to be made when it comes to defensive programming. On the type level you should be as defensive as possible. Your types should result in compiler errors if you do not handle new cases in the future. Make your types as small as possible, just like you did in your article (use a sum of three distinct values instead of simply a string).
On the value level, on the other hand, I really dislike defensive programming, usually because it means that your compiler's type system is not strong enough to check these things at compile time. The prime example here is the classic `null`. TypeScript has non-nullable types, but, for example, Java programmers have to put `if (x != null)` everywhere, just in case.
I would write this code snippet like this:
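(A sketch of the shape I mean, assuming the article's traffic light example; the exact original snippet may have differed.)

```typescript
// Type level: the legal inputs and outputs.
type TrafficLight = "red" | "yellow" | "green";
type Action = "stop" | "slow" | "go";

// Value level: only the mapping I actually want to express. If a new
// member is added to TrafficLight, this object stops compiling until
// the new case gets an entry.
const responses: { [Light in TrafficLight]: Action } = {
  red: "stop",
  yellow: "slow",
  green: "go",
};

const respondToTrafficLight = (light: TrafficLight): Action => responses[light];
```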
The value level is short and concise, it only contains what I want to do (ie mapping a signal to some action) with not much boilerplate syntax. The type level code is almost the same length, guaranteeing that adding future cases will result in a compiler error.
OP: This is another great solution ^
Interesting article ( and very interesting blog btw :-) )
I don't really like the term defensive programming, because over the years I've seen too many bloated methods (even in typed languages like C# and ActionScript) full of null checks and undefined checks for basically everything. And every time I asked "but why...??" the response was always "just to be sure" or "better be safe than sorry" or "do you really wanna risk a crash on production!?!?" (the last being pronounced with a horrified face showing disappointment at my being so irresponsible).
But I agree that it is better to make the best out of our coding style and out of the tools we have ( strictly type languages, linters, unit tests) to prevent errors that might happen in the future.
The first thing that I thought when I saw your example was that I would have used a Dictionary/Map to retrieve the right action for the right signal. If there is no mapping, then there would be an error. Of course, that would not work at compile time - so the solution suggested here is perfect and elegant.
But when it comes to really being defensive, I sometimes get very paranoid and think of what could happen at runtime. All your type checking works only at compile time: in fact, the elegant solution would be converted to this in simple JS:
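Something like this (a rough sketch of the compiled output; the exact shape depends on how the original TypeScript was written):

```js
// The TrafficLight type has been erased, so nothing here rejects a
// value like "blue" up front.
function assertUnreachable(value) {
  throw new Error("Unexpected value: " + value);
}

function respondToTrafficLight(light) {
  if (light === "red") return "stop";
  if (light === "yellow") return "slow";
  if (light === "green") return "go";
  return assertUnreachable(light); // only a runtime throw remains
}
```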
And nothing would prevent your function from being executed with invalid values. (Imagine that the signal is a value coming from the server or from any external API.)
Of course, with a mapping instead of an if or switch you will get "undefined" rather than "go", but you might end up with unpredictable behaviour anyway. In such a case, an error thrown (and properly handled) would be better.
So to conclude: as always Defensive Programming: Good? Bad? It depends :-)
Thank you for your kind reply. Yes, I’ve been cooking up the blog articles for many months now. :)
So as for the code in your response, I'm a bit confused about the benefit. Your function returns undefined (as you mentioned). I don't understand why you feel that's a good thing. That means that consumer code needs to handle the undefined case every time, and if they forget to do so, it will be a runtime exception. And that exception would be some kind of unclear "undefined" exception. Even in JS it would be better to throw an error so as to be explicit about the reason why it couldn't find a match. But there's a better way with TypeScript, where you can find out months before an error would ever be discovered. The compiler is run on every check-in. Why wouldn't you want the faster feedback of finding out when you check your code in?
Personally, I find that I go much faster if I have a short feedback loop where I can find out what I did wrong. Now, I realize that I’m arguing for static type analysis, which is not everyone’s cup of tea (although the recent Stack Overflow survey showed immense support for TypeScript). But the main point I’d like to make is that it truly is better to avoid bugs in production if you can catch them sooner.
Probably I did not articulate my reply properly.
I totally agree with the benefit of static type checking, and I definitely want to catch errors at compile time.
And I said that I would implement that check not with an IF/ELSE IF nor with a SWITCH, but rather with a Mapping like the one suggested above by @jvanbruegge.
BUT I would also be even more defensive and check for possible errors at runtime, in case the function is called with an invalid value at runtime.
At runtime TypeScript does not exist: after you compile, all your type checking is gone and what you have is the function I posted.
See and play around with this Typescript Playground snippet for comparison
Therefore, if the invocation of the function could be dynamic (the server could respond with blue, or the user could type in orange), I would add a catch and a fallback to prevent a runtime error.
Hope I was clearer now :-)
Yup yup. Yea the map + the undefined check would be the ideal approach. Sorry I didn’t understand at first. Classic misunderstanding with remote communication. My bad! :)
Oh and as far as the server sending bad or new data types, I have been using a library called TSOA to enforce runtime types at the boundaries. It’s a way of preventing “garbage in garbage out.” It’s pretty cool stuff. There are similar libraries that do runtime checking in the UI too.
No, I would not add runtime checks to this. If you call the function with something different than the types specify, that's on you. The caller has to verify that he can call the function.
I think it depends on how complex it will be when you try to be "defensive". There are times that the code gets too complex to understand because we're trying to handle a lot of cases, when it is just a simple feature.
Simple features have a habit of becoming complex features as time progresses and the application grows. That’s why defensive programming provides a foundation so you can grow the program without worrying.
It takes experience to determine which simple features have the potential to become complex ones. If the code gets too complex and I'm spending too much time, I will always revert to KISS and YAGNI and not be paranoid about future use cases, because they will show up anyway and we will need to do some CR/Bug/Enhancement about it. I focus more on good design so that code can be easily refactored.
If you’re waiting til someone makes a bug report, then you’re waiting too long. That’s by definition reactive. The approach in this article is proactive.
The best bug is the one that never makes it to production.
Yeah, though I still think it should be a balance; you can't just keep being proactive, because it might cost you too much time when you could just design it better. If you are already good at it then that's awesome.
In your example, in order to add a new traffic signal, you don't file a bug. You need an enhancement for that, and you can refactor the code the way you did. But in other cases, simple code that only handles one case but is made too complex will be a maintenance nightmare for you and your team.
I think it is valuable to not only think of defensive / anticipatory / forward thinking programming in terms of the code only, but to also think about it in terms of the domain.
In other words: when this code fails, will it result in a safe / acceptable condition in the physical domain? Disregarding all other safeguards, this code would fail with "green" (which is probably the least desirable outcome with regards to the domain), but can be easily changed to fail with "red". If there is no buy-in to change from a code perspective, at least convince them to make a change that will fail to a safe domain condition.
Interesting thought. I suppose the whole idea is that I don’t think we can ever know what the future holds. So how can we (as you say) “fail to a safe domain condition?” Because we don’t know what is safe for a case that we haven’t discovered yet. So for instance, the safest response is to stop. But if you stop at a blinking yellow, you might enrage the driver behind you who was expecting to pause. I’m sort of joking. But yes, I will meditate on what you’ve suggested. I think you might be on to something. :)
I had exactly the same thought when I saw the example, and can't stress enough how much I think this is a great approach
When I review pull requests I like to make my comments open ended. "What do you think about...", "Do you see a benefit with...", "How difficult would it be to..."
This helps draw out expertise from the other developer, fosters open discussion, and helps maintain code quality and future proofing while allowing the developer that is actually going to write it to say that it isn't worth the time - and gives them a chance to provide a reason for that opinion.
That’s a great idea. I’ve always appreciated the Socratic method. Question for you: does this work well for you in remote teams? I feel like this might work better as an in person code review. I say this because questions can be perceived as passive aggressive in text.
I work almost entirely remote, and most of the people I work with are non-native English speakers. So it definitely works, but you are absolutely right. I make sure to go out of my way to prevent an accidental passive-aggressive tone.
Yea totally. Remote working is a skill unto itself.
I would absolutely code it the way you did. I wouldn't say I program defensively, I'd say I program haphazardly, but one of the reasons is because I trust myself to have written code that will explode when I break something. If you don't write code like this, then when you go to add that next traffic light status, you're going to be extremely scared and want to do extensive regression testing. Plus, it's just logically incorrect to use an `else` here. There are a fixed number of cases, and "go" is the correct answer for one of those cases, not the default with a few exceptions.
Every professional software deliverer has the responsibility to consider the implications of what they're doing and when.
Defensive programming is one example where this comes up.
This post is really about how we balance the relative costs of doing or not doing something at a particular point in time. The bad news is that YAGNI is a little more complicated than the immediate “cost to build” the unnecessary feature you’ve described.
What if your customer had another, much more valuable feature for you in the meantime? You might have incurred a "cost of delay" in not giving the customer that other new feature quickly.
In introducing the presumptive feature now you've also incurred a "cost to carry" that code around in your codebase. You're going to have to ask yourself and others to read it and understand it, to store it and version-control it.
When you do eventually encounter the breaking scenario you’ve described in the future you will incur a “cost to repair” that feature. That cost might be greater than it would be now if your code changes a great deal in the meantime.
So YAGNI absolutely should not apply to refactoring that allows you to change your code easily. A clean codebase is the ultimate defence kit against these costs.
I really like your thought process on this. Though I just want to make sure I understand you correctly: Are you saying that YAGNI is not a reasonable argument against clean code techniques like the never/exhaustiveness checking I showed above?
If so, I agree with you because it helps to keep the codebase clean, and as you said:
Well put. :)
Yes. As Martin Fowler puts it:
“Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.”
Wow. That’s an incredible quote. Thank you so much for making me aware of it. :)
Here’s the reference along with a discussion of the cost considerations behind YAGNI
martinfowler.com/bliki/Yagni.html
I agree that done well, defensive programming is helpful. Being mindful of the future is generally a good thing, especially in software engineering where things change so frequently! However, in the interest of making this a more interesting discussion, I posit defensive programming can be done wrong. I once contributed to a JavaScript codebase that took defensive programming to an extreme. Every function checked that all the arguments were the right type and were non-null. While this might be desirable for some functions (especially functions consuming user input or exposed in an API), the proliferation of defensive checks had an obfuscating effect. I couldn't tell which functions should actually expect and handle null input, and which functions had just cargo-culted the checks. We structure our code primarily to communicate to our future selves and teammates. This code was hiding valuable information about the flow of data through the codebase by introducing unnecessary fear.
We later transitioned to TypeScript and deleting all of those defensive checks (replacing them with types) was very satisfying.
Omg this was totally the way that I wrote JS before TypeScript was invented:
And this was also me:
It’s so nice to meet others who have had similar journeys. And thank you for sharing your perspective on the clutter of duck typing. I never thought of that so it was really valuable to hear. :)
This is good (defensive) programming; what differentiates this to a YAGNI issue, is - the time taken to prevent calamity is a lot less than the time needed to fix it later.
If prevention becomes bigger than the actual problem being solved then definitely, YAGNI.
In the spirit of your article, I'm going to say - this is where past, present and future meet in the eternal now, and leave you with this quote:
I totally agree about the preventing calamity aspect. It’s hard to straddle the line between future proofing and letting the future rule you. But I think this example is an easy decision due to the low cost and high value.
Btw I’m happy to see another Thich Nhat Hanh fan on dev.to. :) Thank you for your excellent reply. :)
Yeah indeed, being prepared for the future is always great. Actually your example is very simple, and I'm not very good at dealing with this type of example; I always like to explore concrete examples. Though why didn't you consider the Open-Closed Principle? It's more appropriate for dealing with future extensions.
Returning to the solution you provided, defending against future changes by throwing a compile error is not my way to go, in my opinion; the code should be more flexible. Maybe you could throw an exception instead (as I understood from your words, TypeScript will stop compiling if it faces that error).
Side note: getting rid of those chained `if` statements could solve some problems and introduce more flexibility. And as I mentioned, I'm really bad with simple illustrative examples 😋.
How does getting rid of the chained ifs help? I'd have to see an example to understand what you mean.
As for your point about throwing an error, why would a runtime exception be a better option?
When would you identify that an error was being thrown? (My assumption is that the error will first manifest in production, where the users will suffer the consequence. Whereas with a compilation error you can catch the logical mistake before the code even hits production.)
As I mentioned, I was talking in general.
1) The `if` part: I meant that making early decisions may add more flexibility to your code. Usually, it would make the code more readable and maintainable.
2) Well, my statement regarding the run-time exception was a little bit biased by the Open-Closed Principle, as I mentioned. Which suggests making the code extensible without actually changing the base code itself.
Other reasons I was thinking about:
3) I think that we have to be creative about this point; making an application that is tolerant of run-time errors is always challenging and great. (Maybe for your example, you can use the previous state as a fallback when a signal is not defined, or maybe you can define an emergency safe state that avoids all damage, maybe a pull-over state.) Also, it totally depends on the situation; for use cases with low tolerance for errors, your approach may be appropriate to an extent (still, you are not dealing with unexpected run-time errors).
I hope that you got what I'm trying to illustrate.
One thing that's informed my thinking about this is a talk where Rich Hickey discusses the relative badness of software problems. The big ones are what he calls "problems of misconception," and are closer to the domain or architecture. The small ones are things like typos, naming, formatting, etc. The small ones are easy to fix; the big ones are orders of magnitude more dangerous.
We'd love to guarantee that certain things can't happen at runtime, so that we'll have less to reason about. The guarantees usually require adherence to some discipline, like design-by-contract, static typing, or avoidance of language features (e.g. no `switch` without `break`). But we should weigh the behavior's badness (and likelihood) versus the cost of prevention.
Also, no one's perfectly defensive. For example, occasionally one needs multiple dispatch. Yet most programmers don't insist on sticking to the handful of languages that offer it. Do you?
We're all happy to say YAGNI when we perceive the costs.
Static type checking has its benefits, of course. But annotating isn't where the true cost lies. Instead, it's `Option`s / `Maybe`s / etc., or else allowing null to propagate, which is type safety theater. (Though I've heard that TS has row polymorphism and I'm interested to see how much it helps.)
The good news is, type checks are always on; you can't neglect them. And the cost of annotation is low for most functions.
The bad news is that the type checker can only guarantee a narrow set of things. It guarantees them all day long, but that's only because they're the things that are easy to check. What are the chances they're also the things that you truly care about?
Well, others notwithstanding, I like the term "defensive programming," as long as we understand that the one being defended is the user, not the programmer.
haha funny and true
I have just started to learn to program. I'm 45 and have an unrelated unrelenting full time job that doesn't require it.
I know just enough that I freaked out a little inside when I saw that code. That will kill someone is what I thought.
I appreciate the example... And hey, aren't you just being mindful :)
Welcome and congratulations on choosing a wonderful hobby, career, and passion. I love programming! And yes, some software can be really dangerous. I guess it’s bad to apply military-like defensive practices against code that (for instance) makes Alexa tell you a joke. But I do think it makes for a nicer life if you apply defensive programming to all software. I know I don’t get nearly as many production bugs or weekend calls as other devs do. I guess it’s a trade off. But yes, I prefer to be mindful.
Again, welcome and thank you for your comment. 😊
The never type is nice, and this is a nice example of it, but I feel YAGNI really should apply to features, or the idea of a feature down the pipeline.
This is more a case of using a bad data structure, I feel. There should not be an anonymous union type as a return, and since it's a map, I think an explicit map makes more sense to use here.
This all said, the kind of YAGNI that it is, I say fine, but make a tech debt issue or something if they don't want to deal with it right now. I get the idea that you sometimes just want done with a ticket or PR.
But a never case is nice for sure. It fixes problems of people using switches poorly too. Switches work well for things that have a defined range of values that won't change (Days of the week I hope...) but not things like traffic light states.
Good points. I especially appreciate your pragmatic point about making a tech debt ticket.
Yea I must admit that I used the string union type because I think it’s fun (and a type that is only available in TypeScript and Haskell from what I understand), but I could have used an enum to illustrate the same point. And I feel like this kind of defensive programming is even more important for enums since values can be added at any time.
But yea, never is a fun type and very valuable here.
Useful post, but I'm not sure defensive programming has anything to do with fear, it's more, as you touch on, to do with professionalism. We're paid to produce good software, and part of that is minimising bugs. Simply put defensive programming helps minimise bugs.
Yes I agree, and thank you for weighing in. But I firmly believe that happiness comes from living in the present. So it’s a fun intellectual problem that I’ll be exploring in future posts.
My current conclusion is that by being proactive in your code you can live more freely. And in a sense that's the happiest way to live right now.
But I suppose one can go “too far” with defensive coding in which case it helps to have good teammates that call it out. It’s all a balance, right?
I believe that's what Aristotle said. ;)
Trying to find a balance is usually the best approach.
That's not defensive programming. That's just common sense. Traffic lights are not equal. Green allows some dangerous action. Red is safe. So it is just correct to explicitly check conditions to allow something dangerous and then fall back to the less dangerous cases. Code that communicates these semantics would do a great job for its readers.
A PR should be merged if it solves a problem and does not break/prevent future changes.
Then you can always do a followup PR to fix styling, complexity etc.
Everything else can be solved outside of the PR.
Need better styling? Teach people this.
Need less complexity? Pair program and mentor people.
Need people to see future problems? Share your code visions with the team at a planning meeting or even better write a spec’.
Mind, it took me a while to get over my own ego to realize this... :)
Yup, I totally agree that if you want to raise the level of expertise on the team it has to come from mentoring. But it’s kinda important to mention that:
So while I love pair programming personally, sometimes I have to use the code review as my chance to learn and to teach.
I was speaking in broad terms, I totally agree, this is not a one shoe fits all situation. :)
Both of those examples however present a less than ideal situation, that could be resolved outside of the PR.
Such as keeping people that need to learn on site, and not remote. And teaching people the importance of pair programming, especially in a learning setting.
If these can’t be done, then the mitigating action can be to use the PR.
But it really should be considered last option, as it is super unproductive, compared to alternatives. At least in my humble experience.
I work at a 100% remote company, so there is no “onsite” location
Good idea! In fact I tried that today. I’m on a pair programming break right now. :) It’s tough going at first, but I’ll be sharing my learnings on it in a future article.
Thank you for the fun conversation. I hope you subscribe/ keep reading. I like having readers who challenge my assumptions! :)
Being still relatively a beginner, I would have added green as an else if, and then an error message in the else. Why? Because I’m always stumbling on unexpected bugs or behaviors that I never could have anticipated (because of lack of experience); so I’m always thinking “what could go wrong?” with every line I code.
Also, where I come from, the traffic light could be red, green, yellow, or turned off because of a power outage, so I would have thought of that specific scenario as having an important reason not to be else: green. I would do else: red, so if something unexpected comes up, the response is “stop” and not “go”.
Now, I wonder if this train of thought could be limiting my problem solving, or if by being extra-careful I’m being a better programmer? 🤔
It’s always better to be explicit. So even if you wanted to have a case for “power off” you should have a specific if statement for that. Falling through hides complexity and makes for code that looks simple but has hidden complexity. So please don’t have anything in the else other than:
throw new Error("unexpected scenario")
This is an interesting topic to chat about sometimes. (hint hint) I completely get how you think the two 'mentalities' of yours contradict.
There is nothing inherently wrong with 'defensive programming'. It is part of your job description as lead developer to see potential problems before they arise and predict to some extent what the future could bring.
I would like to suggest though that you stop calling it defensive programming and call it anticipatory programming or something.
Anticipatory programming takes the negative vibration of defensiveness and turns it into something positive; the suggestion that you know what you're doing. You take the future into account even when developing in/for the present.
Based on where I think you are now as a developer, I suggest you have a look at functional programming in JavaScript and the application design patterns that come with it. In my experience functional programming eradicates issues like this altogether. This guy @joelnet wrote some stuff about it, good place to start.
My coding paradigm: some poor bastard is going to have to work with this some day, and it will probably be me. Code accordingly.
Seriously, experience is a good teacher. It's not always the deciding factor, but it is something to listen to.
Yes, there’s no replacement for experience. :) The challenge I find as a senior dev is how to help give junior devs the opportunity to fail safely so they too can gain experience. It’s a tough balancing act. :/
Assume the next person to look at your code is a psychopath and knows where you live
Great article! Defensive programming should be the de facto standard. It's not being scared of the future, it's actually being optimistic about the future, in the sense that because you're handling these potential future scenarios, future you will not be hindered by them and will be able to develop freely.
Wow, you nailed it. You perfectly summarized why I continue to use defensive programming. It’s like we’re “paying it forward” even though the person we’re helping out might be future us.
Well put. :)
Thank you very much, it was a great article. Look forward to hearing more of your stuff!
The first code is indeed brittle, in the sense that it translates three definite cases as two cases and everything else, which is semantically very different.
What you achieve manually with the fourth case is what the Elm compiler does automatically with pattern matching on an algebraic data type:
With this code, you have your `never` case for free. If someone adds a type constructor in `TrafficLight`, `respondToTrafficLight` won't compile anymore. Elm functions must be total functions.
Oh yea I would love to use Elm! I asked my coworkers to check it out and they almost unanimously said no. :(
In this particular example, I would probably just make "stop" a fall-through case, so no matter what happens the movement doesn't start at the wrong time, and call that a day. Globally, it would depend on the real case, but I'd say you might want to include your reasoning in the code review from the beginning, and "you never know what might happen in the future" is not good enough. The better review would say exactly what kind of trouble you expect.
I would argue you are using the term "defensive programming" wrongly. To me, what's described here is "offensive programming"/"fail-fast". The original version - continuing to run a flawed process in the else branch and pretending everything is fine, if (null) return null, etc. - is the defensive programming trademark.
YAGNI is (like many guidelines) binary and almost dogmatic. Craig Larman in his "Applying UML and patterns" book put forth a nuance between variation points (explicit requirements or existing variations that are coded) vs evolution points (or speculative points of variation). Both can be documented, although my experience is that companies (programmers) almost never do, which is why YAGNI is perhaps a valuable dogma. Your example sounds like an evolution point.
As for fear related to past bug correction, I'm not a Buddhist. But isn't there some related philosophy about having suffered with "bad" code, which is a necessity to appreciate good design? I think it's something that makes teaching design difficult, because many things that give us maintainability can't be fully appreciated if we haven't had to pore over code that wasn't built with them.
Lack of foresight is unfortunately not a bug.
And objectively speaking, adding more to the solution would add complexity that only might be needed, or at least is not needed now.
In my experience these types of debates often waste more time/resources than simply changing it later when it becomes relevant would have.
(Tho IMHO in this case it just comes off a bit lazy to not add the suggestion, if we can say with high probability that we would have more signal types in the future.)
It also depends on the nature of the code I develop.
For library code and its public endpoints, which tend to get reused, I go full-out defensive. And even at times hostile, removing functionalities and features to prevent misuse. A classic example is not implementing the ability to get password hashes in the user auth library (among other things).
For application code and one-off code, I relax the rules much more. If it's safe to assume its current use case will only have "X" params, due to validation being done on another layer, I would sometimes skip checks altogether for it.
I like that. It’s very pragmatic. Feels very much like “the middle path.”
Obscure knowledge from my first job to the rescue! Flashing/blinking red actually exists everywhere...there are two systems monitoring the output of the traffic signal controller: the conflict monitor unit and the master malfunction unit. The CMU detects if the controller outputs an invalid signal, like giving a green light to every direction of traffic at the same time, and the MMU monitors the CMU for failures and also looks out for general failures of the entire cabinet's traffic hardware setup.
If either unit detects a failure, the traffic light controller attempts to reset itself. While the controller attempts to clear the error condition, the output signal is transitioned over to a series of relays which cause the red lights to flash. This continues until the error condition clears or a technician is able to fix the cabinet issues.
That’s amazing! I never would have known this. Isn’t the internet fantastic that I get wonderfully surprising comments like this? 🥰
When I looked at the code and saw that "red blinking" will result in "go", I immediately thought: "Oh F**k!"
I like your approach, but wouldn't a switch be more readable? Making the "nevergonnagethere" the "default" of the switch?
I think if you're working on anything that's got security or safety implications, or that is production-critical, this is mandatory.
I liked your handle very much and made me follow you here. Your website is nice too. Now reading the articles. 👍 😅
That’s very kind of you to say. 😊 And thank you for the follow. 🥰
Yup, having a default that you could be testing for is an anti-pattern. The way I look at defensive programming... it isn't about fear, but rather it's about saving yourself work later.
I’m glad to see so many people agreeing with me here because that’s obviously why I’m so passionate about great patterns like the one in the article. I guess the challenge is how do you help non-proactive devs to see the value. It’s a question I plan on answering in future articles. :)
I generally avoid using else; in this case I'd have used a switch and on default I'd have thrown an error. But the never approach is better, I like your implementation.