Jason C. McDonald


Clean, DRY, SOLID Spaghetti

Look at you, with your ready-to-ship project! You're quite proud of your style and standards conformance, and your strict adherence to DRY and SOLID. You've got tests, your bugs are squashed, your user and API documentation polished. Your code is so clean, it's shining!

I've got bad news. Your code base might still be terrible.

So, What's the Point?

To be clear, I'm not against anything I've described. The principles of TDD, DRY, SOLID (and a whole host of other fancypants acronyms) all have their place. Documentation, standards, and style are important. If you have actually managed to implement all of those, you deserve a serious pat on the back.

With that said, it is really easy to get so caught up in the catchy acronyms and "Top Ten Ways To Write Clean Code" listicles that we forget why we're doing all this in the first place. And if you're not tuned into the purpose, all of the above will ultimately amount to null.

Regardless of purpose, language, or methodology, ALL software must fulfill exactly two criteria:

  1. The software must accomplish its stated goals.

  2. The software must be maintainable (thus, readable) by future developers.

The trouble is, to accomplish #1, the code only needs to compile, pass tests, and work as expected. Even terribly-designed code can do that. We spend so much time focusing on #1 that we often leave #2 as an afterthought. At best, we read a few articles on dev.to, boil everything down to a few easy-to-follow rules, and go on autopilot about maintainability. After all, making the software do stuff is a lot more fun.

But clean, DRY, SOLID spaghetti is still spaghetti. In fact, it's the worst kind! You probably know what I'm talking about: you boil spaghetti, drain it, and leave it sitting in the pan for heaven-knows-how-long. It's certainly clean, and by the time you remember it, it's also an inedible dry mass. You can't do squat with a solid disk of spaghetti.

Similarly, an unmaintainable code base is doomed from the start. New features are hard to add. Bugs are almost impossible to fix. New contributors are scared off. Anyone compelled to maintain the code is sentenced to days or weeks of suffering as they pick and pry loose the program logic strand-by-strand from the tangled mass.

Before our code meets this unappetizing fate, let's look at a few ways our well-intentioned principles of maintainability go bad.

DRY Meets Wasteland


DRY: Don't Repeat Yourself

This handy little acronym is bandied about freely. It's a pretty obvious concept. Instead of writing the same code sixteen times, or even six times, put it in a function and call it!
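A contrived sketch (the names here are hypothetical) of the idea in practice:

disk_pct, mem_pct, cpu_pct = 91.0, 72.5, 88.0

# Repeated boilerplate, with tiny variations begging for copy-paste bugs...
print(f"[WARN] disk usage at {disk_pct}%")
print(f"[WARN] memory usage at {mem_pct}%")
print(f"[WARN] cpu usage at {cpu_pct}%")

# ...versus written once and called wherever it's needed.
def warn_usage(resource, percent):
    """Emit one standard warning line for a resource nearing its limit."""
    print(f"[WARN] {resource} usage at {percent}%")

warn_usage("disk", disk_pct)
warn_usage("memory", mem_pct)
warn_usage("cpu", cpu_pct)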

There has to be a line, however. I've seen code bases where, to find out what one single-goal function did, I had to jump to no less than two dozen other functions and macros, across multiple files. To make matters worse, I wasn't even sure which of the hundreds of linked files contained the next piece of the call stack. Everything had been abstracted out to the nth degree, making the code base entirely unreadable.

SOLID's "Dependency Inversion principle" carries this same risk. We can abstract things out to the point where nearly every function is only ever calling other functions.

At that point, no one wins. Performance takes a hit (ever heard of "instruction cache misses"?). Readability nosedives. It takes far longer than it should to grok the code.
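To make that failure mode concrete, here is a deliberately exaggerated (and entirely hypothetical) sketch of DRY-gone-wrong, where every function exists only to call the next one:

database = []

def save_user(user):
    return persist_entity(user)

def persist_entity(entity):
    return write_record(entity)

def write_record(record):
    return store(record)

def store(item):
    # Four hops later, here is the one line of real work.
    database.append(item)

save_user({"name": "Ada"})
print(database)  # [{'name': 'Ada'}]

Each layer is technically "abstract" and technically DRY, yet every reader pays for the indirection.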

The fix here isn't simple. DRY is important, but it requires careful discernment. Before you abstract something out, you need to consider the cost. Then, you need to carefully determine the best way to abstract the functionality. Make it easy to follow the trail.

Cleaning the Crime Scene

I get it. You want a spotless code base, and who can blame you? It's easy on the eyes, and makes you look like a veritable programming genius. In the process of cleaning up, however, we tend to make some terrible mistakes.


Too Few Comments

Perhaps the worst is removing (or never putting in) intention comments. Many people make the argument that "comments fall out of sync with the code". Several standards proudly assert that comments should be rare, if present at all. "Write self-commenting code!" they exhort.

The trouble is, no code ever written can answer its own "why" to someone unfamiliar with the code. I firmly believe in, and actively practice, the principles of Commenting Showing Intent (CSI). In short, every logical block should have a comment describing its intention, its "why". We can't trust our intuition on this one, since almost everything is obvious to us at the time of writing. The fact is, no one can read your mind.

In answer to those who claim that comments fall out of sync, I believe that only happens if you let it. Implementing a commenting standard like CSI means you are using those comments as part of the code review process. A mismatch between stated intention and actual functionality should always be treated as a bug and resolved. Code without intention comments shouldn't even be allowed through to the code base in most cases.

Mind you, this doesn't mean you go whole-hog and clutter up your code with redundant comments restating the code. I've found it is usually best to comment everything first, commit, and then let those comments sit for a few weeks or months. Once your familiarity with the code has worn off, you'll be able to clarify and/or remove redundant, "what"-style comments.
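To make the distinction concrete, here's a small, hypothetical example of a redundant "what" comment next to a CSI-style "why" comment:

prices = [19.99, 5.00, 42.50]

# "What" comment (redundant): add up all the prices.
total = sum(prices)

# "Why" comment (intent): orders at or above $50 ship free, and shipping
# eligibility is judged on the pre-tax total so regional tax rules can't
# change whether a customer qualifies.
FREE_SHIPPING_THRESHOLD = 50.00
ships_free = total >= FREE_SHIPPING_THRESHOLD
print(ships_free)

The first comment will never tell a maintainer anything the code doesn't; the second answers a question the code cannot.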

Too Many Terrible Names

Self-documenting code is a wonderful thing. It allows us to see the purpose of a function or variable right in its name. That we need to follow this practice goes without saying.

But what about when it goes too far?

RtlWriteDecodedUcsDataIntoSmartLBlobUcsWritingContext();

If I had to maintain a code base of functions like that, I'd rather quit and become a full-time musician. (There are more if you have a strong stomach.)

Find a balance between descriptive names and readable names.

While we're on the topic, if you're using Systems Hungarian in your code, you need to stop right now. Before you do anything else, use Find & Replace and get rid of those horrendously useless type prefixes. That was never what Charles Simonyi meant it to be. (By contrast, Apps Hungarian is self-documenting naming in action. Look it up.)
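A quick, hypothetical illustration of the difference:

import html

# Systems Hungarian: the prefix merely repeats the type, which the language
# (and any editor) already knows, and it rots the moment the type changes.
strUserName = "ada"
iRetryCount = 3

# Plain descriptive names carry the information that actually matters.
user_name = "ada"
retries_remaining = 3

# Apps Hungarian, by contrast, encodes meaning: "us" = unsanitized,
# "s" = sanitized, so mixing the two up looks wrong at a glance.
us_comment = "<script>alert('hi')</script>"
s_comment = html.escape(us_comment)
print(s_comment)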

Too Little...Everything

Second, clean code is often "concise" code, but it is very easy to go overboard here! While you could collapse an entire conditional statement into nested lambda expressions in a ternary operator, can anyone else even READ that?

Too often, we try to look brilliant at the expense of truly good code. Just because you can doesn't mean you should. Strike a healthy balance between concise and readable.
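For instance, here is a contrived sketch of the same (hypothetical) shipping rule written both ways:

def shipping_cost_clever(total, is_member):
    # Technically one line. Technically unreadable.
    return (lambda t: 0 if is_member else (5 if t < 50 else 0))(total) if total > 0 else None

def shipping_cost_readable(total, is_member):
    """Same logic, spelled out for the next person who has to touch it."""
    if total <= 0:
        return None
    if is_member or total >= 50:
        return 0
    return 5

print(shipping_cost_clever(30, False), shipping_cost_readable(30, False))  # 5 5

Both pass the same tests; only one of them can be skimmed at 2 AM.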

Too Little Whitespace

Lastly, please don't minify your working code base. Ever. Period. That's not clean, that's maintainability suicide. In fact, except for a specific, objective, verifiable business case on shipped code (e.g. optimization or obfuscation), it's just a bad idea in any context. Sooner or later, someone will need to read through that code to diagnose a bug or replicate functionality. Be kind - don't minify.

When SOLID Becomes Stupidity


In object-oriented programming, the SOLID principles are incredibly useful. However, none of them can be applied blindly.

The story you are about to read is true. Names have been changed to protect the idiots.

I once worked with a fairly popular and widespread web platform. I can say with fairly firm conviction that this code base followed SOLID principles to a "T". However, I am also adamant that the entire canonical source should be used as the test payload on an unmanned mission to the sun. It is, to date, the worst code I've ever seen. Yet, consider it in light of SOLID...

  • Single-responsibility Principle: Each class had one specific responsibility. And there were a lot of classes. As in, thousands.

  • Open-closed Principle: Every additional feature added to the code base came in the form of extension, and always via inheritance. Yes, always inheritance.

  • Liskov substitution principle: Every class could be substituted for a parent class, all the way up the chain. No exceptions.

  • Interface segregation principle: You only ever had to use the features you needed. Everything was single-inheritance based, so you never needed to import X just because you were using Y.

  • Dependency Inversion principle: Everything was abstracted. Literally everything. It was data-driven to the extreme.

Wow, 100% compliance with SOLID! And yet, it was an unmaintainable nightmare. Every single class was ultimately inherited out of the same family tree, sometimes hundreds of layers deep. To write one class, you had to read through 99 others first. Then, if a bugfix or optimization took place high up in the chain (which no one ever bothered to actually document, mind you), it broke everything below it. Thus, code that worked perfectly in version 3.2 would be entirely broken in 3.3.

As a result, contributors were afraid to improve the terrible code higher in the inheritance chain, for fear they'd destroy everything. Documentation was out of date before it could even be finished. Dozens of books were written on the platform, and were immediately useless by time they were published.

Was SOLID at fault for that? Absolutely not! It just goes to show, SOLID isn't some sort of magic bullet for maintainability. Its principles have to be applied with common sense and attentive discernment. Any coder with experience in OOP knows that such an inheritance structure as I described is poor design, but the large open-source team responsible for this project did exactly that. They had a completely SOLID, completely DRY code base, and yet I consider it a candidate for the worst production code in the history of computers.
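If you want to picture the structural difference, here's a tiny, hypothetical sketch (nothing like the real code base, which was vastly worse):

# Deep inheritance: behaviour is smeared across the chain, so a "fix" near
# the top silently changes every descendant below it.
class Node:                      # imagine ~100 more layers between each of these
    def render(self):
        return "<node>"

class Widget(Node):
    def render(self):
        return "<widget>" + super().render()

class BlogWidget(Widget):
    def render(self):
        return "<blog>" + super().render()

# Composition: the page *has* renderers; changing one cannot break another.
class BlogPage:
    def __init__(self, renderers):
        self.renderers = renderers

    def render(self):
        return "".join(part() for part in self.renderers)

print(BlogWidget().render())
print(BlogPage([lambda: "<header>", lambda: "<post>", lambda: "<footer>"]).render())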

TDD: Test Driven Disaster


I'll be honest: anyone who can manage 100% code coverage with their tests is a genius. I don't think I come anywhere close. However, while I write tests as a rule, there is a reason I'm afraid to embrace TDD: I've seen too many code bases follow it right over a cliff.

The goal of your tests should be to detect bugs in your production code, but the goal of your code should not be to pass the tests! I believe TDD is in especial danger of confusing the two objectives. If you've already written your tests, you will instinctively begin to write your code to pass them, and completely overlook other things...like functionality, readability, and maintainability. It is possible to become so myopically focused on your unit tests, you simply code to check them off as "passed," never even seeing the terrible hash your code is turning into until it's too late.

As always, this isn't a slam against TDD, but rather a warning for its practitioners (and everyone else who writes tests). To prevent this sort of tunnel vision, I recommend the following:

Write your production tests blind, AFTER your code is written and apparently functional. By "blind", I mean "don't look at your code while you write them". On paper, break down what you remember your code should be doing, and shouldn't be doing. Make a list, and write a test for each part. Ensure each test has explicit pass and fail conditions. Be unforgiving. Do Terrible Things To Your Code.

In practice, this has done two things for me:

  1. Knowing that my production tests don't exist yet, I focus on writing good code instead of tricking my little predetermined gatekeepers.

  2. I catch a lot of bugs and design flaws this way...and I mean a lot. Well over half of my fatal bugs and edge cases surface through my blind tests.

If you embrace TDD wholeheartedly and write tests before code, I recommend going one step beyond: delete your original tests and rewrite them blind. Only allow yourself to note what functionality they were testing for, not how they tested it.
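As a sketch of what a "blind" test pass can look like (the function under test here is hypothetical, and the tests use nothing but the standard library):

import unittest

def parse_port(text):
    """Hypothetical function under test: parse a TCP port from user input."""
    port = int(text.strip())
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class TestParsePortBlind(unittest.TestCase):
    # Written from a list of what the function SHOULD do and should refuse
    # to do, not from a re-read of its implementation.

    def test_accepts_ordinary_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_tolerates_surrounding_whitespace(self):
        self.assertEqual(parse_port("  443\n"), 443)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_port("eighty")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main()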

Style Over Function


Style standards are wonderful, and they make the code more delightful to read. However, even this can go too far. While we should follow our style standards, we need to be prepared to break with them when readability or maintainability dictates. Here's a few examples, and the principles they reveal:

  • Would breaking off at 80 or 120 characters make the line harder to parse? (Yes, these are rare, but they happen.) Grant yourself style exceptions when doing so will improve the readability and/or maintainability.

  • Love one-line, bracket-free conditionals? So do programming errors. Be careful of where you use stylish shortcuts. Sometimes brackets are just safer.

  • Are you having to hack the code to maintain part of your style? (Yes, this happens!) Stop it! Style alone should never require functional changes to your code.

  • Are you on a crusade against ternary conditionals? Conversely, do you love them because they make you look like an ALPHA hacker? Use the tools that best combine readability, maintainability, and functionality. Set your opinion aside.

Common Sense Not Included

There are a lot of infinitely helpful principles and standards that help us write good code...but don't believe the hype! None of them are The One True Answer To Maintainable Code. You will always need to be actively invested in the process of crafting good code.

In programming, spaghetti is bad enough, but SOLID, DRY, clean spaghetti? That's the hardest of all to untangle!

Top comments (37)

Blaine Osepchuk

Great post, Jason.

I can't think of a programming principle, technique, or "rule" that is free. Everything has a cost.

Sometimes the benefits exceed the costs, sometimes they don't. In my experience, great programmers are good at doing that math, but bad programmers don't even realize that there is math to be done.

kip • Edited

IMHO

Sometimes it's better to take the best parts of all worlds, based on your experience, your team, and your judgment... not to be a FANATIC.

The real world isn't in the books, blogs, or podcasts, but they do have good tips (guides, etc.) from other experienced devs who, in their time, faced the same problems or doubts as you: good information that will help you clear up doubts and solve the problem at hand.

As you get more experienced, you'll notice the problems with following a rule, or maybe you'll conclude that the rule really is good! But the wise words from @bosepchuk are very real:

I can't think of a programming principle, technique, or "rule" that is free. Everything has a cost.

I don't hate any rule, technique, or principle; it's all good information, and why hate that? Different points of view and opinions are so good!

I'm trying to get more experience and train my judgment to do the correct math.

Thanks for sharing this!

Sorry if my opinion has errors; my English is sometimes bad.

Jason C. McDonald

Absolutely! There are so many amazing principles, techniques, and methodologies out there that we can learn from, but we must use them to, as you said, "train [your] judgment".

I'm currently studying Effective Project Management by Robert K. Wysocki, and (unsurprisingly), he makes much the same point in there about project management.

Projects are unique, and each one is different from all others that have preceded it. That uniqueness requires a unique approach that continually adapts as new characteristics of the project emerge.

kip

Of course!

I think this covers many subjects and areas (not all, obviously). For example:

  • Repairman -> He needs to see the actual problem to apply the correct solution; if he had the same trouble in the past he can try the same fix, but sometimes the problems aren't exactly equal, so...

  • Lawyer -> He needs to make the correct defense for his client, based explicitly on the client's case and other relevant information, so...

  • Doctor -> After reading lots of books (at university), he still needs to make the correct evaluation of his patient, so...

But all of these examples have another important thing in common: everything has a cost.

I don't know if I strayed from the subject, but this is my analogy.

Jason C. McDonald

Hey, it's a valid point. I've met doctors and lawyers who applied principles without doing any of their own observation or evaluation, and the results were always disastrous.

Guney Ozsan • Edited

I read a lot about these principles, but I was only able to grasp what to do and what not to do after working on a single codebase as a sole developer for over a year, and then having someone join the team.

First, after a year you eventually end up maintaining the essentials of a codebase written by a less experienced (and, frankly, dumber) version of yourself, and no one can help you out of this self-torture.

Then you start avoiding future self-tortures.

Second, you write more for your new teammate as you keep discovering that what seemed obvious actually wasn't, while facing the truth that they met your less experienced and dumber version, not who you are today.

Then you start writing as if they were sitting next to you while you code (and you include your chat as comments).

Frederik 👨‍💻➡️🌐 Creemers

Interesting read. I haven't had to maintain a codebase for years and years, so much of what I'll say here is just the theory as I understand it, without having experienced it in practice, especially regarding the long-term impact of following a specific rule.

Also, a lot of the discussion here could probably be clarified with examples, but discussing the effects of applying rules to a large code base makes this difficult because it's impractical to share entire files of code + the folder structure.

I've seen code bases where, to find out what one single-goal function did, I had to jump to no less than two dozen other functions and macros, across multiple files. To make matters worse, I wasn't even sure which of the hundreds of linked files contained the next piece of the call stack. Everything had been abstracted out to the nth degree, making the code base entirely unreadable.

A function's name should tell you what it does without reading the code. The whole idea of an abstraction is that you don't have to care about the details of how it does what it does.

But I sympathise with you on not knowing which file contains what. I wish more programming languages had, like Python, a namespace for each file that hooks into the way you import/include files, so you can know which file a dependency is in just from the import path.

I also agree that a little repetition can aid readability, and I'm hoping to get a better understanding of when that's the case.


  • Open-closed Principle: Every additional feature added to the code base came in the form of extension, and always via inheritance. Yes, always inheritance.

There are a lot of posts out there on why composition is often better than inheritance. I think you can apply the open/closed principle without overusing inheritance and creating deep inheritance trees.
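For instance, one minimal (hypothetical) sketch of extension without subclassing: new behaviour is registered rather than inherited, so the existing code stays closed to modification but open to extension:

FORMATTERS = {}

def register(name):
    def wrap(fn):
        FORMATTERS[name] = fn
        return fn
    return wrap

@register("csv")
def to_csv(rows):
    return "\n".join(",".join(map(str, row)) for row in rows)

@register("tsv")
def to_tsv(rows):
    return "\n".join("\t".join(map(str, row)) for row in rows)

def export(rows, fmt):
    # Never needs to change when a new format is registered.
    return FORMATTERS[fmt](rows)

print(export([["a", 1], ["b", 2]], "csv"))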

Kasey Speakman

I have examples from a larger code base.

Here is an example where I had to chase down some code in ASP.NET Core.

You can see all the different places I had to look, and in the end I cannot be 100% certain because of various indirections through interfaces, inheritance, and registration patterns.

Similarly, I once went on safari across ASP.NET Core Kestrel and MVC code bases looking for precisely what AuthorizeAttribute actually checked from the HTTP request. And I could never find it. I found the code that I thought was ultimately called, but I could never find the chain of evidence linking them. Give it a try and see what you come up with. :)

I have a lot of respect for the accomplishments of the .NET Core teams, but their style of coding -- which appears to adhere to good practices in general -- makes it extremely difficult to trace through unless you already worked on it and know where things are. So anyway, it's an example for this discussion.

Jason C. McDonald • Edited

I think you make a lot of great points...largely because you applied common sense and a larger understanding to each of the standards, and that's exactly my point. DRY, SOLID, TDD, and all the rest can be excellent guides. We just have to apply our own judgment. :) (But then, I'm just reiterating - you know all that!)

And yes, composition is almost always better than inheritance. SOLID doesn't state that explicitly, of course, but that's not SOLID's fault. It's one of several guidelines to apply, not a magic bullet!

By the way, that terrible code base was written in Python, although I wouldn't say it was "Pythonic" by any means.

kip

"Pythonic" not always mean good readable code, most of the time is apply the syntactic sugar and other advantages that Python offers you to do something, for example List Comprehensions are so good but not always:

This:

def generatorFunction():
    for number in range(100):
        yield number

Is better than:

generatorFunction = lambda: [(yield number) for number in range(100)]

It's only an example; I'm not good at creating good examples.

Jason C. McDonald • Edited

No, "Pythonic" means it applies the "Zen of Python", which puts a high priority on good, readable code.

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

But once again, even this must be applied with common sense!

kip • Edited

I've always had different concepts of the "Zen of Python" and the "Pythonic way", but you are right; I agree with you. Thanks for the clarification!

Aleksi Kauppila • Edited

The purpose of tests is not to detect bugs. The purpose is to verify correct behaviour. You write tests first to define expected behaviour, then write the production code and after that you're free to do as much refactoring (as in improving design without changing functionality) as you will. Refactoring is an essential part of the process and cannot be done (most of the time) safely without tests.

Laughing Raven

Before one tests, one must study the requirements, plan a solution, then break that plan into autonomous parts. Once the parts have been identified, along with the reqs they satisfy, the logic can be programmed. Then tests can be written to exercise that logic and also the comms interface between subsystems.

Jason C. McDonald • Edited

Except one major type of bug is literally "incorrect behavior". ;)

In other words,

Testing shows the presence, not the absence of bugs.

Just got that from an article which literally just came through my feed: Stop lying to yourself when testing.

That said, overall I'd agree with you. Test, build, test, refactor, rinse, repeat.

Aleksi Kauppila

Yes, I expressed myself poorly here. I could say that the purpose of writing tests in TDD is to define the expected behaviour. And of course the purpose of the tests is to verify that the system is working correctly, or at least as defined.

Jason C. McDonald • Edited

[The] purpose of writing tests in TDD is to define the expected behaviour.

...and...

Refactoring is an essential part of the process and cannot be done (most of the time) safely without tests.

I probably couldn't have said it better myself; that was beautifully concise. You really should write an article about the goal of TDD. Many developers seem to lose sight of it a lot. ;)

Luís Costa

This was a breath of fresh air. I see too many developers who try to be SOLID to the extreme. I feel like the quote "Your job is not to write code, it's to solve problems" is particularly important here. Yes, you need to solve problems with your code. The code you write serves two purposes: it solves the problem, and it ensures the next developer will be able to maintain the code that keeps the problem solved. I cringe a bit every time I read that phrase.

Jason C. McDonald

That's an interesting angle, @luispcosta! Problems don't necessarily stay solved.

Alain Van Hout

You've essentially written down my mantra: make sure it's maintainable!

I'm glad you wrote this post, so that I don't need to do it anymore - and you've done it better than I could, so win-win all around :-). Kudos.

Ondřej Sojka

Excellent article! I'm definitely saving this.

IMHO applying SOLID (or "even only" DRY) to a code base too strictly almost always leads to better maintainability, but worse readability. It's much easier to change things, but it's hard to understand code when everything is abstracted. The decision to abstract or not to abstract is a complex tradeoff between readability and maintainability.

TL;DR the whole article: Don't blindly follow principles and use your brain. Find a balance between maintainability and readability.

Jason C. McDonald • Edited

Very true.

Ironically, poor readability damages practical maintainability; you can't maintain what you can't read. Meanwhile, less-maintainable code doesn't tend to be readable. So, while they're at odds with one another in one sense, they're paradoxically related in another sense.

Frederik Krautwald

There is a way to deal with poor readability. It is known as Commenting Showing Intent (CSI): standards.mousepawmedia.com/csi.html

Jason C. McDonald

Touché. :)

Of course, it doesn't preclude good coding practice.

Anna R Dunster

By the way, your Comment Showing Intent links have all been broken for a while (in this and other articles), presumably since the server swap shenanigans. Not sure where to find the current address for them and it's something I actually occasionally would like to reference. ;)

Jason C. McDonald • Edited
Anna R Dunster

Thanks :D

Corey McCarty

Thanks for the info.
Regarding TDD, I feel like there are bad possibilities in both test-first and code-first approaches. I believe that test-first should mean testing whether your feature is functioning as intended from a higher level. Unless you write functionality that isn't yet needed, you should be able to cover nearly 100% of your code with these kinds of tests, with positive and negative intention cases. I feel like a test-first approach may be best for getting your base feature design right, and then some additions after the code is written can help with your edge cases. You are more likely to know that a number valued at 257 causes a weird blowup after you have written the algorithm that manipulates the data in such a way that 257 renders a zero in that one place over there.

Jason C. McDonald

Great thoughts on that.

This is exactly what I mean about thinking through the implications of a practice or methodology in relation to any given project.

Jake C • Edited

So much misplaced blame here I don't even know where to begin.

Every problem you list stemming from SOLID principles is a misinterpretation. SRP is about cohesion. OCP doesn't dictate inheritance and the second you find yourself struggling with inheritance you need to wake up and realize you're going about the problem wrong (I can't believe I had to type that). Data-driven models aren't advocated by SOLID, and it was definitely a poor design decision.

In both SOLID and testing cases, you're using poor execution on the developers' part to somehow claim that the concept is bad.

This is like me saying all fast-food chains should go away because McDonald's doesn't taste very good.

Jason C. McDonald • Edited

...and I quote...

Was SOLID at fault for that? Absolutely not! It just goes to show, SOLID isn't some sort of magic bullet for maintainability. Its principles have to be applied with common sense and attentive discernment.

...and...

There are a lot of infinitely helpful principles and standards that help us write good code [...] You will always need to be actively invested in the process of crafting good code.

(P.S. To be fair, I just edited the post to add two words to the final paragraphs to help clarify my point even further.)

Chad Smith

In my honest opinion, he's not blaming anything. He says it's not SOLID's fault. I feel all he is saying is that you can follow all these "rules" and still end up with a horrible code base. He's not blaming them as the root cause of a horrible code base.

I believe he's saying you should never code blind. Don't just code to follow something. Know why you're writing something. Don't take everything as rules, take them as guidelines, but still think about what you're coding.

Camilla Santiago

Great write-up.

I don't quite get your point at TDD. Isn't writing your tests before the code already 'blind'?

Jason C. McDonald

In one sense, yes, but tests are vastly simpler than production code in most cases, so it's easier for us to get into the habit of subconsciously coding around our own "traps". As it is, in TDD, we're explicitly writing to satisfy the tests.

That's not necessarily a bad thing, of course - it's one of TDD's strengths - but rewriting the tests "blind" after writing the code makes it harder for us to avoid setting off bugs.

To put it another way, our initial tests define how the code is "supposed" to work. However, afterwards, when I don't remember exactly how my code is "supposed" to work, I instead write tests based on how I expect it to work...and then it doesn't work that way. I've more closely replicated real-world usage. I've actually done this many times, and it's uncovered quite a few bugs, memory leaks, and poor design decisions.

Camilla Santiago

Thanks for the explanation.

In my experience, I sometimes have code design prethoughts while writing unit tests. There's a danger that I purposefully write the tests the way they are because I want to shape the code to the design I had in mind, thus writing the "trap" for myself.

I agree, though I think this problem will only likely surface in low-level tests such as unit tests. That's why we couple Behavior Driven Development (BDD) automated tests with unit tests: to test the implementation behind the unit tests and replicate the transactions as they will occur in production.

Normally, we'd write the acceptance tests (BDD) first, unit tests, then code.

pacozaa

Thank you. As a junior developer and hype follower I love your article.

Mike Schinkel • Edited

ALL OF THIS. Yes!

You did not mention it, but Dogma and Cargo Cultism come to mind.