Kirill Rogovoy

Originally published at rogovoy.me

No architecture is better than bad architecture

It took me several years to learn how to write code that scales to 10s of team members and a million lines of code. It took even more time to learn to write stupid code again.

Turns out, "building a solid architecture" in your code can easily be busy work and procrastination.

Turns out, you can waste a lot of energy trying to get rid of all code duplication and coming up with powerful abstractions to support "future use cases."

Turns out, one part of your code can intentionally be a well-thought-out, protected piece of engineering marvel, while the other one should get you fired.

When you first learn the "best coding practices", somehow you assume that there is a clear line between good and bad architecture. You read or hear horror stories about unmaintainable projects with too much rotten spaghetti code. Inevitably, you end up working on one.

You start grasping what it means to separate concerns, extract abstractions, invert dependencies, and so on. Once in a while, you get fewer than 50 comments on your pull requests. Now you feel like the real deal!

Those horror stories you heard... they were real. Without enough care, forethought, and discipline, a project gets messy faster than expected. You see spreadsheet after spreadsheet of "prioritized tech debt" that doesn't get fixed anyway because you need to ship. And the only person who still knew how things worked has just quit!

But all that is in the past. Over the years, you've developed a dozen heuristics and found the best rules that prevent your code from being rewritten two years down the road (now it's three years!). Yay!

Going far or going fast

So far, I've been describing what I like to call "going far with code." Anyone can write code that lasts days before becoming unmaintainable. Learning to keep codebases alive and thriving for years of active work takes practice, tears, and a few rewrites.

Now, if you work for an organization with (a) enough resources and (b) high certainty about what you are building, going far would be your most important hard skill. To a large extent, that's what would make you a Senior Software Engineer and justify your unbelievable salary.

However, if you've ever worked with startups or founded your own for-profit projects (lacking in both resources and certainty), you'd quickly point out that "going far" is not what's on your mind most of the time.

Going fast is.

Turns out, architecting code—introducing granular concepts, abstractions, relationships, and giving all those things names, scopes, and responsibilities—has a cost. On top of that, undoing such structures is 10 times more costly than building them.

Because of the time pressure, you're more likely to keep piling more stuff onto leaky abstractions than to tear them down and rewrite the whole thing.

Another risk is that architecting and structuring your code is a great and fun way to procrastinate. As a technical founder, I don't like many things I have to do to run a successful project. Tinkering with code often just feels like a refuge from all the anxiety generated by things I should be doing instead.

A lot of the time, the costs of creating too much structure are more nuanced than "quality takes time." For one, more often than not, you simply don't know what you're building yet in the grand scheme of things. Most code will end up being thrown away or rewritten as you get closer to product-market fit, and it's rarely easy to know which code will stay, so you tend to treat almost all code as temporary.

So what's the optimal solution here? Do we really have to go back and start writing the same shitty code we did at the start? That doesn't sound right!

Writing stupid code well


I don't think I have the best answers yet.
However, let me share a few practical heuristics I've learned that have helped me avoid the poor outcomes of both extremes: an unmaintainable project and terribly slow velocity.

1. No code is equal

As with many things, the Pareto distribution shows up all over your code.

The first time it occurred to me was when I tried to figure out how to write as few tests as possible while getting the largest improvement in stability. It quickly became apparent that only a tiny part of my code was called most of the time and, should it fail, would cause the most trouble.

So, I extended the same model to deciding where I should put extra time and care and sacrifice some velocity now for long-term benefits.

I have a 3-year-old SaaS with 60k lines of code. When I started to pay attention, I was surprised to see that I hadn't read or edited most of that code for months (or ever): most API endpoints, most UI pages, etc. Also, a lot of it was called orders of magnitude less often than the rest.

And when I saw copy-paste and giant do-everything-at-once functions, I was weirdly relieved that I hadn't wasted time refactoring them. I mean... it works! I can still understand it well and make changes. I could invest a couple of hours in structuring it better and save myself a few minutes the next time I work with it... in a year.

To be clear, I'm not advocating for writing bad, smelly code per se. Instead, I mean writing simple code that often mixes the higher-level flow with lower-level details without being obsessive about separating concerns. It can still be elegant to an extent. It should still represent the intent well and explain what it does.
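
To make that concrete, here's a rough sketch of what I mean, with every name (the db stub, sendEmail, the invoice shape) invented for illustration: one flat function that looks things up, branches, formats, and sends, all in a single pass, yet still reads top to bottom and says what it does.

```typescript
// A deliberately "flat" flow: lookup, branching, formatting, and the side
// effect all live in one place. Every name here is an illustrative stand-in.

type Invoice = { id: string; customerEmail: string; amountDue: number; paid: boolean };

// Stub dependencies so the sketch stands alone.
const db = {
  async findInvoice(id: string): Promise<Invoice | undefined> {
    return { id, customerEmail: "user@example.com", amountDue: 4200, paid: false };
  },
};

async function sendEmail(to: string, subject: string, body: string): Promise<void> {
  console.log(`email to ${to}: ${subject}\n${body}`);
}

// High-level flow and low-level details, intentionally mixed, still readable.
export async function remindAboutInvoice(invoiceId: string): Promise<string> {
  const invoice = await db.findInvoice(invoiceId);
  if (!invoice) return "not-found";
  if (invoice.paid) return "already-paid";

  const dollars = (invoice.amountDue / 100).toFixed(2);
  await sendEmail(
    invoice.customerEmail,
    "Friendly reminder: invoice due",
    `Invoice ${invoice.id} for $${dollars} is still unpaid.`
  );
  return "reminded";
}
```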

Another helpful trick here is fencing off the most important parts from the rest so that tar doesn't spill into your honey.
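
Here's a minimal sketch of what that fencing can look like, with invented names throughout: the critical billing math gets a tiny, carefully guarded surface, and the sloppy one-off code around it is only allowed to call that surface, never to reach inside.

```typescript
// The fenced-off part: in a real project this would live in its own small
// module (say, billing.ts) with exhaustive tests. Tiny surface, no I/O.
export type LineItem = { unitCents: number; quantity: number };

export function totalCents(items: LineItem[], taxRate: number): number {
  if (taxRate < 0 || taxRate > 1) throw new Error("taxRate must be between 0 and 1");
  const subtotal = items.reduce((sum, item) => sum + item.unitCents * item.quantity, 0);
  return Math.round(subtotal * (1 + taxRate));
}

// The rest: copy-pasted, do-everything glue is fine out here, as long as it
// only talks to the fenced-off part through its public functions.
const cents = totalCents([{ unitCents: 1999, quantity: 3 }], 0.2);
console.log(`You owe $${(cents / 100).toFixed(2)}`);
```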

2. Let it beg for structure first

If the previous analogy was to writing tests, this one is going to bring up performance optimization.

One of the best heuristics regarding performance is that all improvements should be made with a profiler in front of you. No guessing!

I'd argue the same is useful for refactoring.

It's much easier to come up with generic cases (abstractions) when you already have three or four specific cases (often with code duplication). Let those cases emerge first. That way, you won't have to predict the future anymore; you'll just be structuring what's already there.

In other words, be moderate about removing (or preventing) code duplication; repeating yourself is often fine for a while.

This idea is so popular that it has a Wikipedia article that refers to a book written in 1999!

It's not an absolute rule. Sometimes, especially in the case of utility functions, you just know that you need to make a building block first and then use it everywhere. Trust your intuition, but make sure you are conscious about the choice.
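
Here's a hedged sketch of the default path, with all names and channels made up: three specific cases get written as they're needed, and only then does the abstraction fall out of them.

```typescript
// Three near-duplicates, written as they were needed. The shared shape only
// becomes obvious once they all exist. (Every name here is invented.)
async function notifyTrialEnding(email: string): Promise<void> {
  console.log(`to=${email} subject="Your trial ends soon" tag=trial`);
}
async function notifyPaymentFailed(email: string): Promise<void> {
  console.log(`to=${email} subject="Payment failed" tag=billing`);
}
async function notifyExportReady(email: string): Promise<void> {
  console.log(`to=${email} subject="Your export is ready" tag=export`);
}

// With three concrete cases on the table, the abstraction is mostly
// transcription rather than prediction:
type Notification = { subject: string; tag: string };
async function notify(email: string, n: Notification): Promise<void> {
  console.log(`to=${email} subject="${n.subject}" tag=${n.tag}`);
}
```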

3. Always start with one

I find this concept so useful that I'm going to write a whole article dedicated to it.

The rule is pretty simple: Unless you have a strong argument against it, start with one file, one class, one function, one table, etc.

As with creating abstractions based on existing cases, splitting things up is always easier once you already have some material to work with. This way, you won't have to guess which buckets things will go into in the future.
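
A small sketch of "starting with one," under invented names: a single signup function that does everything, with comments marking the seams that might become their own files later, once they actually hurt.

```typescript
import { randomUUID } from "node:crypto";

type SignupResult = { ok: boolean; reason?: string };

// One function, one file. The comments mark seams that may split out later.
export async function signUp(email: string, password: string): Promise<SignupResult> {
  // Validation (might become validators.ts one day)
  if (!email.includes("@")) return { ok: false, reason: "invalid-email" };
  if (password.length < 12) return { ok: false, reason: "weak-password" };

  // Persistence (might become users-repo.ts one day); stubbed with a log here
  const user = { id: randomUUID(), email, createdAt: new Date() };
  console.log("saved", user);

  // Side effects (might become emails.ts one day); also stubbed
  console.log(`welcome email queued for ${email}`);

  return { ok: true };
}
```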

Starting with one is a reliable (though radical) way to remove many mental barriers keeping you from actually delivering something.

As with other rules, there are perfectly good reasons to break this one. Trust your intuition, but make sure you aren't procrastinating or putting off the actual work.

One example people often bring up is how Pieter Levels essentially had one index.php file for a business that generated crazy revenue.


That's it!

I guess the one-line summary is this: If you want to move fast, you have to put off building the architecture, and if some of your code is shit, at least let it be soft.

As always, thanks for getting this far.

If you liked this post, you might like those that will follow.

Follow me on Twitter at @krogovoy.

See you next time! 🙌
