We've all heard them. The sacred rules of software engineering, handed down by senior developers, articles, and conference talks. They exist to guide us, to keep us from repeating the mistakes of those who came before.
I followed them faithfully. I believed I was building something robust, adaptable, and built to last.
Instead, I was building a Rube Goldberg machine of complexity that nearly collapsed under its own weight before it ever got off the ground.
Here are the three most harmful "best practices" I followed without question, and the hard lessons they taught me about context and dogma.
- 1. "Always Use Microservices"
The Dogma: The mantra is everywhere. "Microservices offer better scalability and independent deployments and allow teams to work autonomously." I was building a new SaaS product, and from day one, I was determined to do it "the right way". So, I drew a bunch of boxes on a whiteboard: Auth Service, User Service, Billing Service, Email Service, and Analytics Service. It looked like a beautiful, distributed future.
How It Backfired: For an MVP of an unproven product, this architecture was a catastrophe.
Insanity-Driven Development: For two weeks, I wasn't building features; I was setting up Docker, Kubernetes manifests, and Kafka topics so my services could talk to each other. My "hello world" needed a data centre.
Debugging Challenges: A simple user registration flow now spanned four or more services. Tracking down a bug meant trawling logs across multiple pods. Was the failure in Auth? Did the User Service hit a timeout? Did the API gateway route the request wrong? The mental overhead was enormous.
The ultimate irony: I had a "scalable" architecture that was buckling under the weight of zero users, purely from its own complexity. I was designing for a scale I didn't have, with a team that didn't exist.
The lesson I learnt: microservices are less a technical silver bullet than a response to organisational scale. You adopt them when coordinating 50+ engineers on one monolith becomes harder than managing the complexity of a distributed system. For a small project or startup, a well-structured monolith (or a modular monolith) is usually the better option. You can always extract components into services later, once you've validated the product and identified real scaling problems.
The ideal approach should be: "Begin with a monolith until there's a clear, demonstrated need to break it apart."
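To make "modular monolith" less abstract, here's a minimal sketch in Python (the language, module names, and functions are illustrative, not my actual code): one deployable app, where each would-be service lives behind a small, explicit interface, so it could be carved out into a real service later if a genuine need ever appears.

```python
# Minimal modular-monolith sketch (illustrative only).
# One deployable process; each domain sits behind a small, explicit interface.

# --- billing.py: the only functions other modules may import from billing ---
def charge_customer(customer_id: str, amount_cents: int) -> bool:
    """Charge a customer; returns True on success."""
    # ...talk to the payment provider here...
    return True

# --- users.py ---
def register_user(email: str) -> dict:
    """Create a user record and return it."""
    return {"id": "u_123", "email": email}

# --- app.py: the single entry point that wires the modules together ---
def signup_and_charge(email: str, plan_cents: int) -> dict:
    user = register_user(email)                     # an in-process call today...
    paid = charge_customer(user["id"], plan_cents)  # ...a network call only if it ever needs to be
    return {"user": user, "paid": paid}

if __name__ == "__main__":
    print(signup_and_charge("test@example.com", 999))
```

The point isn't the code itself; it's that the boundaries exist in the design, so splitting later is a refactor rather than a rewrite.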
- 2. "Premature Optimization is the Root of All Evil"
The Dogma: This Knuth quote is probably the most misapplied and misunderstood piece of guidance in our field. I read it as "don't think about performance until it's a dire emergency." I shipped features fast. I wrote quick-and-dirty queries, with N+1 problems lurking around every corner. I picked whatever data structure was easiest. "We can optimise it later!" I declared.
How It Backfired: "Later" arrived the moment we onboarded our first beta users. The dashboard that loaded instantly for me took 12 seconds for users with real data. Basic actions timed out. My "move fast" approach meant the performance problems weren't isolated defects; they were baked into the application's core structure.
- The Great N+1 Apocalypse: What I thought were "simple queries" turned into hundreds of database calls for basic pages. Fixing them required rewriting entire data access layers, not just adding an .includes() here and there.
- Death by a Thousand Cuts: Each "tiny" performance oversight compounded until the entire application felt sluggish. There's a difference between not optimising prematurely and building with zero regard for scale.
- The Psychological Toll: Users don't care about your philosophical stance on optimisation. They care that your app is slow. Our first impressions were permanently damaged because I took the quote too literally.
The Lesson Learnt: Knuth wasn't saying "ignore performance entirely." He was warning against pouring effort into optimising code you haven't shown to be a bottleneck. There's a crucial middle ground: intentional, sensible design choices.
The ideal approach should be: "Write reasonably efficient code from the start, but only profile and deeply optimise when you have actual metrics showing a problem." Use pagination by default. Be mindful of N+1 queries. Choose appropriate data structures. This isn't premature optimisation—it's professional craftsmanship.
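To show the kind of "reasonably efficient from the start" code I mean, here's a sketch using SQLAlchemy as a stand-in ORM (an assumption for illustration; the models and names are made up, and it wasn't necessarily the stack in this story). The first loop is the classic N+1 shape; the second query eager-loads the relationship and paginates.

```python
# N+1 versus eager loading + pagination, using SQLAlchemy as an example ORM.
from sqlalchemy import ForeignKey, create_engine, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                            relationship, selectinload)

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    posts: Mapped[list["Post"]] = relationship(back_populates="user")

class Post(Base):
    __tablename__ = "posts"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    user: Mapped[User] = relationship(back_populates="posts")

engine = create_engine("sqlite://")  # in-memory DB, just for the sketch
Base.metadata.create_all(engine)

with Session(engine) as session:
    # N+1: one query for the users, then one extra query per user
    # the moment .posts is touched.
    for user in session.scalars(select(User)):
        _ = len(user.posts)

    # Better default: eager-load the relationship and paginate, so a page
    # costs a couple of queries no matter how many users exist.
    page = session.scalars(
        select(User).options(selectinload(User.posts)).limit(20).offset(0)
    ).all()
```

None of this is "premature optimisation"; it's just choosing the sensible default before anyone asks for it.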
- 3. "You MUST write tests for everything."
The Dogma: "No code can go live without complete test coverage!" I sought to achieve testing nirvana. Each utility function, every component, and each API endpoint possessed its own test suite. I felt unstoppable.
How It Went Wrong: My speed dropped significantly. I dedicated 60% of my development time to writing and updating tests for features that altered daily due to user feedback. The most unpleasant aspect? The results of my tests provided me with a misleading sense of safety.
- Brittle Tests: The slightest refactor would break 20 tests, requiring hours of updates. I was testing implementation details, not user behaviour.
- The Irony of 100% Coverage: I had perfect coverage of code that was probably wrong. The tests verified that my broken logic was consistently broken.
- Missing the Forest for the Trees: I had extensive unit tests but zero integration or end-to-end tests. The system worked perfectly in isolated units but failed miserably when they had to communicate.
The Lesson Learnt: Tests are a means to an end (confidence and reliability), not the end itself. The value of a test is not in its existence but in the confidence it provides.
The ideal approach should be: "Focus testing efforts on what matters most: critical business logic, user workflows, and areas prone to regression." A few well-written integration tests are often more valuable than hundreds of brittle unit tests.
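As a sketch of what "a few well-written integration tests" can look like, here's an example using FastAPI and its test client purely as an illustrative stack (an assumption, not the original codebase; the endpoint and assertions are made up). The test drives a critical flow through the API and asserts on the behaviour users rely on, not on how it's implemented.

```python
# One behaviour-level test of a critical flow (FastAPI used only as an example).
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()
_users: dict[str, dict] = {}  # stand-in for a real database


@app.post("/signup", status_code=201)
def signup(payload: dict):
    user = {"email": payload["email"], "active": True}
    _users[payload["email"]] = user
    return user


def test_signup_creates_an_active_user():
    client = TestClient(app)
    resp = client.post("/signup", json={"email": "a@example.com"})
    # Assert on the contract, not the implementation: a refactor of the
    # signup internals shouldn't break this test.
    assert resp.status_code == 201
    assert resp.json()["active"] is True
```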
The Common Thread: Context is King
The true failure wasn't adhering to best practices; it was applying them rigidly, without grasping the trade-offs or taking my unique situation into account.
- Microservices solve scaling and team coordination problems. I had neither.
- Avoiding premature optimisation is about not wasting time on unproven bottlenecks. I used it as an excuse for carelessness.
- Testing is about reducing risk. I turned it into a ritual that increased friction and slowed learning.
The Pivot That Saved the Project
I made a hard retreat. I merged my microservices back into a single, well-structured monolith. I identified the three biggest performance bottlenecks and fixed them properly. I threw away 80% of my tests and focused on a handful of essential integration tests that actually reflected user behaviour.
The outcome? I started shipping value again. The product stabilised. Users noticed the improvement.
The Real Best Practice
The only universal best practice is this: think critically about why a practice is recommended and whether it applies to your current situation. Software development is about making thoughtful trade-offs, not following rules blindly.
Which ‘best practice’ do you secretly think causes more harm than good? Drop your hot takes 👇
Top comments (20)
The problems you are running into are twofold:
1: I agree.
2: Your N+1 problem is one you see with many ORMs, because they automatically fetch data when you access an attribute that is a relationship. You need to prefetch the data you want to use, or write manual SQL to get the data you need. It is also a team problem that this got accepted to go live. You need to be aware of the SQL your ORM generates (a small sketch of making that visible follows this comment).
3: You need to define an API and test the API. I don't know what you wrote or how you wrote it, but you need to think about: how do I want this thing to behave, and how do I make it easy for me/us/others to use? You'll want to test the contract of your code, not the internal mechanics. Small caveat: it is also important to test the mechanics, but only for specifics, where you know a piece is a crucial part of the contract's logic.
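A minimal sketch of making that generated SQL visible, assuming SQLAlchemy as the ORM (the thread doesn't say which one was actually used): with echo=True, every statement the ORM emits is logged, so an N+1 pattern shows up as a flood of near-identical SELECTs.

```python
# Log every SQL statement the ORM emits (SQLAlchemy used as an assumed example).
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://", echo=True)  # echo=True prints each statement
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))              # logged to stdout as it runs
```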
Appreciate the detailed breakdown. You're absolutely right about ORMs hiding the N+1 problem - that's exactly what caught me off guard. And solid advice about testing the API contract versus internal mechanics. Thanks for sharing these insights!
Hmm, ever since the concept of microservices started popping up, I've never really heard this "Always Use Microservices" thing. Indeed, the advice is almost always "identify bottlenecks, then carve out that functionality into a microservice as needed".
I would also extend this advice: Use a monorepo. Deployment is easier, devex is better, and sharing libraries (or entities/models) is trivial. It requires a good understanding of the project scope of course, but monorepos have become my mantra - I can flip services on/off, scale out as needed, and keep my API as my nerve centre (a rough sketch of that shape follows this comment).
As for tests - yes, absolutely write them. But focus on the happy path. 100% code coverage is nice, but blimey does it slow you down.
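A rough shape of the monorepo idea above (the layout and names are illustrative, not the commenter's actual repo): one repository, several deployable apps that can be switched on or off, and shared entities/models living in one place.

```
monorepo/
├── apps/
│   ├── api/           # the main API ("nerve centre")
│   ├── worker/        # optional background service, flipped on/off as needed
│   └── web/           # front end
├── packages/
│   └── shared/        # entities/models and libraries imported by every app
└── docker-compose.yml # one place to spin the whole thing up for development
```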
"Later" arrived the moment we onboarded our first beta users
yup. 100%. Big picture thinking and defensive programming will help you in the long run. Great insights!
Yo the monorepo idea is a solid upgrade to my monolith take 👌 That sounds like the perfect middle ground for when you need to scale without the deployment nightmares. Appreciate you sharing that!
these are good observations, but i would submit:
a. 'premature optimization': this really comes down to the definition of when 'mature' is.
b. 'testing': doing tdd right requires a lot of experience. attempting to test everything at the unit level can get you into the situation you described, sure, but unit testing works very well for discrete, ideally pure, functions. committing to tdd changes the way you write code in general, moving your design more towards writing stuff that is more testable.
Good point about TDD experience. It's definitely a skill that changes how you think about code design. Thanks for adding that perspective!
Thanks for sharing your experience, in this world of shallow bragging it's always refreshing to see genuine self-reflection and I think there is a valuable lesson to learn here for all of us: common sense before "best practices". Perfect is the enemy of good, and quite often it's not even perfect.
Thanks Gabor! 'Common sense before best practices' might be the real TL;DR here. Love that line about perfect being the enemy of good - so true in so many contexts 💯
Great post 👍 this is where theory meets practical implementation.
I like that you also included test coverage as an example. It's so hard to convince the industry that the 90% test coverage target is hurting their business.
My ideal test strategy is: write e2e tests for critical user flows and unit tests for the lowest order of (pure) functions. Then, when something breaks after code changes, write an integration test that catches that specific case.
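A tiny sketch of the "unit tests for the lowest order of (pure) functions" part of that strategy, with a made-up function (illustrative only); the critical user flows would get e2e coverage instead.

```python
# A pure function is cheap to unit test: same inputs, same output, nothing to mock.
def apply_discount(price_cents: int, percent: int) -> int:
    return price_cents - (price_cents * percent) // 100


def test_apply_discount():
    assert apply_discount(1000, 10) == 900
    assert apply_discount(1000, 0) == 1000
```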
Your test strategy is actually smart frfr. E2E for the main stuff and units for pure functions makes so much sense. Definitely stealing that approach man👏
This post is a great reminder that best practices are just guidelines, not rules set in stone. Adapting architectural patterns, performance optimization, and testing strategies to your project's scale and team maturity is key to sustainable success. Context really is king.
Appreciate that! And well said - 'guidelines, not rules' is the perfect way to put it. Context over everything.
I think with system design, it's important to keep in mind you want to start with a simple system before jumping to a complex system.
With microservices, why try to spin up multiple instances? It's easier to do that once with a monolith, then split off functionality in the future as needed.
For testing, I've found when working on a new project it's better to use tests to lock in the invariant logic. If something is not changing a lot, unit tests can help ensure it keeps working into the future. This can be especially helpful when a refactoring is needed at some point.
Exactly - starting simple is the move. Your point about using tests to lock in invariant logic is smart too. Makes refactoring way less stressful later on.
Context is king. This applies to every prescriptive practice. It's like always using a hammer when what you need is a wrench.
Bro the hammer/wrench analogy is perfect 😂 Exactly what I was trying to say. Context really does determine everything. Appreciate you distilling it down to the core truth.
Hi! Here are my two cents. I get where you’re coming from, and your point is clear. Still, there’s one thing I think went wrong. The issue wasn’t “best practices.” It was the early assumption that you already knew exactly what to build and how.
By skipping a POC and an MVP, you missed the chance to validate the SaaS idea and to learn where, if anywhere, it made sense to introduce microservices. Prototyping helps you understand the real challenges; an MVP helps you discover the right stack.
Jumping straight to microservices because they’re labeled a best practice is what led you off track. The problem wasn’t using microservices or thinking about scale. It was doing that before validation.
This hits hard. Best practices often get repeated like laws instead of guidelines. I’ve seen the same with “add more layers of abstraction” — sounds clean in theory, but in small projects it turns into needless complexity that slows everything down. Context really is king.