We've all heard them. The sacred rules of software engineering, handed down by senior developers, articles, and conference talks. They exist to guide us, to keep us from repeating the mistakes of those who came before us.
I followed them faithfully. I believed I was building something robust, adaptable, and timeless.
Instead, I built a Rube Goldberg machine of complexity that nearly collapsed under its own weight before it ever got off the ground.
Here are the three most damaging "best practices" I followed without question, and the hard lessons I learnt about context and dogma.
- 1. "Always Use Microservices"
The Dogma: The mantra is everywhere. "Microservices offer better scalability and independent deployments and allow teams to work autonomously." I was building a new SaaS product, and from day one, I was determined to do it "the right way". So, I drew a bunch of boxes on a whiteboard: Auth Service, User Service, Billing Service, Email Service, and Analytics Service. It looked like a beautiful, distributed future.
How It Backfired: For a one-person team building an MVP for an unproven product, it was a catastrophe.
Insanity-Driven Development: For two weeks, I focused not on creating features but on setting up Docker, Kubernetes manifests, and Kafka topics to enable communication between services. My "hello world" needed a data centre.
Debugging Challenges:
A simple user registration flow now spanned four or more services. Tracking down a bug meant trawling logs across multiple pods. Was the bug in Auth? Did the User Service hit a timeout? Did the API gateway route the request incorrectly? The mental overhead was enormous.
The ultimate irony: I had a "scalable" architecture that buckled under the weight of zero users, purely because of its own complexity. I had built for a scale I didn't have and a team that didn't exist.
The Lesson Learnt: Microservices are not a technical silver bullet; they are a response to organisational structure. You adopt them when coordinating a monolith across 50+ engineers becomes harder than managing the complexity of a distributed system. For a small project or startup, a well-structured monolith (or a modular monolith) is usually the better choice. You can always extract components into services later, once you've validated the product and identified real scaling problems.
The ideal approach should be: "Begin with a monolith until there's a clear necessity to dismantle it."
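To make "modular monolith" concrete, here's a minimal sketch of that boundary discipline. It assumes a Ruby codebase (my later mention of .includes() points at Rails, but nothing here is tied to a framework), and the module and method names are purely illustrative:

```ruby
# Hypothetical sketch: one process, hard module boundaries, plain method calls.
module Auth
  # The only "API" other modules are allowed to use.
  def self.register(email:, password:)
    # ... persist the user, hash the password, etc. ...
    { id: 1, email: email }
  end
end

module Billing
  def self.start_trial(user_id:)
    # ... create a trial subscription ...
    { user_id: user_id, plan: "trial" }
  end
end

# "User registration" is a single in-process call chain, not a hop across 4 services:
user = Auth.register(email: "dev@example.com", password: "s3cret!")
Billing.start_trial(user_id: user[:id])
```

If a real scaling or team boundary ever shows up, Billing can be extracted behind that same interface with far less pain than starting distributed.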
- 2. "Premature Optimization is the Root of All Evil"
The Dogma: This Knuth quote is probably the most misapplied and misunderstood piece of guidance in our field. I read it as "don't think about performance until it's a dire emergency." I shipped features fast. I wrote quick-and-dirty queries, with N+1 problems lurking around every corner. I reached for the simplest data structures. "We can optimise it later!" I declared.
How It Backfired: "Later" arrived the moment we onboarded our first beta users. The dashboard that appeared right away for me took 12 seconds for users with actual data. Basic actions have expired. My "act quickly" strategy resulted in performance problems not being separate—they were integrated into the application's fundamental structure.
- The Great N+1 Apocalypse: What I thought were "simple queries" turned into hundreds of database calls for basic pages. Fixing them meant rewriting entire data access layers, not just sprinkling an .includes() here and there (see the sketch after this list).
- Death by a Thousand Cuts: Each "tiny" performance oversight compounded until the entire application felt sluggish. There's a difference between not optimising prematurely and building with zero regard for scale.
- The Psychological Toll: Users don't care about your philosophical stance on optimisation. They care that your app is slow. Our first impressions were permanently damaged because I took the quote too literally.
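For anyone who hasn't hit it yet, here's roughly what the N+1 pattern and the eager-loading fix look like. This assumes Rails/ActiveRecord (the .includes() mentioned above); the Post model and its author association are hypothetical:

```ruby
# N+1: one query for the posts, then one extra query per post for its author.
Post.limit(50).each do |post|
  puts post.author.name
end

# Eager loading: author rows are fetched up front, so it's 2 queries total,
# no matter how many posts the page shows.
Post.includes(:author).limit(50).each do |post|
  puts post.author.name
end
```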
The Lesson Learnt: Knuth wasn't saying "ignore performance entirely." He was warning against pouring effort into optimising code before you know it's actually a bottleneck. There's a crucial middle ground: deliberate, sensible design choices.
The ideal approach should be: "Write reasonably efficient code from the start, but only profile and deeply optimise when you have actual metrics showing a problem." Use pagination by default. Be mindful of N+1 queries. Choose appropriate data structures. This isn't premature optimisation—it's professional craftsmanship.
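"Pagination by default" can be as simple as a shared helper that every list endpoint goes through. A sketch, again assuming ActiveRecord; the page size and helper name are made up:

```ruby
PAGE_SIZE = 25

# Wrap any ActiveRecord scope so it can never return an unbounded result set.
def paged(scope, page: 1)
  scope.order(:id).limit(PAGE_SIZE).offset((page - 1) * PAGE_SIZE)
end

# e.g. in a controller action:
# @invoices = paged(current_user.invoices, page: params.fetch(:page, 1).to_i)
```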
- 3. "You MUST write tests for everything."
The Dogma: "No code can go live without complete test coverage!" I sought to achieve testing nirvana. Each utility function, every component, and each API endpoint possessed its own test suite. I felt unstoppable.
How It Backfired: My velocity cratered. I was spending 60% of my development time writing and updating tests for features that changed daily based on user feedback. The worst part? My passing tests gave me a false sense of security.
- Brittle Tests: The slightest refactor would break 20 tests, requiring hours of updates. I was testing implementation details, not user behaviour.
- The Irony of 100% Coverage: I had perfect coverage of code that was probably wrong. The tests verified that my broken logic was consistently broken.
- Missing the Forest for the Trees: I had extensive unit tests but zero integration or end-to-end tests. The system worked perfectly in isolated units but failed miserably when they had to communicate.
The Lesson Learnt: Tests are a means to an end (confidence and reliability), not the end itself. The value of a test is not in its existence but in the confidence it provides.
The ideal approach should be: "Focus testing efforts on what matters most: critical business logic, user workflows, and areas prone to regression." A few well-written integration tests are often more valuable than hundreds of brittle unit tests.
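As an illustration of "test the workflow, not the implementation", here's the shape of a behaviour-focused integration test. It assumes a Rails app with RSpec request specs; the route, params, and page copy are hypothetical:

```ruby
# spec/requests/signup_spec.rb
require "rails_helper"

RSpec.describe "User signup", type: :request do
  it "creates an account and lands on the dashboard" do
    post "/signup", params: { user: { email: "dev@example.com", password: "s3cret!" } }

    expect(response).to redirect_to("/dashboard")
    follow_redirect!
    expect(response.body).to include("Welcome")
  end
end
```

A test like this survives internal refactors untouched; it only fails when the signup flow actually breaks, which is exactly the confidence you're paying for.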
The Common Thread: Context is King
The true failure wasn't adhering to best practices; it was applying them rigidly, without grasping the trade-offs or taking my unique situation into account.
- Microservices solve scaling and team coordination problems. I had neither.
- Avoiding premature optimisation is about not wasting time on unproven bottlenecks. I used it as an excuse for carelessness.
- Testing is about reducing risk. I turned it into a ritual that increased friction and slowed learning.
The Pivot That Saved the Project
I made a painful retreat. I collapsed my microservices back into a single, well-structured monolith. I identified the three biggest performance bottlenecks and fixed them properly. I deleted 80% of my tests and focused on a handful of essential integration tests that actually reflected user behaviour.
The outcome? I started delivering value again. The product became stable. Users noticed the improvement.
The Real Best Practice
The only universal best practice is this: think critically about why a practice is recommended and whether it applies to your current situation. Software development is about making thoughtful trade-offs, not following rules blindly.
Which ‘best practice’ do you secretly think causes more harm than good? Drop your hot takes 👇
Top comments (2)
Hmm, ever since the concept of microservices started popping up, I've never really heard this "Always Use Microservices" thing. Indeed, the advice is almost always "identify bottlenecks, then carve out that functionality into a microservice as needed".
I would also extend this advice: use a monorepo. Deployment is easier, devex is better, and sharing libraries (or entities/models) is trivial. It requires a good understanding of the project scope, of course, but monorepos have become my mantra - I can flip services on/off, scale out as needed, and keep my API as my nerve centre.
As for tests - yes, absolutely write them. But focus on the happy path. 100% code coverage is nice, but blimey does it slow you down.
"Later" arrived the moment we onboarded our first beta users
yup. 100%. Big picture thinking and defensive programming will help you in the long run. Great insights!
Great post 👍 this is where theory meets practical implementation.
I like that you also included test coverage as an example. It's so hard to convince the industry that the 90% test coverage target is hurting their business.
My ideal test strategy is: write e2e tests for critical user flows and unit tests for the lowest order of (pure) functions. Then, when something breaks after code changes, write an integration test that catches that specific case.