When designing software systems, always take the simplest possible path. This advice applies surprisingly broadly—from fixing bugs and maintaining systems to designing new ones.
Many engineers imagine an “ideal” system: elegant, infinitely scalable, and easily distributed. I believe this is a flawed approach. Instead, invest that time in deeply analyzing the current system and implementing the simplest working solution.
Simplicity Can Seem Mediocre
System design requires skill with various tools: application servers, proxies, databases, caches, queues, and more. As young engineers learn these tools, they naturally want to use them. It's exciting to build systems from many components and draw complex diagrams on a whiteboard.
But true mastery often means knowing when to do less, not more. Think of martial arts films where the ambitious novice is all motion, while the experienced master remains still. The master's single, decisive attack is what counts.
In software, this means a great design can look mediocre. You know it's good when you think, “I didn’t realize it was that simple.” A great example is Unicorn, which provides web server guarantees—request isolation, scaling, and recovery—using basic Unix primitives. It's impressive precisely because it implements the simplest working solution.
You should do the same. Suppose you're adding rate limiting to a Go application. Your first thought might be to add a persistent store like Redis. But do you need a whole new piece of infrastructure? What if you stored request counts in memory? You'd lose data on restart, but does it matter? Or perhaps your edge proxy already supports rate limiting, requiring just a few lines of configuration.
Of course, there are cases where Redis is the simplest solution. But if a simpler option exists, why not use it? You can build an entire application this way: start simple and add complexity only when necessary. This elevates the YAGNI principle above all others.
What Are the Risks of Simple Solutions?
This approach raises three main concerns: creating an inflexible system (a big ball of mud), defining what “simplest” means, and failing to build for scale. Let's address them.
The Big Ball of Mud
Are quick hacks and kludges truly simple? I argue they aren't. They add complexity by introducing another thing you have to remember. A proper fix, though harder to devise because it requires understanding the codebase, is always simpler than a hack. Implementing the simplest working solution isn't easy—it requires real engineering.
What Does “Simple” Mean?
Developers often disagree on what “simple” means. Here is a rough, intuitive definition:
- Simple systems have fewer “moving parts”—fewer elements to think about.
- Simple systems have fewer internal connections. They consist of components with clear, simple interfaces.
When I'm unsure, I use this test: simple systems are stable. If one solution requires more ongoing maintenance than another, the lower-maintenance one is simpler. Redis needs deployment, monitoring, and maintenance. Therefore, in-memory tracking is often simpler.
Why You Might Not Care About Scaling
Some will argue, “But that won't scale!” Is it irresponsible to build a system that only works at the current scale? No. In my view, the biggest sin in SaaS development is an obsession with scale. I've seen immense suffering from over-engineering systems for scales they never reach.
The main reason to avoid designing for hypothetical scale is that it doesn't work. You can't predict how a non-trivial codebase will behave at 10x traffic. You don't know where the bottlenecks will be. At best, you can prepare for 2-5x growth and be ready to solve problems as they arise.
The longer I work in tech, the less I believe in our ability to predict a system's evolution. It's hard enough to understand its current state. There are two paths: predict future requirements and design for them, or build the optimal system for today's requirements. I advocate for the second path: the simplest working solution.