Nick Janetakis

Originally published at nickjanetakis.com

Optimize Your Programming Decisions for the 95%, Not the 5%

This article was originally posted on November 26th 2018 at: https://nickjanetakis.com/blog/optimize-your-programming-decisions-for-the-95-percent-not-the-5-percent


A few weeks ago I came across an interesting post title on Hacker News: "Why I wrote 33 VSCode extensions and how I manage them".

That title really grabbed my attention, so I did what most of us do, which is head straight to the comments before reading the article.

That's where I discovered this comment:

My problem with adding plugins or extending my environment much past the default is that eventually I have to deal with a co-worker's non-extended default installation. I end up relying too much on the add-ins.

Reading that really hit home for me, because that's how I used to think a long time ago.

But then I came across this comment:

I strongly dislike the reasoning that suggests you should hamstring yourself 100% of the time to accommodate a potential situation that may affect you 5% of the time.

"I don't use multiple monitors because sometimes I'm just with a laptop".

"I don't customize my shell because sometimes I have to ssh to a server"

"I don't customize my editor because sometimes I have to use a coworkers editor".

And here we are, because I think this is a really underrated topic.

"What if" Conditions

Many years ago I remember avoiding Bash aliases because, you know, what if I ssh into a server? It might not have those aliases and then I'm done for!
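For what it's worth, the stakes here are tiny. A minimal sketch of what a couple of lines in `~/.bashrc` might look like (the aliases below are hypothetical examples, not from the original post):

```shell
# Hypothetical time-savers for ~/.bashrc -- adjust to taste.
alias gs='git status'   # a command you type dozens of times a day
alias ll='ls -alh'      # human-readable long listing

# Confirm an alias is defined in the current shell:
alias gs >/dev/null && echo "gs alias is defined"
```

On a bare server without your dotfiles, the full commands still work, so the 5% case costs you nothing beyond typing the long form.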

I was optimizing my development environment for the 5% and all it did was set things up to be a constant struggle.

The crazy thing is, back then it made a lot of sense in my mind. It's very easy to talk yourself into agreeing with some of the quotes listed above and many more.

But optimizing for the 5% is an example of optimizing for the "what if" scenario.

You do everything in your power to make sure what you're doing is generic enough to work everywhere, but what you're really doing is making things harder for yourself in the 95% case, and the 95% is what matters most.

It Affects the Code You Write

This isn't limited to development environment decisions either. It also affects the code you write.

If you try to write something to be fully generic from the beginning because "what if I make another application and it needs to register users?" then you typically make your initial implementation a lot worse.

Without a deep understanding of what you're developing, and without having put in the time to come up with good abstractions based on real experience, you're just shooting in the dark, hoping your generic user system works for all cases when you haven't even programmed it for a single use case. How is that even possible?

It Affects How You Architect Your Applications

When you blindly follow what Google and other massive companies are doing, you're optimizing for the 5% in a slightly different way.

Instead of just getting your app up and running and seeing how it goes, you try to make decisions so that your application can be developed by 100 different teams sprawling across 5,000 developers.

Meanwhile, for nearly all new projects, it's just you developing the app by yourself.

It Affects How You Deploy Your Applications

When you try to optimize your deployment strategy to handle a billion requests a second from day 1, you're just setting yourself up for an endless loop of theory-based research.

It often means spending months looking at things like how to set up a mysterious and perfect auto-healing, auto-scaling, multi-datacenter Kubernetes cluster, but it leads nowhere, because these solutions aren't generic enough to work for all cases without a lot of app-specific details.

Do you ever wonder why Google, Netflix, GitHub, etc. only give bits and pieces of information about their deployment infrastructure? It's because attracting outside interest improves the odds that the tools they depend on get better for them specifically.

What better way to get people interested and working on their open source projects than to make these tools look as attractive as possible and then back it up with "we're using this to serve 20 billion page views a month so you know it works!".

It's a compelling story for sure, but it's never as simple as plugging in 1 tool like Kubernetes and getting a perfect cluster that works the way you envision it should for your app.

It's easy to look at a demo based on a toy example and see it work but all that does is make the tool look like paradise from the outside. It's not the full story.

As soon as you start trying to make it work for a real application, or more specifically your application, it all falls apart until you spend the time to really learn what it takes to scale an application (which is more than just picking tools).

The companies that created these tools have put in the time over the years and have that knowledge, but that knowledge is specific to their application.

They might leverage specific tools that make the process easier and tools like Kubernetes absolutely have value, but the tools aren't the full story.

What if putting your app on a single $40 / month DigitalOcean server let you have zero downtime deploys and handle 2 million page views a month with tens of thousands of people using your app, without breaking a sweat, all without Kubernetes or flipping your entire app architecture upside down to use "Serverless" technologies?
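To make that concrete, here's a minimal sketch of what a boring single-server rolling deploy could look like. Everything in it is hypothetical (the service name, ports, and paths are made up, and the real restart/health-check commands are left commented out because they depend entirely on your setup); the point is only that restarting two app processes behind a reverse proxy, one at a time, is a few lines of shell rather than a cluster:

```shell
#!/usr/bin/env bash
# Hypothetical rolling restart on a single box: two app instances behind
# a reverse proxy like nginx, restarted one at a time so the site stays
# up during a deploy.
set -euo pipefail

deploy_instance() {
  local port="$1"
  echo "deploying app instance on port ${port}"
  # systemctl restart "myapp@${port}"            # restart this instance
  # curl -fs "http://localhost:${port}/health"   # wait until it's healthy
}

# git -C /srv/myapp pull                          # fetch the new release
deploy_instance 8001   # instance 1 restarts while instance 2 serves traffic
deploy_instance 8002   # then swap
```

While one instance restarts, the proxy keeps sending traffic to the other, which is where the "zero downtime" comes from on a single machine.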

I Used to Do All of the Above Too

I've been saying "you" a lot in this post but I'm not targeting you specifically or talking down to the programming community as a whole.

I've done similar things to everything that was written above but with different tools and different decisions because technologies have changed over time.

I can distinctly remember when all of this switched in my head too. It was when Node first came out about 8'ish years ago.

I remember being fairly happy using PHP, writing apps, shipping apps, freelancing, etc. But then I watched Ryan Dahl's talk on Node (he created Node) and I started to drink the kool-aid for about 6 months straight.

Thoughts like "Holy shit, event loops!", "OMG web scale!", "1 language for the back-end and front-end? Shut up and take my money." were now buzzing through my head around the clock.

So all I did was read about Node and barely wrote any code. Eventually I did start writing code, and while I learned quite a bit about programming patterns and generally improved as a developer, I realized none of the Node bits mattered.

And that's mainly because back-end and front-end development is always going to have a context switch, even if you use the same language for both, and lots of languages have solutions for helping with concurrency.

Those 6ish months were some of the most unproductive and unhappy days of my entire life. Not because JavaScript sucks that hard, but because when you're on the outside, not doing anything and wondering "what if", it really takes a toll on you.

I'm still thankful I went through that phase because it really opened my eyes and drastically changed how I thought about everything -- even outside of programming.

Premature Optimization Is the Root of All Evil

Donald Knuth said it best in 1974 when he wrote:

Premature optimization is the root of all evil.

Optimizing for the 5% is a type of premature optimization. Maybe not so much for your development environment choices, but certainly for the other cases.

Base your decisions on optimizing for the 95%, keep it simple and see how it goes. In other words, optimize when you really need to not because of "what if".

What are some cases where you optimized for the 5%? Let me know below.

Latest comments (30)

Fethi TChi

I just started learning DevOps and my projects are small. I was about to learn Kubernetes 🤣, and when I read this article I realised that I really don't need to do this right now.

Igor Moura

One exception to that is when dealing with critical or embedded code. You usually have to optimize for the edge cases, otherwise your end product may fail at unexpected times, even putting lives at risk.

Other than that, I totally agree: try to ship as fast as you can and continuously improve based on feedback instead of bashing your head against stuff that may not even matter in the end.

Budy

Many experienced developers tend to see things in a more abstract way than newcomers, so they can sense and smell things that could get messed up in the future... and then the "what if" questions come up.

great post btw...

Fagner Brack

Donald Knuth never said “(or at least most of it) in programming.” These are your words or somebody else's. It's very unethical to quote wrong things and not even mention the source. Here's the original paper, please read it first, page 268: web.archive.org/web/20130731202547...

Having a generic understanding of git/bash/CLI tools in the work environment is useful for a team like mine, where we practice Mob Programming every day to achieve 10x performance. We practice Pair Programming sometimes, and solo for obvious tasks only.

All that doesn't lead us to over-engineer or over-design the code base around requirements that don't exist. You're more likely to do that if you work by yourself alone, and that's the main problem. We can be pretty lean and write only the simplest code, in the direction of scale once that's necessary.

Only skills built with practice and mentoring will give you that; it's not about optimising for the 95%, because the 95% is only what you know it to be. You don't know what you don't know.

Nick Janetakis

Thanks for the heads up on the quote. I guess someone should remove it from his official "quotes" page on Wikiquote. It's listed at en.wikiquote.org/wiki/Donald_Knuth.

I changed it to what it is in the paper you linked.

Fagner Brack

Oh wait! That's a different paper. It looks like he said it somewhere else using different words. TIL.

The reason I raised the concern is that I've been caught in the trap. The internet can be misleading sometimes, see hackernoon.com/the-danger-of-relyi...

I didn't read that paper and I don't know where to find it. It would be good to check the original to see if that's exactly what's written there, or link the source (even if it's Wikiquote). I've seen multiple publications slightly changing the quote, and then what we know today is a completely different one. Think of an effect like the telephone game you played at school. Even websites like Wikipedia get this wrong, because nobody bothers to check the source or the source is not available anymore!

Emad Mokhtar

Thank you for a very nice post, especially since it comes from your own experience.
I really like Kent Beck's quote "Make it work, make it right, make it fast" wiki.c2.com/?MakeItWorkMakeItRight...

I believe execution really matters, and if you overthink/over-engineer your solution, you won't release something that works. You will have a great plan, WOW, congratulations, but it is not released yet. I'm always trying to find the right solution for the existing problem instead of finding a solution for "what if".

Last thing, I want to share a nice joke:
A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ueicbksjdhd.

First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone.

twitter.com/brenankeller/status/10...

Ricardo Costeira

As developers, we’re constantly bombarded with information about how companies most of us admire (in terms of technical achievements at least) like Google or Uber did this amazing thing that solves all their problems. Since it’s in our nature to follow others by example, it is especially enticing to follow their lead, regardless of the domain we’re working on.

This post really opens your eyes on this matter. Keep your feet grounded, focus on your own domain, and solve your own problems in a way that also leaves the door open for the near and possible future.

Thank you for writing this.

Enrique Rimando

Well, this post really hit me through the skull. I'm just like this, trying to build something out of the thought of "how can I pull it off and make sure it would be scalable in the near future?", and guess what, I haven't written anything and am still stuck reading articles.

Great article! thanks!

Saurabh Sharma

I find myself in an infinite loop when I start customizing.

For example, this is what happened today:

I decided to write my posts in Markdown instead of using a word processor or note-taking app; VS Code actually has enough features that you can write Markdown pretty easily. But then I found out it doesn't support checkbox syntax. Now the loop started, and after a few wasted hours I found myself installing a bunch of plugins in VS Code, then Sublime Text, then plugins for it, and so on. Later I got frustrated and went back to vanilla VS Code.

Anyway, thanks for this great article.

Damien Cirotteau

Excellent article.
You might like (or maybe know) this other excellent one:
programmingisterrible.com/post/139...

Matt Studdert

Really great post Nick, this topic should be made into a sticker for constant reminding 😂

Yaser Al-Najjar

I really needed to read that... shut up and take my unicorns!

ohffs

One of the few things I remember from my college days was a lecturer who said 'optimise for the common case'. It's always stayed with me over the years and I try and keep it in mind when I find myself over-thinking or dwelling on 'what if...' :-)

Recep İnanç

Great post! It feels so good to know that I am not alone.

ricardodnb

The part about researching the best architecture or dev stack to handle 1 billion requests has happened to me 3 or 4 times in weekend draft apps I developed.
Guess what, most of the time the apps were never ready to be used because of the what-ifs 😁

Advice: code with whatever you know, get your software running, and then handle the optimizations if you need to!
Your app doesn't need to be fast from the ground up, because at the end of the first deploy it has only one user -> you

Thomas Werner

Great post!

Thank you for making a case for using a beefy single server instead of overengineering everything for the serverless cloud from the start. I agree 100% that it is better to thoroughly understand the challenges of scalability, how it applies to your own situation, or if it applies at all, instead of betting everything on the latest trending technology du jour that might lock you in and make you regret it down the road.

A single strong database and web server can go a long way.

I've also been making all the mistakes you mention in terms of software development. I'm 100% OK with customizing the heck out of my dev environment though :-D

My personal obsession with overthinking problems and trying to find the best approach was part unhealthy perfectionism, but also part the way I grew up with PCs and how I learned writing programs for them. I know what it means to write code for an under-powered platform where optimization was a necessity in order to get usable performance. There was little RAM, little (or no) storage. Updating the screen could be very slow unless you knew how to write directly to video RAM. Preferably in machine language. That became pretty much a non-issue over the years, but when smartphones, iOS and Android became a thing I felt thrown back to that time. Once again we were dealing with relatively weak hardware and its shortcomings that can be worked around with some trickery. I wrote a couple of graphics processing apps and hit the limits of Dalvik and the ARM CPU pretty quickly. Fixed point integer computation instead of floating point was necessary. Native code close to the metal instead of Java interpreted in Dalvik was the solution.

These are not examples of premature optimization though; it was necessary. They are more like something that shaped my mindset over the years to look for ways to optimize things before I even run into issues.

This, and also what Some Dood mentioned in his comment: our abstract, methodical thinking, where we are conditioned into generalizing things and finding ways to create reusable code. I also know that it can be incredibly appealing for developers to create reusable code for the sake of being reusable, as if that library was the product and not the application using the library. This is a trap I ran into time and time again. That's basically when I was caught up in programming land, and didn't care much about end-user land.

In itself aiming for generalization and reusability is a good and useful practice. But like anything in life, too much of a good thing can turn into a bad thing.