In his paper "Structured Programming with go to Statements", Donald Knuth, one of the eminent minds of computer science, coined the phrase "premature optimization is the root of all evil". This statement is both lauded and demonized by programmers of all backgrounds and experience levels.
To be clear, let's define what we're talking about: premature optimization is the practice of spending development time improving the speed of code before knowing the actual speed profile of that code.
There are three very important points in that definition, two fairly obvious and one subtle, that you MUST understand. First, you cannot know whether the code requires an optimization; if you know that it does, then it's not premature optimization, it's just optimization. Second, if you don't spend any extra time optimizing, then it doesn't qualify as premature optimization. But more subtly, if ANYONE EVER has to spend extra time because of a speed optimization you make, that's part of the cost of the optimization. This cost can come down the line as bugs or as increased maintenance time due to less readable code.
Let's Cut Through the BS
So, what does this mean? What guidance can we glean from this advice?
Well, I could spend a lot of time, like other people do, explaining various aspects (which I do below). But instead of all that, let's cut through the BS and look at a working example.
I created a StackBlitz example, which I link below. In this example, there are two buttons on the page. One uses a fast algorithm and one uses a slow algorithm (one uses for, the other uses forEach, and for is often measured as almost three times faster than forEach), and each does 1,000 operations that mutate an object. That's a fair amount. I have rarely written code that regularly operates on 1,000 different objects (it's usually far fewer), but I thought this would make a good example.
When you click each button, it changes the status field on the page to "working", does its work, then changes the status to "done". If you open up the console, you can see log statements showing the same thing.
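For reference, the two handlers look roughly like this. This is a minimal sketch with made-up names (Item, runFast, runSlow); the actual StackBlitz code also updates the status field in the DOM, but the shape of the work is the same:

```typescript
// Hypothetical sketch of the demo; names are illustrative, not the exact StackBlitz code.
type Item = { count: number };

// 1,000 objects to mutate, matching the demo's workload.
const items: Item[] = Array.from({ length: 1000 }, () => ({ count: 0 }));

function runFast(): void {
  console.log('fast: working');
  // Plain for loop: the "fast" algorithm.
  for (let i = 0; i < items.length; i++) {
    items[i].count += 1;
  }
  console.log('fast: done');
}

function runSlow(): void {
  console.log('slow: working');
  // forEach: the "slow" algorithm doing the exact same mutation.
  items.forEach((item) => {
    item.count += 1;
  });
  console.log('slow: done');
}
```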
So now click on this link and try it out and you tell me, by clicking on the two buttons, WHICH ONE IS FASTER?
The BS
I was always told that non-performant code is generally around 8% of the code we write. By pure statistics, that means any time I spend optimizing my code has a 92% chance of being a waste of my time. Assuming that's true (it's almost certainly not exact; whatever the real number is, I doubt anyone knows), my return on investment is unlikely to be very fruitful in most cases.
But rather than just making a blanket statement, you should simply understand the fact that you have a limited amount of resources (developer time) that can be spent on your project. Even the biggest, most well-funded project does not have an infinite amount of resources to spend. So time you spend doing one activity is time you are not spending on another. Any decision to write more optimized code has a cost that comes out of other activities. That may be in the form of features, overall cost of the project, sanity of overworked developers, or simply time you aren't spending playing with your kids (I highly recommend playing D&D with them).
So every time you spend resources on premature optimization, whether that's implementation time today or maintenance time tomorrow, you are spending resources you could have spent on something else. That is the decision you are making. Also realize that every project has its own specific performance needs, and some applications are much more performance-sensitive than others, so the value of optimization can vary.
To bring it down to something more digestible, here's a quick bullet list of the pros and cons of premature optimization:
Pros
- It's faster (probably, but not for sure, see items 1, 2, 3, and 5 in the cons list)
Cons
1. It's possible to actually reduce performance with a misguided attempt to improve it (compilers and runtime environments are tricky). I once saw internal Angular code changed from an array to a linked list, then changed back to an array a few weeks later when the linked list turned out to be slower, even though the developer implemented it precisely because they expected it to be faster.
2. Your anticipation of the actual runtime performance profile of your application may be inaccurate. That code you are sure will be executed all the time may never get executed by real users.
3. You are unlikely to actually affect REAL performance, the kind your users notice. Users cannot detect an improvement of less than roughly 30ms.
4. The cost of a bug due to using more obscure but more performant code may or may not be higher than the benefit of the performance improvement.
5. If you don't actually spend time measuring the performance improvement, you don't truly know whether you increased performance at all (see the timing sketch after this list).
6. Your actual runtime environment may be so different from your development environment that your optimizations result in no real performance improvement. (A user's browser on a device with very little memory may hit much bigger performance problems that completely hide your optimization.)
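And if you do decide an optimization is worth pursuing, measuring it is cheap. Here's a minimal timing sketch using performance.now(); it assumes the hypothetical runFast and runSlow handlers from the sketch above:

```typescript
// Minimal timing sketch (illustrative only): measure before trusting
// any intuition about which version is faster.
function time(label: string, fn: () => void): void {
  const start = performance.now(); // high-resolution timestamp in milliseconds
  fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(3)} ms`);
}

time('for loop', runFast); // runFast/runSlow are the hypothetical handlers sketched earlier
time('forEach', runSlow);
```

Run it a few times: at 1,000 iterations the difference between the two loops is typically a tiny fraction of a millisecond, which is exactly why you couldn't feel it by clicking the buttons.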
As a more concrete example, there's a fair amount of documentation on the speed profiles of CSS selectors. Yet as you dig into the topic, the prevailing opinion of the experts is that, for the most part, CSS selector performance should be ignored, because it's unlikely to have a measurable effect on a realistic web app.
In the end, rather than telling you what you should do (okay, fine, you twisted my arm, I'll say it: don't prematurely optimize), I want you to simply be fully aware of the costs and make an educated decision. Every time you prematurely optimize, you are likely spending far more of your finite development resources than you realize, both now and in the future. So simply make sure you are educated on the costs.
Happy Coding!
Sign up for my newsletter here.
Visit Us: thinkster.io | Facebook: @gothinkster | Twitter: @gothinkster