
What is the meaning of the 90/10 rule of program optimization?

Edison Pebojot (👨‍💻)

According to Wikipedia, the 90/10 rule of program optimization states that:

90% of a program's execution time is spent executing 10% of the code (see the second paragraph here).

Question:

  • I really don't understand this. What exactly does this mean?

  • How can 90% of the execution time be spent only executing 10% of the code?

  • What about the other 90% of the code then? How can it be executed in just 10% of the time?

Discussion


When programming we know there is a program flow. Take embedded programming on an IoT device, for instance: 90% of the time the program is doing very little. It's most likely sitting in a super loop waiting for something to occur; when it does, the program flow changes to activate a door, a light, or a camera, and once that's complete it goes back to its super loop.

So nearly 99% of the time is spent looking at sensors and waiting for something to occur.
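Here's a minimal Python sketch of that idea (my own illustration, not real firmware; read_sensor and open_door are hypothetical stand-ins for hardware access):

```python
import random
import time

def read_sensor():
    # Hypothetical stand-in for polling a hardware sensor.
    return random.random() < 0.001   # an event occurs on ~0.1% of polls

def open_door():
    # Hypothetical stand-in for driving an actuator (door, light, camera).
    time.sleep(0.05)

def super_loop(polls=2_000):
    # The "super loop": the program spends nearly all of its time here,
    # polling and waiting, and only rarely branches into the action code.
    for _ in range(polls):
        if read_sensor():
            open_door()          # the rarely executed "other 90%" of the code
        time.sleep(0.001)        # idle between polls

if __name__ == "__main__":
    super_loop()
```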

It can be a similar story for a website: the server will serve up the landing page millions of times compared to the about page.
So 90% of the time, the server is serving the landing page.
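As a toy illustration (my own, with a made-up traffic mix), a short Python sketch of how heavily the landing page can dominate a server's work:

```python
import random
from collections import Counter

# Hypothetical traffic mix: the landing page gets the vast majority of hits.
pages = ["/", "/about", "/contact"]
weights = [0.90, 0.07, 0.03]

hits = Counter(random.choices(pages, weights=weights, k=100_000))
total = sum(hits.values())
for page, count in hits.most_common():
    print(f"{page:10s} {count / total:.0%} of requests")
```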

Therefore in both cases, it might not be the best use of time to optimise the part of the system that is only used 10% of the time.
This is what the quote is talking about.

 

I really don't understand this. What exactly does this mean?

Ignore the exact figures: they come under different names and ratios, such as 90/10, 80/20, and the Pareto principle. These are all essentially the same and, to me, are more philosophical rules of thumb.

What these general rules say is that 10 (or 20) percent of what we're doing accounts for 90 (or 80) percent of the overall effect. In my experience, they are generally OK rules. The figures are just there to illustrate how often we overlook or underestimate the impact of seemingly small yet important aspects in life.

How can 90% of the execution time be spent only executing 10% of the code?

When we apply the rule to the context of software optimization, it just means that:

  • Most of a program's execution time is spent in a small portion of the code,
  • Most of the performance issues are usually in a small portion of the code

(See how I omit the figures there?)

I'm using Firefox right now. Most of the work it's doing is probably rendering text and images to display the websites in my open tabs. I bet the rendering code is complex, but to put things in perspective, it might be only a small portion of the whole Firefox code base.

The general takeaway from these rules is that, when you're optimizing your code, you should identify the top 10 bottlenecks of the program, e.g. by profiling it. In my experience, solving the first 1-2 bottlenecks has a huge impact on the overall performance. Many software engineers overlook this and instead try to optimize small things in the code that bring insignificant performance gains.
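As a rough sketch of what that looks like in practice (my own example, using Python's standard cProfile module; hot_path and cold_path are made-up stand-ins), profiling quickly surfaces where the time actually goes:

```python
import cProfile
import pstats

def hot_path(data):
    # A tiny piece of the code base that ends up dominating the runtime.
    total = 0
    for x in data:
        total += x * x
    return total

def cold_path():
    # Stands in for the bulk of the code that runs rarely and cheaply.
    return sum(range(10))

def main():
    data = list(range(200_000))
    for _ in range(50):
        hot_path(data)   # the ~10% of code taking ~90% of the time
    cold_path()          # the rest barely registers

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.run("main()")
    # Show the top 10 entries sorted by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

In this toy example, hot_path sits at the top of the report with nearly all of the cumulative time, even though it's a tiny fraction of the code.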

What about the other 90% of the code then? How can it be executed in just 10% of the time?

The biggest portion of the code isn't necessarily executed quickly; it may simply be executed rarely or only occasionally.

Think about my Firefox. The rendering code is probably the most important thing while I'm running it. However, I only occasionally use other features, such as bookmarks, history, synchronization, etc. That doesn't necessarily mean the code for bookmarking runs quickly compared to the rendering; it just means it accounts for a small amount of time (relative to my whole usage of Firefox) because I only use those features occasionally.
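To make that concrete, here's a small Python sketch (again my own illustration; render_page and add_bookmark are hypothetical stand-ins) showing that a rarely called function can be slower per call and still account for only a tiny share of the total time:

```python
import time

def render_page():
    # Hypothetical stand-in for the hot rendering path: fast, but called constantly.
    time.sleep(0.002)

def add_bookmark():
    # Hypothetical stand-in for a rarely used feature: slower per call, rarely called.
    time.sleep(0.010)

start = time.perf_counter()
render_time = bookmark_time = 0.0

for i in range(500):
    t = time.perf_counter()
    render_page()
    render_time += time.perf_counter() - t

    if i % 250 == 0:             # bookmarking happens only occasionally
        t = time.perf_counter()
        add_bookmark()
        bookmark_time += time.perf_counter() - t

total = time.perf_counter() - start
print(f"rendering:   {render_time / total:.0%} of total time")
print(f"bookmarking: {bookmark_time / total:.0%} of total time")
```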