What is Big O Notation
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
— Wikipedia’s definition of Big O notation
In simple words, Big O notation describes the complexity of your code using algebraic terms.
To understand what Big O notation is, we can take a look at a typical example, O(n²), which is usually pronounced “Big O of n squared”. The letter “n” here represents the input size, and the function “g(n) = n²” inside the “O()” gives us an idea of how complex the algorithm is with respect to the input size.
An algorithm is a piece of code that takes some number of steps to solve a problem, so can we label one algorithm as a 50-step algorithm and another as a 2,000-step algorithm? We can if the algorithm’s input is fixed, but if the input varies, the number of steps the algorithm takes changes with it, and that is the case most of the time. This is where Big O notation is really useful.
Here’s what the notation means. It expresses the answer to what we’ll call the “key question.” The key question is: if there are N data elements, how many steps will the algorithm take? Go ahead and read that sentence again. Then, emblazon it on your forehead, as this is the definition of Big O Notation that we’ll be using throughout the rest of our programming lives.
The answer to the key question lies within the parentheses of any Big O expression. For example, O(N) says that the answer to the key question is that the algorithm will take N steps. Likewise, O(n²), pronounced “Big O of n squared”, says that the answer to the key question is that the algorithm will take n² steps.
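To make this concrete, here is a minimal sketch in Python; the function names and logic are illustrative, not from the article. `linear_search` answers the key question with “up to N steps,” so it is O(N), while `has_duplicate` compares every element against every other with nested loops, roughly N² steps, so it is O(n²).

```python
def linear_search(items, target):
    """O(N): for N data elements, the worst case inspects all N of them."""
    for item in items:               # up to N steps
        if item == target:
            return True
    return False


def has_duplicate(items):
    """O(n²): for N data elements, the nested loops take about N * N steps."""
    for i in range(len(items)):      # N outer iterations...
        for j in range(len(items)):  # ...each doing N inner iterations
            if i != j and items[i] == items[j]:
                return True
    return False
```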
Here is how the common complexity functions compare, from slowest-growing to fastest-growing: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!).
When we calculate Big O notation, we only care about the dominant term, and we do not care about the coefficients.
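As a hypothetical illustration (the step-count formula below is invented for the example), an algorithm that takes 3n² + 5n + 2 steps is simply O(n²). The snippet prints the n² term’s share of the total, which approaches 100% as n grows:

```python
def steps(n):
    """Hypothetical step count for some algorithm: 3n² + 5n + 2."""
    return 3 * n**2 + 5 * n + 2


for n in (10, 100, 1_000, 10_000):
    share = 3 * n**2 / steps(n)  # fraction contributed by the dominant term
    print(f"n={n:>6}  steps={steps(n):>12}  n² share={share:.4%}")
```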
A short story about how important Big O is in real life
Once upon a time there was an Indian king who wanted to reward a wise man for his excellence. The wise man asked for nothing but some wheat that would fill up a chess board.
But these were his rules: on the first tile he wanted 1 grain of wheat, then 2 on the second tile, then 4 on the next one… each tile on the chess board needed to be filled with double the amount of grains of the previous one. The naïve king agreed without hesitation, thinking it would be a trivial demand to fulfill, until he actually went on and tried it…
So how many grains of wheat does the king owe the wise man? We know that a chess board has 8 squares by 8 squares, which totals 64 tiles, so the final tile alone should have 2⁶³ grains of wheat, and the whole board 2⁶⁴ − 1 grains. If you do the calculation, you will end up with about 1.8446744×10¹⁹, that is, roughly 18 followed by 18 zeroes. Assuming that each grain of wheat weighs 0.01 grams, that gives us about 184,467,440,737 tons of wheat. And 184 billion tons is quite a lot, isn’t it?
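You can check the arithmetic with a few lines of Python, using the same 0.01 g grain weight assumed above:

```python
# Grains on the board: 1 + 2 + 4 + ... + 2**63 == 2**64 - 1.
total_grains = sum(2**tile for tile in range(64))

grams_per_grain = 0.01                                # assumed weight per grain
tons = total_grains * grams_per_grain / 1_000_000     # grams -> metric tons
print(f"{total_grains:,} grains ≈ {tons:,.0f} tons")  # ~184,467,440,737 tons
```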
Numbers grow quite fast under exponential growth, don’t they? The same logic goes for computer algorithms: if the effort required to accomplish a task grows exponentially with respect to the input size, it can end up becoming enormously large.
Now, the square of 64 is 4,096. If you add that number to 2⁶⁴, it is lost outside the significant digits. This is why, when we look at the growth rate, we only care about the dominant term. And since we want to analyze the growth with respect to the input size, the coefficients, which only multiply the number rather than grow with the input size, do not contain useful information.
The Soul of Big O
Now that we’ve encountered O(N) and O(n²), we begin to see that Big O Notation does more than simply describe the number of steps an algorithm takes, such as with a hard number like 22 or 400. Rather, it’s an answer to that key question on your forehead: if there are N data elements, how many steps will the algorithm take?
While that key question is indeed the strict definition of Big O, there’s actually more to Big O than meets the eye.
Let’s say we have an algorithm that always takes three steps no matter how much data there is. That is, for N elements, the algorithm always takes three steps. How would you express that in terms of Big O?
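For instance, here is a minimal sketch (the function and its steps are illustrative) of an algorithm that performs exactly three steps no matter how many elements the list holds:

```python
def first_and_last(items):
    first = items[0]      # step 1: read the first element
    last = items[-1]      # step 2: read the last element
    return (first, last)  # step 3: build and return the pair
```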
Based on everything you’ve learned up to this point, you’d probably say that it’s O(3).
However, it’s actually O(1). And that’s because of the next layer of understanding Big O, which I will reveal now.
While Big O is an expression of the number of an algorithm’s steps relative to N data elements, that alone misses the deeper why behind Big O, what we might call the “soul of Big O.”
The soul of Big O is what Big O is truly concerned about: how will an algorithm’s performance change as the data increases?
This is the soul of Big O. Big O doesn’t want to simply tell you how many steps an algorithm takes. It wants to tell you the story of how the number of steps increases as the data changes.
Viewed through this lens, we don’t care very much whether an algorithm is O(1) or O(3). Because both algorithms are the type that aren’t affected by increased data, as their number of steps remains constant irrespective of the data, they’re essentially the same kind of algorithm, and we don’t care to make a distinction between the two.
An algorithm that is O(N), on the other hand, is a different type of algorithm. It’s an algorithm whose performance is affected as we increase the data. More specifically, it’s the kind of algorithm whose steps increase in direct proportion to the data as the data increases. This is the story O(N) tells. It tells you about the proportional relationship between the data and the algorithm’s efficiency: it describes exactly how the number of steps increases as the data increases.
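A quick sketch of that story (the step counting is illustrative): doubling the data doubles the steps for an O(N) algorithm, while a constant-time algorithm would stay flat.

```python
def linear_steps(items):
    """O(N): one step per element."""
    steps = 0
    for _ in items:
        steps += 1
    return steps


for n in (10, 20, 40, 80):
    # Steps double every time the data doubles: direct proportion.
    print(f"n={n:>3}  steps={linear_steps(range(n))}")
```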
I hope you found this summary helpful for building new high-performance algorithms! 🤓🧐