Quentin F.

Big O notation

We all know how to code. We may know how to write "clean code", or at least we have thought about what makes code clean (Uncle Bob definitely has). But what about performance? How can we anticipate whether an algorithm is fast and memory efficient? How can we analyze algorithms and compare them to one another? Turns out, the Big O notation is there just for that!

In short, the efficiency of an algorithm is measured by the number of steps your code takes to execute a task.

For example, if we were to create a function that displays its input: for one string, we'll do one action; for an array of 100, we'll do 100 prints; and for n, we'll do n. We write that O(n), or "Big O of n".
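A minimal sketch of such a function (console.log stands in for whatever "print" you use):

```typescript
// Displaying every entry: one action per item, so O(n) for n items.
function printAll(items: string[]): void {
  for (const item of items) {
    console.log(item); // one "step" per element
  }
}

printAll(["a", "b", "c"]); // 3 items, 3 steps
```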

If, however, the number of actions does not depend on the input, such as accessing the n-th item of an array (which will always be a single action, regardless of the size of the array), your algorithm runs in constant time, which we simply write O(1).
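By contrast, a quick sketch of a constant-time operation:

```typescript
// Accessing the n-th item: a single step, O(1), whatever the array's size.
function nth<T>(items: T[], index: number): T | undefined {
  return items[index]; // one step, independent of items.length
}

nth([1, 2, 3], 1); // 2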
We are not necessarily interested in the precise number of steps, nor in the actual time it would take to run the code on a given input, but in its order of magnitude. What we are really comparing is whether O(log₂ n) is better or worse than O(n²).

If we actually were to look at the real complexity of the print function, which we assumed to work in O(1), we'd be quite wrong. The print function takes a string as a parameter and logs it to the console. It is most likely to use the POSIX write function, which is a system call that dumps a certain amount of data into a file descriptor (the representation of a file, the terminal, etc.). In our case, the data would be a string, which basically is an array of char. Knowing the length of the string implies going from the beginning of the string to the null character (i.e. \0), which is O(n). Going from one char to the next implies incrementing a variable (which has a computational cost t_inc), once per character. We also need to check the value of the current char, which has a computational cost t_check, as well as the jump (loop), which costs t_jump. The actual computational cost of walking a string from its first character to its last is therefore O(n · (t_inc + t_check + t_jump)). And that's without even wrapping this in a function that we'd call and that returns a value: while(str[i++]);.

What gives insight into the efficiency of the algorithm is n, not the constants. In the long run, 100n will always be better than n²/2. In that regard, we can always drop the constant factors: O(n) instead of O(100n), and O(n²) instead of O(n²/2).
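To convince yourself, here's a throwaway comparison (the sample sizes are arbitrary):

```typescript
// Constants don't matter in the long run: 100n starts out worse than n²/2
// but wins past the break-even point (100n = n²/2 at n = 200).
for (const n of [10, 100, 200, 1_000, 10_000]) {
  const linear = 100 * n;        // O(100n), i.e. O(n)
  const quadratic = (n * n) / 2; // O(n²/2), i.e. O(n²)
  console.log(`n=${n}: 100n=${linear}  n²/2=${quadratic}`);
}
```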


Practice: The egg drop problem

We have two eggs and a 100-story building. At which floor do the eggs break?

  • Both eggs are strictly identical.
  • One can throw an egg out a window any number of times: if it didn't break the first time, it won't. EVER. Not on this floor nor those below. EVER.
  • There is a real scientific relevance in performing this test and knowing its result.
  • We have a teleportation machine which allows us to access each floor instantly.

With just one egg...

The solution with two eggs is not that easy, so let's start by solving it with one egg. Intuitively, we'd go from the ground up and try each floor until the egg breaks. Which means:
  • best case scenario: we do one throw, on the ground floor
  • worst case scenario: the egg breaks at the 100th floor...
With that procedure, a 1,000-story building would take 1,000 throws at worst, and an n-story building, n throws.
This approach is in O(n).
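Here's a minimal sketch of that strategy (eggBreaksAt and breakingFloor are stand-ins invented for the simulation; in reality you'd be at the window):

```typescript
// Hypothetical simulation: the egg breaks from `breakingFloor` upwards.
const breakingFloor = 13;
const eggBreaksAt = (floor: number): boolean => floor >= breakingFloor;

// One-egg strategy: try every floor from the bottom up, O(n) throws.
function findBreakingFloorLinear(floors: number): number | null {
  for (let floor = 1; floor <= floors; floor++) {
    if (eggBreaksAt(floor)) return floor; // first floor where it breaks
  }
  return null; // the egg survives even the top floor
}

console.log(findBreakingFloorLinear(100)); // 13
```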

With an infinity of eggs...

This is the most intuitive approach, the one most people go with.

You start at floor 50. If the egg breaks, you go to floor 25; if it then doesn't break, to floor 38 (depending on how you round), etc. That is, you split the building in two and try the middle floor. At each iteration you split in two the upper remaining half if the egg didn't break, and the lower remaining half if it did.

If we take a 32-story building, each throw halves the remaining floors: we would test 16, then 8, 4, 2 and finally one:

  • 16 = 32/2¹
  • 8 = 32/2²
  • 4 = 32/2³
  • 2 = 32/2⁴
  • and finally 1 = 32/2⁵. That is: 1 = n/2ˣ ⇔ x = log₂ n

How does one go from n/2ˣ to log₂ n?

The first thing to know about log is:

\textcolor{blue}{base}^{\textcolor{red}{a}} = \textcolor{green}{b} \Leftrightarrow \log_{\textcolor{blue}{base}}{\textcolor{green}{b}} = \textcolor{red}{a} \\ \text{so in our case:} \\ \frac{\textcolor{green}{n}}{\textcolor{blue}{2}^{\textcolor{red}{x}}} = 1 \Leftrightarrow \textcolor{green}{n} = \textcolor{blue}{2}^{\textcolor{red}{x}} \Leftrightarrow \log_{\textcolor{blue}{2}}{\textcolor{green}{n}} = \textcolor{red}{x}

Why, when I compute log(4), do I get ≈ 1.39 and not 2?

One must keep in mind that the log on your tools is rarely in base 2: our numeric system is in base 10, so the log on most calculators is in base 10 (log₁₀ 4 ≈ 0.60), and the log in most programming languages is the natural log, base e (ln 4 ≈ 1.39). We, however, are in base 2. Not because computers are binary, but because we split the building in two, thus making two branches, and each iteration makes two more branches. In order to go from one base to another we need to know this:

\begin{aligned} \log_{\color{red}{base}}{x} &= \frac{\log_{\color{blue}{otherbase}}{x}}{\log_{\color{blue}{otherbase}}{\color{red}{base}}} \\ \Rightarrow \log_{\color{red}{2}}{x} &= \frac{\log_{\color{blue}{10}}{x}}{\log_{\color{blue}{10}}{\color{red}{2}}} \end{aligned}
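In code, that change of base is a one-liner (a sketch; modern runtimes also ship a built-in):

```typescript
// Change of base: Math.log is the natural log (base e), so
// log2(x) = ln(x) / ln(2).
const log2 = (x: number): number => Math.log(x) / Math.log(2);

console.log(log2(32));      // 5
console.log(Math.log2(32)); // 5, the built-in equivalent
```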

The complexity would then be O(log₂ n).
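A minimal sketch of that bisection, reusing the same hypothetical eggBreaksAt stub:

```typescript
const breakingFloor = 13;
const eggBreaksAt = (floor: number): boolean => floor >= breakingFloor;

// Unlimited-eggs strategy: binary search over the floors, O(log₂ n) throws.
function findBreakingFloorBinary(floors: number): number | null {
  let low = 1;
  let high = floors;
  let answer: number | null = null;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (eggBreaksAt(mid)) {
      answer = mid;  // breaks here: the lowest breaking floor is mid or below
      high = mid - 1;
    } else {
      low = mid + 1; // survives: look in the upper half
    }
  }
  return answer;
}

console.log(findBreakingFloorBinary(100)); // 13
```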

For a 100-story building, we'd need ≈ 7 throws,
for a 1,000-story building, ≈ 10.
That is way better than our previous try. However, we don't have an infinite number of eggs, but two.

Back to two eggs...

We need an approach that mixes a bit of both.
If we split the building into equal chunks and try the top of each chunk, then when the first egg breaks we only have to try each floor of that chunk, which works with two eggs. 10 chunks seems an attractive number, but before using it, let's make sure it's the proper one.

We want a balance between the number of floors we'd try within a chunk and the number of chunks; that is, the number of chunks must be equal to the number of floors in each chunk:

\begin{aligned} floors \cdot chunks &= building \\ \Rightarrow chunks \cdot chunks &= building \\ \Leftrightarrow chunks^2 &= building \\ \Leftrightarrow chunks &= \sqrt{building} \end{aligned}

Which in our case is 10. Lucky. Now our worst case would be floor 99, because we'd have to test floors 10, 20, ..., 80, 90, 100 and then 91, 92, ..., 98, 99. That is 19 tests.
Now the complexity is not really surprising: O(2√n), and dropping the constants, O(√n).
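A sketch of that chunked strategy, again with the hypothetical eggBreaksAt stub:

```typescript
const breakingFloor = 99;
const eggBreaksAt = (floor: number): boolean => floor >= breakingFloor;

// Two-egg strategy: chunks of √n floors, O(2√n), i.e. O(√n) throws.
function findBreakingFloorChunks(floors: number): number | null {
  const chunk = Math.ceil(Math.sqrt(floors)); // 10 for a 100-story building
  let floor = chunk;
  // First egg: jump chunk by chunk until it breaks.
  while (floor <= floors && !eggBreaksAt(floor)) floor += chunk;
  // Second egg: scan the chunk below the break, floor by floor.
  for (let f = floor - chunk + 1; f <= Math.min(floor, floors); f++) {
    if (eggBreaksAt(f)) return f;
  }
  return null; // the egg survives every floor
}

console.log(findBreakingFloorChunks(100)); // 99
```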

Can we do better?

In the worst case scenario we'll do 19 tests, as we just saw. But if the highest safe floor is 13 (the egg breaks at 14), we'd have done only 6 tests: 10, 20, 11, 12, 13, 14. Perhaps we can redistribute the floors in the chunks so that, regardless of the floor, we're guaranteed to do the same number of tests.

In other words, for each chunk that has passed, we test one less floor. For example, if the first chunk is 10, the next will be 9 and the next 8 and so on. That way, if it breaks at 9, we'll test 10, 1, 2, 3, ..., 8, 9 (10 tests), if it breaks at 18, we'll test 10, 19, 11, 12, ..., 17, 18 (10 tests) and so on.

So basically we need to solve k + ... + 3 + 2 + 1 = building and figure out k. The sum of the first n integers is n(n+1)/2:

\begin{aligned} 1 + 2 + 3 + 4 + 5 + 6 &= (1 + 6) + (2 + 5) + (3 + 4) \\ &= (s_0 + s_n) + (s_1 + s_{n-1}) + (s_2 + s_{n-2}) \\ \text{and } (s_0 + s_n) &= (s_1 + s_{n-1}) = (s_2 + s_{n-2}) \\ \Rightarrow 1 + 2 + \dots + n &= (1 + n) + (1 + n) + (1 + n) \\ &= \frac{n}{2}(1 + n) \end{aligned}

In our case we want to solve n(n+1)/2 = building, that is n²/2 + n/2 − building = 0, where n is the size of the first chunk.

This is a quadratic equation, ax² + bx + c = 0, which you should be able to solve:

\begin{aligned} ax^2 + bx + c &= 0 \\ ax^2 + bx &= -c \\ x^2 + \frac{b}{a}x &= -\frac{c}{a} \quad \mathit{\text{we divide by }a\text{, which is non-zero}} \\ x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 &= -\frac{c}{a} + \left(\frac{b}{2a}\right)^2 \quad \mathit{\text{which will help us refactor}} \\ \left(x + \frac{b}{2a}\right)^2 &= -\frac{c}{a} + \left(\frac{b}{2a}\right)^2 \quad \mathit{\text{remember: }(a + b)^2 = a^2 + 2ab + b^2} \\ \left(x + \frac{b}{2a}\right)^2 &= -\frac{c}{a} + \frac{b^2}{4a^2} \\ \left(x + \frac{b}{2a}\right)^2 &= \frac{b^2 - 4ac}{4a^2} \\ x + \frac{b}{2a} &= \pm\frac{\sqrt{b^2 - 4ac}}{2a} \\ x &= \pm\frac{\sqrt{b^2 - 4ac}}{2a} - \frac{b}{2a} \\ x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \quad \mathit{\text{this should be familiar}} \end{aligned}

In our case:

\begin{aligned} \frac{1}{2}n^2 + \frac{1}{2}n - 100 &= 0 \\ n &= \frac{-\frac{1}{2} \pm \sqrt{\left(\frac{1}{2}\right)^2 - 4 \cdot \frac{1}{2} \cdot (-100)}}{2 \cdot \frac{1}{2}} \\ n &= -\frac{1}{2} \pm \sqrt{\frac{1}{4} + 200} \\ n &\approx 13.65 \end{aligned}

That gives a complexity of O(√(1/4 + 2n) − 1/2); that is, for a 100-story building, we would do at worst approx. 14 throws to find the right floor.
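A sketch of this last strategy (same hypothetical eggBreaksAt stub; the starting step comes straight from the formula above):

```typescript
const breakingFloor = 99;
const eggBreaksAt = (floor: number): boolean => floor >= breakingFloor;

// Two-egg strategy, balanced: shrink the jump by one floor each time,
// starting at step = ⌈√(1/4 + 2n) − 1/2⌉, so every outcome costs about the same.
function findBreakingFloorDecreasing(floors: number): number | null {
  let step = Math.ceil(Math.sqrt(0.25 + 2 * floors) - 0.5); // 14 for n = 100
  let prev = 0;     // last floor the first egg survived
  let floor = step; // next floor to try with the first egg
  while (floor <= floors && !eggBreaksAt(floor)) {
    prev = floor;
    step = Math.max(step - 1, 1);
    floor += step;
  }
  // Second egg: scan the untested floors below the break, one by one.
  for (let f = prev + 1; f < Math.min(floor, floors + 1); f++) {
    if (eggBreaksAt(f)) return f;
  }
  // Nothing below broke: either the first egg broke exactly at `floor`,
  // or the egg survives the whole building.
  return floor <= floors ? floor : null;
}

console.log(findBreakingFloorDecreasing(100)); // 99
```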

The complexity actually is O(√n), the same as the previous algorithm we found: for large n, √(1/4 + 2n) ≈ √2 · √n, and we drop the constant √2. Let's compare the four:

[Graph: number of throws against building height for the four strategies]

  • red: with 1 egg: O(n)
  • green: with an infinity of eggs: O(log₂ n)
  • orange: our first 2-egg solution: O(2√n)
  • blue: our second 2-egg solution: O(√(1/4 + 2n) − 1/2)

We can clearly see the difference between O(n), O(√n) and O(log₂ n), regardless of the constants we dropped. Of course, the constants make sense when comparing two algorithms of the same overall complexity (for 1,600 floors, there is a 30-throw offset between the first two-egg solution and the second).


Going further...

Up until this point we've counted the number of steps. We call this the "time complexity" because it gives a sense of how long the algorithm will take to run.

The same logic can apply to the amount of memory the algorithm would take: "space complexity". In this case, you just need to count how many variables will need to be created.

For example, if you want to count the number of unique entries in an array, your first bet would probably be to use a Set: for each new entry, you'd check whether it exists in the Set and create it otherwise. This algorithm runs in O(n) time (because you have to go through each entry) and O(n) space because, in the worst case scenario, the array is full of unique items and you create n new entries in the Set. (A Bloom filter would probably be an interesting alternative in O(n) time and O(1) space, but it comes with some false positives; we won't get into that now.)
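A minimal sketch of that Set-based count:

```typescript
// Counting unique entries with a Set: O(n) time, O(n) space.
function countUnique(entries: string[]): number {
  const seen = new Set<string>();
  for (const entry of entries) {
    seen.add(entry); // adding an existing entry is a no-op
  }
  return seen.size;
}

console.log(countUnique(["a", "b", "a", "c", "b"])); // 3
```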

But... there's more.

We don't have to only talk about the "worst case scenario" like we did during this article. The worst case scenario actually is the "asymptotic tight upper bound" of an algorithmic function, which means this is exactly how slow it will be if everything goes wrong.
For example, if you say "I'll arrive in 20 min, 1 hour if there's traffic", 1 hour is your upper bound. That's basically Big O. 20 min, however, is the "tight lower bound", or Big Omega: Ω(n).
You could also have just said "I'll arrive at work before the end of the day": that would be a "loose upper bound", or little o: o(n). And if you say "I'll arrive in an instant", that would be a "loose lower bound", or little omega: ω(n).
If you know that you will arrive in 40 min, that would be the "expected average case scenario", or Big Theta: Θ(n). It is in fact the tight bound of the function, both upper and lower.

[Graph: f(n) running in Θ(n), bounded by O(n) and Ω(n), with o(n²) above and ω(log n) below]

If we look at the graph, our algorithm f(n) runs in Θ(n) and is bounded between O(n) and Ω(n). In the long run, we know it won't get worse than o(n²) nor better than ω(log n).

Common complexities

  • O(1): direct access (e.g. in an array or a HashMap / Set)
  • O(n): looping through the input a constant number of times (e.g. index lookup, counting)
  • O(n²): for each input, look at every other input (e.g. bubble sort)
  • O(log₂ n): divide the problem (e.g. binary tree lookup)
  • O(n log n): sorting
  • O(n!): combinatorics
