Alexandru Nastase
Demystifying Time Complexity and Big O Notation 👨‍💻

“Great job, you found a solution! Now, what is the time complexity of your solution?” This question resonates in my head from the sheer amount of times I’ve heard it. Time Complexity and Big O notation are absolutely essential to any developer, regardless of whether they are interviewing, working, freelancing, or just learning. However, it can be a bit of an abstract topic, and thus more difficult to grasp. Let’s have a crack at demystifying the oh-so scary Big O!

Background

First and foremost, let’s understand why Time Complexity matters. Any application or piece of code runs in some time. Even if sometimes it seems like it’s running instantaneously, it’s still taking some time. Here’s an example:

import time

# time.perf_counter() is better suited than time.time() for measuring
# elapsed time, since it uses a high-resolution, monotonic clock.
time_start = time.perf_counter()
for i in range(100):
    print(i)
time_end = time.perf_counter()
print(time_end - time_start)

Here, we make use of the time module in Python to assess the runtime of our code. Even though we might perceive it as virtually instant, in my case this simple program that prints 100 numbers took about 0.007s.

That doesn’t seem like a lot! Remember, though, that this is a toy example. In real life we would most likely not have an arbitrary number like 100 in there; we’d be iterating over some data, and there might be hundreds of thousands of entries, if not more. And that is just one function call, which we might invoke many times.

It’s easy to see how it can all add up to quite noticeable amounts of time. This naturally leads to the question “How can I make my program as efficient as possible?” But before we get there, we need a way to measure this efficiency.

Time Complexity 101

Simply put, Time Complexity is a way of measuring this efficiency: roughly, how long a program takes to run in terms of its input size. That’s the simple way to think about it. A more technical (and accurate) answer is that it measures how many times an operation is executed for a given input size. This abstracts away overhead like hardware, operating system, and programming language, all of which can drastically change how long a single operation takes.

Example

Let’s take an example to see how Time Complexity can help us determine the efficiency of our program.

Consider you are at the gym and looking at the rack of weights. Because the gym has really nice people, the rack is always in increasing order of weights.

Array of Integers

Now, suppose you want to pick the lowest weight available. In this case, you’d visit the rack and pick the first element: 3. That was easy! Now, let’s think in terms of Time Complexity. Remember, it means the number of operations we perform given our input size. We have 7 weights on the rack, but we only cared about the first one, the smallest. What would happen if, instead of 7 weights on the rack, we had 100? The time our program takes would stay the same, as we’d still look only at the first element. This is what we call Constant Time Complexity.
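In code, picking the smallest weight is just an index into the sorted list. A minimal sketch (the weight values and function name are illustrative):

```python
# Rack of weights, sorted in increasing order (illustrative values).
weights = [3, 5, 8, 10, 12, 15, 20]

def lightest(rack):
    # One operation, no matter how many weights are on the rack: O(1).
    return rack[0]

print(lightest(weights))  # 3
```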

Now, consider the case where you’re an advanced gym goer and would want to pick the highest weight available. One way to solve this problem would be to traverse the whole array, keep track of the current maximum, and at every step check “Is this heavier than my current maximum?”. To reach weight 20, we’d have to check all the weights before that. That’s 7 whole operations! And now if we increase the weights on the rack to 100, we’d have to do 100 operations. We can see that as the input size increases, the number of operations increases proportionally, or linearly. We call this Linear Time Complexity.
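The max-tracking approach described above might be sketched like this (again with illustrative values):

```python
# Rack of weights, sorted in increasing order (illustrative values).
weights = [3, 5, 8, 10, 12, 15, 20]

def heaviest(rack):
    # Visit every weight once, keeping track of the current maximum: O(n).
    current_max = rack[0]
    for w in rack[1:]:
        if w > current_max:
            current_max = w
    return current_max

print(heaviest(weights))  # 20
```

Doubling the rack doubles the number of comparisons, which is exactly what "linear" means here.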

Big O

Big O notation is a way to express these categories of Time Complexity in mathematical rather than verbal form. We write the letter O and use it like a mathematical function, with an argument. That argument describes how the number of operations grows as the input grows. But to what extent are they related?

In our first example, we achieved a Constant Time Complexity, meaning the number of operations remains the same as input increases. The Big O notation for that is O(1). 1 denotes a constant, a way of saying “regardless of how big my input is, I will perform an independent and constant amount of operations.”

In our second example, we achieved Linear Time Complexity, meaning the number of operations increases proportionally with the input size. The Big O notation for that is O(n). n denotes linearity, a way of saying “if my input size is n, I will perform n operations.”

Other examples include O(n^2) meaning my code will execute n^2 operations for an input size of n (for example, traversing all possible pairs in an array), and O(log n) meaning the search space of n is cut in half at each operation (for example, most divide and conquer algorithms such as Binary Search).
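Both of these can be sketched briefly. The nested loops below touch every pair, giving O(n^2), while binary search halves the search space on each step, giving O(log n) (a sketch, assuming the input list is sorted in ascending order):

```python
def all_pairs(items):
    # Nested loops over the same input: about n^2 / 2 iterations, O(n^2).
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pairs.append((items[i], items[j]))
    return pairs

def binary_search(sorted_items, target):
    # Halve the search space each step: O(log n). Requires sorted input.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(len(all_pairs([3, 5, 8, 10])))         # 6 pairs for 4 items
print(binary_search([3, 5, 8, 10, 12], 10))  # index 3
```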

If we plot those functions, we get the following:

Graph depicting different time complexities

A lil’ math behind Big O

The Conundrum

Let’s take a simple linear search example on the same array as above.

Array of Integers

Suppose we’re looking for item 12. One approach would be to traverse the array in search of our element. In the worst case, 12 would sit at the end of the array and we’d traverse every single element before reaching our destination. That sounds like O(n). However, if 12 awaits us in the first position, we read the first element and return immediately, regardless of the size of the input. That sounds like O(1)! So which one is it? 🤔
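A linear search that also counts its comparisons makes the two scenarios concrete (a sketch; the comparison counter is only there for illustration):

```python
# Rack of weights, sorted in increasing order (illustrative values).
weights = [3, 5, 8, 10, 12, 15, 20]

def linear_search(rack, target):
    # Returns (index, comparisons made); index is -1 if not found.
    comparisons = 0
    for i, w in enumerate(rack):
        comparisons += 1
        if w == target:
            return i, comparisons
    return -1, comparisons

print(linear_search(weights, 3))   # (0, 1)  best case: one comparison
print(linear_search(weights, 12))  # (4, 5)  somewhere in between
print(linear_search(weights, 20))  # (6, 7)  worst case: n comparisons
```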

Upper Bound

Big O is used to refer to the upper bound of the number of operations based on a given input. In other words, what is the worst possible number of operations my program has to execute. Our program always executes that amount or less. For our above example, the correct answer is O(n).

Input size plays a role

Big O reflects how the program behaves on large input sizes, but not necessarily on small ones. Big O notation doesn’t specify how fast something is in absolute terms. An O(1) program isn’t necessarily fast; its speed is simply constant as the input scales. Consequently, an O(n^2) program can be faster than an O(n) program for small enough inputs.
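To see why, compare operation counts for two hypothetical algorithms: a linear one with a large constant factor (say, 100 operations per element) and a quadratic one with no such overhead. Both functions and the constant 100 are made-up for illustration:

```python
def linear_ops(n):
    # Hypothetical O(n) algorithm with a large constant factor: 100 * n operations.
    return 100 * n

def quadratic_ops(n):
    # Hypothetical O(n^2) algorithm: n * n operations.
    return n * n

for n in (10, 50, 1000):
    winner = "quadratic" if quadratic_ops(n) < linear_ops(n) else "linear"
    print(f"n={n}: linear={linear_ops(n)}, quadratic={quadratic_ops(n)} -> {winner} wins")
```

For n = 10 the quadratic algorithm does fewer operations (100 vs 1000), but by n = 1000 the linear one is far ahead, which is why Big O focuses on large inputs.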

So which one do I choose?

There is no direct answer to this. In the vast majority of cases, in real life you’d want to pick the program with the least operations for a given input, and this is also what you’d be searching for in technical interviews. However, reality is always more nuanced and the choice sometimes depends on the situation.

Recap ✍️

  • Time Complexity is an essential concept to grasp for writing more efficient code
  • Time Complexity is a measure of operations executed by our program given a certain input size
  • Big O helps us grasp mathematically how the number of operations increases with the input
  • Big O is an upper bound, meaning our program always executes that amount or less
