Big O, The Confusing Stuff

Time Complexity is without a doubt one of the most difficult topics to understand when you're first finding your bearings in computer science. I'm here to make the confusing parts a little easier. Specifically, any confusion you may still have regarding exponential, quadratic and logarithmic time complexities.

But before we jump head first into the deep end, let's make sure we are comfortable with the easy stuff.

Simply put, time complexity is a tool for analyzing how the running time of our programs scales with the size of their input. A lower time complexity means a program slows down less as its input grows; a higher time complexity means it slows down faster.

Constant and linear complexities are represented in Big O notation as-

O(1) && O(n)

respectively, and are easy enough to understand on their own. A constant time complexity will take the same amount of time to perform its operations no matter the size of the input, so we denote it as "1".

The time complexity of linearly scaled programs will always exist in direct relation to the size of the input, represented above as n.

Popping an element off the end of an array will always be constant because the length of the array doesn't matter.

let arr = ["one", "two", "three"];
arr.pop(); // ---> "three"

And looping through that same array will always be linear, because the number of operations is directly tied to the length of the array.

for (let i = 0; i < arr.length; i++) {
  console.log(arr[i]); // one operation per element
}

It might help to conceptualize this if you replace the 'n' with the length of the array itself. Loosely speaking, looping through our 'arr' array is O(3) before popping off the end, and O(2) after popping off the end.

But that's the easy stuff, and I'm explicitly here to demystify the confusing parts. So let's get into it.

Exponential

Before we talk about exponential time complexity, we should first strive to understand what the word itself means. 'Exponential' refers to 'an increase that is becoming more and more rapid'.

Another way to put that is that it's an increase that is increasingly increasing. We represent exponential time complexity as

O(2^n)

where n is, you guessed it, the size of the input.

As you can imagine, having your program take an exponential amount of time to complete its operations could be an absolute disaster in most use cases, especially when working with large sets of data.

For example, if you had a function with an exponential time complexity traversing over an array with a length of 20, you could calculate the maximum number of operations it would need to perform by plugging 20 in for n:

   2 ^ 20 = 1,048,576

That means our exponential function would, in its absolute worst case scenario, perform 1,048,576 operations.

I've charted out below how the operation count of our fictional program scales upwards to over a million as the size of the array scales upwards to twenty. This should help visualize what an "increase that is increasingly increasing" actually means.

Follow the numbers to the right, and see what you notice.

2 ^ 1 = 2
2 ^ 2 = 4
2 ^ 3 = 8
2 ^ 4 = 16
2 ^ 5 = 32
2 ^ 6 = 64
2 ^ 7 = 128
2 ^ 8 = 256
2 ^ 9 = 512
2 ^ 10 = 1,024
2 ^ 11 = 2,048
2 ^ 12 = 4,096
2 ^ 13 = 8,192
2 ^ 14 = 16,384
2 ^ 15 = 32,768
2 ^ 16 = 65,536
2 ^ 17 = 131,072
2 ^ 18 = 262,144
2 ^ 19 = 524,288
2 ^ 20 = 1,048,576


Each time the length increments, the number of operations doubles.

Growth looks steady at first, taking nine steps to even break a thousand, then five more after that to break ten thousand. But only four steps later we see jumps by hundreds of thousands, and just two more steps after that we are at a million. There is more of a difference between 2^18 and 2^20 than there is between all of the previous results combined.
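
If you'd like to watch the doubling happen yourself, a quick loop will generate that whole chart (a snippet of my own):

// print 2^n for n = 1 through 20 to watch the doubling
for (let n = 1; n <= 20; n++) {
  console.log(`2 ^ ${n} = ${(2 ** n).toLocaleString()}`);
}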

The most common practical example you'll see for exponential complexity is a function designed to obtain a Fibonacci number recursively. (Strictly speaking, the naive recursive version grows at roughly 1.618^n, but O(2^n) is the upper bound you'll usually see quoted.)

let fibonacci = function (number) {
  // base cases: fibonacci(0) = 0, fibonacci(1) = 1
  if (number <= 1) {
    return number;
  }
  // each call branches into two more calls, roughly doubling the work
  return fibonacci(number - 2) + fibonacci(number - 1);
};

The larger the input, the more and more operations it will take to get our output fibonacci number. Play around with this some and see if you can make a non-recursive version yourself. You know, just for fun.
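
If you want to check your answer, here's one possible non-recursive sketch (my own take, not the only way to do it) that runs in linear time instead:

// iterative Fibonacci: O(n) instead of O(2^n)
let fibonacciIterative = function (number) {
  if (number <= 1) {
    return number;
  }
  let previous = 0;
  let current = 1;
  // walk forward one step at a time, carrying the last two values
  for (let i = 2; i <= number; i++) {
    let next = previous + current;
    previous = current;
    current = next;
  }
  return current;
};

fibonacciIterative(10); // ---> 55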

Quadratic (polynomial)


Again, before relating it to time complexity, let's lay out what 'quadratic' actually means. In simple terms, it is a fancy word that mathematicians use to denote when something has been squared, which is why we represent it as-

O(n^2)

Simple enough, right? The maximum number of operations in a quadratic function would be the length of the input times itself.

Something like this:

let repeatArrayByLength = function (arr) {
  let newArr = [];
  // the outer loop runs once per element...
  for (let i = 0; i < arr.length; i++) {
    // ...and the inner loop pushes every element each time,
    // for n * n pushes in total
    for (let j = 0; j < arr.length; j++) {
      newArr.push(arr[j]);
    }
  }
  return newArr;
};

This scales quadratically because as the length of the input grows, the number of operations our function must perform grows by the length of the array times itself, n^2.

Notice that if we pass in an array with the length of three, like so:

repeatArrayByLength([1, 2, 3])

We will get back-

[1, 2, 3, 1, 2, 3, 1, 2, 3]

Our function carried out 9 pushes into the array, from an input length of 3.

3^2, or 3 * 3, is 9.

If we were to pass in a similarly structured array with a length of 5, we would get back:

[1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5]

Our function pushed into the array 25 times.

5^2, or 5 * 5, is 25.
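
Since every push adds one element, you can sanity-check the operation count directly (a quick snippet of my own):

repeatArrayByLength([1, 2, 3, 4, 5]).length; // ---> 25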

Logarithmic


I like to think of logarithmic time complexity like the lawful good version of exponential complexity, represented in Big O as-

O(log n)

The number of operations in an exponential function increases faster and faster as the input grows, whereas the number of operations in a logarithmic function grows more and more slowly as the size of the input increases. The larger the set of data, the better the ratio of operations to input size.

Here's a simple example:

let doubleStepper = (number) => {
  let arr = [];
  // i doubles on every pass, so the loop runs about log2(number) times
  for (let i = 1; i < number; i *= 2) {
    arr.push(i);
  }
  return arr;
};

When we pass in 5, we will get back:

[1, 2, 4]

Which is only three operations. Now if we passed in 100, we will get back:

[1, 2, 4, 8, 16, 32, 64]

Which is only seven operations. Functions with logarithmic complexity become dramatically more efficient, relative to their input, as the data grows larger.

Finally, let's go crazy and pass in 5000, getting back:

[1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]

Which isn't really that much of a jump, only thirteen processes in total. Notice that, like I said before, it becomes increasingly efficient as the size of the input grows. There are more jumps between 1 and 32 than there are between 1024 and 4096.
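
A more practical taste of logarithmic scaling is binary search, which throws away half of the remaining search space with every comparison. Here's a sketch of my own (the doubleStepper above is a bit contrived by comparison):

// binary search: O(log n), assuming the input array is sorted
let binarySearch = function (sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    let mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) {
      return mid; // found it
    } else if (sortedArr[mid] < target) {
      low = mid + 1; // discard the lower half
    } else {
      high = mid - 1; // discard the upper half
    }
  }
  return -1; // not found
};

binarySearch([1, 2, 4, 8, 16, 32, 64], 16); // ---> 4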

I wasn't kidding about it being the lawful good version of exponential, here's the proof:
[chart: O(log n) and O(2^n) plotted together, mirror images of each other]

They share the same exact curve, just different directions.

Generally speaking, we should strive to keep our programs below exponential and quadratic complexity, and instead aim for constant, logarithmic, or at the bare minimum linear.

It's also important to keep in mind that you should only formulate time complexity based on the worst case scenario, and nothing more. Even a ridiculous function like bogosort, which shuffles an array at random until it happens to come out sorted, could, in theory, produce a correct result on the very first shuffle.
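
The original showed bogosort as an image; here's a typical sketch of it (my own version, with standard isSorted and Fisher-Yates shuffle helpers):

// check whether every adjacent pair is in order
let isSorted = function (arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    if (arr[i] > arr[i + 1]) return false;
  }
  return true;
};

// Fisher-Yates: swap each element with a randomly chosen earlier one
let shuffle = function (arr) {
  for (let i = arr.length - 1; i > 0; i--) {
    let j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
};

// keep shuffling until the array happens to come out sorted
let bogosort = function (arr) {
  while (!isSorted(arr)) {
    shuffle(arr);
  }
  return arr;
};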

That doesn't make it constant though, far from it actually. Bogosort has a factorial time complexity, O(n!), which is a discussion for another time.
