Kostas Bariotis

Big-O Notation Cheatsheet

This post is my notes from this excellent article on complexity by Greek cryptography/blockchains PhD candidate Dionysis "dionyziz" Zindros. I highly recommend it to anyone interested in learning about algorithm complexity, even for the first time. If you already have an idea of what time complexity is, I hope this post serves as a fresh reminder.

Scalability is probably one of the most used terms nowadays, as everyone is struggling to serve more users. But while we can set up infrastructure to handle such a load, we also need to write code that can do the job. To analyze our code and figure out how efficient it is, we use Big-O notation. It also helps us predict how our code will behave as the load increases. Last but not least, we use Big-O notation to compare algorithms.

Counting Instructions

A function takes a number of instructions to run, and that number may differ based on the language it is written in or the architecture it runs on. Consider this snippet, which finds the largest element of a list:

let element = list[0];

for (let i = 0; i < n; ++i) {
  if (list[i] >= element) {
    element = list[i];
  }
}

The above function needs 4 + 2n instructions to run, so we can describe its running time as f(n) = 4 + 2n. (Can you figure out how we came up with this?)
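Here is one plausible way to arrive at that count (conventions for counting instructions vary, so treat the exact numbers as a sketch):

let element = list[0];          // 2 instructions: an array lookup and an assignment

for (let i = 0; i < n; ++i) {   // 2 more constant instructions: i = 0 and the final, failing i < n check
  if (list[i] >= element) {     // each of the n iterations: 1 comparison...
    element = list[i];          // ...plus, in the worst case, 1 assignment
  }
}

That gives 4 constant instructions plus 2 per iteration, i.e. f(n) = 4 + 2n, if we ignore the loop's own comparison and increment inside each iteration.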

Worst-case analysis

With the term worst-case analysis, we refer to the scenario in which the function runs the maximum number of instructions it can run. For the function above, the worst case is a list sorted in ascending order, where the assignment inside the if executes on every iteration.

Asymptotic behavior

Because we want to examine a function in general, and not its performance in a specific architecture or language, we simplify the instruction count by throwing away the terms that grow slowly, keeping only the fastest-growing term and dropping its constant factor. So f(n) = 4 + 2n becomes f(n) = n. The function still grows at the same rate as before, and it looks much simpler. For example, f(n) = 5n^2 + 3n + 7 would simplify to f(n) = n^2.

Complexity

We refer to the asymptotic behavior of the worst-case scenario of an algorithm as its complexity, and we write it using Θ. So, the complexity of the function f(n) = 4 + 2n is Θ(n), read as "theta of n".

Big-O notation

As we start to analyze more complex functions, we will see that it is sometimes hard to figure out the exact instruction count of the worst-case scenario. Instead, we can alter the algorithm to make it slightly worse (in our minds, of course), so that we reason about a scenario that is even worse than the actual one. If we are sure that the complexity we came up with is at least as bad as the actual one, we write it using O, read as "big oh". E.g. O(n^2) means that our function performs asymptotically no worse than n^2.

Upper bound

O(n^2) is then an upper bound on the actual time complexity. It may also be the actual complexity, in which case the bound is tight. When we know that an upper bound is certainly not tight, we write it with a lowercase o. So, an algorithm whose complexity is Θ(n) is O(n) (a tight bound) and o(n^2) (a bound that is not tight).

Lower bound

Just as we found an upper bound for the worst-case scenario, it is also useful to find a lower bound: a complexity that our function cannot do better than. We use the notation Ω, read "big omega", for a lower bound that may be tight, and ω, read "small omega", for a lower bound that is certainly not tight.
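Putting all five symbols together:

- Θ(n): a tight bound
- O(n): an upper bound, possibly tight
- o(n): an upper bound, certainly not tight
- Ω(n): a lower bound, possibly tight
- ω(n): a lower bound, certainly not tight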

Logarithms

A logarithm takes a number and makes it much, much smaller. Just as square roots are the inverse operation of squaring a number, logarithms are the inverse operation of exponentiating a number.

Algorithms that split their input in half on every iteration tend to have a log(n) factor in their complexity, and they are usually among the most efficient. Binary search, for example, is O(log(n)), while sorting algorithms that split their input, such as quick-sort and merge-sort, are O(n * log(n)).
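Here is a minimal sketch of binary search (assuming the input array is already sorted in ascending order); it halves the search space on every iteration:

function binarySearch (sortedItems, match) {
  let low = 0;
  let high = sortedItems.length - 1;

  while (low <= high) {
    // check the middle element and discard half of the remaining range
    const mid = Math.floor((low + high) / 2);

    if (sortedItems[mid] === match) {
      return mid;
    } else if (sortedItems[mid] < match) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }

  return -1;
}

Since the range shrinks by half on every step, at most about log2(n) iterations are needed, hence O(log(n)).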

Examples

O(1)

function getLast (items) {
  // a single array lookup, no matter how many items there are
  return items[items.length - 1];
}

O(n)

function findIndex (items, match) {
  // in the worst case (no match), the loop visits all n items
  for (let i = 0, total = items.length; i < total; i++) {
    if (items[i] === match) {
      return i;
    }
  }

  return -1;
}

O(n^2)

function buildSquareMatrix (items) {
  let matrix = [];

  // two nested loops of n iterations each: n * n = n^2 pushes in total
  for (let i = 0, total = items.length; i < total; i++) {
    matrix[i] = [];

    for (let j = 0; j < total; j++) {
      matrix[i].push(items[j]);
    }
  }

  return matrix;
}

O(n * log(n))

Quick-sort splits its input around a pivot on every call. On average this yields O(n * log(n)) complexity; the worst case is O(n^2), when the pivot always happens to be the smallest or largest element.

function quickSort (list) {
  if (list.length < 2) {
    return list;
  }

  // partition around the first element as the pivot
  let pivot = list[0];
  let left = [];
  let right = [];

  for (let i = 1, total = list.length; i < total; i++) {
    if (list[i] < pivot) {
      left.push(list[i]);
    } else {
      right.push(list[i]);
    }
  }

  // recursively sort each partition; each level of recursion does O(n) work
  return [].concat(quickSort(left), pivot, quickSort(right));
}
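For example, quickSort([5, 3, 8, 1]) returns [1, 3, 5, 8]. Note that this sketch allocates new left and right arrays on every call; production quick-sorts usually partition in place instead.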

Top comments (4)

Mihail Malo • Edited

How is slice so much faster?

function buildSquareMatrix2(items) {
    const len = items.length
    let matrix = []

    for (let i = 0; i < len; i++) {
        const next = []

        for (let j = 0; j < len; j++) {
            next[j] = items[j]
        }

        matrix[i] = next
    }
    return matrix
}

function buildSquareMatrix3(items) {
    const len = items.length
    let matrix = []

    for (let i = 0; i < len; i++) {
        matrix[i] = items.slice()
    }
    return matrix
}
buildSquareMatrix        9818
buildSquareMatrix2       9162
buildSquareMatrix3       733
Kostas Bariotis • Edited

Most of JavaScript's functions on objects (Array objects included) use some sort of hashing algorithm to iterate over their properties [citation needed]. It's not just a for loop as in your example.

Mihail Malo • Edited

And WOW, creating the array at the correct size really makes a huge difference:

function buildSquareMatrix2(items) {
    const len = items.length
    let matrix = new Array(len)

    for (let i = 0; i < len; i++) {
        const next = new Array(len)

        for (let j = 0; j < len; j++) {
            next[j] = items[j]
        }

        matrix[i] = next
    }
    return matrix
}
buildSquareMatrix        9504
buildSquareMatrix2       1528
buildSquareMatrix3       678

Less than 3x slowdown over .slice() while still allowing you to modify each value!

Mihail Malo

Yeah, in theory. But V8 does a lot of runtime optimization. A for loop incrementing by 1 over specifically a continuous array surely doesn't actually do hashmap lookups with the integer at every iteration. I'm sure that no kind of SIMD is taking place, but when it sees that there are no break/continue branches, surely it at least iterates over the values directly?