If you haven't heard of big O notation before, it can seem like an intimidating concept. You do a quick Google search just to see what the heck it is, and phrases like "runtime analysis" or "time and space complexity" leave you even more intimidated than before you looked it up. As intense as it may seem, after taking the time to learn the ins and outs, it really isn't as bad as it sounds.
Big O notation is a notation used in complexity theory, mathematics, and computer science. In mathematics, it describes the limiting behavior of a function as its argument tends towards a particular value or infinity. In computer science (which is what we are concerned with), big O notation describes how much time and space our algorithm takes as the size of the input grows.
In this article, I will go over the most common RUNTIME complexities that you will come across in your journey. I will have to write a part two on how and why to compute these different complexities, but for now this is a good start. When you are computing complexities, especially in an interview setting, you should always base your analysis on the worst-case scenario, because you can't count on your algorithm running in its best or even average case. It allows you to make analytical statements such as, "in the worst-case scenario, my algorithm will scale this quickly, and here's why." Here are the most common complexities you will have to know and understand.
1) Constant time - O(1)
Constant time describes algorithms that take the same amount of time no matter how big or small the input grows. If an input with two or even three elements takes the same amount of time to run as an input with one hundred elements, that algorithm runs in constant time. Say you had an array

```javascript
let array = [1, 2, 3, 4, 5];
```

and you were instructed to get the first element no matter what (`array[0]`); that is a basic example of constant time complexity.
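As a minimal sketch, that lookup can be wrapped in a small helper (the function name is my own choice, not from the article):

```javascript
// Accessing an element by index is a single operation, so this function
// takes the same time whether the array holds five elements or five million.
// That is what makes it O(1).
function getFirstElement(array) {
  return array[0];
}

console.log(getFirstElement([1, 2, 3, 4, 5])); // 1
```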
2) Linear time - O(n)
Linear runtime describes an algorithm whose time to execute is directly proportional to the size of the input. One non-code comparison for linear time is reading a book, where n (the size of the input) is the number of pages: twice the pages take roughly twice as long. Here is a code example of linear runtime.
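A minimal sketch of a linear-time algorithm (the function name and the summing task are my own choices for illustration):

```javascript
// Summing an array visits every element exactly once, so doubling the
// input doubles the work — O(n).
function sumArray(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
}

console.log(sumArray([1, 2, 3, 4, 5])); // 15
```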
3) Logarithmic time - O(log (n))
This is one of the more complicated or "mathy" terms to understand in big O notation. Logarithmic time complexity occurs when the time it takes to run the algorithm is proportional to the logarithm of the input size n. If you are familiar with binary search, then you're familiar with implementing a logarithmic algorithm. Here is an example:
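Here is a sketch of binary search over a sorted array (variable and function names are my own):

```javascript
// Binary search: each comparison discards half of the remaining elements,
// so the loop runs at most about log2(n) times — O(log n).
// The array must already be sorted for this to work.
function binarySearch(sortedArray, target) {
  let low = 0;
  let high = sortedArray.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) return mid;  // found: return the index
    if (sortedArray[mid] < target) low = mid + 1; // target is in the upper half
    else high = mid - 1;                          // target is in the lower half
  }
  return -1; // target not present
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```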
4) Quadratic time - O(n^2)
Quadratic time complexity occurs when the time to execute is proportional to the square of the input size. Any solution that involves nested loops or iterations typically falls into the category of quadratic time. If you are in an interview setting, I would recommend not solving the problem in quadratic time unless you know how to optimize it. Here is an example:
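A minimal sketch of a quadratic-time algorithm using nested loops (the duplicate-checking task is my own choice for illustration):

```javascript
// For each element we scan the rest of the array, so when no duplicate
// exists we make roughly n * n / 2 comparisons — O(n^2) in the worst case.
function hasDuplicate(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] === array[j]) return true;
    }
  }
  return false;
}

console.log(hasDuplicate([1, 2, 3, 4])); // false
console.log(hasDuplicate([1, 2, 3, 1])); // true
```

(For what it's worth, this particular problem can be optimized to linear time with a Set, which is the kind of improvement interviewers like to see.)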
These are the four most common time complexities you will encounter. Ranking these four from best to worst: constant, logarithmic, linear, quadratic. As the size of the input grows, a constant-time algorithm stays flat, a logarithmic one grows slowly, a linear one grows steadily, and a quadratic one grows the fastest.