Samuel Adafia

Big O Notation

Almost all coding challenges have multiple approaches to solving them. Though all the different approaches get the job done, it is possible to rank them according to performance and efficiency. A standard system used by engineers to achieve this is the Big O Notation.


What is Big O Notation?

Big O Notation is a way of classifying algorithms by assigning mathematical expressions based on the algorithm's runtime (time complexity) and memory/space requirement (space complexity).

Benefits

Big O Notation is important because:

  • It provides a precise vocabulary for talking about the performance of algorithms.
  • It is useful for discussing trade-offs between different approaches to algorithm design.
  • It helps in debugging: it makes it easier to identify inefficient parts of the code and figure out the reasons behind slow performance or crashes.

Time and space complexity

In this section, I'll be attempting to explain the basic concepts of Big O Notation by analysing and classifying two valid solutions to a simple code challenge.

Question 1

Write a function that calculates the sum of all numbers from 1 up to and including a given number n.

Solution 1
function addUpto(n) {
  return n * (n + 1) / 2;
}
Solution 2
function addUpto(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

Which of the above solutions is better?

Discussion

Before we answer the above question, we would have to define what better means.

  • Does better mean faster?
  • Does better mean less memory-intensive (the data that is stored in memory each time the function is called)?

Time Complexity

Let's take the first question: "Does better mean faster?"

In thinking of the speed of algorithms, it may seem intuitive to record the time it takes for an algorithm to complete its task but doing that will lead to inconsistencies. This is because computers vary in hardware and computing power. A computer with a lot more resources will take a shorter period to complete a given algorithm than a computer with fewer resources.

So instead of counting seconds, which change based on computing power, consistent and standardized results can be achieved by counting the number of simple operations the computer has to perform within an algorithm. The time it takes for a computer to execute an algorithm is directly proportional to the number of operations it has to perform. Hence the greater the number of operations the longer it takes to execute.

Now that we have established the criteria for analysing an algorithm's runtime, let's count the number of simple operations in our solutions.

Solution 1
function addUpto(n) {
  return n * (n + 1) / 2;
}

Solution 1 has a total of three (3) simple operations:

  • one multiplication n * ...,
  • one addition n + 1, and
  • one division ... / 2.

The number of operations in solution 1 does not change with increasing or decreasing the magnitude of the input (n). No matter how small or large n is, the number of operations will always be 3.

Solution 2
1. function addUpto(n) {
2.   let total = 0;
3.   for (let i = 1; i <= n; i++) {
4.     total += i;
5.   }
6.   return total;
7. }

In solution 2:

  • line 2 shows 1 variable assignment let total = 0;
  • line 3 shows:
    • 1 variable assignment let i = 1;,
    • n comparisons i <= n;,
    • n additions and n variable assignments i++.
  • line 4 shows n additions and n variable assignments.

It also makes use of a loop which runs n times. Thus, if n is 5, the loop will run five times. However, depending on what you count, the number of operations can be as low as 2n or as high as 5n + 2. Hence, regardless of how precisely we count, the number of operations grows roughly proportionally with n.

Consequently, unlike solution 1, counting the number of operations in solution 2 is a lot more complicated. Fortunately, Big O Notation doesn't necessarily care about precision, only about general trends.
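To make this concrete, we can instrument the loop with a counter that tallies the additions performed inside it. This is a rough sketch, not part of the original solutions, and the name addUptoCounted is made up for illustration:

```javascript
// Like Solution 2, but also counts the additions performed by the loop.
// The count scales directly with n: addUptoCounted(5) performs 5 additions,
// addUptoCounted(1000) performs 1000.
function addUptoCounted(n) {
  let total = 0;
  let operations = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
    operations++;
  }
  return { total, operations };
}

addUptoCounted(5); // { total: 15, operations: 5 }
```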

Based on the above, "time complexity with respect to big O notation is the measure of how the runtime of an algorithm grows as its input grows."

Big O notation provides the following common terminology for classifying the runtime (time complexity) of algorithms.

Constant O(1)

An algorithm with a constant time complexity does not have its runtime significantly affected by an increase or decrease in the magnitude of its input (n). As the value of the input (n) grows, the runtime of the algorithm stays essentially constant.
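As a quick sketch (the function getFirst is a hypothetical example, not from the post), looking up an array element by index is a classic constant-time operation:

```javascript
// Returns the first element of an array.
// One index lookup and one return: the operation count stays the same
// whether the array has 3 elements or 3 million.
function getFirst(arr) {
  return arr[0];
}

getFirst([7, 8, 9]); // 7
```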

Linear O(n)

An algorithm is described as having a linear time complexity when its runtime scales proportionally with its input(n).

Quadratic O(n^2)

An algorithm with a quadratic time complexity has its runtime scaling at a rate which is approximately the square of its input (n). Thus, as the value of n increases, the runtime of the algorithm grows roughly as n^2; doubling the input quadruples the work.
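A nested loop over the same input is the classic O(n^2) pattern. As an illustrative sketch (the name allPairs is hypothetical):

```javascript
// Collects every ordered pair of elements. The outer loop runs n times,
// and for each outer iteration the inner loop also runs n times,
// giving roughly n * n operations in total.
function allPairs(arr) {
  const pairs = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      pairs.push([arr[i], arr[j]]);
    }
  }
  return pairs;
}

allPairs([1, 2]).length; // 4 pairs for an input of length 2
```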

Solution 1
function addUpto(n) {
  return n * (n + 1) / 2;
}

So the Big O of solution 1 is O(1) (read as O of 1). This means that the number of simple operations the computer has to do is constant as n increases.

Solution 2
function addUpto(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

Solution 2 has a big O of O(n) (read as O of n). This means that the number of operations is bounded by a multiple of its input (n).
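Although the two solutions differ in time complexity, they always return the same result. As a quick sanity check (the names addUptoFormula and addUptoLoop are used here purely for illustration):

```javascript
// Solution 1: constant time, O(1)
function addUptoFormula(n) {
  return n * (n + 1) / 2;
}

// Solution 2: linear time, O(n)
function addUptoLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// Both agree for any n, e.g. addUptoFormula(100) and addUptoLoop(100)
// both return 5050, but the formula does the same three operations
// no matter how large n gets.
```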

Space Complexity

Space complexity is usually concerned with the amount of memory needed to run a given algorithm.

Auxiliary space complexity: space/memory required by the algorithm without considering the space taken by the inputs.

For the scope of this blog post, space complexity refers to auxiliary space complexity.

Helpful tips when thinking about space complexity

  • Most primitives (booleans, numbers, undefined, null) are constant space O(1).
  • Strings require O(n) space (where n is the length of the string).
  • Reference types are generally O(n), where n is the length of the array or the number of keys in the case of objects.
1. function sum(arr) {
2.   let total = 0;
3.   for (let i = 0; i < arr.length; i++) {
4.     total += arr[i];
5.   }
6.   return total;
7. }

The above function adds all numerical elements of a given array and returns the total.
Since we are only concerned with the memory required by the function/algorithm itself, we disregard the space taken by the input array. The only space-consuming elements we care about are on lines 2 and 3: no matter the size/length of the array provided to the function, the variables total (line 2, let total = 0;) and i (line 3, let i = 0;) each occupy a constant amount of space.

So the big O of the sum function with respect to auxiliary space complexity is O(1).

1. function double(arr) {
2.   let newArr = [];
3.   for (let i = 0; i < arr.length; i++) {
4.     newArr.push(2 * arr[i]);
5.   }
6.   return newArr;
7. }

The function double takes in an array of numbers and returns a new array with each element of the input array doubled.

One thing to note is that an empty array is instantiated on line 2, and on line 4 it grows in proportion to the length of the input array. The memory required by the instantiated array therefore increases proportionally with n, so the space complexity of the double function is O(n).

Brief introduction to Logarithms

Although some of the common algorithms can be classified under constant O(1), linear O(n) and quadratic O(n^2) space and time complexities, there are a lot of algorithms that are not adequately described by the above. These involve more complex mathematical expressions such as logarithms.

Logarithms are the inverse of exponentiation. Just as division and multiplication are a pair, exponentiation and logarithms are a pair.

Logarithmic space and time complexities are expressed as O(log n) and O(n log n).
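Binary search is the classic O(log n) example: each comparison halves the remaining search range, so doubling the input adds only about one extra step. A minimal sketch, assuming a sorted input array:

```javascript
// Searches a sorted array by repeatedly halving the search range.
// For an array of length n, at most about log2(n) + 1 comparisons are needed.
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // target not found
}

binarySearch([1, 3, 5, 7, 9], 7); // 3
```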

Conclusion

To sum this all up, here are a few things to remember about Big O Notation.

  • To analyze the performance of an algorithm, we use Big O Notation
  • Big O Notation can provide us with a high-level understanding of the time or space complexity of an algorithm.
  • Big O Notation doesn't necessarily care about precision, only about general trends (linear, quadratic or constant)
  • The time or space complexity (as measured by Big O) depends only on the algorithm, and not the hardware used to run the algorithm.

Cover image by Jeremy Perkins on Unsplash

Top comments (19)

Xapuu

Great introduction to Big O, I enjoyed the read. Also, I think it will be very helpful for people with no CS background, as everything is explained from ground zero and kept simple, and at the same time it is scientific enough. I'm looking forward to your next article :)

Samuel Adafia

Thank you so much Xapuu for your feedback. I do not have a CS degree myself and I'm glad I can bring clarity to concepts like this to anyone who needs it. I'm currently writing another piece on Analysing objects and arrays through the lens of Big O Notation. You can also suggest topics you would want me to research and write about. Once again thank you very much Xapuu.

Thamaraiselvam

It's like recollecting old memories. Nice one @samuel

Samuel Adafia

I'm glad this served as a reminder of the early part of your journey to becoming a developer.

Dennis Ploeger

Thanks so much. This is the first time I finally understood, what Big O really means.

Samuel Adafia

Thank you for your feedback, Dennis.

MCFrank16

Good read Adafia.

Samuel Adafia

Thank you, Frank. I appreciate your feedback.

thomas nignan

A delicacy to the eye and the brain!
Thank you for this post. Very insightful!

Samuel Adafia

Thanks Thomas

A. Rashid

It's a great article. I loved the way the theory of Big O notation has been simplified for human intuition. Awaiting more from the author on other topics as well. Best of luck!

Samuel Adafia

Thank you, Rashid.

Andrew Baisden

Good read thanks for the article.

Samuel Adafia

You are welcome, Andrew. I am glad this was helpful.

Patrick Shema Karamuka

Great article Samuel

Samuel Adafia

Thank you, Patrick.

Onyemenam Ndubuisi

Good Read Chief

Samuel Adafia

Thank you, Onyemenam. Much appreciated.

Thamaraiselvam

I can understand basic constant, linear and quadratic, but the log part confuses me. How do people tell, just by looking at a problem, that its complexity involves a log?