🎯 Do you know that ZERO (0) is one of the greatest inventions of all time?

Stop that nonsense!

How is zero (0) related to Big O notation?

👉 Without it, we probably wouldn't have numbers and programming as we have them today.

Many people find it hard to understand O(log n) and similar complexities because they don't understand numbers and their groupings.

I discovered something interesting when I tried to understand Big O notation through the origin of numbers and their grouping.

Before the use of numbers, ancient people used their fingers and other countable things for counting.

Those counting systems (algorithms) metamorphosed into what we use today in programming and mathematics generally.

Whenever they wanted to count and group a lot of items, they needed a lot of space and time to do so.

So, they had to design writing symbols to make it easier.

This is probably how they started.

They represented a single count with the Hindu-Arabic numeral 1 as in:

```
1
```

Whenever they wanted to count two items, they wrote 1 twice as in:

```
11
```

And they repeated the same system for larger counts as in:

```
111
1111
11111
111111
1111111
11111111
111111111
```

Oops!

There was a problem.

They had to take a lot of time and space to re-write 1 many times.

So, they decided to abstract each of them into new symbols to save time and space, just like we do in computing:

# Abstraction in Numbers

They started abstracting multiple 1s into new symbols as in:

```
1
2 = 11
3 = 111
4 = 1111
5 = 11111
6 = 111111
7 = 1111111
8 = 11111111
9 = 111111111
```

Wow!

That was a great achievement for mankind.

Before I continue the story, let me quickly explain how the above helps in understanding some parts of Big O notation.

# What is Big O Notation?

"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." - Wikipedia

In this case, we mostly care about the worst-case scenario.

For example, check these numbers again:

```
1
11
111
1111
11111
111111
1111111
11111111
111111111
```

Thinking about the numbers above, you should realize that the time and space we need to compute (write) them grow as more 1s are added.

And that was why they abstracted them into new interfaces/APIs, or symbols, as in:

```
1
2 = 11
3 = 111
4 = 1111
5 = 11111
6 = 111111
7 = 1111111
8 = 11111111
9 = 111111111
```

Now, you can simply write 9 instead of 111111111.

Wow! We saved a lot of space and time.
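To see that saving concretely, here is a tiny JavaScript sketch (the tally string is just an illustration):

```javascript
// Nine as tally marks versus nine as a single abstracted symbol.
const tally = "1".repeat(9);    // "111111111"
console.log(tally.length);      // 9 characters of space
console.log(String(9).length);  // 1 character of space
```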

# Let's Dig a bit Deeper

### 1. Big O(1) - Constant runtime

input size(n) | O(1)
---|---
1 | 1
4 | 1
8 | 1
16 | 1
1024 | 1

Once again, check these numbers:

```
1
2 = 11
3 = 111
4 = 1111
5 = 11111
6 = 111111
7 = 1111111
8 = 11111111
9 = 111111111
```

Thinking about the numbers, you should realize that 1 is constant because every other number is made up of it. It is present in every other number as in:

```
2 = 11
3 = 111
```

And so on!

Also, 1 is a single input, and counting it takes a single operation. That means the time and space we need to handle it are always the same.

Similarly, an algorithm that takes an input of constant size and performs a single operation has Big O(1), that is, constant time and constant space complexity.

Let's express it in pseudo code as in:

```
func peel(fruit) {
  return "peeled " + fruit;
}
peel(banana);
```

This function takes constant time and constant space because it stores one input of constant size and takes a constant amount of time to peel the fruit.

What if we pass it an input with more than one element?

Get it straight: as long as the algorithm operates only once, it doesn't really matter whether the item is passed along with other items, as in:

```
func peel(fruits) {
  if (fruits.iterable) {
    return "peeled " + fruits[0];
  }
  return "peeled " + fruits;
}
peel([banana, orange, tomato])
```

So, it is still constant in time and space because the function operates on only one element regardless of the size of the input.

In this case, we only peel the banana.
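Here is a runnable JavaScript sketch of the same idea; the `peelFirst` name and the fruit strings are hypothetical stand-ins:

```javascript
// O(1): the function always touches only the first element,
// so its time and space cost does not grow with the input size.
function peelFirst(fruits) {
  return "peeled " + fruits[0];
}

console.log(peelFirst(["banana"]));                     // "peeled banana"
console.log(peelFirst(["banana", "orange", "tomato"])); // still "peeled banana"
```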

# Grouping of Numbers

The ancient folks realized they had another problem because they wanted to count beyond 9.

So, they decided to invent 0 to mean no count, and when it is placed after 1, as in 10, it means one full group of ten.

0 1 2 3 4 5 6 7 8 9 - ten symbols, one full group.

20 means two groups of ten.

30 means three groups of ten.

The above means we are counting in base 10.

And base 10 means we count with a group of ten symbols, 0 to 9.

So the numbers we use every day, I mean 1, 2, 3 to "gazillion", are in base 10.

Do you see that?

We can also have base 2, which counts with only two symbols, 0 and 1, as in:

0 1 10 11 100 101 110 111 1000 and so on.
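You can check this grouping idea in JavaScript, where `Number.prototype.toString` accepts a base (radix); a small sketch:

```javascript
// The same count written in different bases (groupings).
const count = 9;
console.log(count.toString(10)); // "9" in base 10
console.log(count.toString(2));  // "1001" in base 2: one group of eight, plus one
```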

Operating on every unit of a number, one after another, takes linear time and space.

### 2. Big O(n): Linear Runtime

input size(n) | O(n)
---|---
1 | 1
4 | 4
8 | 8
16 | 16
1024 | 1024

Whenever we group numbers, we put two or more together as in:

```
Size   Elements   Array
2      11         [1, 1]
3      111        [1, 1, 1]
2      23         [2, 3]
3      235        [2, 3, 5]
```

All I want to explain with the illustration above is that any input with more than one element is linear in size. If an algorithm operates on each of its elements one after another, it is said to have linear time and space complexity.

Any algorithm that takes several inputs and performs the same operation on each of them until none is left is said to have linear time and space complexity.

For examples,

```
func peel(fruits) {
  bowl = [];
  for (fruit in fruits) {
    bowl.put("peeled " + fruit);
  }
  return bowl;
}
peel([banana, orange, tomato]);
```

In this case, the algorithm peels each fruit and puts it in the bowl. That is the constant operation it repeats.

After passing the arguments, the algorithm peels the banana first and puts it in the bowl.

Let's assume the algorithm uses one unit of space and 1 second for the banana.

The algorithm has to repeat the same operation on orange.

That means it has used 2 spaces and 2 seconds now.

By the time it repeats the same operation on tomato, it will have used 3 spaces and 3 seconds.

You can see the time and space used increase by one as the algorithm operates on each additional fruit.

In short, linear complexity always increases by 1 as we operate on or add one more item.
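The pseudocode above can be turned into runnable JavaScript; the fruit names are hypothetical stand-ins:

```javascript
// O(n): one peel operation per fruit, so time and space
// grow in step with the number of fruits.
function peel(fruits) {
  const bowl = [];
  for (const fruit of fruits) {
    bowl.push("peeled " + fruit);
  }
  return bowl;
}

console.log(peel(["banana", "orange", "tomato"]));
// ["peeled banana", "peeled orange", "peeled tomato"]
```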

# Big O(logn): Logarithmic Complexity

input size(n) | O(logn)
---|---
1 | 1
4 | 2
8 | 3
16 | 4
1024 | 10

A lot of people find this part difficult to understand, and a lot of tutorials just glance over it.

You need to understand what logarithm means in the real world to get things straight.

2^3 = 8 means 2 raised to the power of 3 equals 8, or, put the other way, log base 2 of 8 is 3.

By that we mean 3 is the logarithm of 8, that is, you have to multiply 2 by itself three times to get 8 as in:

```
2 * 2 * 2 = 8.
```

What does that mean in plain English?

Logarithm is about cause and effect.

The cause in the above example is multiplying 2 by itself three times while 8 is the effect of the action.

Let's dig deeper.

```
sortedArray = [2,3,4,5,6,7,8,9];
```

In the above array, how many search operations can we do to get 7 from the array?

Do you say 5 by looping through the array?

That seems correct but there will be a problem by the time you have an array with one million or billion items.

Without thinking, I will tell you it is 3 times at most if we keep dividing the array into two and making some guesses.

Oh, wait! Is that not binary search?

Yeah, that is binary search.

I am able to guess that because I know 2^3 = 8, which means 8 items can be halved into groups 3 times, since each group is the double of the previous group as in:

Let's assume 1 means an item.

11 = 2

11 11 = 4

1111 1111 = 8

8 is the double of 4 and 4 is the double of 2.

Logarithmic time is about carrying out a certain operation a limited number of times to get a known result, as in binary search.

Let's search for 30 in a sorted array of ten numbers (indices 0 to 9).

We search for a known value in a sorted array by repeatedly dividing the array into two.

Since we know what we are searching for, we don't have to scan from the beginning to the end.

We keep checking midpoints until we find the item we are looking for.

Let's start.

The first midpoint of the array comes from this formula:

midpoint = (lower + higher)/2.

midpoint = (0 + 9)/2 = 4.5.

So, the midpoint index is 4, since 4 is the integer part of 4.5.

Then we will check whether 21 ,which is the item at the midpoint index(4), is equal to 30 which we're searching for.

If it is equal, we will return the index 4, if not, we will check whether 21 (the item at the midpoint) is less or greater than 30 (the number we're looking for).

If 21 is greater than the number we are searching for, we will calculate another midpoint to the left but in this case, 21 is less than 30 so we will calculate another midpoint to the right of the array as in:

midpoint = (lower + higher)/2

Since we're sure 21 can't be the number we are searching for, our lower index is now 5, and the higher index is still 9, since we still don't know whether its value equals the number we are searching for.

midpoint = (5 + 9)/2 = 7.

So the second midpoint is index 7.

34 is the number at index 7, and it is not equal to 30. As 34 is greater than 30, our new higher index is 6, and the lower index is still 5, as we still don't know whether its value equals 30.

midpoint = (lower + higher)/2

midpoint = (5 + 6)/2 = 5.5.

Our new midpoint is 5, since 5 is the integer part of 5.5.

And the number at index 5 is 30.

Woohoooo!

30 is what we have been searching for, so we return index 5.

That is binary search.
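The walkthrough above can be written as runnable JavaScript. The array below is an assumption made up to match the walkthrough (21 at index 4, 30 at index 5, 34 at index 7):

```javascript
// Binary search: check the midpoint, then discard half the range each time.
function binarySearch(sortedArray, target) {
  let lower = 0;
  let higher = sortedArray.length - 1;
  while (lower <= higher) {
    // Integer part of (lower + higher) / 2, as in the formula above.
    const midpoint = Math.floor((lower + higher) / 2);
    if (sortedArray[midpoint] === target) return midpoint;
    if (sortedArray[midpoint] < target) lower = midpoint + 1;
    else higher = midpoint - 1;
  }
  return -1; // target not in the array
}

const numbers = [3, 6, 9, 14, 21, 30, 32, 34, 40, 52];
console.log(binarySearch(numbers, 30)); // 5, found in three midpoint checks
```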

For your information, O(log n) basically means the time and space grow by one step (linearly) as the input grows exponentially, that is, each doubling of the input adds just one more step.

# O(n logn): Linearithmic Complexity

O(n logn) simply means O(n) x O(logn).

Linearithmic complexity comes from running a logarithmic operation n times.

Let's reuse the searching example from the logarithm section.

Previously, we searched for 30 and arrived at binary search with O(logn) complexity.

If we are to search for 16, 30, and 36, what do you think will happen?

We have to repeat the binary searching for each of the numbers, that is, O(n * logn).

In this case, "n" is the number of search targets and "logn" is the cost of the binary search performed for each of them.
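A sketch of that idea in JavaScript; `binarySearch` and `searchAll` are hypothetical helper names, and the array is made up:

```javascript
// O(log n) helper: halve the search range until the target is found.
function binarySearch(sortedArray, target) {
  let lower = 0, higher = sortedArray.length - 1;
  while (lower <= higher) {
    const midpoint = Math.floor((lower + higher) / 2);
    if (sortedArray[midpoint] === target) return midpoint;
    if (sortedArray[midpoint] < target) lower = midpoint + 1;
    else higher = midpoint - 1;
  }
  return -1;
}

// O(n log n): n targets, each costing one O(log n) binary search.
function searchAll(sortedArray, targets) {
  return targets.map(target => binarySearch(sortedArray, target));
}

const sorted = [4, 8, 16, 21, 30, 36, 41, 52];
console.log(searchAll(sorted, [16, 30, 36])); // indexes [2, 4, 5]
```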

# O(n^2): Quadratic Complexity

O(n^2) is like multiplying O(n) by itself as in:

O(n^2) = O(n) x O(n)

Or

input size(n) | O(n^2)
---|---
1 | 1 x 1
4 | 4 x 4
8 | 8 x 8
16 | 16 x 16
1024 | 1024 x 1024

In short, a quadratic algorithm's work grows with the square of the size of the input.

An array is linear. If an array also has arrays as its elements, operating on the outer array and then on each element of its children becomes quadratic as in:

```
//Two nesting
[ [2,3,4], [3,4,2], [6,8,7] ]
```

In the above, we have a group of 3 arrays and each of the arrays is also a group of 3 elements.

That means we have two nested arrays, and operating on the above array leads to quadratic time complexity.

Let's operate on it as in:

```
// Two nesting
s = [ [2,3,4], [3,4,2], [6,8,7] ];
counts = 0;
for (numbers in s) {
  for (number in numbers) {
    counts++;
  }
}
```

In the above example, we have to loop through the array and its children's elements to make the counts.

The first child array takes 3 operations, the second takes 3 operations and the last takes 3 operations.

9 operations in total.

That is O(n^2) complexity.
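A runnable JavaScript version of the counting pseudocode above (`countAll` is a hypothetical name):

```javascript
// O(n^2): the outer loop runs once per child array and the inner
// loop runs once per element of that child, giving n x n operations.
function countAll(nested) {
  let counts = 0;
  for (const numbers of nested) {
    for (const number of numbers) {
      counts++;
    }
  }
  return counts;
}

console.log(countAll([[2, 3, 4], [3, 4, 2], [6, 8, 7]])); // 9
```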

### O(2^n): Exponential Complexity

Whenever the time and space an algorithm needs double with each addition to the input data set, we have exponential complexity.

Such exponential complexity occurs in the naive recursive Fibonacci algorithm as in:

input size(n) | O(2^n)
---|---
1 | 2
2 | 4
3 | 8
4 | 16
5 | 32
6 | 64
7 | 128

Code example:

```
func fibonacci(num) {
  if (num <= 1) return num;
  return fibonacci(num - 2) + fibonacci(num - 1);
}
```
```

In short, Fibonacci is a sequence in which each number is the sum of the previous two numbers as in:

```
0,1,1,2,3,5,8,13,21, etc.
```
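To make the doubling visible, we can count how many recursive calls the naive version makes; a sketch with a hypothetical `calls` counter:

```javascript
// Naive recursive Fibonacci: each call spawns two more calls,
// so the total number of calls grows roughly exponentially with num.
let calls = 0;
function fibonacci(num) {
  calls++;
  if (num <= 1) return num;
  return fibonacci(num - 2) + fibonacci(num - 1);
}

console.log(fibonacci(10)); // 55
console.log(calls);         // 177 recursive calls just for num = 10
```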

# O(n!): Factorial Runtime

input size(n) | O(n!)
---|---
1 | 1
2 | 2x1
3 | 3x2x1
4 | 4x3x2x1
5 | 5x4x3x2x1
6 | 6x5x4x3x2x1

Factorial is the product of an integer and all the positive integers below it; e.g. factorial four (4!) is equal to 4 x 3 x 2 x 1 = 24.

If we want to arrange 5 books without repetition, how many arrangements are possible?

The possibilities are:

The 1st selection can be any of the 5 books, so there are 5 options.

Since we have already selected a book, the next selection can be any of the remaining 4 books, so there are 4 options.

3rd = 3

4th = 2

5th = 1

Selection possibilities = 5 x 4 x 3 x 2 x 1 = 120

So, it is possible to arrange 5 books in 120 different ways.

An algorithm that checks every possible way of selecting or arranging its input is said to have O(n!) complexity. The number of possibilities itself, n!, can be computed as in:

```
let storage = [];
func factorial(n) {
  if (n == 0 || n == 1) {
    return 1;
  }
  if (storage[n] > 0) {
    return storage[n];
  }
  return storage[n] = factorial(n - 1) * n;
}
```
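The factorial above only counts the possibilities. Actually producing every arrangement is the classic O(n!) workload; a sketch with a hypothetical `permutations` helper:

```javascript
// O(n!): build every possible arrangement (permutation) of the items.
function permutations(items) {
  if (items.length <= 1) return [items];
  const result = [];
  for (let i = 0; i < items.length; i++) {
    // Fix one item, then permute the rest.
    const rest = items.slice(0, i).concat(items.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([items[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations(["a", "b", "c"]).length); // 3! = 6 arrangements
```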

# O.C(n): Big O Notation with Context

A lot of people have learned from experience that Big O notation may not suit every case in the real world.

An algorithm expected to perform greatly may perform woefully in the real world.

The structure of the data, caching, and other factors can influence the actual performance of an algorithm.

So, it is always good to know your context very well to determine the algorithm that is suitable for you.

It is also important to understand your infrastructure, tools, and so on, and how they can impact the algorithm of your choice.

# Conclusion

Big O notation is about analyzing an algorithm for its worst-case complexity.

You can simply use the knowledge here to understand the efficiency of your programs and their subprograms.

Thanks for reading.

# One more thing

Are you having difficulties learning and understanding JavaScript and building projects with it? **JavaScript for a Total Novice** teaches JavaScript and project-making fundamentals with simple illustrations and examples that make everything easy. You can then handle difficult projects without fear.

Don't just take my word for it, get a free preview and judge for yourself: https://bit.ly/3o3TMyg
