# Big O Notation for beginners!!

#### Why beginners should not be afraid of algorithms

As a code newbie, I have read a few posts saying algorithms are not useful if you want to be a front-end dev, or for beginners in web development in general. For some time, I brushed it off, telling myself it was a difficult topic meant only for advanced engineers and that beginners "should not try it". The thing is, learning algorithms helps you write better code and easily identify what is slowing down your program.

I spent a few days learning it and I can say that as long as you have the syntax and fundamentals of any programming language down, you can start learning algorithms. You don't have to have been programming for x years; you can learn as you go. The earlier you start, the better, and no, you don't have to be a genius at maths.

So to all my fellow code newbies: don't be afraid to learn, go for it, and when you fail, try again. You can't be good at something you have never tried. As for me, I have failed some of the questions I went through and solved others, but I have learnt from them and I keep on growing. My problem-solving skills keep getting stronger 💪. Yes, we are all learners; in this field you will never stop learning.

#### What is an algorithm?

An algorithm is a set of steps taken to solve a problem. We identify patterns and create a solution that improves the speed of our programs. In algorithms, performance matters a lot, not just writing code that works.

#### What is big O notation

Big O notation is used to describe the performance or complexity of an algorithm.
We can also say it shows how the runtime of an algorithm grows as the input grows. For example, if you work at a large company that deals with a lot of user data, you want to write an efficient algorithm that takes less time to run rather than one that takes more.

#### Why is big O notation important

• It helps us look at the worst-case scenario of an algorithm.
• It describes the execution time, which is called time complexity.
• It describes the space (memory) used, which is called space complexity.

### Common Time complexities

#### 1) O(n) - Linear runtime

As the input of a function increases, the runtime also increases.
Let's look at the example below.

```javascript
function logUpTo(n) {
  for (var i = 1; i <= n; i++) {
    console.log(i);
  }
}
```


In the function above, we don't care as much whether the input (n) is 5 or 1000. We want the order of magnitude (big O), which is O(n), i.e. f(n) = n. As the input size increases, the time it takes for the loop to run also increases.

#### O(n^2) - Quadratic runtime

The runtime is directly proportional to the square of the input (n^2). Hence, as the input grows, the runtime grows n * n.
To understand it better, let's look at the example below.

```javascript
const pairs = (n) => {
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
pairs(2);
/*
output:
0 0
0 1
1 0
1 1
*/
```


The function above has a nested loop. When n grows, the number of times the outer loop runs increases, and the number of times the inner loop runs also increases. This gives f(n) = n^2.

#### O(1) - Constant runtime

As the input of a function increases, the runtime doesn't change; it remains constant.
Let's take a look at the example below.

```javascript
function logAtMost10(n) {
  for (var i = 1; i <= Math.min(n, 10); i++) {
    console.log(i);
  }
}
```


In the function above, when the input is more than 10, it logs 1-10. Even when the input is 1M, the output will still be 1-10. As n increases, the runtime of the function remains the same: f(n) = 1.

In big O notation the smaller terms are not important. Examples:

#### O(n + 50) => O(n)

If you drop the 50, it becomes O(n).

#### O(n^2 + 10n + 3) => O(n^2)

On a larger scale, 10n + 3 is not important but n^2 is.
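To see why the smaller terms drop away, here is a quick sketch (not from the article; the function and counts are illustrative) that tallies the exact number of operations a hypothetical n^2 + 2n + 1 algorithm performs:

```javascript
// Counts the operations of a hypothetical algorithm whose exact cost
// is n^2 + 2n + 1. Its big O is simply O(n^2): the n^2 term dominates.
function countOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      ops++;          // n * n operations
    }
  }
  for (let i = 0; i < n; i++) {
    ops++;            // + n operations
  }
  for (let i = 0; i < n; i++) {
    ops++;            // + n operations
  }
  ops++;              // + 1 operation
  return ops;         // total: n^2 + 2n + 1
}

countOps(1000); // => 1002001
```

For n = 1000, the n^2 term contributes 1,000,000 of the 1,002,001 operations, so the 2n + 1 part barely registers.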

#### Arithmetic operations (+, -, /, *) are constant.

If you add, subtract, multiply or divide, it takes the same amount of runtime, hence being constant.
When your computer does 1 + 1 and 3 + 1000000000, it takes roughly the same amount of time to do the operations.

#### Assigning a variable is constant.

Assigning variable x to 10 takes the same amount of time as assigning variable y to 1,000,000.

### Auxiliary Space

Auxiliary space is the amount of memory the algorithm itself needs to run, not counting the space taken by the input. Space complexity, on the other hand, is the total amount of space needed, which includes the input and therefore grows as the input size increases.

Let's take a look at a few examples.

#### Question 1

```javascript
// O(1) space
const total = (n) => {
  let sum = 0;
  for (let i = 0; i < n.length; i++) {
    sum += n[i];
  }
  return sum;
}
```


O(1) space means the space used is the same no matter the input: only a number and a loop counter are created, so the input growing or shrinking doesn't affect the space.

#### Question 2

```javascript
// O(n) space
const double = (n) => {
  let total = [];
  for (let i = 0; i < n.length; i++) {
    total.push(2 * n[i]);
  }
  return total;
}
```


In the function above, if the input has 10 items, the new array created will have 10 doubled items. The space needed is therefore O(n).

#### A simple table for all the runtime complexities

| Big O notation | Name |
| --- | --- |
| O(1) | Constant runtime |
| O(n) | Linear runtime |
| O(log n) | Logarithmic runtime |
| O(n * log n) | Linearithmic runtime |
| O(n^2) | Quadratic runtime |
| O(n^3) | Cubic runtime |
| O(2^n) | Exponential runtime |
| O(n!) | Factorial runtime |
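The table mentions O(log n), which the article hasn't shown yet. The classic example is binary search on a sorted array (a sketch I'm adding for illustration, not from the article): each comparison halves the portion left to search, so even 1,000,000 items need at most about 20 comparisons.

```javascript
// Binary search: a classic O(log n) algorithm.
// Each comparison halves the part of the sorted array still to search.
function binarySearch(sortedArray, target) {
  let low = 0;
  let high = sortedArray.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) return mid;   // found it
    if (sortedArray[mid] < target) low = mid + 1;  // search the right half
    else high = mid - 1;                           // search the left half
  }
  return -1; // not found
}

binarySearch([1, 3, 5, 7, 9, 11], 7); // => 3
```

Note that this only works because the array is sorted; that is what lets us throw away half the remaining elements at every step.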

#### Questions to practice with

What is the time complexity and auxiliary space of each of the following?

Question 1

```javascript
function subtotals(array) {
  var subtotalArray = Array(array.length);
  for (var i = 0; i < array.length; i++) {
    var subtotal = 0;
    for (var j = 0; j <= i; j++) {
      subtotal += array[j];
    }
    subtotalArray[i] = subtotal;
  }
  return subtotalArray;
}
```


Question 2

```javascript
function onlyElementsAtEvenIndex(array) {
  var newArray = Array(Math.ceil(array.length / 2));
  for (var i = 0; i < array.length; i++) {
    if (i % 2 === 0) {
      newArray[i / 2] = array[i];
    }
  }
  return newArray;
}
```


Question 3

```javascript
function logAtMost10(n) {
  for (var i = 1; i <= Math.max(n, 10); i++) {
    console.log(i);
  }
}
```


### Conclusion

This is what I have learnt so far and I hope it helps. As I continue learning algorithms, I will keep posting.
I am grateful you have read all the way through.

#### A few resources


I am a User Interface Designer learning web development. I don't know what I am doing but I am trying my best to grow as a self-taught developer. Welcome to my newbie tech journey. 👩‍💻

### Discussion

It's not that algorithms are too difficult for beginners to learn; it's more that, in many situations, they're not that useful. Less useful than making your code work, minimally scalable, and maintainable.

Focusing on performance from the beginning often has little benefit. Worse: it makes your code more complicated, and you should fight that instead of trying to win back some nanoseconds.

Don't get me wrong. This knowledge is very useful in some scenarios. In 10 years of programming, I haven't seen many.

Another thing: you have to be careful with the O notation because, as you said, it describes the upper bound depending on the input.

```javascript
const pairs = (n) => {
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
pairs(2);
```


This algorithm is O(n^2) because i and j go from 0 to n, not because it's a nested loop. For instance, if you have:

```javascript
const pairs = (n) => {
  for (var i = 0; i < 2; i++) {
    for (var j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
pairs(10);
```


Your time complexity is now more like O(n).

Also, it depends on which company you work for. I worked for one company where graph algorithms were a must and I actually used these algorithms. Of course, I have also worked for a company where algorithms don't matter as much.

Agreed. That's what I meant when I was writing:

Don't get me wrong. This knowledge is very useful in some scenarios.

To be on the safer side, it's better to know it. You never know where you will land for your first or second job.

The problem is: you need to know A LOT of stuff as a software engineer. I don't think big O notation and algorithms are what you should prioritize, at least as a beginner.

Just my opinion of course :)

I know you need to learn a lot. But the thing is, you will never stop learning.
Now that I am learning frameworks, I can add algorithms on the side.
It's like how, at a coding boot camp, you learn and then add extra work by yourself to go deeper.
But it all comes down to the individual and their goals. 💯

Focusing on performance from the beginning on as often little benefits. Worst: it makes your code more complicated, and you should fight that instead of trying to get back some nanoseconds.

I agree that over-optimization can be counter-productive. I will say, though, that the kind of performance "optimization" that squeezes out nanoseconds or milliseconds is not the same as the algorithm optimizations measured by big O. The latter often makes a meaningful difference between whether you can get a result within a reasonable time frame at all.

It depends on the context. Going from linear time to logarithmic time, for example, won't bring much unless your input is very big. Of course, going from factorial time to logarithmic time will do more than improve your execution time; it will allow you to run your algorithm at all :D

Logarithmic time is actually faster than linear time. You are of course correct that these differences are meaningful only when the input is big.

I find the O notation extremely confusing. I've been writing code professionally for the past six years at two different companies and in different languages (C#, C++, C...), and not once did anyone talk about complexity in O notation. The only time I needed this notation was during a coding interview. Which is sad IMHO, because in the end one doesn't have to know O notation to write a great algorithm, but one should be familiar with the programming language and the task: what should this algorithm do and how am I going to achieve it.

Yeah, most interviews test on algorithms. It's always good to learn them.

For those who want a TL;DR of big-O, it usually boils down to loops. If you have one loop over a data structure, it's probably O(N). If you have one loop inside another, it's probably O(N^2). Three loops deep, O(N^3). And so on. Loops inside loops inside loops aren't always obvious! One function may have a loop and, inside that loop, call another function that has a loop which, in turn, calls another function that has a loop, and so on. Eventually you get WordPress.
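The hidden-loop point above can be sketched like this (a made-up example of mine, not from the thread): two functions that each look like a single loop, but calling one inside the other multiplies the work.

```javascript
// Looks like O(n): a single scan of the array.
function containsValue(array, value) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === value) return true;
  }
  return false;
}

// Also looks like a single loop, but each iteration calls containsValue,
// which loops too. The overall cost is O(n) * O(n) = O(n^2),
// even though no nested loop is visible in this function's body.
function countDuplicates(array) {
  let count = 0;
  for (let i = 0; i < array.length; i++) {
    if (containsValue(array.slice(i + 1), array[i])) {
      count++;
    }
  }
  return count;
}

countDuplicates([1, 2, 1, 3, 2]); // => 2 (the values 1 and 2 repeat)
```

This is why reading only one function at a time can hide a quadratic algorithm: the loop depth only shows up when you follow the call chain.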

Algorithmic complexity is more than just loops. It also involves data structures, and depending on the underlying data structure, your own algorithms will be impacted. A tree-based map, for instance, has O(log N) time for all operations. A hash table tends to have O(1) time for all operations unless there are massive key collisions, which can turn it into an O(N^2) data structure really fast and take out your full-stack architecture at the same time (e.g. youtube.com/watch?v=R2Cq3CLI6H8).
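In JavaScript terms, the data-structure point above might look like this (my own sketch, not part of the comment): checking membership in a plain array scans it element by element, while a `Set` is hash-based and answers in roughly constant time on average.

```javascript
// Membership lookup: array scan vs. hash-based Set.
const ids = [];
const idSet = new Set();
for (let i = 0; i < 10000; i++) {
  ids.push(i);
  idSet.add(i);
}

ids.includes(9999); // scans up to 10,000 elements: O(n) per lookup
idSet.has(9999);    // hashes the key and jumps to it: ~O(1) on average
```

Both calls return the same answer; the difference only shows once you do many lookups on large inputs, which is exactly when big O starts to matter.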

By the way, big-O has an often-ignored little brother called C. C involves the time and resources required to complete a single operation. So you could have an amazing O(N) algorithm BUT a massive C value that makes it perform as badly as an equivalent O(N^2) algorithm with a very small C value. An example of this is inserting 1 million nodes into an array. If you insert them one at a time and individually allocate RAM for each and every node, it will take much, much longer to complete than to add a precalculation step to determine exactly how many nodes you will need, perform a single memory allocation big enough to hold all the nodes, and then run the main algorithm using the allocated buffer. The difference can be substantial - waiting for 500ms vs. waiting for 15 seconds for the operation to complete (or even minutes vs. hours/days in some cases). Heap allocations are the single most expensive operation in any OS and the major cause of a large C value.

At the end of the day, big-O isn't terribly critical if the software runs well. Some exceptions: A satellite that will be placed in orbit or a medical device that will be used in a hospital. When the software does NOT run well, there are usually tools available to debug the bottleneck. And, if that doesn't work, having a really good developer handy to look at the code can help pinpoint the issue. A good rule of thumb though is: If you can keep your function call chain fairly flat (under 10 functions deep), you'll probably get decent performance out of your application and not run into too many issues related to algorithmic complexity.

Those are my thoughts on the topic.

This is great. I am definitely learning a lot about the topic.
Thanks for sharing your thoughts, much appreciated. :)

Good article thanks for posting.

Thank you Andrew for reading it :)

Thanks for this, it's really useful! I have saved it.

Thank you for reading it :)