## Introduction

I have been in Software Engineering for a decade and a half and I am not a Computer Science graduate. Hence, I've never truly been taught Data Structures or Algorithms as a part of the course curriculum. Having said that, it did not stop me from being a good software engineer.

However, I was always curious to find out what it was all about and finally got some time to understand the absolute fundamentals. I'm hoping I can share my learnings in a way most of the non-computer science graduates can relate to.

In this article, we will look at the following three concepts:

- Big O Notation
- Time Complexity
- Space Complexity

## Big O Notation

Big O notation is used in computer science to describe the performance or complexity of an algorithm. In simple words, it is used to denote how long an algorithm takes to run and how much memory it takes as the input to the algorithm grows over time.

With Big O Notation we express the runtime in terms of *how quickly it grows relative to the input, as the input gets larger*.

Let's expand on the words highlighted in 'italics' above:

How quickly the runtime grows - It is generally hard to pin down the exact runtime used by your program, since, it is dependent on the processor speed and the other programs running on the computer at the moment. So, instead of talking about the 'runtime' directly, we use Big O Notation to talk about how quickly the runtime grows.

Relative to the input - If we were measuring our runtime directly, we could express our speed in seconds and minutes. Since we are measuring how quickly our runtime grows, we need to express our speed in terms of something else. With Big O, the size of the input is denoted as 'n'. So, we can say that as the runtime grows “on the order of the size of the input” (O(n)) or “on the order of the square of the size of the input” (O(n²)).

As the input gets larger - Our algorithm may have steps that seem expensive when the input size 'n' is small, but these are eventually eclipsed by other steps as '*n*' gets larger. For Big O Notation analysis, we care most about the part that grows fastest as the input grows, because everything else is quickly eclipsed as '*n*' gets very large.
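To see why the fastest-growing term dominates, consider a small made-up function (a sketch for illustration, not from the article) that does both a single linear pass and a nested pair of loops over its input:

```javascript
// Hypothetical example: one linear pass plus one nested pair of loops.
// Total work is n + n² steps, but for Big O we keep only the
// fastest-growing term, so this function is O(n²).
function countSteps(inpArray) {
  let steps = 0;
  // O(n) part: a single pass over the array
  inpArray.forEach(() => {
    steps++;
  });
  // O(n²) part: one step for every pair of items
  inpArray.forEach(() => {
    inpArray.forEach(() => {
      steps++;
    });
  });
  return steps;
}
```

For an array of 10 items this does 10 + 100 = 110 steps; already the n² term accounts for most of the work, and the gap only widens as 'n' grows.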

## Time Complexity

We have sort of defined time complexity in the above section while introducing Big O Notation.

Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. In layman's terms, we can say time complexity is the total number of times each statement gets executed.

Here are three common classes of time complexity:

- Constant Time ⇒ O(1) - The runtime does not change irrespective of the input size.
- Linear Time ⇒ O(n) - The runtime grows linearly as the input 'n' grows.
- Quadratic Time ⇒ O(n²) - The runtime grows in proportion to the square of the input size 'n'.

Let us look at a few code examples to understand time complexity in a better way.

### Constant Time - O(1)

```
function displayFirstItem(inpArray) {
  console.log(inpArray[0]);
}
```

The above function runs in **O(1) time (or “constant time”)** relative to its input. This means that the input array could be 1 item or 1,000 items, but this function would still run once and display the first item in the array.

### Linear Time ⇒ O(n)

```
function displayAllItems(inpArray) {
  inpArray.forEach(function(item) {
    console.log(item);
  });
}
```

The above function runs in O(n) time (or “linear time”), where '*n*' is the number of items in the array. To be precise, the function itself runs once, but its loop body executes once per item: 10 times for an array of 10 items, 1,000 times for an array of 1,000 items.

### Quadratic Time ⇒ O(n²)

```
function displayAllPossibleOrderedPairs(inpArray) {
  inpArray.forEach(firstItem => {
    inpArray.forEach(secondItem => {
      console.log(firstItem, secondItem);
    });
  });
}
```

In the above example, two loops are nested. If the array has '*n*' items, the outer loop runs '*n*' times, and the inner loop runs '*n*' times for each iteration of the outer loop, giving us n² total prints.

Thus, the function runs in **O(n²) time (or “quadratic time”)**. If the array has 10 items, the inner statement executes 100 times. If it has 1,000 items, it executes 1,000,000 times.

## The Worst-Case Scenario

The worst-case scenario is generally what we discuss with Big O Notation, since the worst-case runtime is often significantly worse than the best-case runtime.

```
function contains(haystack, needle) {
  // Does the haystack contain the needle?
  for (let i = 0; i < haystack.length; i++) {
    if (haystack[i] === needle) {
      return true;
    }
  }
  return false;
}
```

In the above example, there might be 100 items in the haystack, but the first item might be the needle, in which case the function returns after just one iteration of the loop, effectively O(1). However, if the needle were the last item checked (or not present at all), the loop would run '*n*' times, giving O(n). Hence, to be more precise, we can say the best-case scenario is O(1) and the worst-case scenario is O(n).
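To make the best and worst cases concrete, here is a counting variant of the function above (a sketch for illustration) that also reports how many loop iterations it needed:

```javascript
// Variant of contains() that also reports how many iterations it took.
function containsCounted(haystack, needle) {
  let iterations = 0;
  for (let i = 0; i < haystack.length; i++) {
    iterations++;
    if (haystack[i] === needle) {
      return { found: true, iterations };
    }
  }
  return { found: false, iterations };
}

// Best case: the needle is the first item => 1 iteration (O(1)).
// Worst case: the needle is last or absent => n iterations (O(n)).
```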

## Space Complexity

Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. When we say “this algorithm takes constant extra space,” we mean that the amount of extra memory needed doesn’t vary with the number of items processed.

So, the question is: when would we trade speed of execution for memory, or vice versa? A simple answer is that it depends on the resources you can allocate to the running program.
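As one common illustration of this trade-off (a hypothetical sketch, not from the article's examples), a cache spends extra memory to make repeated calls faster:

```javascript
// Hypothetical sketch: trade O(n) extra space for faster repeated calls.
// 'cache' grows with the number of distinct inputs seen, but repeated
// calls for the same input skip the slow computation entirely.
const cache = new Map();

// Stand-in for an expensive computation (squares n in O(n) time).
function slowSquare(n) {
  let result = 0;
  for (let i = 0; i < n; i++) {
    result += n;
  }
  return result;
}

function cachedSquare(n) {
  if (!cache.has(n)) {
    cache.set(n, slowSquare(n)); // spend memory once...
  }
  return cache.get(n); // ...to save time on every later call
}
```

Here we deliberately use more space to gain speed; if memory were the scarce resource instead, we would drop the cache and accept the slower runtime.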

Let us look at a few code examples:

### O(1) - Fixed Space (For a fixed input size)

```
function greet(inpArray) {
  for (let i = 0; i < inpArray.length; i++) {
    console.log('Hello');
  }
}
```

In the above example, the function takes O(1) i.e. fixed space: no matter how large the input array is, we only allocate the loop counter and print a String to the console, so the extra memory needed stays constant.

### O(n) - Space grows linearly

```
function greetNTimes(n) {
  const greetArray = [];
  for (let i = 0; i < n; i++) {
    greetArray[i] = 'Hello';
  }
  return greetArray;
}
```

In the above function, as the size of input 'n' grows, the size of greetArray also grows. This requires additional memory resources to be allocated to accommodate the larger array.

### O(1) - Fixed Space (For a variable input size)

```
function getLargestItem(items) {
  let largest = -Number.MAX_VALUE;
  items.forEach(item => {
    if (item > largest) {
      largest = item;
    }
  });
  return largest;
}
```

In the above example, while the size of 'items' might change, the function only ever allocates the single variable 'largest'. Hence, the function takes fixed, O(1), extra space that does not depend on the input size.

## Conclusion

### So how to optimize our code?

Before optimizing the code, we should first understand the goal behind the optimization. Are we optimizing code that runs as part of a batch job, or code that is accessed in real time by thousands of users?

The goal is very important, because it lets us work out alternative solutions to the problem and compare their runtimes against that goal.

While it might appear that constant time O(1) is always better than linear time O(n), you must also consider other factors: Is the code readable enough? Is it easy to maintain? How much memory is it consuming overall?
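For instance (a made-up example for illustration), summing the numbers 1 to n can be done in O(n) time with a loop or in O(1) time with Gauss's closed-form formula; both give the same answer, but one scales better while the other may be easier for a teammate to read at a glance:

```javascript
// O(n) time: visit every number from 1 to n once.
function sumLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// O(1) time: Gauss's formula n * (n + 1) / 2 computes the same sum
// in a fixed number of operations, regardless of n.
function sumFormula(n) {
  return (n * (n + 1)) / 2;
}
```

Comparing candidate solutions like this, against the goal you defined up front, is what Big O analysis is ultimately for.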

I hope you enjoyed the article. I sincerely hope it can act as a useful guide, especially for non-computer-science graduates who want to venture into the world of software development.

You can follow me on twitter @skaytech
