Just like Sunrise in the 90s, I've decided to do a series on "Big O" (if you don't get that reference, look it up; decent show)

# Part 1: Overview

If you've been diving into algorithms and data structures, you've most likely seen Big O notation, or at least some references to it.

But, what does it mean? How can it be calculated? And what is it useful for?

In this series, I will try to break down Big O notation in a way that is easy to understand and allows you to feel confident when dealing with Big O notation in the future.

So, let's get into Part 1:

## What is Big-O notation?

Before we get started with what Big O is and what it does, I should take a second to clear up a common misconception.

### Big-O does not describe how much time (or space) an algorithm takes to run

If you think about it, there's really no way to know beforehand exactly how long an algorithm or function will take to run. System specs, the programming language used, details about the input: all of these things will affect our function at runtime.

For example, going through a sorted data structure with a linear search (checking every element) is usually less efficient than a binary search (checking the middle element and halving the search space each time).

But, what if the element we're searching for is the leading element?

In that case, the linear search finishes almost immediately, while the binary search still runs through nearly its maximum number of iterations.

So, Big-O notation doesn't give an absolute measure of complexity; instead, it describes the *worst case scenario*.

*(If you're interested, look up Big Theta and Big Omega, the notations for the average and best case scenarios.)*

## What is Big-O, though?

Big O notation, in the simplest terms I can think of, is a mathematical function that describes how an algorithm's complexity grows **as the number of elements increases**, in the **worst case scenario**.

I already explained the worst case scenario part, but the "as the number of elements increases" part is key.

If you look at a typical Big-O chart, you'll see different ways that the number of operations (complexity) increases as a function of the number of items (size). We'll go into each of those functions later in the series.
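If you don't have a chart handy, you can tabulate the usual curves yourself. This is just an illustrative sketch (the exact set of functions on any given chart varies), comparing how many operations each complexity class implies for a few input sizes:

```python
import math

def growth_table(sizes):
    """Tabulate common complexity classes for the given input sizes."""
    header = f"{'n':>6} {'O(1)':>6} {'O(log n)':>9} {'O(n)':>7} {'O(n log n)':>11} {'O(n^2)':>9}"
    rows = [header]
    for n in sizes:
        rows.append(
            f"{n:>6} {1:>6} {math.log2(n):>9.1f} {n:>7} "
            f"{n * math.log2(n):>11.1f} {n * n:>9}"
        )
    return "\n".join(rows)

print(growth_table([8, 64, 512, 4096]))
```

Notice how gently O(log n) climbs compared to O(n²): at n = 4096, that's the difference between about 12 operations and over 16 million.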

## "n"

Even though it's called "Big O" notation, the key to understanding it lies in the n variable.

The "n" variable represents the size of the input, usually the number of elements that the algorithm has to work on. This is the main variable we will be looking at.
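To make "n" concrete, here's a toy example of my own: a function that scans n elements and counts one unit of work per element. Double n, and the work doubles with it; that relationship between input size and work is exactly what Big O captures.

```python
def count_operations(n):
    """Scan n elements, counting one operation per element (an O(n) scan)."""
    ops = 0
    for _ in range(n):
        ops += 1  # one unit of work per element
    return ops

for n in (10, 20, 40, 80):
    print(n, count_operations(n))  # doubling n doubles the work
```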

## Next Time

That's a basic overview of big-O notation, what it means and how to read it. This will be a series, so the next article will tackle how to figure out big-O for your own algorithms.

If you have any comments, questions, or corrections, feel free to leave a comment!

See you next time...
