# What is Big O Notation?

Big O notation is a way to simply represent how the time it takes for a function to run is impacted by the amount of data being passed into that function, or the *time complexity* of that function. This representation will appear as a capital "O" (representing nothing of importance) followed by parentheses containing a lowercase "n" (representing the amount of data passed into the function) and whatever modifying expression is appropriate for the complexity of the function.

(Note: Don't worry about what the "O" means. Some really old mathematician a long time ago decided it meant "order," but it has no direct relation to the expressions you'll find inside the parentheses. The "O" is just convention.)

If you do any sort of research on Big O notation, you'll very quickly come across graphs that look like this:

*(the classic chart plotting number of elements against number of operations for O(1), O(log n), O(n), O(n²), and so on)*

This graph shows us how each notation relates to the number of operations that are necessary as the number of elements increases. The amount of time for the function to run will increase with the number of operations; however, the relation between time and data and the relation between operations and data are not always the same. It is, however, safe to say that more operations equate to more time. With that said, let's start understanding the notations listed above, starting with the simplest one and moving on from there, stopping (for now) at O(log n).

## O(1)

A function that has a time complexity of O(1) will take the same amount of time to run and use the same number of operations no matter how much data is being passed into that function. This will usually mean a function that either does nothing with the data (which seems not very practical to me), or does something to one very specific part of the data, like the `.push()` function in JavaScript. No matter how many elements are in the array you want to push to, it will always take the same amount of time to add one to the end.
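Here's a minimal sketch of my own (these function names are just made up for illustration) showing two O(1) operations: each one touches exactly one spot in the array, so the array's length doesn't matter.

```javascript
// Grabbing an element by index is one operation, no matter the length.
function getFirst(arr) {
  return arr[0]
}

// Pushing onto the end is likewise one operation, whether the array
// holds 3 elements or 3 million.
function addToEnd(arr, value) {
  arr.push(value)
  return arr
}

console.log(getFirst([1, 2, 3])) // 1
```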

## O(n)

A function with a time complexity of O(n) will take time *linearly* proportional to the amount of data passed in.

Forget code for a second: How long does it take you to count 1 second? Hopefully around 1 second. How long does it take you to count to 2 seconds? How about 3? 4? The ratio between the time it takes for you to count *any number* (n) of seconds and the amount of seconds that pass while you count will always be the same, no matter if you count 4 seconds or a hundred billion seconds.

Back to code: Any function that maintains a constant ratio between the amount of data passed in and the amount of time to finish running will have a time complexity of O(n). Example:

```
function printArr (arr) {
  for (let i = 0; i < arr.length; i++) { // for every element...
    console.log(arr[i]) // console log it.
  }
}
```

The amount of time it takes to run `console.log()` will stay the same no matter how many elements are in that array. The only thing that changes is the number of elements it needs to log. That ratio between data and processing time will always stay the same, regardless of the amount of data.

## O(n²)

In the case of a function with a time complexity of O(n²), the number of operations run in the function is directly proportional to the number of elements squared.

Forgetting code: Imagine you need to water 5 plants, but for every plant you water, you need to recount all the plants. To water all the plants, you'll have to have counted each of the 5 plants 5 times, for a total of 25 plants counted. And 5² is 25! No matter the number of plants you need to water, if you need to count all of them every time you water one, the number of plants you'll have counted will always equal n².

Back to code: In code, this will manifest in the form of loops within loops. Example:

```
const arr = [1, 2, 3]
const printArrArrs = (arr) => {
  // for every element: "i loop"
  for (let i = 0; i < arr.length; i++) {
    // for every element: "j loop"
    for (let j = 0; j < arr.length; j++) {
      // console log that element
      console.log(arr[j])
    }
  }
}
printArrArrs(arr)
```

In this code, our "i loop" will loop once for every element in `arr`. For every loop our "i loop" does, our "j loop" will loop over `arr` *again* and console log every element. There should be 9 numbers in the console, because 3² is 9. For any length that `arr` is, the number of elements logged in the console will always be directly proportional to the length of the array squared.

##### 2 Quick Notes!

Note 1: When determining the proper notation for a function, the only thing that matters is the *most complex* layer of the function. In the example above, there are three layers of code: the `console.log()` at the bottom, the "j loop" in the middle, and the "i loop" at the top. The `console.log()` has a time complexity of O(1), as it runs the same number of operations every time. The "j loop" has a time complexity of O(n), as it will run the code contained in it once for every element of the array. The "i loop" is the only part of the function that has a time complexity of O(n²), therefore the entire function's time complexity is O(n²).

Note 2.a: What we've talked about in relation to O(n²) can scale up for any number of nested loops. If you've got three nested loops in a function, then that function's time complexity is O(n³). If you've got 4 loops nested in each other, then it's O(n⁴).

Note 2.b: As a result of note 2.a, when determining the proper notation for a function with multiple loops in it, keep in mind that 2 loops does *not* necessarily mean the time complexity of the function is O(n²). This would only be the case for *nested* loops.
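To illustrate note 2.b, here's a quick sketch of my own: these two loops run one *after* the other, not one *inside* the other, so the function does roughly 2n operations, and 2n still scales linearly, making it O(n) rather than O(n²).

```javascript
// Two sequential (not nested) loops: about 2n operations total.
// Doubling the data doubles the work, so this is still O(n).
function printTwice(arr) {
  for (let i = 0; i < arr.length; i++) { // first pass: n operations
    console.log(arr[i])
  }
  for (let j = 0; j < arr.length; j++) { // second pass: n more operations
    console.log(arr[j])
  }
}

printTwice([1, 2, 3]) // logs 6 numbers, not 9
```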

*(Warning: Beyond this point I am less comfortable in my understanding than in the above parts. Some information, analogies, or code examples may be partially or entirely incorrect. This is largely a learning experience for me.)*

## O(log n)

Wait what? What is "log"...?

#### What Is "log"?

"log" is shorthand for "logarithm". A logarithm is the number(y) that some other number (called a *base*) needs to be raised to in order to get a given number. The notation for a logarithm would be "log_{base}x = ?" In computer science, our *base* is always going to be 2, so instead of writing the base, we just... don't. So the notation is written O(log n), with the blank being the excluded base.

#### Back To O(log n)

Given what we've learned about logarithms, a function with a time complexity of O(log n) is a function where the number of operations run equals log₂ n, where "n" represents the amount of data passed in. The best way to think about the relation between operations and data here is that the number of operations increases linearly as the data increases exponentially.

Forgetting code: Imagine you want to find the name of your friend, "John Stockton", in an alphabetically sorted high school yearbook with 100 people in it. Each page has one person on it. You could search through every page, but that'd be slow. Instead, you open the book to the halfway point, check if the name there matches your friend's, and if not, pick the side that is closer to their name alphabetically and divide again. Keep doing this until you find their name. This method would have a time complexity of O(log n), because whether the number of names was 10, 100, or 1,000, the maximum number of steps that could be needed would be about log₂ n.

Back to code: This time, I didn't write my own example. The simplest code example I could find for O(log n) was this one. Frankly if you want a better O(log n) explanation, I recommend you watch this whole video.

```
const n = 8
function logn(n) {
  while (n > 1) { // while n is greater than 1
    n = Math.floor(n / 2) // divide n by 2 (and remove remainder)
  }
}
logn(n)
```

In the above code, 8 is being passed into the `logn()` function. Using the comments to guide us, let's walk through what happens to `n`. First, `n` is passed in as 8. 8 is greater than 1, so it is divided in half (8/2 = 4). Then the loop runs again with 4, which is greater than 1, so again it is halved. Then it runs *again* with 2, halves it, and arrives at 1, at which point `n` is no longer greater than 1. In this code example, it took 3 iterations to get to 1, meaning that log₂ 8 equals 3. While this is great for calculating the log of a number, how does this help us identify or write algorithms with O(log n) time complexity? In general, as is the case here, if the data passed in is being divided by 2 and used again, the function is probably O(log n). But more specifically, by going through this process for one number, we can tell that the number of operations done will always scale proportionally with log₂ of any number we pass in as `n`.

## Conclusion

All things considered, from what I've covered, Big O notation isn't so hard. I largely didn't know about logarithms before this, so O(log n) took a bit longer, but it's basically just the opposite of the explosive growth of O(n²), which I feel I have a pretty good handle on. Anyway, I had a lot of fun researching all this, and I hope that what I've written is helpful to anyone who needs it.
