
Jason Stathopulos


A Complete Beginner’s Guide to Big O Notation

Since in programming there can be thousands of ways to solve a problem, how do you tell which one is the 'best'? Or say there is a function that is implemented in two different ways, one with loops and the other with recursion: which of those should be used? That is where Big O comes in. It is a system for working out which algorithm takes longer to run and which is harder to understand. The algorithm that takes less time and is less complex to understand wins.

Big O uses numbers to tell which algorithms perform well and which ones need to be thought through again.

Why Use Big O?

The reason for using Big O is not only job interviews but also performance, which matters once datasets get big. With a lot of data, an algorithm that takes an hour to sort or search a large dataset is not as productive as one that takes seconds to do the same task.

Big O gives us a precise vocabulary to talk about how our algorithms perform. It helps when discussing trade-offs between different approaches, and it lets developers and engineers have a fruitful discussion about which algorithm is going to perform better.

Big O can also help identify which parts of an algorithm are inefficient and can help us find pain points in our applications.

What does 'better' or 'best' mean?

Say we have a function that sums up all the numbers from 1 to 'n':

function addUpTo(n) {
  let total = 0
  for (let index = 1; index <= n; index++) {
    total += index
  }
  return total
}

or

function addUpTo(n) {
  return (n * (n + 1)) / 2
}

Which of those would we consider better? Is it because one is:

  • Faster?
  • Less memory-intensive?
  • More readable?

All of those are valid concerns, but how would we know which of those functions is better in general?

The ideal algorithm would satisfy all three of those concerns; however, in most cases, speed is considered more important than being less memory-intensive. Sadly, readability is usually the one that gets overlooked the most.

Time Complexity

So let's examine the biggest concern, speed, commonly referred to as Time Complexity: how we analyze the runtime of an algorithm as the size of the inputs increases.

Timing Operations

How does one examine speed? One way is to use timer functions, which measure how long the function takes to execute. We record the time before the function runs, record it again after the function returns, then take the difference to show how long the function took to complete.

As shown below for our addUpTo example:

function addUpTo(n) {
  let total = 0
  for (let index = 1; index <= n; index++) {
    total += index
  }
  return total
}

let t1 = performance.now()

console.log(addUpTo(1000000))

let t2 = performance.now()

// Will be slower than the single line version of `addUpTo`
console.log(`Time Elapsed: ${(t2 - t1) / 1000} seconds.`)

or

function addUpTo(n) {
  return (n * (n + 1)) / 2
}

let t1 = performance.now()

console.log(addUpTo(1000000))

let t2 = performance.now()

// Will be faster than the for loop version of `addUpTo`

console.log(`Time Elapsed: ${(t2 - t1) / 1000} seconds.`)

Manually doing the timing is not that practical in the long term, as there is the overhead of setting it up, and then there is the question of how you would communicate the results: in percentages or in seconds?

Furthermore, there is an issue with timing in general. Different machines can record different times, since each machine can be set up differently, so the results can be completely different. Even the same device can give different results between runs. And for fast algorithms, speed measurements may not be precise enough.

Simple Operations

Another way besides counting time is to count the number of simple operations the computer has to perform. That removes the issue of different computer setups, since the number of simple operations in a function stays consistent. So what counts as a simple operation? Math operations such as addition (+), subtraction (-), multiplication (*), or division (/). However, you do not need to count every single operation; instead, think about where the most work is happening in the function.

function addUpTo(n) {
  // Here there are only 3 operations (*, + and /)
  return (n * (n + 1)) / 2
}

function addUpTo(n) {
  let total = 0
  for (let index = 1; index <= n; index++) {
    // Here + runs n times, since n controls how many times the loop goes
    // around, i.e. if n = 20 then that is 20 additions. Furthermore, += also
    // assigns, and assignment counts as an operation too, so with the example
    // above the number of operations is 40
    total += index
  }
  return total
}

Counting simple operations precisely can be hard, but the exact count does not matter: if the number of simple operations in one version of a function is clearly higher than in another version, it is safe to say that the one with the fewest operations will likely be faster.

A Proper Intro to Big O

Here is the overview again: Big O Notation is a way to formalize fuzzy counting. It allows us to talk formally about how the runtime of an algorithm grows as the inputs grow. We do not care about the details, only the trend as the inputs grow. Big O always describes the upper bound: the most work a function may have to do as its input grows.

Definition

  • f(n) means a function with an input of n

We say that an algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases.

  • f(n) could be linear (f(n) = n), meaning the runtime scales in step with the input, so we represent it with n
  • f(n) could be quadratic (f(n) = n²), meaning the runtime grows with the square of the input, so we represent it with n²
  • f(n) could be constant (f(n) = 1), meaning the input size has no impact on the runtime, so it runs in constant time, and we represent it with 1
  • f(n) could be something entirely different!

Here is a chart showing the growth of operations and their impact on the algorithm:

Big O Notation Chart

An example of what the definition is trying to say:

// Always 3 operations or O(1) since the number of operations does not grow
function addUpTo(n) {
  return (n * (n + 1)) / 2
}
// Number of operations is (eventually) bounded by a multiple of n (say, 10n), or O(n), since the operations are bounded by n
function addUpTo(n) {
  let total = 0
  for (let index = 1; index <= n; index++) {
    total += index
  }
  return total
}
function printAllPairs(n) {
  // The Big O for this function is O(n²) since the operations are bounded by n twice: n inner iterations for each of n outer iterations
  for (let index = 1; index <= n; index++) {
    for (let innerIndex = 1; innerIndex <= n; innerIndex++) {
      console.log(index, innerIndex)
    }
  }
}

Simplifying Big O Expressions

When determining the time complexity of an algorithm, there are some important rules of thumb for simplifying Big O expressions. Those rules are consequences of the definition of Big O notation. Constants do not matter: O(2n), e.g. a function with two separate O(n) for loops, is just O(n), since doubling the work does not change how the runtime grows. The same goes for O(500): 500 is just a fixed number of operations that does not depend on n, so we use O(1) to show that the runtime is constant.
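
For instance, here is a minimal sketch (the function `logTwice` is a hypothetical example of mine, not from the article): two separate O(n) loops in a row do roughly 2n operations, but the Big O is still O(n):

// Two separate loops, each running n times: roughly 2n console.log calls
// Constants are dropped, so this is O(n), not O(2n)
function logTwice(n) {
  for (let index = 1; index <= n; index++) {
    console.log('first loop', index)
  }
  for (let index = 1; index <= n; index++) {
    console.log('second loop', index)
  }
}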

Another example: O(13n²) can be simplified to O(n²), since the constant 13 does not matter at all. The n² term is what dominates the function's runtime.

To further simplify, look at the trend and see which term dominates, i.e. which one grows the fastest. You can also use math when comparing terms. For example, O(n² + 5n + 8) would be O(n²): if n is 10, then n² is 100, 5n is 50, and 8 is 8. Since n² contributes far more operations than the other two terms, it dominates the function's runtime.
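
To see what O(n² + 5n + 8) might look like in code, here is a hypothetical sketch of mine (`mixedWork` is not from the article); its overall complexity simplifies to O(n²) because the nested loop dominates:

function mixedWork(n) {
  let total = 0 // constant setup work: the 8 in O(n² + 5n + 8)
  for (let index = 1; index <= n; index++) {
    total += index // a single loop: the 5n term
  }
  for (let index = 1; index <= n; index++) {
    for (let innerIndex = 1; innerIndex <= n; innerIndex++) {
      total += 1 // a nested loop: the n² term that dominates
    }
  }
  return total // O(n² + n + 1) in rough counts, which simplifies to O(n²)
}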

Rules of Thumb

  1. Analyzing complexity with Big O can get complicated, but there are rules of thumb that can help; however, those rules won't ALWAYS work. They are still an essential starting point.
  2. Arithmetic operations are constant, as most computers do those operations in roughly the same amount of time; 2 + 2 takes approximately the same time on most machines.
  3. Variable assignment is also constant, so attaching a value to a variable in memory takes roughly the same time on all machines.
  4. Accessing elements in an array (by index) or object (by key) is constant.
  5. In a loop, the complexity is the length of the loop times the complexity of whatever happens inside of the loop, so nested loops or other expensive operations inside the loop lengthen the total time (see the sketch after this list).
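
Here is a minimal sketch of rule 5 (both function names are hypothetical examples of mine): the loop's length multiplies by the complexity of its body.

// O(n): the loop runs n times and its body is O(1)
// (arithmetic and assignment are constant, rules 2 and 3)
function sumRange(n) {
  let total = 0
  for (let index = 1; index <= n; index++) {
    total += index
  }
  return total
}

// O(n²): the loop runs n times and its body is itself O(n)
function sumRangeRepeatedly(n) {
  let grandTotal = 0
  for (let index = 1; index <= n; index++) {
    grandTotal += sumRange(n) // O(n) work inside an O(n) loop
  }
  return grandTotal
}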

Auxiliary Space Complexity (Less memory-intensive)

How much additional memory do we need to allocate to run the code in our algorithm? That is the question space complexity tries to answer. The less memory the algorithm takes up, the less memory your program needs on the user's computer. You are thus saving your customers money by not making them upgrade their computers to run your application.

First, though, a distinction: Space Complexity covers both the space the inputs take up and the space the variables inside the algorithm take up. Here we are going to look at the space an algorithm needs excluding the space taken up by the inputs, which is called Auxiliary Space Complexity. We are going to focus on what happens inside the algorithm, not the space the input takes up. That said, from here on, I'm going to refer to it simply as Space Complexity.

Rules of Thumb

  1. These rules are for Space Complexity.
  2. Most primitives (booleans, numbers, null) are constant space, as they take up the same amount of space no matter their value.
  3. Strings require O(n) space (where n is the string length), since a string is made up of an array of characters, so the number of characters in a string is n.
  4. Reference types are generally O(n), where n is the length (for arrays) or the number of keys (for objects), since those are made up of primitives and their size grows with n.

For example:

function sum(array) {
  let total = 0
  // For Space Complexity the Big O is O(1), since there are only two places
  // where memory is being allocated (total and index) and neither grows with
  // the input, so no n
  for (let index = 0; index < array.length; index++) {
    total += array[index]
  }
  return total
}

function double(array) {
  let newArr = []
  // For Space Complexity the Big O is O(n), since newArr stores n new
  // elements in memory, so the memory used grows with the size of the input
  for (let index = 0; index < array.length; index++) {
    newArr.push(2 * array[index])
  }
  return newArr
}

Logarithmic

While they are not as common as the other Big O notations, O(1), O(n), and O(n²), some Big O expressions involve more complex mathematical expressions, such as O(log n).

Logs are the inverse of exponents. For example, log₂(8) = 3 because 2³ = 8; in general, log₂(value) = exponent means 2^exponent = value, so you can move between the two forms and get the same result, similar to how multiplication and division are inverses. Logs always require a base.

So when you see a log in Big O notation with no base written, treat it as log₂.

To get an intuition for log₂, think in terms of dividing by 2: if an algorithm repeatedly cuts the remaining input in half, it is an excellent candidate for O(log n).

Cliff Notes

The logarithm of a number roughly measures the number of times you can divide that number by two before you get a value that's less than or equal to one.

For example, 8 / 2 = 4, 4 / 2 = 2, 2 / 2 = 1, so log₂(8) = 3: 2 is the base and 3 is the number of times we divide by two before reaching 1.
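
Here is a small sketch of that idea (`halvingSteps` is a hypothetical function of mine): count how many times n can be halved before reaching 1, which is roughly log₂(n).

// Counts how many times n can be divided by 2 before it is <= 1
function halvingSteps(n) {
  let steps = 0
  while (n > 1) {
    n = n / 2
    steps++
  }
  return steps
}

console.log(halvingSteps(8)) // 3, matching log₂(8) = 3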

Why Does Logarithmic Complexity Matter?

O(log n) is the second-best complexity to get, next to O(1), since it grows far more slowly than O(n). In the same vein, there is O(n log n), which grows faster than O(n) but much slower than O(n²). Certain searching algorithms have logarithmic time complexity, efficient sorting algorithms involve logarithms, and recursion sometimes involves logarithmic space complexity.
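
Binary search is the classic example of an O(log n) searching algorithm: each comparison throws away half of the remaining sorted array. Here is a minimal sketch (my own example, not from the article):

// O(log n): every pass halves the search range of a sorted array
function binarySearch(sortedArray, target) {
  let low = 0
  let high = sortedArray.length - 1
  while (low <= high) {
    let middle = Math.floor((low + high) / 2)
    if (sortedArray[middle] === target) return middle
    if (sortedArray[middle] < target) low = middle + 1
    else high = middle - 1
  }
  return -1 // target is not in the array
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)) // 3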

Analyzing Performance of Arrays & Objects

Object

Objects are a type of data structure: an unordered list of key/value pairs. We can use Big O Notation to describe how fast the things you do with them are.

Big O of Objects

Here are the basics of how an object works: what you can do with it and how fast each operation is.

  • Insertion: O(1)
  • Removal: O(1)
  • Searching: O(n)
  • Access: O(1)

Since objects are an unordered list, searching through one means that JavaScript has to check each value until it finds a match. For objects in general, we will go into more depth when we get to Hash Tables.
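
A small hypothetical sketch of the difference (the `ages` object and `hasValue` helper are my own examples): accessing by key is O(1), while searching for a value has to walk the entries, which is O(n).

let ages = { alice: 30, bob: 25, carol: 35 }

// Access by key: O(1)
console.log(ages.bob) // 25

// Searching for a value: O(n), since every key may need to be checked
function hasValue(object, target) {
  for (let key in object) {
    if (object[key] === target) return true
  }
  return false
}

console.log(hasValue(ages, 35)) // true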

Big O of Object Methods

  • Object.keys: O(n) - Needs to go through each key to collect the keys
  • Object.values: O(n) - Needs to go through each key to get its value
  • Object.entries: O(n) - Needs to go through each key to get both the key and value
  • hasOwnProperty: O(1) - Since hasOwnProperty takes the key to look up as an argument, it can check for that key directly

Array

Arrays are a type of data structure: an ordered list of key/value pairs, the keys being indexes. We can use Big O Notation to describe how fast the things you do with them are.

Big O of Arrays

Here are the basics of how an array works: what you can do with it and how fast each operation is.

  • Insertion: It depends
  • Removal: It depends
  • Searching: O(n)
  • Access: O(1) - Since an array keeps track of where each element lives in memory, JavaScript knows which part of memory the element is located in and gets the data straight from there.

When inserting or removing elements in an array: if you add or remove elements at the end of the array, it is O(1); anywhere else it is O(n), since the array has to reindex every element after the inserted or removed one.
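
A quick sketch of the difference (a hypothetical example of mine):

let letters = ['b', 'c', 'd']

// O(1): push only touches the end of the array
letters.push('e') // ['b', 'c', 'd', 'e']

// O(n): unshift inserts at the start, so every existing element
// has to move to a new index (b goes from 0 to 1, c from 1 to 2, and so on)
letters.unshift('a') // ['a', 'b', 'c', 'd', 'e']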

Big O of Array Operations

  • push: O(1) - Adds to the end of the array
  • pop: O(1) - Removes from the end of the array
  • shift: O(n) - Removes from the start of the array, so every remaining element must be reindexed
  • unshift: O(n) - Adds to the start of the array, so every existing element must be reindexed
  • concat: O(n) - Since this method merges two or more arrays, the work grows with the total size n of those arrays
  • slice: O(n) - Since slice returns a copy of part of an array, and that part can be up to n elements long
  • splice: O(n) - Since splice adds or removes elements in the middle of an array, the elements after that point have to be reindexed
  • sort: O(n * log n) - This one is too complex for the moment; have a look at the sorting section
  • forEach/map/filter/reduce/etc.: O(n) - Needs to go over each element in the array

