Big O Notation
Big O Notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
It will completely change how you write code. It is used to help make code readable and scalable.
Readable code is maintainable code. It is easy to read and contains meaningful names of variables, functions, etc.
Scalable code refers to speed and memory. Code needs to be scalable because we don't know how many users will use it. We have to be able to compare solutions to the same problem by weighing their costs in speed and memory.
Big O Notation will be used in two ways:
- To classify the time complexity (speed) of an algorithm.
- To classify the space complexity (memory) of an algorithm.
Runtime
In computer science, runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program.
Essentially, the runtime is the period of time when an algorithm is running. This is an important term to know for later on.
Time Complexity
The time complexity is the computational complexity that describes the amount of time it takes to run an algorithm.
There are several questions to consider when classifying the time complexity of an algorithm:
- What's the worst-case scenario?
The worst-case scenario will be considered first, as it is difficult to determine the average or best-case scenario.
- What data structure should you use?
Some notations are used specifically for certain data structures. I will show you down below in the Notations section. There is also a Big O Cheatsheet further down that will show you what notations work better with certain structures.
- Which one would be a better solution over another?
Which structure has a time-efficient notation? A more memory-efficient notation?
- You'll have to trade off between space and time.
Big O is used to determine the time and space complexity of an algorithm. There may be solutions that are better in speed but not in memory, and vice versa. It just depends on which trade-off you prioritize.
Space Complexity
The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input.
Space complexity is caused by variables, data structures, allocations, etc. What you create takes up space. Space complexity is determined the same way Big O determines time complexity, with the notations below, although this blog doesn't go in-depth on calculating space complexity.
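For example, a function that only creates a fixed number of variables takes constant space, while a function that builds a new array as long as its input takes linear space. Here is a minimal sketch (the function names are just for illustration):

// O(1) space: only one counter is created, no matter how long the array is.
function countItems(array) {
  let count = 0;
  for (let i = 0; i < array.length; i++) {
    count++;
  }
  return count;
}

// O(n) space: a new array the same length as the input is created.
function copyItems(array) {
  let copy = [];
  for (let i = 0; i < array.length; i++) {
    copy.push(array[i]);
  }
  return copy;
}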
Notations
The order of the notations is set from best to worst:
- Constant: O(1)
- Logarithmic: O(log N)
- Linear: O(n)
- Log Linear: O(n log(n))
- Quadratic: O(n^2)
- Exponential: O(2^n)
- Factorial: O(n!)
In this blog, I will only cover constant, linear, and quadratic notations. The other notations will include a description with references to certain data structures and algorithms.
Constant: O(1)
Constant Notation is excellent. In terms of speed, the runtime of the function is always the same. If the input increases, the function still takes the same amount of time to run.
Let's say we had an array:
let array = ['A', 'B', 'C', 'D'];
An Array is an ordered data structure containing a collection of elements.
An Associative Array is an unordered data structure consisting of key-value pairs.
let associativeArray = {message: "hello", name: "Ethan"};
When accessing an element of either one of these data structures, the Big O will always be constant time.
array[2]
// => C
associativeArray.message
// => hello
This is because neither element had to be searched for. The location of the element was known by its index or identifier.
Logarithmic: O(log N)
A Binary Search Tree would use Logarithmic Notation. A Binary Tree is a tree data structure consisting of nodes that have at most two children.
In a Binary Search Tree, there are no duplicates. The left subtree of a node contains child nodes whose keys are less than the node's own key. The right subtree is the opposite: its child nodes have keys greater than the node's key. Because each comparison rules out half of the remaining tree, searching a balanced tree takes logarithmic time.
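Here is a minimal sketch of such a search (the node shape and function names are my own, not from any particular library):

// Each node holds a value and up to two children.
function node(value, left = null, right = null) {
  return { value, left, right };
}

// A small balanced tree:       4
//                            /   \
//                           2     6
//                          / \   / \
//                         1   3 5   7
let root = node(4, node(2, node(1), node(3)), node(6, node(5), node(7)));

// Each step discards half of the remaining tree, so a balanced tree
// of n nodes takes roughly log2(n) comparisons.
function contains(tree, target) {
  if (tree === null) return false;
  if (target === tree.value) return true;
  return target < tree.value
    ? contains(tree.left, target)
    : contains(tree.right, target);
}

contains(root, 5);
// => true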
Linear: O(n)
As the input increases, the amount of time needed to complete the function increases: the runtime grows with the size of the input. Also, the n can be any letter; an x, an o, etc.
An example of O(n) would be a loop over an array:
let group = ['jack', 'jolene', 'ethan', 'ali', 'logan', 'bob'];

function findEthan(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'ethan') {
      console.log("I found him!");
      break;
    }
  }
}

findEthan(group);
The input size of the function can dramatically increase. What if there were 500 people in the crowd? The function would take longer to execute, especially if my name is the very last item in the array.
Can you imagine having an input way higher? Let's say 10,000? The length of time it takes to execute the algorithm is dependent on the size of the input. As the size increases, the length increases. This is Linear Notation.
Quadratic: O(n^2)
Quadratic Notation is Linear Notation with one loop nested inside another: for each element of the input, the inner loop runs over the entire input again.
function something(array) {
  for (let h = 0; h < array.length; h++) {
    for (let i = 0; i < array.length; i++) {
      // console.log('')
    }
  }
}
We don't know the size of the input, and there are two for loops, one nested inside the other: the inner loop runs n times for each of the outer loop's n iterations, giving n * n = n^2 operations.
Log Linear: O(n log(n))
The Quicksort algorithm is a common example: its best and average cases run in Log Linear time.
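As a rough sketch of where the n log(n) comes from: each level of recursion does about n work to partition the array around a pivot, and with good pivots the array is split roughly in half about log(n) times. A simplified, non-in-place version for illustration only (real implementations sort in place and choose pivots more carefully):

// Simplified quicksort: partition around a pivot, then sort each half.
// With good pivots there are about log(n) levels of recursion,
// and each level does about n comparisons, giving O(n log(n)).
function quicksort(array) {
  if (array.length <= 1) return array;
  const [pivot, ...rest] = array;
  const smaller = rest.filter(x => x < pivot);
  const larger = rest.filter(x => x >= pivot);
  return [...quicksort(smaller), pivot, ...quicksort(larger)];
}

quicksort([5, 2, 8, 1, 9, 3]);
// => [1, 2, 3, 5, 8, 9]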
Exponential: O(2^n)
There are not many examples of real-world algorithms that use Exponential Notation.
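One commonly cited example is the naive recursive Fibonacci function: every call branches into two more calls, so the total number of calls roughly doubles each time n grows by one. A minimal sketch for illustration:

// Naive recursive Fibonacci: each call spawns two more calls,
// so the number of calls grows on the order of 2^n.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

fib(10);
// => 55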
Factorial: O(n!)
This Notation is the absolute worst one. When you add a new nested level of work for every element of the input, for example when generating every possible ordering of the input, the notation is Factorial.
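A classic illustration is generating every permutation of an input: an array of n elements has n! possible orderings, so any algorithm that builds all of them does at least O(n!) work. A minimal sketch (the function name is my own):

// Builds every ordering of the input array.
// n elements have n! permutations, so both the work and the output size are O(n!).
function permutations(array) {
  if (array.length <= 1) return [array];
  let result = [];
  for (let i = 0; i < array.length; i++) {
    const rest = [...array.slice(0, i), ...array.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([array[i], ...perm]);
    }
  }
  return result;
}

permutations(['a', 'b', 'c']).length;
// => 6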
Big O Cheatsheet
The cheatsheet shows the space complexities of common data structures and algorithms.