Hello again 🙋🏻♀️
If you’re starting your programming journey, you’ve probably heard the terms Big O notation or time complexity of an algorithm. You may have also wondered why one solution is better than another, even though they both do the same thing! At first, it might look like some scary math formula, but don’t worry! In this post, we’ll break it down into simple words and examples so you can understand exactly what it means and why it matters.
This post is part one of a two-part series:
- Part 1 (this post): Time Complexity
- Part 2 (next post): Space Complexity
📑 Table of Contents
- What is Big O Notation?
- Why is Big O Notation Important?
- How to Find Big O
- Properties of Big O
- Common Time Complexities
- Big O, Big Omega, and Big Theta
- Practice Exercises
- Resources
What is Big O Notation?
Big O time is the language and metric we use to describe the efficiency of algorithms. If you don’t understand it, you may be judged harshly for it in interviews, and you’ll also struggle to evaluate your own algorithms and solutions to coding problems.
Why is Big O Notation Important?
Big O notation is important for several reasons:
- It describes how the runtime (or space requirements) of an algorithm grows as the input size increases.
- It gives a common language for comparing algorithms and data structures, so you can choose the most efficient one for a specific problem.
- It helps you reason about scalability and predict how code will behave as the input size grows.
- It guides optimization: knowing which part of the code grows fastest tells you what to improve first.
How to Find Big O
- Ignore the lower-order terms and keep only the highest-order term.
- Ignore the constant factor of the highest-order term.
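For example (a hypothetical `count_pairs` function, just for illustration): the work below is n² + n counted steps, and applying the two rules above leaves O(n²).

```python
def count_pairs(items):
    """Does n^2 + n units of counted work for an input of size n."""
    items = list(items)
    steps = 0                  # constant-time setup: ignored as O(1)
    for a in items:            # outer loop: n iterations
        for b in items:        # inner loop: n iterations each
            steps += 1         # runs n * n times -> the n^2 term
    for a in items:            # a separate pass: n more steps
        steps += 1
    return steps

# Total counted work: n^2 + n. Drop the lower-order term n and any
# constant factor, and the complexity is O(n^2).
print(count_pairs(range(10)))  # prints 110 (10^2 + 10)
```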
Properties of Big O
- Transitivity: If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
- Constant Factor: For any constant c > 0, if f(n) = O(g(n)), then c·f(n) = O(g(n)).
- Sum Rule: If f(n) = O(g(n)) and h(n) = O(k(n)), then f(n) + h(n) = O(max(g(n), k(n))). When combining complexities, only the largest term dominates.
- Product Rule: If f(n) = O(g(n)) and h(n) = O(k(n)), then f(n) · h(n) = O(g(n) · k(n)).
- Composition Rule: If f(n) = O(g(n)), then f(h(n)) = O(g(h(n))).
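The sum and product rules show up directly in code (two hypothetical demo functions, just for illustration): sequential loops add their costs, so the larger one dominates, while a loop that does linear work on every iteration multiplies the costs.

```python
def sum_rule_demo(n):
    """Sequential loops: O(n) + O(n^2) = O(max(n, n^2)) = O(n^2)."""
    ops = 0
    for i in range(n):          # O(n) part
        ops += 1
    for i in range(n):          # O(n^2) part: this term dominates
        for j in range(n):
            ops += 1
    return ops                  # n + n^2 operations in total

def product_rule_demo(n):
    """Product rule: an O(n) loop whose body is itself O(n) gives O(n * n)."""
    ops = 0
    for i in range(n):
        for j in range(n):      # O(n) work inside each of the n iterations
            ops += 1
    return ops                  # n * n operations in total
```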
Common Time Complexities
Name | Notation | Meaning | Examples |
---|---|---|---|
Constant | O(1) | Runtime is constant regardless of input size | Accessing an array element by index, stack push/pop, a single if statement |
Logarithmic | O(log n) | After each iteration, input size shrinks | Binary Search |
Linear | O(n) | Runtime grows linearly with the size of input | Linear Search |
Linearithmic (super linear) | O(n log n) | Input of size n is processed across log n levels (a shrinking range at each level) | Merge Sort, Heap Sort, Quick Sort (average case) |
Polynomial | O(n^k) | e.g. O(n^2): runtime is proportional to the square of the input size | Bubble Sort, Selection Sort, Insertion Sort, Bucket Sort (worst case), Strassen’s Matrix Multiplication |
Exponential | O(k^n) | e.g. O(2^n): runtime doubles with each element added to the input | Tower of Hanoi, naive recursive Fibonacci |
Factorial | O(n!) | Runtime grows factorially with the size of the input | Determinant expansion by minors, brute-force search for the Traveling Salesman Problem |
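To make O(log n) concrete, here is a standard iterative binary search (a minimal sketch, not taken from the post): each comparison discards half of the remaining range, so a sorted list of a million elements needs at most about 20 comparisons.

```python
def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search range."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2            # middle of the current range
        if sorted_items[mid] == target:
            return mid                  # found: return its index
        elif sorted_items[mid] < target:
            lo = mid + 1                # discard the lower half
        else:
            hi = mid - 1                # discard the upper half
    return -1                           # target is not present

evens = list(range(0, 100, 2))          # [0, 2, 4, ..., 98]
print(binary_search(evens, 42))         # prints 21
print(binary_search(evens, 7))          # prints -1 (odd numbers absent)
```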
Graphs
If we plot the most common Big O notation examples, we would have a graph like this:
Mathematical Examples of Runtime Analysis
The table below illustrates the runtime analysis of different orders of algorithms as the input size (n) increases.
O notation | Runtime (n = 10) | Runtime (n = 20) |
---|---|---|
n | 10 | 20 |
log(n) | 2.303 | 2.996 |
n * log(n) | 23.03 | 59.9 |
n^2 | 100 | 400 |
2^n | 1024 | 1048576 |
n! | 3628800 | 2.432902e+18 |

(log(n) here is the natural logarithm; any base gives the same growth trend.)
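These values can be reproduced with a short script (Python’s `math.log` defaults to the natural logarithm, which is what is assumed here):

```python
import math

# Print the growth of each common complexity for two input sizes.
for n in (10, 20):
    print(f"n = {n}")
    print(f"  log(n)   = {math.log(n):.3f}")       # natural log
    print(f"  n*log(n) = {n * math.log(n):.2f}")
    print(f"  n^2      = {n ** 2}")
    print(f"  2^n      = {2 ** n}")
    print(f"  n!       = {math.factorial(n)}")
```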
Big O, Big Omega, and Big Theta
Notation | Explanation |
---|---|
Big O (O) | Describes the upper bound of the algorithm's running time (commonly used to talk about the worst-case scenario). It is the notation used most often when analyzing time complexity. |
Big Ω (Omega) | Describes the lower bound of the algorithm's running time (commonly used to talk about the best-case scenario). |
Big θ (Theta) | Describes a tight bound: the upper and lower bounds have the same order of growth (the best and worst scenarios match). It represents the asymptotically exact growth rate. |
Note: Average case is often analyzed separately and doesn’t always equal Θ.
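A quick way to see why the bounds can differ is linear search (a minimal sketch, not from the post): if the target is the first element, the search finishes in one step (best case, Ω(1)); if it is last or absent, every one of the n elements is checked (worst case, O(n)). Because the best and worst cases differ, no single Θ bound describes its running time over all inputs.

```python
def linear_search(items, target):
    """Best case O(1) (target is first); worst case O(n) (target last/absent)."""
    for i, item in enumerate(items):
        if item == target:
            return i        # found: best case hits this on the first check
    return -1               # absent: worst case, all n elements were checked

data = [3, 1, 4, 1, 5]
print(linear_search(data, 3))   # prints 0 (best case: one comparison)
print(linear_search(data, 5))   # prints 4 (worst case: n comparisons)
print(linear_search(data, 9))   # prints -1 (also worst case: full scan)
```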
Diagram for Big O, Big Omega, Big Theta
- Ω(n) → lower bound curve (algorithm will never be faster than this).
- O(n) → upper bound curve (algorithm will never be slower than this).
- Θ(n) → sits between them when both bounds match.
Practice Exercises
Check out the following blog, which explains how to calculate time complexity for the most common scenarios in detail.
Also, I would really recommend checking these exercises to test your knowledge: Big O Notation Exercises - GitHub
Resources
- Cracking the Coding Interview Book
- Big O Notation Tutorial - A Guide to Big O Analysis - GeeksforGeeks
Thanks for reading!
Happy Coding!💻😀