Amanda Fawcett for Educative

Big-O Notation Cheat Sheet: quick answers to Big-O questions

This article was written by Jerry Ejonavi. Curious about Big-O Notation? Today, we put together a quick guide to answer all the commonly asked Big-O questions.


Big O notation is one of the most fundamental tools programmers use to analyze the time and space complexity of an algorithm. Big O is an asymptotic notation that measures the upper-bound performance of an algorithm.

Your choice of algorithm and data structure matters when you write software with strict SLAs or large programs. Big O Notation allows you to compare algorithm performance to find the best for your given situation. The Big O rating helps to ensure you have a lower running time than competitors.

This article aims to provide quick answers about common questions on Big O notation you might have or you might face in an interview.

Let’s get started!

Here’s what we’ll cover today:

  • What is Big O Notation?
  • What is space complexity and time complexity?
  • What are functions?
  • Big O Notation for data structures
  • Big O Notation for sorting algorithms
  • Wrapping up and next steps

What is Big O Notation?

Big O Notation is a mathematical function used in computer science to describe an algorithm’s complexity. It is usually a measure of the runtime required for an algorithm’s execution.

But, instead of telling you how fast or slow an algorithm's runtime is, it tells you how an algorithm’s performance changes with the size of the input (size n).

The biggest factors that affect Big O are the number of steps an algorithm takes and the number of iterations it performs in looping structures such as for loops.
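To make this concrete, here is a minimal Python sketch (the function name is illustrative) that counts the steps a nested loop performs. Multiplying the input size by 10 multiplies the step count by 100, which is the signature of quadratic growth:

```python
def count_steps(n):
    """Count the operations a simple nested loop performs for input size n."""
    steps = 0
    for i in range(n):        # outer loop runs n times
        for j in range(n):    # inner loop runs n times per outer iteration
            steps += 1        # one unit of work
    return steps

print(count_steps(10))   # 100 steps
print(count_steps(100))  # 10000 steps: 10x the input, 100x the work -> O(n^2)
```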

For our formal definition, we define O(g(n)) as a set of functions. A function f(n) is a member of this set if there exist a positive constant c and a positive threshold n₀ such that:

0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀

If an algorithm performs a computation on each item in an array of size n, we can say the algorithm runs in O(n) time (linear time complexity), performing O(1) (constant) work for each item.
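As a small illustration (the function name is my own), summing an array touches each of the n items once and does constant work per item, giving O(n) overall:

```python
def sum_array(arr):
    """Visit each of the n items once, doing O(1) work per item -> O(n) total."""
    total = 0
    for x in arr:     # n iterations
        total += x    # constant-time addition per item
    return total

print(sum_array([1, 2, 3, 4]))  # 10
```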

What is space complexity and time complexity?

The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input. Similarly, the space complexity of an algorithm quantifies the amount of space or memory an algorithm needs as a function of the length of the input.

Both depend on several external factors, such as computer hardware and operating system, but none of these factors are considered when analyzing the algorithm.
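A quick sketch of the distinction, with illustrative function names: both functions below take O(n) time, but only the first also uses O(n) space, because it stores every intermediate result instead of a single running total:

```python
def squares_list(n):
    """O(n) time and O(n) space: stores every intermediate result."""
    result = []
    for i in range(n):
        result.append(i * i)
    return result

def sum_of_squares(n):
    """O(n) time but O(1) space: keeps only a running total."""
    total = 0
    for i in range(n):
        total += i * i
    return total

print(len(squares_list(1000)))  # 1000 values held in memory
print(sum_of_squares(10))       # 285, a single integer regardless of n
```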

Here are some common complexities you'll find for algorithms, ordered from best to worst:

  • O(1) — constant
  • O(log n) — logarithmic
  • O(n) — linear
  • O(n log n) — linearithmic
  • O(n²) — quadratic
  • O(2ⁿ) — exponential
  • O(n!) — factorial
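Binary search is the classic example of logarithmic time. This sketch halves the search range of a sorted list on every iteration, so roughly log₂ n comparisons suffice:

```python
def binary_search(arr, target):
    """O(log n): halve the search range of a sorted list on each iteration."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid        # found: return its index
        elif arr[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1                 # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```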

What are functions?

A function is a routine that accepts an input and returns an output. It is expressed as below, where the output is defined in terms of the input n.

f(n) = n²
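In code, this same function is a one-liner (a hypothetical example):

```python
def f(n):
    """f(n) = n^2: the output is defined in terms of the input n."""
    return n ** 2

print(f(4))  # 16
```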

Big O Notation for data structures

| Data Structure | Insert | Retrieve |
|---|---|---|
| Array | O(1) | O(1) |
| Linked List | At head: O(1); at tail: O(n) | O(n) |
| Binary Search Tree | O(n) | O(n) |
| Dynamic Array | O(1) | O(1) |
| Stack | O(1) | O(1) |
| Hash Table | O(n) | O(n) |

Note: the binary search tree and hash table entries are worst-case bounds; on average, a balanced binary search tree gives O(log n) and a hash table gives O(1).
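The table's costs map roughly onto Python's built-ins (a sketch: Python's list doubles as a dynamic array and a stack, `collections.deque` stands in for a linked list, and dict is a hash table whose operations are O(1) on average rather than the worst-case O(n) shown above):

```python
from collections import deque

# Stack: push and pop at the tail of a Python list are O(1) amortized.
stack = []
stack.append(1)       # O(1) insert
stack.append(2)
print(stack.pop())    # 2, O(1) retrieve

# Linked-list-style head insertion: deque gives O(1) appendleft,
# while list.insert(0, x) must shift every element -> O(n).
linked = deque([2, 3])
linked.appendleft(1)  # O(1) at the head
print(list(linked))   # [1, 2, 3]

# Hash table: dict insert/lookup are O(1) on average, O(n) worst case.
table = {}
table["key"] = 42     # average O(1) insert
print(table["key"])   # 42, average O(1) retrieve
```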

Big O Notation for sorting algorithms

| Sorting Algorithm | Worst-case scenario | Average case | Best-case scenario |
|---|---|---|---|
| Bubble Sort | O(n²) | O(n²) | O(n) |
| Insertion Sort | O(n²) | O(n²) | O(n) |
| Selection Sort | O(n²) | O(n²) | O(n²) |
| Quick Sort | O(n²) | O(n log n) | O(n log n) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) |

Note: even though quicksort's worst-case performance is quadratic, in practice it is often the sorting algorithm of choice because its average case is linearithmic, O(n log n).
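Merge sort's guaranteed O(n log n) behavior comes from splitting the input in half (about log n levels of recursion) and doing O(n) merge work per level. A minimal sketch:

```python
def merge_sort(arr):
    """O(n log n) in every case: split in half, sort each half, merge."""
    if len(arr) <= 1:
        return arr                    # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # recursion depth is about log n
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: O(n) per level
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])           # append whichever half has leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```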

Wrapping up and next steps

Now that you’ve gotten a handle on the common topics in Big O, you should be suitably prepared to tackle most questions you come across. There are other, more technical topics that we didn’t cover, such as:

  • Complexity Theory
  • Big Theta notation
  • Lower Bounds Analysis
  • Probabilistic Analysis
  • Amortized Analysis
  • Recursion

If you’d like to learn these topics, check out Educative’s course, Big-O Notation For Coding Interviews and Beyond. The course explains the concepts in layman's terms and teaches how to reason about the complexity of algorithms without requiring an extensive mathematical skillset.

Happy Learning!

Continue reading about algorithm optimization

Top comments (2)

Bennett Lewis:

Excellent article! Keep in mind though that Big O != Big Omega. True, big O is the asymptotic upper bound, but big omega is the asymptotic lower bound. Big O represents the best case time complexity, big omega, the worst. Then there's big theta which is the average case. geeksforgeeks.org/difference-betwe...

Alexandro Disla (edited):

Polynomial should be n**c with c > 2

n log(n) is log-linear (linearithmic) time

e**n is exponential time, where you can assume the value of e as 2

Log time is just log(n)
