*Disclosure: This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.*

*Image credit: DesignGuru.io*

Hello Devs, if you are preparing for **coding interviews**, then apart from System Design you also need to prepare Data Structures and Algorithms, and one thing you must learn is **Big(O) notation**.

In the computer science and software development world, efficiency is key.

Whether you're optimizing code, designing algorithms, or architecting systems, understanding the performance characteristics of your algorithms and solutions is crucial.

**This is where Big O notation comes into play.**

Big O notation provides a **standardized way to describe the time and space complexity of algorithms**, enabling developers to analyze and compare different approaches quantitatively.

In almost all the coding interviews when you present your solution, the interviewer will ask you about the time and space complexity of your algorithm.

They also ask about how you can improve time and space and that's where the time vs space trade-off comes into the picture.

In the past, I have shared several system design interview questions, like how to design WhatsApp or Facebook Messenger, as well as conceptual system design questions like the difference between an API Gateway and a Load Balancer, Horizontal vs Vertical Scaling, and Forward Proxy vs Reverse Proxy.

And in this article, we'll explore eight essential Big O notations that every developer should know to write efficient and scalable code.

By the way, if you are preparing for coding interviews and want to learn Data Structures and Algorithms and System Design in depth, then you can also check sites like **Design Guru**, **Exponent**, **Educative**, and **Udemy**, which have many great coding interview courses.

And if you are in a hurry, here is a quick cheat sheet of essential `Big(O)` notations for coding interviews from DesignGuru.io, to measure the performance of your algorithms, from better to worse:

P.S. Keep reading until the end. I have a free bonus for you.

## 8 Essential Big(O) Notations Every Developer Should Learn

Without any further ado, let's jump into the essential Big(O) notation guide for software developers.

This is useful for your daily coding as well as for doing well on coding interviews.

### 1. O(1) --- Constant Time Complexity

Algorithms with O(1) time complexity have constant runtime, meaning their execution time does not depend on the size of the input.

This is the most desirable performance for any algorithm.

For example, accessing an element in an array by index, performing basic arithmetic operations, or accessing a property of an object are all constant-time `O(1)` operations.

It doesn't matter whether your array holds 1 element or **1 billion elements**; you can access any of them by index in the same amount of time.
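As a minimal Python sketch (not from the original article), index access takes the same time regardless of list size:

```python
def get_first(items):
    """Accessing an element by index is O(1): the lookup takes the
    same time whether the list has 1 or 1 billion elements."""
    return items[0]

print(get_first([42, 7, 13]))  # 42
```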

Here is how it looks in the graph:

### 2. O(log n) --- Logarithmic Time Complexity

Algorithms with `O(log n)` time complexity have a runtime that grows logarithmically as the size of the input increases.

This is the second most desirable performance after O(1), and if you cannot optimize for O(1), you should try to at least achieve `O(log n)` performance for your algorithms.

For example, the binary search algorithm has logarithmic time complexity: the input space is repeatedly divided in half.
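Binary search can be sketched in Python like this (an illustrative version, assuming a sorted list):

```python
def binary_search(sorted_items, target):
    """Binary search: the search space is halved on every
    iteration, giving O(log n) time on a sorted list."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid          # found: return the index
        elif sorted_items[mid] < target:
            low = mid + 1       # discard the left half
        else:
            high = mid - 1      # discard the right half
    return -1                   # not found
```

Doubling the list size adds only one extra halving step, which is why the runtime stays nearly flat as input grows.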

Here is what logarithmic complexity looks like:

### 3. O(n) --- Linear Time Complexity

Algorithms with `O(n)` time complexity have a runtime that grows linearly with the size of the input. This means that as your data grows, the runtime of your algorithm grows in the same proportion.

Many algorithms fall into this category, like linear search and various linked list operations, such as traversing a linked list or finding its length.

For example, iterating through an array or list to perform an operation on each element is linear time. As the number of elements increases, the iteration time increases in the same proportion.
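A simple illustration (not from the original article) is linear search, which must in the worst case look at every element once:

```python
def linear_search(items, target):
    """Linear search: scan every element once, so runtime grows
    in direct proportion to the number of elements, i.e. O(n)."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not present
```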

Here is what **the graph of linear time complexity** looks like:

### 4. O(n log n) --- Linearithmic Time Complexity

Algorithms with `O(n log n)` time complexity have a runtime that grows proportionally to n times the logarithm of n.

This is worse than linear time algorithms because of the additional log factor.

For example, efficient sorting algorithms like merge sort, quicksort (on average), and heapsort have O(n log n) time complexity.
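To make this concrete, here is a minimal merge sort sketch in Python: the input is split log n times, and each level of splitting does O(n) work to merge.

```python
def merge_sort(items):
    """Merge sort: log n levels of recursive splitting, each level
    doing O(n) merge work, for O(n log n) overall."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```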

Here is how it looks in the graph:

### 5. O(n²) --- Quadratic Time Complexity

Algorithms with `O(n²)` time complexity have a runtime that grows quadratically with the size of the input.

These are slow algorithms, and if you present a solution with quadratic time complexity, your interviewer will most likely ask you to improve it and bring it down to an acceptable O(n) or O(log n) level.

For example, algorithms with nested loops where each iteration performs a linear amount of work, such as bubble sort or selection sort.
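Bubble sort shows the pattern directly; in this illustrative sketch, the nested loops compare roughly every pair of elements, so the work grows with n²:

```python
def bubble_sort(items):
    """Bubble sort: nested loops compare adjacent pairs, so the
    number of comparisons grows quadratically, O(n^2)."""
    items = list(items)  # sort a copy, leave the input untouched
    n = len(items)
    for i in range(n):
        # After each pass, the largest remaining element has
        # "bubbled" to the end, so the inner loop can shrink.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```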

Here is what it looks like in the graph:

### 6. O(2^n) --- Exponential Time Complexity

These are among the slowest algorithms, and this complexity is almost always undesirable. Algorithms with O(2^n) time complexity have a runtime that doubles with each additional input element.

You can improve the performance of such algorithms using caching and memoization to avoid re-calculating the same data.

For example, recursive algorithms with a branching factor of 2, such as the naive recursive solution for the Fibonacci sequence.
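The naive Fibonacci recursion, and the memoized fix mentioned above, can be sketched like this (an illustrative example, using Python's standard `functools.lru_cache`):

```python
from functools import lru_cache

def fib_naive(n):
    """Naive recursion branches twice per call, so the number of
    calls roughly doubles with each extra n: O(2^n)."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoization caches each sub-result, collapsing the
    exponential call tree to O(n) distinct calls."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_naive(40)` already takes noticeable time, while `fib_memo(40)` is effectively instant, which is exactly the caching trade-off described above.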

Here is what it looks like in the graph:

### 7. O(n!) --- Factorial Time Complexity

Algorithms with `O(n!)` time complexity have a runtime that grows factorially with the size of the input. These also fall into the category of slow algorithms and are not desirable.

If your solution has factorial time complexity, be ready to improve it, as the interviewer will not accept it in most cases, unless the problem is so complex that it's the only solution possible.

Brute force algorithms that generate all **permutations or combinations of a set** are examples of factorial time complexity algorithms.
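As an illustrative sketch (not from the original article), generating every permutation of n items produces n! results, so the runtime is inherently factorial:

```python
def permutations(items):
    """Generate every ordering of the input list. There are n!
    orderings of n items, so runtime and output size are O(n!)."""
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, item in enumerate(items):
        # Fix one element, then permute the rest recursively.
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([item] + perm)
    return result

print(len(permutations([1, 2, 3])))  # 6, i.e. 3!
```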

Here is how it looks in the graph:

### 8. O(n^c) --- Polynomial Time Complexity

Algorithms with O(n^c) time complexity have a runtime that grows polynomially with the size of the input, where c is a constant.

Polynomial time covers quadratic, cubic, and higher fixed powers; it is slower than linear and linearithmic time for large inputs, but still far better than exponential O(2^n) or factorial O(n!) time. This is the last of the essential Big(O) notations a developer should know.

For example, matrix multiplication algorithms have polynomial time complexity: the schoolbook algorithm runs in O(n³), and the Strassen algorithm improves this to roughly O(n^2.81).
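The schoolbook O(n³) case can be sketched in a few lines of Python (an illustrative example with c = 3, for square matrices):

```python
def matrix_multiply(a, b):
    """Schoolbook multiplication of two n x n matrices: three
    nested loops give O(n^3), a polynomial O(n^c) with c = 3."""
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```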

Here is a graph with all the Big(O) notations for time; you can see how things get worse as you move from O(1) up to O(2^n) and O(n!):

### Best Coding Interview Resources

And here is a curated list of the best coding interview books, online courses, and practice websites you can use to better prepare for coding interviews and topics like DSA, high-level design, low-level design, etc.

### DSA

If you are rusty, start with the top interview questions:

- Educative-99 - https://buff.ly/3LFG4zL (available in both Python and Java) will teach you 26 key coding interview patterns
- Algomonster - http://shrsl.com/483tp (coding patterns + questions)
- Blind 75: lnkd.in/g5wx7QSq
- Grind 75: lnkd.in/gvZ7_pnp
- Practice C++ STL, Java Collections, or the data structure libraries of your chosen language -- essential for fast coding

### Low-Level Design (LLD)

1. Design principles: Read "Head First Design Patterns" (2nd edition)

2. OOP concepts should be crystal clear, like virtual methods in C++, abstract class vs interface, overloading vs overriding, method hiding, etc.

3. Questions: Awesome Low-Level Design - https://github.com/ashishps1/awesome-low-level-design by Ashish Pratap Singh of the AlgoMaster newsletter; I highly recommend it to programmers.

4. Practice questions with a 45-minute timer

5. Solutions: Low-Level Design Playlist - lnkd.in/gkVZgK4b (credits to Soumyajit Bhattacharyya)

### High-Level Design (HLD)

1. Books: Start with Alex Xu's Volumes 1 and 2, or the Grokking the System Design Interview course on Designgurus.io

2. Videos: a good channel for basic System Design Interview concepts - lnkd.in/gfEJppS3

3. Mock interviews on Pramp, **Exponent** (bit.ly/3cNF0vw), and other platforms - medium.com/javarevisited/3-best-mock-in...

4. Practice system design problems in Leetcode style on Codemia - https://bit.ly/46AyaRJ

### CS Fundamentals

Learn from the Gate Smashers videos - lnkd.in/gs6m5RQb

### Behavioral

1. Use the **STAR method (Situation, Task, Action, Result)**

2. Keep each section concise: 4-5 sentences per section, so that it can be covered in the time given during interviews

3. Prepare both a detailed and a short version of your answers

### Conclusion

That's all in this list of **8 essential Big(O) notations every developer should learn**. Understanding Big O notation allows developers to analyze the efficiency and scalability of their algorithms objectively.

By understanding and remembering these essential Big O notations, you can make informed decisions when selecting or designing algorithms, optimizing performance, and improving the scalability of your software solutions.

Continuously honing your skills in algorithm analysis and complexity theory will undoubtedly enhance your ability to write efficient and robust code across various domains of software development.

### Bonus

As promised, here is the bonus for you, a free book. I just found a new free book to learn Distributed System Design, you can also read it here on Microsoft --- https://info.microsoft.com/rs/157-GQE-382/images/EN-CNTNT-eBook-DesigningDistributedSystems.pdf

Thank you
