DEV Community

Juan Garcia

Understanding Big-O Notation

To understand Big-O notation, it's important to first know what an algorithm is. In computer science, Big-O is used to analyze how an algorithm's time and space requirements grow as the input size increases. By understanding Big-O, we can compare algorithms and choose the most efficient one for a given problem, ensuring better performance as input sizes grow. Big-O focuses on the scalability of algorithms, helping us optimize code without getting caught up in hardware-specific details.

In this blog, we'll break down algorithms, time complexity, and space complexity so that you can gain a clear understanding of Big-O notation and its significance, and learn to analyze and optimize algorithms effectively.

What is an Algorithm?

An algorithm is a step-by-step set of instructions to solve a problem or perform a task. It takes an input, processes it through a series of steps, and produces an output.

I've outlined an algorithm below that lists the most efficient steps for learning Big-O notation:

The Best Way to Learn Big-O Notation

  1. Gather Your Tools:

    • Grab a notebook, pen, and a computer with internet access. Maybe some coffee too!
  2. Start with the Basics:

    • Learn what algorithms are and why efficiency matters. Watch an introductory video or read an article about algorithms and complexity.
  3. Understand Big-O Notation:

    • Break down Big-O as the “language” to talk about how algorithms scale. Start with common terms: O(1), O(n), O(n²), O(log n).
  4. Practice with Examples:

    • Look at simple code examples (like searching and sorting) and try to guess their Big-O complexities. Check your answers online or in a textbook.
  5. Visualize the Growth:

    • Use graphs to compare how different time complexities grow with input size. See how O(n) grows slower than O(n²) for large numbers!
  6. Solve Problems:

    • Use coding platforms like LeetCode or HackerRank. When solving problems, analyze the time complexity of your solutions.
  7. Iterate and Improve:

    • Refactor your code to make it more efficient. As you learn new algorithms, try to determine their Big-O complexity.
  8. Celebrate Small Wins:

    • Every time you recognize a time complexity, take a moment to celebrate your progress. You're mastering the art of algorithm efficiency!
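To make step 5 ("Visualize the Growth") concrete, here's a small sketch (the `growth` helper is illustrative, not a standard function) that tabulates roughly how many operations each complexity class performs as the input size n grows:

```javascript
// Rough operation counts for each complexity class at input size n.
const growth = (n) => ({
    "O(1)": 1,
    "O(log n)": Math.ceil(Math.log2(n)),
    "O(n)": n,
    "O(n log n)": n * Math.ceil(Math.log2(n)),
    "O(n²)": n * n,
});

// Print a comparison: watch O(n²) pull away as n grows.
for (const n of [10, 100, 1000]) {
    console.log(n, growth(n));
}
```

At n = 1000, O(log n) is about 10 operations while O(n²) is a million, which is exactly the intuition those graphs are meant to build.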



Before we dive into the breakdown of the algorithm, take a moment to think about what the input and output might be for this specific algorithm. Try to make an educated guess based on the information we’ve covered so far.

Breakdown:

Input:

The input for this algorithm consists of your tools (like a notebook, pen, computer, and internet access) and your motivation to learn Big-O notation. You also need some basic knowledge of coding concepts, as it will help you understand algorithms and how their performance is measured.

Steps:

The steps begin with preparing your tools and diving into the basics of algorithms. You'll then break down Big-O notation, learning the most common complexities like O(1), O(n), O(n²), and O(log n). By practicing with examples and visualizing how different complexities grow, you reinforce your understanding. Solving problems on coding platforms, refactoring your solutions to improve efficiency, and celebrating small milestones further cement the knowledge.

Output:

The output of this algorithm is a solid understanding of Big-O notation and the ability to recognize and analyze the time complexity of algorithms. You'll gain improved algorithm skills and a better approach to solving problems efficiently, optimizing your code based on time and space complexity.

Time Complexity

Time complexity measures how long an algorithm takes to run, based on the size of the input. It tells us how the execution time increases as the input grows.

For example, if you have a list of items and need to check each one, the time it takes will grow linearly with the number of items in the list. We describe this as O(n) time complexity, meaning the algorithm’s running time increases proportionally to the input size.

Here are some common time complexities, accompanied by code examples, to help you start recognizing and understanding them more effectively.

O(1) - Constant Time Complexity

The algorithm takes the same amount of time to execute regardless of the size of the input.
Example:

const getFirstElement = (arr) => {
    return arr[0];
}

Explanation: Accessing an element by index in an array is a constant-time operation.

O(n) - Linear Time Complexity

The algorithm's running time increases linearly with the size of the input.
Example:

const printElements = (arr) => {
    for (let i = 0; i < arr.length; i++) {
        console.log(arr[i]);
    }
}

Explanation: The loop iterates through each element of the array once, so the time complexity is O(n).

O(n²) - Quadratic Time Complexity

The algorithm’s time increases quadratically with the size of the input, often seen in nested loops.
Example:

const printPairs = (arr) => {
    for (let i = 0; i < arr.length; i++) {
        for (let j = 0; j < arr.length; j++) {
            console.log(arr[i], arr[j]);
        }
    }
}

Explanation: Two nested loops result in n * n operations, making the time complexity O(n²).

O(log n) - Logarithmic Time Complexity

The algorithm's running time grows logarithmically with the input size. Common in algorithms that divide the input in each step (e.g., binary search).
Example:

const binarySearch = (arr, target) => {
    let left = 0;
    let right = arr.length - 1;
    while (left <= right) {
        let mid = Math.floor((left + right) / 2);
        if (arr[mid] === target) {
            return mid;
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return -1;
}

Explanation: Binary search halves the search space on each iteration, making the time complexity O(log n). Note that this only works on a sorted array.

O(n log n) - Linearithmic Time Complexity

This time complexity often arises in divide-and-conquer algorithms like merge sort or quicksort.
Example:

const mergeSort = (arr) => {
    if (arr.length <= 1) return arr;

    const mid = Math.floor(arr.length / 2);
    const left = mergeSort(arr.slice(0, mid));
    const right = mergeSort(arr.slice(mid));

    return merge(left, right);
}

function merge(left, right) {
    let result = [];
    let i = 0, j = 0;

    while (i < left.length && j < right.length) {
        if (left[i] < right[j]) {
            result.push(left[i]);
            i++;
        } else {
            result.push(right[j]);
            j++;
        }
    }

    return result.concat(left.slice(i), right.slice(j));
}

Explanation: Merge sort divides the array into halves (log n steps), then merges them in linear time for each level of recursion, resulting in O(n log n) time complexity.

Space Complexity

Space complexity measures how much memory an algorithm uses based on the input size. Some algorithms might require a lot of extra memory, while others might need very little.

For example, if you're sorting a list in place (without creating extra copies of the list), the space complexity could be O(1), meaning it uses a constant amount of memory. But if you need to store additional data for each input element, the space complexity might be O(n), meaning the memory usage grows linearly with the input size.
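As a quick illustration of the in-place idea (a sketch; `reverseInPlace` is a made-up helper, not one of the examples below), this reverses an array using only two index variables, so the extra memory stays constant no matter how long the array is:

```javascript
// Reverses an array in place by swapping from both ends toward the middle.
// No new array is allocated, so the extra space used is O(1).
const reverseInPlace = (arr) => {
    let left = 0;
    let right = arr.length - 1;
    while (left < right) {
        [arr[left], arr[right]] = [arr[right], arr[left]];
        left++;
        right--;
    }
    return arr;
}
```

Compare this with `arr.slice().reverse()`, which copies the array first and therefore uses O(n) extra space for the same result.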

Here are some common space complexities, accompanied by code examples, to help you start recognizing and understanding them more effectively.

O(1) - Constant Space Complexity

The algorithm uses a constant amount of memory, regardless of the input size.
Example:

const sumOfTwo = (a, b) => {
    return a + b;
}

Explanation: This function uses a fixed amount of space to store the parameters a and b.

O(n) - Linear Space Complexity

The algorithm uses space that grows linearly with the size of the input.
Example:

const duplicateElements = (arr) => {
    const result = [];
    for (let i = 0; i < arr.length; i++) {
        result.push(arr[i]);
    }
    return result;
}

Explanation: We create a new array (result) that grows in size as the input array grows, so the space complexity is O(n).

O(n²) - Quadratic Space Complexity

The algorithm uses space that grows quadratically with the input size.
Example:

const createMatrix = (n) => {
    const matrix = [];
    for (let i = 0; i < n; i++) {
        matrix[i] = new Array(n).fill(0);
    }
    return matrix;
}

Explanation: The function creates an n x n matrix, which takes O(n²) space.

O(log n) - Logarithmic Space Complexity

The algorithm uses space that grows logarithmically with the size of the input, typically in recursive algorithms whose recursion depth is log n.
Example:

const binarySearchRecursive = (arr, target, left = 0, right = arr.length - 1) => {
    if (left > right) return -1;
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) {
        return binarySearchRecursive(arr, target, mid + 1, right);
    }
    return binarySearchRecursive(arr, target, left, mid - 1);
}

Explanation: Each recursive call halves the search range, so at most log n calls sit on the stack at once, giving O(log n) space complexity. (A recursive factorial, by contrast, recurses n times before returning, so its stack depth makes it O(n) space, not O(log n).)

O(n) - Linear Space Complexity in Recursion

In some recursive algorithms, the function call stack can grow linearly with the input size.
Example:

const fibonacci = (n) => {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

Explanation: This recursive Fibonacci makes an exponential number of calls in total (its time complexity is O(2^n)), but only one chain of calls is on the stack at any moment. That chain can be up to n frames deep, so the space complexity is O(n).
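Tying this back to step 7 ("Iterate and Improve"): a common refactor, sketched below, is to cache results in a memo object (this `fibonacciMemo` helper is illustrative, not from the examples above). The cache spends O(n) space to bring the time down from exponential to O(n):

```javascript
// Memoized Fibonacci: each value from 0..n is computed once and cached,
// so time drops to O(n) while the cache and call stack each use O(n) space.
const fibonacciMemo = (n, memo = {}) => {
    if (n <= 1) return n;
    if (memo[n] !== undefined) return memo[n];
    memo[n] = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
    return memo[n];
}
```

This kind of time-for-space trade-off is exactly the sort of decision Big-O analysis helps you reason about.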

Why Big-O?

Big-O Notation is a way to express the efficiency of an algorithm. It helps us compare different algorithms by describing how their time or space requirements grow as the input size increases. Big-O ignores specific details (like exact time or memory usage) and focuses only on the growth rate.

In simple terms, Big-O gives us a way to talk about how an algorithm scales and whether it will maintain its efficiency with large inputs or crumble under the strain of excessive slowness or memory demands.

Keep these concepts in mind as you learn and practice, and continue applying them to a variety of problems. With consistent practice, you'll move closer to mastering the art of analyzing algorithms and optimizing their efficiency.
