Terrence Jung
Complexity Uncomplicated: Your Guide to Big O and Algorithm Analysis

This blog assumes prior knowledge of fundamental data structures and algorithms. Examples are in JavaScript. The information draws on "Cracking the Coding Interview" by Gayle Laakmann McDowell.

Introduction

Mastering technical interviews demands a diverse skill set, but two abilities stand out as crucial: understanding Big O notation and analyzing algorithm efficiency. These skills are the keys to unlocking your potential in algorithm complexity discussions.

What is Big O? It is the language and metric we use to describe the efficiency of algorithms.

In this blog, we'll review time/space complexity, how to measure them, amortized time, and logarithmic/recursive runtimes.

Time Complexity

Time complexity is our way of describing how an algorithm's runtime changes as the input size grows.

  • Best Case: the minimum time an algorithm needs to run.
  • Worst Case: the maximum time an algorithm might take.
  • Expected Case: the average time an algorithm usually takes.

While it's tempting to focus on the best case, it's often misleading. Any algorithm can perform well with a perfectly crafted input. That's why we typically focus on the worst and expected cases - they give us a more realistic picture of our algorithm's performance.

Interestingly, for many algorithms, the worst and expected cases are identical. But sometimes, they can differ (explained later).
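To make this concrete, here is a quick linear search sketch (my own illustrative example, not from the original post). The best case is O(1) when the target happens to sit at the front, while the worst and expected cases are both O(N):

function linearSearch(arr, target) {
    // Best case: target is at index 0, so we return after one comparison - O(1)
    // Worst case: target is absent, so we scan every element - O(N)
    // Expected case: on average we scan about half the array - still O(N)
    for (let i = 0; i < arr.length; i++) {
        if (arr[i] === target) {
            return i;
        }
    }
    return -1;
}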

Space Complexity

Space complexity is a parallel concept to time complexity, measuring the memory (space) required by an algorithm.

For instance, if we create an array of size n, we're looking at O(n) space complexity. It grows linearly with our input size.

But here's a twist: space isn't just about data structures. Every time we make a recursive call, we're adding to the call stack. Take this simple recursive sum function:

function sum(n) {
    if (n <= 0) {
        return 0;
    }
    return n + sum(n - 1);
}

This would take O(n) time and O(n) space: for sum(4), the calls sum(4), sum(3), sum(2), sum(1), and sum(0) all exist on the call stack simultaneously.
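For contrast, here is an iterative version (a sketch I'm adding for illustration, not from the original). It computes the same sum in O(n) time but only O(1) space, since no call stack builds up:

function sumIterative(n) {
    let total = 0;
    // One loop counter and one accumulator: O(1) space, O(n) time
    for (let i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}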

Measuring Complexity

When it comes to measuring the complexity of algorithms, we're not after pinpoint accuracy. Instead, we're looking for a big-picture view that helps us understand how our algorithm scales. Let's dive into some key "rules" that will help you with this.

Multi-Part Algorithms: Add vs. Multiply

For an algorithm that has 2 steps, when do you multiply the runtimes and when do you add them?

  • If your algorithm is in the form "do this, then when done, do that," you add the runtimes.
  • If your algorithm is in the form "do this for each time you do that," you multiply the runtimes. Take a look at the examples below:
// O(A + B) - "Do this, then do that"
arrA.forEach(a => console.log(a));
arrB.forEach(b => console.log(b));

// O(A * B) - "Do this for each time you do that"
arrA.forEach(a => {
    arrB.forEach(b => {
        console.log(`${a},${b}`);
    });
});

In the first example, we're doing two separate things sequentially. In the second, we're doing something for each combination of A and B.

Drop the Constants

In the world of Big O, we're more interested in the overall rate of growth than absolute speed. That's why we drop constants. Whether your algorithm runs in O(2N) or O(N) time, we simplify it to O(N). As N grows larger, the constant factor becomes less significant. Consider the two code snippets:

// Version 1
let min = Number.MAX_VALUE;
let max = Number.MIN_VALUE;
array.forEach(x => {
    if (x < min) min = x;
    if (x > max) max = x;
});

// Version 2
min = Number.MAX_VALUE;
max = Number.MIN_VALUE;
array.forEach(x => {
    if (x < min) min = x;
});
array.forEach(x => {
    if (x > max) max = x;
});

Both have the same O(N) time complexity, despite Version 2 having two separate loops.

Drop the Non-Dominant Terms: Focus on the Heavyweight

When you have multiple terms in your Big O expression, focus on the most significant one.

  • O(N² + N) becomes O(N²)
  • O(N + log N) simplifies to O(N)
  • O(5*2ᴺ + 1000N¹⁰⁰) boils down to O(2ᴺ)

However, be cautious. If you have O(B² + A), and A and B are independent variables, you can't simplify further. Each term represents a different part of the algorithm's complexity, as the sketch below shows.
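Here is a minimal sketch of what that looks like in code (the function name and arrays are hypothetical, chosen just for illustration). The two terms come from independent inputs, so neither dominates the other:

// O(B² + A): A and B are independent, so neither term can be dropped
function process(arrA, arrB) {
    arrA.forEach(a => console.log(a)); // O(A)

    arrB.forEach(b1 => {
        arrB.forEach(b2 => {
            console.log(`${b1},${b2}`); // O(B²)
        });
    });
}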

Amortized Time: The Secret Behind Efficient Operations

Amortized time is a fascinating concept in time complexity analysis that shines when there's a stark difference between the worst-case and average-case scenarios.

Take Java's ArrayList, for example: a dynamically resizing array. When you add an element to an ArrayList that still has space, the operation takes O(1) time. When the array is full, the ArrayList has to create a new, larger array (typically double the size) and copy over all the existing elements, which takes O(N) time.

This O(N) operation is a rare occurrence. Most of the time, adding an element is a quick O(1) operation. When you look at the average time per operation over a long sequence of insertions, the amortized time remains a highly efficient O(1). This clever balance is what makes data structures like ArrayList incredibly efficient in practice.
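JavaScript arrays resize for you behind the scenes, but a bare-bones sketch of the same doubling strategy (my own illustration, not production code) shows where the amortized O(1) comes from:

class DynamicArray {
    constructor() {
        this.capacity = 1;
        this.length = 0;
        this.data = new Array(this.capacity);
    }

    push(value) {
        if (this.length === this.capacity) {
            // Rare O(N) step: double the capacity and copy everything over
            this.capacity *= 2;
            const bigger = new Array(this.capacity);
            for (let i = 0; i < this.length; i++) {
                bigger[i] = this.data[i];
            }
            this.data = bigger;
        }
        // Common O(1) step: drop the value into the next free slot
        this.data[this.length] = value;
        this.length++;
    }
}

Because the capacity doubles each time, the copies at sizes 1, 2, 4, ..., N/2 sum to roughly N total copy operations across N pushes, which works out to O(1) amortized per push.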

Logarithmic Runtimes

What does O(log N) runtime really mean? An O(log N) runtime typically means that the algorithm divides the problem in half each time, such as with binary search. This logarithmic complexity grows much more slowly (as input grows) than linear or quadratic time complexities, making it very efficient for large inputs.

In the example below, we perform binary search. Every time we make a "guess," we cut the search space in half, so each iteration halves the range in which "target" could appear. In the worst case, it takes log(N) iterations to find "target".

function binarySearch(arr, target) {
    let left = 0;
    let right = arr.length - 1;

    while (left <= right) {
        let mid = Math.floor((left + right) / 2);

        if (arr[mid] === target) {
            return mid;
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }

    return -1; // target not found
}

Recursive Runtimes

The code below may at first seem like it runs in O(N²) time. It actually runs in O(2ᴺ) time, because each call makes two more calls. You can visualize the recursive calls as a tree. The tree would have a depth (height) of N, and each node would have two children. At each level, we have twice as many nodes as the previous level. For a recursive algorithm that makes multiple calls, the runtime will often look like O(branches^depth), where branches is the number of times each recursive call branches.

function f(n) {
    if (n <= 1) {
        return 1;
    }
    return f(n - 1) + f(n - 1);
}

The space complexity is O(N). Although the tree has O(2ᴺ) nodes, only O(N) of them exist at any given time, since the call stack only ever holds one root-to-leaf path.

Ending

Hope this helps with your technical interviews or your understanding of algorithm analysis. Good luck, and look out for more blogs on Big O analysis focusing on important examples.
