
Stop Guessing Why Your App is Slow: A Pragmatic Guide to Big O Notation

Let’s be real for a second. When we start coding, our main goal is just making things work. If the feature ships and the bugs are squashed, we’re happy. But then, your user base grows, your database swells, and suddenly that simple array iteration is freezing the browser.

This is where Big O Notation comes in.

Forget the heavy academic math for a minute. Big O is simply a way to talk about how your code's performance scales as your data grows. It answers one question: If my input gets 10x bigger, how much slower does my app get?

Let’s break down the most common complexities you’ll actually face in your day-to-day TS/JS code, and how to fix the bad ones.


The Space-Time Tradeoff ⚖️

Before we dive into the code, we need to talk about the golden rule of optimization: the Space-Time Tradeoff.

Most of the time, making an algorithm faster (Time Complexity) requires using more memory (Space Complexity) to store temporary data, like lookup tables. If you're building an app for older mobile phones with limited RAM, you might actually choose a slower algorithm to prevent out-of-memory crashes. Optimization is always about context!


The O(n²) Trap: Nested Loops

Imagine you are building a Yard Management System (YMS) to track logistics. You have an array of newly arrived truck license plates, and an array of plates already registered in your database. You need to find which arriving trucks are already in the system.

The intuitive, brute-force way is a loop inside a loop:

function findRegisteredTrucks(arriving: string[], registered: string[]): string[] {
  const matches: string[] = [];

  for (let i = 0; i < arriving.length; i++) {
    for (let j = 0; j < registered.length; j++) {
      if (arriving[i] === registered[j]) {
        matches.push(arriving[i]);
        break; // a plate only needs to match once
      }
    }
  }

  return matches;
}

This is O(n²) (Quadratic Time). If both lists have 1,000 trucks, your computer just did 1,000,000 comparisons. Yikes. As the yard gets busier, your app's performance will tank.

The Fix: Let's trade some memory for a massive speed boost using a Set. By turning our registered trucks into a Set, we get O(1) lookups on average!

function findRegisteredTrucksOptimized(arriving: string[], registered: string[]): string[] {
  const matches: string[] = [];
  const registeredSet = new Set(registered); // Costs O(n) space, but gives O(1) lookups!

  for (let i = 0; i < arriving.length; i++) {
    if (registeredSet.has(arriving[i])) { 
      matches.push(arriving[i]);
    }
  }

  return matches;
}

Boom! We just dropped the time complexity to O(n) (Linear Time). It now takes roughly 2,000 operations (1,000 to build the Set, 1,000 to check each arrival) instead of a million.


O(log n): The Magic of Binary Search

What if you need to find one specific dispatch log out of 50,000 sorted timestamps? You could check them one by one (O(n)), but that's like reading a physical dictionary page by page to find the letter "M".

Instead, you open the dictionary in the middle. If you hit "P", you know "M" comes before it, so you instantly throw away the entire second half of the book. You repeat this until you find your word.

This is Binary Search, and it runs in O(log n) (Logarithmic Time).
Because you cut the dataset in half at every single step, finding one item in a sorted array of 1,000,000 elements takes at most 20 steps. Yes, just 20. It's one of the most powerful ideas in computer science.
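
To make that concrete, here's a minimal binary search sketch in TypeScript. It assumes a plain sorted number[] of timestamps; the names are illustrative, not from any particular library:

```typescript
// Minimal binary search over a sorted (ascending) array.
// Returns the index of the target, or -1 if it's not there.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;

  while (low <= high) {
    const mid = Math.floor((low + high) / 2);

    if (sorted[mid] === target) return mid;

    if (sorted[mid] < target) {
      low = mid + 1;  // target is bigger: throw away the left half
    } else {
      high = mid - 1; // target is smaller: throw away the right half
    }
  }

  return -1; // not found
}
```

The one hard requirement: the array must already be sorted, or the "throw away half" logic falls apart.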


O(2^n): The Recursion Nightmare

If you've ever played around with a purely recursive Fibonacci function (a classic computer science exercise), you've met O(2^n) (Exponential Time). Every time you increase n by just one, the execution time roughly doubles.

// Do not use this in production 😅
function fibonacci(n: number): number {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2); 
}

Because the function calls itself twice, it ends up recalculating the exact same numbers over and over again. An input of n = 50 might literally freeze your machine.

The Fix: Memoization 🧠
If we've already calculated a value, why do it again? Let's use an object to "remember" previous results:

const memo: Record<number, number> = {};

function fibonacciOptimized(n: number): number {
  if (n <= 1) return n;

  if (n in memo) return memo[n]; // Instant O(1) return

  memo[n] = fibonacciOptimized(n - 1) + fibonacciOptimized(n - 2);
  return memo[n];
}

By caching the results (trading space for time again!), we collapse an impossible O(2^n) algorithm down to a smooth O(n).
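
And remember the space-time tradeoff from earlier? If you only ever need the final value, you can go one step further: an iterative version keeps the O(n) time but drops the memo table entirely, using O(1) space. A quick sketch (illustrative, not the only way to do it):

```typescript
// Iterative Fibonacci: O(n) time, O(1) space.
// We only ever need the last two values, so we keep just those.
function fibonacciIterative(n: number): number {
  if (n <= 1) return n;

  let prev = 0;
  let curr = 1;

  for (let i = 2; i <= n; i++) {
    [prev, curr] = [curr, prev + curr]; // slide the window forward
  }

  return curr;
}
```

Same answer, no cache to grow, and no recursion depth to worry about.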


The Built-in Reality Check: O(n log n)

What happens when you need to sort that massive array of dispatch logs by timestamp? You probably just call dispatchLogs.sort(). It's a single line of code, so it must be O(n), right? Not quite.

Sorting is inherently more complex than just iterating through a list. Modern JavaScript engines (like V8 in Chrome or Node.js) use highly optimized algorithms under the hood (often Timsort or a variation of Quick Sort) that run in O(n log n) (Log-Linear Time).

Think of it as a middle ground: it’s slower than O(n), but it completely destroys the O(n²) brute-force approach.

// Under the hood, this runs in O(n log n).
// Heads up: .sort() also mutates the original array in place!
const sortedLogs = dispatchLogs.sort((a, b) => a.timestamp - b.timestamp);

You don't usually need to write your own sorting algorithm from scratch, but as a pragmatic dev, you do need to know that calling .sort() on a massive dataset isn't entirely "free". It has a specific mathematical cost, and O(n log n) is the best standard rate you can get for sorting.
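
One practical gotcha worth knowing: .sort() sorts the array in place. If you need to keep the original order around, copy the array first (or use ES2023's toSorted(), where your runtime supports it):

```typescript
const logs = [42, 7, 19];

// Copy with spread, then sort: the original array stays untouched.
const sorted = [...logs].sort((a, b) => a - b);

console.log(logs);   // still [42, 7, 19]
console.log(sorted); // [7, 19, 42]
```

The copy costs O(n) extra space, which is usually a fair price for not surprising whoever else holds a reference to that array.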


Wrapping up

You don't need a whiteboard and a math degree to use Big O. You just need to be mindful of your loops, respect the power of Sets and Maps for O(1) lookups, and remember that caching is your best friend when things get heavy.

What was the worst performance bottleneck you ever had to fix? Let's chat in the comments! 👇
