Prakhar Srivastava for codeintuition

Posted on • Originally published at codeintuition.io

Am I Ready for FAANG? A Better Test Than Solving More LeetCode

You've solved 200 problems. Mediums you've already seen take fifteen minutes. The next one you haven't seen freezes you cold inside of five. And every time you ask yourself if you're ready for a FAANG loop, the honest answer is "I don't know."

The "I don't know" isn't a feeling problem. It's that you've been asking a feelings question about a performance outcome.

TL;DR

  • Self assessed readiness is unreliable because confidence swings with every session.
  • Solving a familiar problem and constructing a solution to a novel one are different skills.
  • The fix is a measurable test: an unseen medium in a pattern family you've studied, under timed conditions, with a hard cap on execution attempts.
  • Pass three of those across three different families and you likely have the level of transfer real interviews actually reward. Fail one and you know exactly where to work.
  • The signal isn't the count. It's whether the recognition holds when the problem name is hidden.

Why "do I feel ready" is the wrong question

Solving a problem you've seen before and reasoning your way through one you haven't are different skills. Practice on the same patterns repeatedly builds recognition. Recognition only fires on shapes you've encountered. The interview gives you a shape you haven't, and asks you to construct the approach from the constraints alone.

That's the gap between near transfer and far transfer. Near transfer is what 300 problems will buy you, applying what you've practised to similar setups. Far transfer is what FAANG selects for, applying what you've understood to genuinely new ones.

There's a second issue with self assessed confidence. It swings with your last session. Crush five tree problems on Saturday morning and you feel ready. Freeze on an unfamiliar graph problem on Saturday afternoon and the confidence evaporates. Neither data point reflects your stable ability across the families an interview actually draws from.

The result is a loop. You prep, you feel uncertain, you prep more, you still feel uncertain. The method of evaluation is wrong. You're asking a feelings question about a performance outcome.

Interview readiness isn't confidence. It's repeatable performance under unfamiliar conditions.

The three pattern family test

Readiness is a performance threshold you can measure. The protocol takes about two hours and produces a binary answer.

  1. Pick an unseen medium from a family you've studied. Sliding window, tree traversal, graph BFS, DP subsequence. The problem has to be genuinely novel. You haven't solved it, browsed its discussion thread, or read hints for it.
  2. Solve it under real interview constraints. Twenty minute timer. The problem name covered or aliased so you can't reverse engineer the family from the title. No hints. A hard cap on code execution attempts so you can't trial and error your way through. You have to identify the approach, build it, and trace it before running code.
  3. Repeat across two more families you've studied but haven't over practised. One pass isn't signal. Three passes across different families confirms the readiness is broad, not narrow.

Pass means: solve within twenty minutes with fewer than two failed execution attempts. Anything else is useful data.

If you pass three across different families, you likely have the level of transfer real interviews actually reward. If you fail one, you know precisely where to work.

What that looks like on graph BFS

Pick the worst case for this exercise: a medium graph BFS problem you haven't seen. The constraints describe a grid, an adjacency list, or some traversal where shortest distance is the answer.

Two minutes in, you've identified the family. Not from the problem title because that's hidden. From the constraints: shortest distance, unweighted edges, layered exploration. That recognition came from training what makes BFS the right approach, not from spotting "BFS" in the problem name.

The solution builds from BFS's invariant. At any moment, the queue contains every node whose shortest distance from the source is exactly the distance you're currently processing. You aren't recalling "this one used a deque." You're reasoning: enqueue the start at distance zero, expand neighbours level by level, return as soon as you pop the target.

from collections import deque

def shortest_path(graph, start, target):
    # Invariant: nodes leave the queue in nondecreasing distance order,
    # so the first time the target is popped, its distance is optimal.
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nbr in graph[node]:
            if nbr in visited:
                continue
            visited.add(nbr)
            queue.append((nbr, dist + 1))
    return -1

Before you run anything, you trace it on a four node example. Walk the queue. Check visited. Verify the returned distance matches what you'd expect for a path you can see in your head. The mental dry run catches bugs the random test and submit loop misses, and it's the exact behaviour interviewers watch for: verifying correctness without leaning on the compiler.
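That hand trace can itself be made concrete. Here's a minimal sketch of an instrumented BFS that records each pop, so you can compare the machine's order against the one in your head; the four node graph and the `bfs_trace` helper name are mine for illustration, not from the exercise.

```python
from collections import deque

def bfs_trace(graph, start, target):
    """BFS that records every (node, dist) pop, mirroring a hand trace."""
    queue = deque([(start, 0)])
    visited = {start}
    trace = []
    while queue:
        node, dist = queue.popleft()
        trace.append((node, dist))
        if node == target:
            return dist, trace
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append((nbr, dist + 1))
    return -1, trace

# Hypothetical four node graph: edges 0-1, 0-2, 1-3, 2-3
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
dist, trace = bfs_trace(graph, 0, 3)
# dist == 2; pops come out level by level: (0,0), (1,1), (2,1), (3,2)
```

If the recorded distances ever decrease between pops, the invariant is broken and the bug is in your queue handling, which is exactly the kind of error the mental dry run is meant to catch.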

You submit. It passes. Eight minutes still on the timer. Not because you rushed, but because identification took two minutes instead of fifteen, and the construction followed the invariant rather than trial and error.

Recognition under pressure matters more than recall in comfort.

That is what FAANG ready looks like. Not "I feel confident." A repeatable, observable performance.

A second pass on a different family

Now you change family. Pick a variable sliding window problem you haven't seen. The constraint shape: a contiguous range over an array or string, a flexible boundary that grows and shrinks, an objective that asks for the longest, shortest, or maximum window meeting some condition.

The recognition again happens within the first three minutes, before any code. The constraints match the variable sliding window's three triggers, you can name the invariant the window has to maintain, and you write the same expand then contract skeleton you'd write for any problem in the family.

def variable_window(arr, init_state, include, exclude, valid):
    # Generic expand-then-contract skeleton: the four helpers are
    # supplied per problem; the loop structure never changes.
    left = 0
    best = 0
    state = init_state()
    for right in range(len(arr)):
        state = include(state, arr[right])      # expand the right edge
        while not valid(state):
            state = exclude(state, arr[left])   # contract from the left
            left += 1
        best = max(best, right - left + 1)
    return best

You fill in init_state, include, exclude, and valid for the specific problem. The skeleton stays the same. That's the marker of a pattern that's actually generalised in your head: you write the skeleton first, then specialise.
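As one concrete specialisation (the problem choice is mine, not from the exercise): longest substring without repeating characters. The comments mark where each of the skeleton's four roles lands.

```python
from collections import Counter

def longest_unique_substring(s):
    left = 0
    best = 0
    counts = Counter()                  # init_state: char frequencies
    for right in range(len(s)):
        counts[s[right]] += 1           # include the right character
        while counts[s[right]] > 1:     # valid fails: a char repeats
            counts[s[left]] -= 1        # exclude the left character
            left += 1
        best = max(best, right - left + 1)
    return best

longest_unique_substring("pwwkew")  # → 3 ("wke")
```

Nothing about the loop changed; only the state and the validity condition did. That's the skeleton-first, specialise-second order the test is checking for.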

When a pattern generalises, you stop memorising solutions and start specialising frameworks.

If you pass this one too, you've got two of three. One more, on a third family you haven't over practised, decides it.

When you fail the test

Most engineers don't pass all three on the first attempt. That's expected. A clean three for three on the first try usually means the families were too comfortable.

  • One family failed. You know the pattern at a surface level but haven't internalised the identification triggers or the construction skeleton. Go back to the foundational material for that family. Don't just grind more problems in it. Study what makes the pattern applicable, the constraint combinations that point to it, the invariant every problem in the family shares. Once you can articulate that without notes, retest with a different unseen problem.
  • Two families failed. You likely have one strong area where you've over practised and shallow gaps everywhere else. Common for engineers who spent months on arrays or trees because the work felt productive. Broaden the coverage. Spend focused time on the families where the understanding is thin.
  • All three failed. The preparation has been building near transfer without building far transfer. That's a method gap, not a talent gap. Shift from solving high volumes to studying fewer problems more deeply. Focus on identification and constraint analysis rather than just reaching a correct solution.

One catch. Don't retake the test with the same problems. A retest on a problem you've already seen, even if you failed it, measures recall instead of reasoning. Find a different unseen problem in the same family.

The four signals most engineers use instead

Before the test, it helps to name the signals you've probably been using, and why each one lies.

  • Problem count. Tells you nothing about how the problems were solved. Someone at 120 problems with genuine pattern depth outperforms someone at 400 who relied on hints for half of them.
  • Topic completion. You finished sliding window two months ago and haven't touched it since. Completion isn't retention. Spacing matters. The performance you had on week three doesn't survive without revisits.
  • Speed on familiar problems. Two Sum in two minutes feels like fluency. It's actually retrieval of a stored solution. The moment a novel problem looks similar but has different constraints, the speed evaporates.
  • Peer comparison. Your friend got into Google in six months. That ignores their background, their pattern coverage, how they practised, and what level they interviewed for.

The three family test bypasses all four. It doesn't care about the count, the completion checkmarks, the recall speed, or anyone else's timeline. It measures one thing: can you construct a solution to a novel problem, under pressure, across families.

Setting up the conditions yourself

The hardest part of the test is replicating real interview conditions. Solving at your desk, documentation a tab away, with the timer optional, doesn't replicate a forty five minute FAANG round.

What you actually need: a source of unseen mediums in the families you've studied (the variable sliding window lesson covers one family if you haven't been through it before), a way to hide the problem name (a friend covering it, or a browser extension that aliases the title), a kitchen timer set to twenty minutes, and the discipline to stop after two failed runs. The conditions matter. The test fails the moment you peek at hints or let the timer slide.
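If you'd rather not trust your own discipline, the cap and the clock can be self-enforced with a few lines. This is a hypothetical harness, not part of the protocol; the `AttemptBudget` name and defaults are mine.

```python
import time

class AttemptBudget:
    """Hard-caps execution attempts and tracks the interview clock."""
    def __init__(self, max_attempts=2, minutes=20):
        self.max_attempts = max_attempts
        self.attempts = 0
        self.deadline = time.monotonic() + minutes * 60

    def run(self, solution, *args):
        if self.attempts >= self.max_attempts:
            raise RuntimeError("Execution budget exhausted: count this as a fail.")
        if time.monotonic() > self.deadline:
            raise RuntimeError("Time is up: count this as a fail.")
        self.attempts += 1
        return solution(*args)
```

Route every test run through `budget.run(...)` and the third attempt, or the twenty first minute, ends the session for you.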

I keep noticing the same two things across engineers who run this test for the first time. The ones who fail one family and immediately know why aren't far from ready; they're a couple of weeks of focused study away. The ones who fail all three and panic into more volume usually need to step away from the problem bank for a week and re read the foundational material on identification and invariants. The diagnostic is more useful than the score.

If you're stuck in the "I've solved a lot but still don't know if I'm ready" phase, the problem usually isn't effort.

It's measurement.

I wrote a longer breakdown covering:

  • per family readiness signals
  • common failure patterns
  • and what to fix when one family collapses under pressure

Full breakdown here

What's the specific moment you knew you weren't ready yet? A particular problem, a frozen minute in a mock, or the cumulative shape of practice that just felt off?
