Two engineers prep for the same interview cycle. One solves more than 500 problems and freezes when a medium doesn't look like anything from the practice list. The other solves a fraction of that and works the problem out from scratch on the screen. The variable that decides which way it goes isn't volume.
TL;DR
- Volume practice builds memory of specific problems. It builds little of the skill of recognising which technique applies to a problem you've never seen.
- Learning science calls these near transfer (familiar problems) and far transfer (unfamiliar ones). Volume practice mostly trains near transfer.
- Real interviews test far transfer because the problem isn't labelled and won't match anything in the practice bank.
- Recognition is trainable. The training is explicit: read the problem for triggers, name the pattern, then write code.
- On the next problem you face, the recognition pass takes 30 seconds before you touch the keyboard. That pass is the gap.
Disclosure up front: I built Codeintuition, a structured learning platform for coding interviews. This post is about the recognition skill that decides whether volume practice converts to interview readiness, not about the product. The closing link goes to the longer version on my own blog.
What solving 500 problems actually trains you to do
Most coding interview advice tells you to solve more problems. There's a real reason this advice exists. Volume builds fluency: you stop being confused by syntax, you stop misreading constraints, and the interview minutes that used to go to typo-hunting start going to the actual problem. Past a few hundred problems, those skills are usually solid.
What volume doesn't reliably build is the ability to read a problem you've never seen and decide which technique applies. That's a different skill, and the way most engineers practise actively trains around it. You attempt the problem, get stuck, glance at the LeetCode tags, notice it says stack, and read a solution. The next time a stack problem shows up, you might recognise it. The time after, you might not. Recognition is being built by accident, not by design.
In an interview the tag is gone. The problem statement isn't labelled stack or sliding window or dynamic programming. So the skill that's been built by accident has to do work it's never been explicitly trained for. That's where the freeze comes from.
Near transfer vs far transfer
There's a useful piece of language for this from the learning sciences, and it's worth borrowing because it predicts the failure mode precisely.
Near transfer is when you can solve a problem because it resembles one you've practised. You solved Two Sum with a hash map. The interviewer hands you Two Sum II with a sorted array. The visible details changed but the underlying idea is close enough that recognition fires.
Far transfer is when you can solve a problem that doesn't resemble anything in your practice set, by reading the structure of the problem and constructing the approach from first principles. You see "minimum window of a string containing all characters of a target string" and you've never solved a window problem with a character constraint. You read the structure and construct a variable sliding window from scratch.
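To make that far-transfer read concrete, here's a minimal sketch of the approach it produces: a variable sliding window that expands on the right and shrinks on the left whenever the window covers the target. The function name and details are mine, a sketch rather than a reference implementation.

```python
from collections import Counter

def min_window(s: str, t: str) -> str:
    need = Counter(t)        # characters still required, with multiplicity
    missing = len(t)         # required characters not yet inside the window
    best_left, best_right = 0, 0   # best window so far; (0, 0) means none found
    left = 0
    for right, ch in enumerate(s):
        if need[ch] > 0:     # ch was still required
            missing -= 1
        need[ch] -= 1        # negative counts mark surplus characters
        if missing == 0:     # window covers t: shrink surplus off the left
            while need[s[left]] < 0:
                need[s[left]] += 1
                left += 1
            if best_right == 0 or right + 1 - left < best_right - best_left:
                best_left, best_right = left, right + 1
    return s[best_left:best_right]
```

For instance, min_window("ADOBECODEBANC", "ABC") returns "BANC". The point isn't the code; it's that the three features in the statement, a contiguous window, a coverage condition, and a minimisation on length, are what told you to write it.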
Grinding 500 problems builds near transfer well. It does much less for far transfer. Whether far transfer is reliably teachable is a debate the research on transfer of learning hasn't fully settled, but the evidence does support one specific intervention: explicit instruction in when and why a method applies, not just how to execute it, produces more transfer than practice alone.
A worked example: monotonic stack on stock span
Here's the kind of problem where the gap shows up. You're given an array of stock prices. For each day, find its span: the number of consecutive days ending at that day, the day itself included, whose prices are less than or equal to that day's price. The constraints don't mention "stack" anywhere.
If you've practised the technique without practising the recognition, you'll probably try a nested loop first, hit O(n^2), and try to remember which problems you've seen with this shape. If recognition is trained, you read the problem's structure and three triggers fire:
- "For each day" means you need an answer per element. Element-wise, not aggregate.
- "Consecutive days before it" means a directional search to the left.
- "A price less than or equal to" means a comparison condition between the current element and what's to its left.
Three triggers point at one technique: the nearest previous greater element pattern, implemented with a monotonic stack. The triggers are what you read off the problem statement. The pattern is what you write in code.
```python
def stock_span(prices):
    stack = []    # indices of days with strictly decreasing prices, bottom to top
    spans = []
    for i, price in enumerate(prices):
        # pop days priced at or below today; today's span absorbs them
        while stack and prices[stack[-1]] <= price:
            stack.pop()
        # the stack top is now the nearest previous day with a strictly greater price
        span = i - stack[-1] if stack else i + 1
        spans.append(span)
        stack.append(i)
    return spans
```
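As a quick check, stock_span([100, 80, 60, 70, 60, 75, 85]) returns [1, 1, 1, 2, 1, 4, 6]. Each index is pushed and popped at most once, so the whole pass runs in O(n) rather than the nested loop's O(n^2).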
The implementation is short and well documented across the internet. The bottleneck isn't writing the loop. It's noticing that "for each element, find the previous element satisfying a comparison condition" is a single recognisable shape, and that the shape always calls for this technique. That's the recognition skill, and it's the part most prep doesn't train.
What a recognition drill looks like
Before solving a problem, do a 30 to 60 second pass on the statement alone. No code. The output of the pass is a name: which pattern, and why.
For each technique you've learned, you should be able to name the two or three observable features of a problem statement that signal it applies. A few common ones from the techniques most coding interviews lean on:
- Variable sliding window: contiguous subarray or substring, plus a condition that holds across the window, plus an optimisation on window length. When all three appear, it's almost always this technique.
- Two pointers: a sorted array (or one you can sort) and a search for a pair or triple satisfying a target. The pointers move from the ends inward; a short sketch follows this list.
- Monotonic stack: per element answer, plus a directional search left or right, plus a comparison condition. Stock span fits. Next greater element fits.
- Backtracking: enumerate combinations or paths under a constraint, with the option to abandon a partial candidate when the constraint is violated.
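To show what the drill's output buys you, here's the two pointers trigger turned into code: a minimal sketch of the standard ends-inward scan, with a function name of my own choosing.

```python
def pair_with_sum(nums, target):
    # assumes nums is sorted in ascending order
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        total = nums[lo] + nums[hi]
        if total == target:
            return lo, hi      # indices of a matching pair
        if total < target:
            lo += 1            # sum too small: move the left pointer inward
        else:
            hi -= 1            # sum too large: move the right pointer inward
    return None                # no pair sums to target
```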
When you start a problem, the drill is: read the constraints, list the features you see, name the technique, then start coding. If you can't name a technique in 60 seconds, you don't start coding. You re-read the constraints with the trigger checklists in front of you and try again. If you still can't, mark the problem and learn the missing feature before moving on.
The first time, this feels artificial. After 30 problems, it stops feeling artificial because the features start firing automatically as you read.
The other thing prep tends to skip
A bit of honesty before the close. Recognition isn't the only thing prep tends to skip. The conditions you practise under matter at least as much, and the default LeetCode loop is much friendlier than an interview.
The default loop has the title visible, the difficulty visible, the company tags visible, the discussion section a click away, and no clock. None of those are present in the actual interview. If your reads of the problem have always been informed by the tag, the moment the tag disappears the read gets harder.
A reasonable practice protocol for the last few weeks before an interview cycle:
- Cover the problem name with a sticky note before reading the constraints. Many problem names give away the technique.
- Set a 25 minute clock per medium. If you blow past it, that's data: the problem went to "still working" rather than "solved", and the next pass focuses on what slowed you down.
- Skip the discussion section on the first attempt. Read it after, only if your attempt didn't converge.
- Mix techniques. Three sliding window problems in a row trains nothing about recognition because the third is obvious by inertia. Three problems from different techniques force you to read the constraints.
You can run all of this against any problem bank. It's a practice protocol, not a feature.
When grinding more is the right move
Volume practice is the right call in two cases. First, when your fundamentals on a specific data structure or algorithm are weak enough that you can't implement it cleanly even with the technique handed to you. There, more reps fix the bottleneck directly. Second, when you're a few weeks out from an interview cycle and the goal is speed, not depth. The features you've already internalised get faster with reps.
The cases where volume stops working are different. When implementation is solid but recognition under unfamiliar problems is the bottleneck, more reps don't move the needle because they aren't training what's broken.
What the next problem looks like with this trained
Six months out, you open an unfamiliar problem on a phone screen. The description mentions "for each element, count how many elements to the right are strictly greater before reaching one that's less or equal." Nothing in your practice list looks like it.
Reading for features gets you somewhere. "For each element" is element-wise. "To the right" is directional. "Strictly greater" is a comparison condition. All three match a monotonic stack, written from the right toward the left. You're 30 seconds in and writing the loop.
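A minimal sketch of what that read produces, assuming the count stops at the first element less than or equal to the current one; the function name is mine:

```python
def count_greater_to_right(nums):
    n = len(nums)
    res = [0] * n
    stack = []   # indices to the right; their values grow toward the top
    for i in range(n - 1, -1, -1):      # scan from the right toward the left
        # strictly greater elements can't be the stopping point: pop them
        while stack and nums[stack[-1]] > nums[i]:
            stack.pop()
        # the stack top is the nearest element <= nums[i]; default to the array's end
        next_leq = stack[-1] if stack else n
        res[i] = next_leq - i - 1       # everything in between is strictly greater
        stack.append(i)
    return res
```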
That's a trained skill. Volume helped; structure helped more; recognition is what closed the gap.
If unfamiliar mediums still freeze you even after hundreds of problems, the bottleneck usually isn't implementation.
It's recognition.
I wrote a longer breakdown covering:
- feature checklists for six major interview patterns
- near transfer vs far transfer
- and the exact recognition drills that made unfamiliar problems feel solvable again
Which technique's features finally clicked for you only after seeing it on a problem the explanations had skipped?