The most common DSA prep mistake I see at the FAANG band isn't undertraining or overtraining. It's training as if Amazon, Google, and Meta all sample from the same pattern bag. They don't. After mapping company tags across 450+ handpicked interview problems, the gap between Amazon's pattern footprint and Google's is larger than most candidates expect, and materially changes how prep time should be allocated.
TL;DR: Amazon spans 11+ distinct DSA pattern families, the broadest of any major company. Google has narrower coverage but a distinctive emphasis on predicate search (binary search on the answer space, not on a sorted array). Seven patterns appear at six or more companies and form the universal baseline that every FAANG candidate has to own before specialising.
The seven patterns every FAANG round assumes you can run
Before company-specific differences matter, there's a universal baseline. Seven patterns appear at six or more major companies in the data set. Skipping any of them, regardless of target company, is a structural error in the prep plan.
Company-specific prep matters. But only after the universal baseline is genuinely automatic.
The most heavily tagged is LRU Cache, at 19 companies. That's every FAANG member, plus DoorDash, Oracle, Zoom, PayPal, Twilio, TikTok, eBay, Yandex, LinkedIn, Zillow, Intuit, and Cloudera. The reason it spreads that far is structural: a single problem tests hash table mechanics, doubly linked list manipulation, and design composition at once. No alternative problem covers the same combination at the same difficulty.
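For reference, here's a minimal sketch of what that combination looks like in code, assuming the standard get/put interface: a hash map for O(1) key lookup, a doubly linked list for recency order, and sentinel head/tail nodes to keep the list surgery simple.

class Node:
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.map = {}                                 # key -> node
        self.head, self.tail = Node(), Node()         # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        # Unlink a node from wherever it currently sits.
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):
        # Insert right after head: the most-recently-used slot.
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add_front(node)
        return node.val

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.cap:
            lru = self.tail.prev                      # least recently used
            self._remove(lru)
            del self.map[lru.key]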
The remaining six universal patterns:
- Counting (hash table): 9 companies. Frequency counts, anagram grouping, character buckets. Most broadly tested hash table pattern.
- Backtracking: 8 companies. Generate parentheses through N-Queens. Tests recursive state and pruning.
- Prefix Sum: 8 companies. Range queries, subarray sums, equilibrium problems. Almost always undertaught relative to its testing rate.
- Binary Search: 7 companies. The classic sorted array variant. Includes rotated array and boundary problems.
- Fixed Sliding Window: 7 companies. Window of fixed size with frequency tracking inside it.
- Variable Sliding Window: 6 companies. Triggered differently from the fixed variant: the window contracts when a condition breaks, not when a counter runs out (see the sketch after this list).
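To make the fixed-versus-variable trigger distinction concrete, here's a sketch using two standard stand-in problems (max sum of a size-k window, longest substring without repeating characters); the function names are mine, not tags from the data set.

def max_sum_fixed_window(nums, k):
    # Fixed window: slide by a counter; one element enters, one leaves.
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]
        best = max(best, window)
    return best

def longest_unique_substring(s):
    # Variable window: expand the right edge greedily, contract the left
    # edge only when the "no repeats" condition breaks. The trigger is a
    # condition, not a counter.
    seen = set()
    left = best = 0
    for right, ch in enumerate(s):
        while ch in seen:
            seen.remove(s[left])
            left += 1
        seen.add(ch)
        best = max(best, right - left + 1)
    return best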
Prefix Sum is the one that gets shortchanged most. Eight companies tag it. Most prep plans treat it as a footnote because it doesn't have a flagship problem the way LRU Cache or Two Sum do. That's a mismatch worth correcting: it gives disproportionate company coverage relative to the time required to learn it well.
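As a quick illustration of the coverage-per-effort argument, here's the common "count subarrays summing to k" shape (my choice of example, not a specific company tag): one pass, a running prefix sum, and a hash map of counts, which is also why the pattern pairs so naturally with Counting.

from collections import defaultdict

def count_subarrays_with_sum(nums, k):
    # prefix_counts[p] = how many prefixes seen so far sum to p.
    prefix_counts = defaultdict(int)
    prefix_counts[0] = 1
    running, total = 0, 0
    for x in nums:
        running += x
        # A subarray ending here sums to k iff some earlier prefix
        # equals running - k.
        total += prefix_counts[running - k]
        prefix_counts[running] += 1
    return total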
The highest ROI interview patterns are often the ones without famous flagship problems.
Predicate search: the pattern Google tests that nobody calls by name
Google shares the universals, but the pattern that distinguishes Google interviews from the rest is predicate search. This is binary search applied not to a sorted array but to the answer space itself. You define a range of possible answers, then binary search that range by checking feasibility at each midpoint.
The classic shape is the minimum ship capacity problem. You're given package weights, in a fixed order, and D days. Find the smallest ship capacity that lets you ship everything, in order, within D days.
Instead of trying every capacity from 1 upward, you frame the search range:
- Low: the heaviest single package (some day has to carry it whole)
- High: the sum of all weights (one giant day)
Then binary search inside that range:
def min_ship_capacity(weights, days):
    # Search space: every feasible capacity lies between the heaviest
    # single package and the sum of all packages.
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        # Feasibility check: greedily pack days at capacity mid.
        day_count, current_load = 1, 0
        for w in weights:
            if current_load + w > mid:
                day_count += 1
                current_load = 0
            current_load += w
        if day_count <= days:
            hi = mid          # mid works; try something smaller
        else:
            lo = mid + 1      # mid fails; the answer is larger
    return lo
The mechanical shift from classic binary search is small, but the mental shift is large. In classic binary search, the search space is given to you (a sorted array). In predicate search, you construct the search space from the constraints, then run binary search on your own abstraction. You're searching for the minimum value that satisfies a feasibility predicate, not a value already sitting somewhere in the input.
LeetCode tags these problems "Binary Search" alongside sorted array problems. The tag isn't wrong, but it hides the model shift. If your only model for binary search is "find a value in a sorted array," predicate search will freeze you because the starting assumption doesn't match. Google asks predicate search variants more than any other company in the data set: punctual arrival speed, trip completion frenzy, calculate square root, capacity to ship within D days, all the same shape.
Most candidates know binary search on arrays. Far fewer recognise when the answer itself is the search space.
A useful test before practising binary search problems: can you state, in one sentence, what the search range is and what the feasibility check is? If yes, you're doing predicate search. If you reach for mid = (left + right) // 2 without that explicit framing, you're guessing.
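One way to force that framing on yourself, sketched as a rough template rather than a prescription (the helper names are mine): keep the feasibility predicate as its own function, so the range and the check have to be written down before any midpoint arithmetic happens.

def min_satisfying(lo, hi, feasible):
    # Smallest value in [lo, hi] with feasible(value) True, assuming the
    # predicate is monotone: once true, true for every larger value.
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# The ship problem restated in this shape (same logic as the loop above):
def min_ship_capacity(weights, days):
    def can_ship(cap):
        day_count, load = 1, 0
        for w in weights:
            if load + w > cap:
                day_count, load = day_count + 1, 0
            load += w
        return day_count <= days
    return min_satisfying(max(weights), sum(weights), can_ship)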
Why Amazon's pattern coverage is wider than anyone else's
Amazon's problem set covers Counting, Fixed and Variable Sliding Window, Prefix Sum, LRU Cache, Randomised Set, Binary Search, 2D Binary Search, Staircase Search, Maximum Predicate Search, Queue Design, and Backtracking. That's 11+ pattern families against Google's 7-8 and Meta's 6-7.
The practical effect on prep is concrete. A Google candidate who goes deep on predicate search, counting, and graphs covers a meaningful slice of what they'll actually see. An Amazon candidate who goes equally deep on three families has blind spots across the other eight. The "study three patterns deeply and hope you land in your zone" strategy has a lower hit rate at Amazon than anywhere else.
The Bar Raiser round amplifies the breadth requirement. The Bar Raiser is an interviewer pulled from outside the hiring team, with veto power, and they're not bound to the team's domain. They can sample any pattern family Amazon tests. If the round happens to land on a category you skipped, the heuristic of "the team usually asks X" doesn't catch you.
Going deep still matters. But at Amazon the breadth axis carries more weight than at the narrower companies, and the prep allocation should reflect that.
Meta, Microsoft, Apple: where the rest of the picture sits
Meta concentrates on Sliding Window (both Fixed and Variable), Prefix Sum, Counting, and design problems (LRU Cache, Randomised Set). Maximum Predicate Search shows up too. Compared to Google, Meta places less weight on searching variants and more on hash table depth and design implementation. If Amazon tests breadth and Google tests reasoning depth, Meta wants hash table fluency in a tighter band.
Microsoft is distinct for 2D Binary Search and Staircase Search. These are multi-dimensional search problems where the input is a sorted matrix and the search has to respect both axes. Most prep plans skip them entirely because they don't appear at Google or Meta. If Microsoft is your target, weight 2D search higher and variable sliding window lower.
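For reference, a minimal sketch of the staircase idea on a row- and column-sorted matrix, assuming that's the shape the Staircase Search tag refers to: start at the top-right corner and discard a full row or column on every step.

def staircase_search(matrix, target):
    # Assumes each row is sorted left-to-right and each column top-to-bottom.
    if not matrix or not matrix[0]:
        return False
    row, col = 0, len(matrix[0]) - 1      # start at the top-right corner
    while row < len(matrix) and col >= 0:
        val = matrix[row][col]
        if val == target:
            return True
        elif val > target:
            col -= 1                      # everything below in this column is larger
        else:
            row += 1                      # everything left in this row is smaller
    return False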
Apple tilts toward fundamentals tested deeply. Five Counting problems carry Apple tags, alongside Binary Search, Prefix Sum, and Backtracking. Apple's data signals a preference for candidates with strong basics over candidates with broad pattern coverage. The advanced design problems (Randomised Set, Queue Design) that Amazon tests don't appear in Apple's tags.
What a company doesn't test matters as much as what it does. Every hour spent on patterns your target company doesn't emphasise is an hour that could've gone to one they do. Google shows minimal tags for Queue Design and 2D Binary Search. Meta shows lower coverage of searching variants. Microsoft shows fewer tags for Variable Sliding Window. Apple barely tests advanced design at all. If your prep is allocated against the wrong company's profile, you're optimising the wrong axis.
Allocating prep time once you know your target
Three rules fall out of the data, in this order:
- Cover the universals first, regardless of target. LRU Cache, Counting, Backtracking, Prefix Sum, Binary Search, Fixed Sliding Window, Variable Sliding Window. Skip none of them.
- Specialise second. Predicate search for Google. Design breadth and Bar Raiser follow-up depth for Amazon. Design depth and sliding window fluency for Meta. 2D and staircase search for Microsoft. Counting and fundamentals depth for Apple.
- Cut what doesn't match. Targeting Google and not Amazon? Queue Design can wait. Targeting Meta and not Microsoft? Staircase Search can wait. Prep time is finite.
Two patterns are worth flagging again. Prefix Sum is undertaught relative to its 8-company tag count. Almost no popular prep plan gives it the time it deserves. And LRU Cache is the one problem you genuinely shouldn't walk into any FAANG round without being able to write cold, including the doubly linked list helpers. Nineteen company tags, no exceptions.
Most prep platforms organise problems by data structure (Hash Table, Tree, Graph), which buries the company-level signal entirely. The more useful filter is patterns by company tags. Once you can answer "which 6-8 patterns has my target company asked across 450+ problems," the allocation question gets concrete and fast.
I wrote a longer version on my own blog with the per-company breakdown and an FAQ on the patterns each company genuinely de-emphasises (Google's quiet skip on Queue Design, Apple's near absence of advanced design).
If you've interviewed at more than one of these five companies, which pattern came up that you weren't expecting based on the company's reputation?
Top comments (2)
This is one of the few FAANG prep analyses that actually treats interview preparation like a systems problem instead of a motivational speech 😄.
The distinction between “classic binary search” and “predicate search” is especially valuable because a lot of candidates know the syntax of binary search without understanding the abstraction behind it.
I also agree with the point about Prefix Sum being massively undertaught relative to how frequently it appears in interviews.
The Amazon breadth-vs-Google depth comparison explains why some candidates feel overprepared for one company and completely blindsided by another 😂.
Another underrated takeaway is that company prep should be pattern-distribution aware, not just “solve 300 random LeetCode questions and pray.”
The LRU Cache observation is painfully accurate too — at this point it’s basically the “Hello World” of senior interview prep.
Really solid breakdown of how different companies optimize for different dimensions of problem solving.
Can I get your contact info to discuss in detail?