<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: codeintuition</title>
    <description>The latest articles on DEV Community by codeintuition (@codeintuition).</description>
    <link>https://dev.to/codeintuition</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F13233%2F59681660-c664-447c-b4c3-86a05c4ce87c.png</url>
      <title>DEV Community: codeintuition</title>
      <link>https://dev.to/codeintuition</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/codeintuition"/>
    <language>en</language>
    <item>
      <title>I Mapped 450+ FAANG Problems: Amazon, Google, and Meta Don't Test the Same DSA Patterns</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Tue, 12 May 2026 15:23:46 +0000</pubDate>
      <link>https://dev.to/codeintuition/i-mapped-450-faang-problems-amazon-google-and-meta-dont-test-the-same-dsa-patterns-38df</link>
      <guid>https://dev.to/codeintuition/i-mapped-450-faang-problems-amazon-google-and-meta-dont-test-the-same-dsa-patterns-38df</guid>
      <description>&lt;p&gt;The most common DSA prep mistake I see at the FAANG band isn't undertraining or overtraining. It's training as if Amazon, Google, and Meta all sample from the same pattern bag. They don't. After mapping company tags across 450+ handpicked interview problems, the gap between Amazon's pattern footprint and Google's is larger than most candidates expect, and materially changes how prep time should be allocated.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Amazon spans 11+ distinct DSA pattern families, the broadest of any major company. Google has narrower coverage but a distinctive emphasis on &lt;code&gt;predicate search&lt;/code&gt; (binary search on the answer space, not on a sorted array). Seven patterns appear at six or more companies and form the universal baseline that every FAANG candidate has to own before specialising.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The seven patterns every FAANG round assumes you can run
&lt;/h2&gt;

&lt;p&gt;Before company-specific differences matter, there's a universal baseline. Seven patterns appear at six or more major companies in the data set. Skipping any of them, regardless of target company, is a structural error in the prep plan.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Company-specific prep matters. But only after the universal baseline is genuinely automatic.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The most heavily tagged is &lt;code&gt;LRU Cache&lt;/code&gt;, at 19 companies. That's every FAANG member, plus DoorDash, Oracle, Zoom, PayPal, Twilio, TikTok, eBay, Yandex, LinkedIn, Zillow, Intuit, and Cloudera, among others. The reason it spreads that far is structural: a single problem tests hash table mechanics, doubly linked list manipulation, and design composition at once. No alternative problem covers the same combination at the same difficulty.&lt;/p&gt;

&lt;p&gt;The remaining six universal patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Counting (hash table):&lt;/strong&gt; 9 companies. Frequency counts, anagram grouping, character buckets. Most broadly tested hash table pattern.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backtracking:&lt;/strong&gt; 8 companies. Generate parentheses through N-Queens. Tests recursive state and pruning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prefix Sum:&lt;/strong&gt; 8 companies. Range queries, subarray sums, equilibrium problems. Almost always undertaught relative to its testing rate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binary Search:&lt;/strong&gt; 7 companies. The classic sorted array variant. Includes rotated array and boundary problems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed Sliding Window:&lt;/strong&gt; 7 companies. Window of fixed size with frequency tracking inside it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Variable Sliding Window:&lt;/strong&gt; 6 companies. Different triggers from fixed: a contracting mechanism that has to fire on a condition, not on a counter.&lt;/li&gt;
&lt;/ul&gt;
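&lt;p&gt;The fixed/variable split is easiest to see side by side. A minimal sketch of both (function names and inputs are illustrative, not tied to any specific problem):&lt;/p&gt;

```python
def max_fixed_window_sum(nums, k):
    # Fixed window: slides on a counter; the window size never changes.
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # admit the new element, drop the oldest
        best = max(best, window)
    return best

def shortest_subarray_at_least(nums, target):
    # Variable window: expand right, contract left whenever a condition fires.
    left, total, best = 0, 0, float("inf")
    for right, value in enumerate(nums):
        total += value
        while total >= target:           # the trigger is a condition, not a counter
            best = min(best, right - left + 1)
            total -= nums[left]
            left += 1
    return 0 if best == float("inf") else best
```

&lt;p&gt;The contracting &lt;code&gt;while&lt;/code&gt; loop is the structural difference the list above describes: the fixed variant never shrinks, the variable one must.&lt;/p&gt;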

&lt;p&gt;Prefix Sum is the one that gets shortchanged most. Eight companies tag it. Most prep plans treat it as a footnote because it doesn't have a flagship problem the way LRU Cache or Two Sum do. That's a mismatch worth correcting: it gives disproportionate company coverage relative to the time required to learn it well.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The highest ROI interview patterns are often the ones without famous flagship problems.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
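&lt;p&gt;For readers who haven't drilled it, the core of the pattern fits in a few lines (a sketch; names are illustrative):&lt;/p&gt;

```python
def build_prefix(nums):
    # prefix[i] holds the sum of nums[:i], so prefix[0] == 0.
    prefix = [0]
    for value in nums:
        prefix.append(prefix[-1] + value)
    return prefix

def range_sum(prefix, left, right):
    # Sum of nums[left..right] inclusive: O(1) per query after O(n) prep.
    return prefix[right + 1] - prefix[left]
```

&lt;p&gt;One O(n) pass buys O(1) answers to any number of range queries, which is the leverage that keeps eight companies reaching for it.&lt;/p&gt;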

&lt;h2&gt;
  
  
  Predicate search: the pattern Google tests that nobody calls by name
&lt;/h2&gt;

&lt;p&gt;Google shares the universals, but the pattern that distinguishes Google interviews from the rest is &lt;code&gt;predicate search&lt;/code&gt;. This is binary search applied not to a sorted array but to the answer space itself. You define a range of possible answers, then binary search that range by checking feasibility at each midpoint.&lt;/p&gt;

&lt;p&gt;The classic shape is the &lt;a href="https://www.codeintuition.io/courses/searching/ke266Y-PVh5hOSYdfs0gV" rel="noopener noreferrer"&gt;minimum ship capacity problem&lt;/a&gt;. You're given package weights and &lt;code&gt;D&lt;/code&gt; days. Find the smallest ship capacity that lets you ship everything in &lt;code&gt;D&lt;/code&gt; days.&lt;/p&gt;

&lt;p&gt;Instead of trying every capacity from 1 upward, you frame the search range:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low:&lt;/strong&gt; the heaviest single package (you have to lift it on day one)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High:&lt;/strong&gt; the sum of all weights (one giant day)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then binary search inside that range:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;min_ship_capacity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;lo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;day_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;current_load&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;current_load&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;mid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;day_count&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
                &lt;span class="n"&gt;current_load&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
            &lt;span class="n"&gt;current_load&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;day_count&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;hi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mid&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;lo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The mechanical shift from classic binary search is small, but the mental shift is large. In classic binary search, the search space is given to you (a sorted array). In predicate search, you construct the search space from the constraints, then run binary search on your own abstraction. You're searching for the minimum value that satisfies a feasibility predicate, not a value already sitting somewhere in the input.&lt;/p&gt;

&lt;p&gt;LeetCode tags these problems "Binary Search" alongside sorted array problems. The tag isn't wrong, but it hides the model shift. If your only model for binary search is "find a value in a sorted array," predicate search will freeze you because the starting assumption doesn't match. Google asks predicate search variants more than any other company in the data set: punctual arrival speed, trip completion frenzy, calculate square root, capacity to ship within D days, all the same shape.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Most candidates know binary search on arrays. Far fewer recognise when the answer itself is the search space.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A useful test before practising binary search problems: can you state, in one sentence, what the search range is and what the feasibility check is? If yes, you're doing predicate search. If you reach for &lt;code&gt;mid = (left + right) // 2&lt;/code&gt; without that explicit framing, you're guessing.&lt;/p&gt;
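&lt;p&gt;That framing can be written down once as a template: name the range, name the predicate, and the loop never changes. A sketch (the &lt;code&gt;feasible&lt;/code&gt; callback stands for whatever check the problem defines):&lt;/p&gt;

```python
def min_satisfying(lo, hi, feasible):
    # Smallest value in [lo, hi] for which feasible(value) is True,
    # assuming feasible is monotone: False, ..., False, True, ..., True.
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # mid works; something smaller might too
        else:
            lo = mid + 1    # mid fails; the answer is strictly above
    return lo
```

&lt;p&gt;The ship capacity problem is this template with &lt;code&gt;feasible&lt;/code&gt; set to "can everything ship in D days at this capacity?"&lt;/p&gt;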

&lt;h2&gt;
  
  
  Why Amazon's pattern coverage is wider than anyone else's
&lt;/h2&gt;

&lt;p&gt;Amazon's problem set covers Counting, Fixed and Variable Sliding Window, Prefix Sum, LRU Cache, Randomised Set, Binary Search, 2D Binary Search, Staircase Search, Maximum Predicate Search, Queue Design, and Backtracking. That's 11+ pattern families against Google's 7-8 and Meta's 6-7.&lt;/p&gt;

&lt;p&gt;The practical effect on prep is concrete. A Google candidate who goes deep on predicate search, counting, and graphs covers a meaningful slice of what they'll actually see. An Amazon candidate who goes equally deep on three families has blind spots across the other eight. The "study three patterns deeply and hope you land in your zone" strategy has a lower hit rate at Amazon than anywhere else.&lt;/p&gt;

&lt;p&gt;The Bar Raiser round amplifies the breadth requirement. The Bar Raiser is an interviewer pulled from outside the hiring team, with veto power, and they're not bound to the team's domain. They can sample any pattern family Amazon tests. If the round happens to land on a category you skipped, the heuristic of "the team usually asks X" doesn't catch you.&lt;/p&gt;

&lt;p&gt;Going deep still matters. But at Amazon the breadth axis carries more weight than at the narrower companies, and the prep allocation should reflect that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meta, Microsoft, Apple: where the rest of the picture sits
&lt;/h2&gt;

&lt;p&gt;Meta concentrates on Sliding Window (both Fixed and Variable), Prefix Sum, Counting, and design problems (LRU Cache, Randomised Set). Maximum Predicate Search shows up too. Compared to Google, Meta places less weight on searching variants and more on hash table depth and design implementation. If Amazon tests breadth and Google tests reasoning depth, Meta wants hash table fluency in a tighter band.&lt;/p&gt;

&lt;p&gt;Microsoft is distinct for &lt;code&gt;2D Binary Search&lt;/code&gt; and &lt;code&gt;Staircase Search&lt;/code&gt;. These are multi-dimensional search problems where the input is a sorted matrix and the search has to respect both axes. Most prep plans skip them entirely because they don't appear at Google or Meta. If Microsoft is your target, weight 2D search higher and variable sliding window lower.&lt;/p&gt;

&lt;p&gt;Apple tilts toward fundamentals tested deeply. Five Counting problems carry Apple tags, alongside Binary Search, Prefix Sum, and Backtracking. Apple's data signals a preference for candidates with strong basics over candidates with broad pattern coverage. The advanced design problems (Randomised Set, Queue Design) that Amazon tests don't appear in Apple's tags.&lt;/p&gt;

&lt;p&gt;What a company &lt;em&gt;doesn't&lt;/em&gt; test matters as much as what it does. Every hour spent on patterns your target company doesn't emphasise is an hour that could've gone to one they do. Google shows minimal tags for Queue Design and 2D Binary Search. Meta shows lower coverage of searching variants. Microsoft shows fewer tags for Variable Sliding Window. Apple barely tests advanced design at all. If your prep is allocated against the wrong company's profile, you're optimising the wrong axis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Allocating prep time once you know your target
&lt;/h2&gt;

&lt;p&gt;Three rules fall out of the data, in this order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cover the universals first, regardless of target.&lt;/strong&gt; LRU Cache, Counting, Backtracking, Prefix Sum, Binary Search, Fixed Sliding Window, Variable Sliding Window. Skip none of them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialise second.&lt;/strong&gt; Predicate search for Google. Design breadth and Bar Raiser follow up depth for Amazon. Design depth and sliding window fluency for Meta. 2D and staircase search for Microsoft. Counting and fundamentals depth for Apple.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cut what doesn't match.&lt;/strong&gt; Targeting Google and not Amazon? Queue Design can wait. Targeting Meta and not Microsoft? Staircase Search can wait. Prep time is finite.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two patterns are worth flagging again. Prefix Sum is undertaught relative to its 8-company tag count. Almost no popular prep plan gives it the time it deserves. And LRU Cache is the one problem you genuinely shouldn't walk into any FAANG round without being able to write cold, including the doubly linked list helpers. Nineteen company tags, no exceptions.&lt;/p&gt;

&lt;p&gt;Most prep platforms organise problems by data structure (&lt;code&gt;Hash Table&lt;/code&gt;, &lt;code&gt;Tree&lt;/code&gt;, &lt;code&gt;Graph&lt;/code&gt;), which buries the company-level signal entirely. The more useful filter is patterns by company tags. Once you can answer "which 6-8 patterns does my target company actually test across 450+ problems," the allocation question gets concrete and fast.&lt;/p&gt;

&lt;p&gt;I wrote a longer version with the per company breakdown and the FAQ on &lt;a href="https://www.codeintuition.io/blogs/amazon-vs-google-coding-interview-dsa" rel="noopener noreferrer"&gt;my own blog&lt;/a&gt; covering the patterns each company genuinely deemphasises (Google's quiet skip on Queue Design, Apple's near absence of advanced design).&lt;/p&gt;

&lt;p&gt;If you've interviewed at more than one of these five companies, which pattern came up that you weren't expecting based on the company's reputation?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>leetcode</category>
      <category>career</category>
      <category>faang</category>
    </item>
    <item>
      <title>The Amazon Bar Raiser Isn't Testing Your Solution. It's Testing the Follow Up.</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Mon, 11 May 2026 14:32:57 +0000</pubDate>
      <link>https://dev.to/codeintuition/the-amazon-bar-raiser-isnt-testing-your-solution-its-testing-the-follow-up-fpn</link>
      <guid>https://dev.to/codeintuition/the-amazon-bar-raiser-isnt-testing-your-solution-its-testing-the-follow-up-fpn</guid>
      <description>&lt;p&gt;You're 15 minutes into an Amazon coding round. You've identified the pattern, your solution handles the examples, you're about to submit. Then the interviewer changes a constraint. Sorted input is now unsorted. The complexity ceiling drops from &lt;code&gt;O(n log n)&lt;/code&gt; to &lt;code&gt;O(n)&lt;/code&gt;. Your original solution collapses, and the rest of the round is now about whether you can reconstruct the approach from the invariant rather than from memory.&lt;/p&gt;

&lt;p&gt;That moment, the constraint shift after the working solution, is where the Bar Raiser's evaluation lives. Most prep treats it as a bonus question. The Bar Raiser treats it as the round.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Amazon's Bar Raiser is an independent interviewer with veto power who tests the same problems as a regular round but probes the follow up much harder. Two things make Amazon prep different from Google or Meta prep: the pattern coverage is broader (eleven plus pattern families vs. six or seven), and the round's evaluation centres on whether you can adapt your solution after a constraint changes, not on whether you can produce the optimal solution first.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What the Bar Raiser is, in one paragraph
&lt;/h2&gt;

&lt;p&gt;Amazon's Bar Raiser is an interviewer pulled from outside the hiring team. Their job is to evaluate the candidate against Amazon's company wide hiring bar, independent of how badly the team wants to fill the seat. They can veto a hire that everyone else approved. They can also push a borderline candidate over the line if they see genuine depth.&lt;/p&gt;

&lt;p&gt;In practice, you don't know which of your interviewers is the Bar Raiser. Amazon keeps that ambiguous through the loop. What you do know is that at least one round will be a Bar Raiser round, and that round usually includes a coding problem.&lt;/p&gt;

&lt;p&gt;The problem itself looks the same as any other Amazon technical screen. The difference is what happens after you produce a working answer. A standard interviewer often moves on once you've solved it. A Bar Raiser keeps going. They'll change a constraint, ask you to optimise, ask why your solution is correct (not just whether it works), or walk you through the failure case you didn't test.&lt;/p&gt;

&lt;p&gt;The initial solution gets you to the conversation. The follow up is the conversation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The solution gets you into the round. The adaptation decides the round.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Amazon's pattern coverage is wider
&lt;/h2&gt;

&lt;p&gt;Every FAANG company has favourite pattern categories. Google leans heavily on predicate search (binary search on the answer space). Meta concentrates on sliding window and design. Amazon doesn't have a tight central focus the way those two do. Its problem set spans counting, fixed and variable sliding window, prefix sum, LRU-style design, randomised set design, binary search and its 2D and staircase variants, queue design, and backtracking.&lt;/p&gt;

&lt;p&gt;That's eleven plus pattern families versus roughly seven or eight for Google and six or seven for Meta.&lt;/p&gt;

&lt;p&gt;What this means for prep is concrete: the strategy of going deep on two or three pattern families and hoping the round lands in your zone has a lower hit rate at Amazon. A Google candidate who deeply understands predicate search, counting, and graphs covers a meaningful fraction of what they'll see. An Amazon candidate who goes equally deep on three families has blind spots across the other eight.&lt;/p&gt;

&lt;p&gt;The Bar Raiser amplifies this. Because they're not bound to the hiring team's domain, they can pull a problem from any of those families. If the round happens to land on a category you skipped, the safety net of "the team usually asks X" doesn't catch you.&lt;/p&gt;

&lt;p&gt;The broader your pattern coverage, the smaller the chance any single Bar Raiser pick puts you in unfamiliar territory. Going deep matters too, but at Amazon the breadth axis carries more weight than at the narrower companies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the follow up is testing
&lt;/h2&gt;

&lt;p&gt;The interviewer isn't asking the follow up to be cruel. They're testing whether you understood the mechanism of your solution or just the solution itself. If your initial answer was a recalled template for the problem, the constraint change exposes that. If your initial answer was constructed from an invariant, the constraint change is something you can adapt to.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Strong candidates optimise from understanding. Weak candidates optimise from memory.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A worked example makes this concrete. LRU Cache is one of Amazon's most reliably tested problems. The standard solution composes a hash map (&lt;code&gt;O(1)&lt;/code&gt; key lookup) with a doubly linked list (&lt;code&gt;O(1)&lt;/code&gt; insert and removal at any node when you have the node reference). The hash map maps key to list node. Operations look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LRUCache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;capacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;capacity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;capacity&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;                     &lt;span class="c1"&gt;# key -&amp;gt; ListNode
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ListNode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;            &lt;span class="c1"&gt;# most recently used end
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tail&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ListNode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;            &lt;span class="c1"&gt;# least recently used end
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;next&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tail&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tail&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;prev&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_add_to_front&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ListNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_add_to_front&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;capacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;lru&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tail&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;prev&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lru&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;del&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;lru&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the answer the Bar Raiser expects you to land. Now the follow up: each entry has a TTL. Reads on an expired entry must return &lt;code&gt;-1&lt;/code&gt;. The cache must still operate at expected &lt;code&gt;O(1)&lt;/code&gt; per call.&lt;/p&gt;

&lt;p&gt;The candidate who memorised LRU starts redesigning from zero. The candidate who built it from the invariant asks one question first: which invariant did the doubly linked list maintain, and does that invariant still hold under TTL?&lt;/p&gt;

&lt;p&gt;The list maintained an ordering invariant: nodes near the head were used more recently than nodes near the tail. Eviction removed the tail because tail was the least recently used. With TTL, the eviction criterion changes from "least recently used" to "expired or least recently used." The list ordering by recency is still useful for the second clause, but the first clause needs a different signal. Two choices land:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lazy expiry: on &lt;code&gt;get&lt;/code&gt;, if the node's expiry timestamp has passed, treat it as a miss, remove the node, return &lt;code&gt;-1&lt;/code&gt;. On &lt;code&gt;put&lt;/code&gt;, when the cache is full, evict the tail as before. Simple, keeps the data structure unchanged. Worst case has expired entries lingering until they're touched.&lt;/li&gt;
&lt;li&gt;Eager expiry: maintain a second ordered structure keyed by expiry time (a min heap or a second linked list ordered by expiry). On every &lt;code&gt;get&lt;/code&gt; or &lt;code&gt;put&lt;/code&gt;, sweep expired entries off the front of the expiry structure. More memory, tighter behaviour on memory pressure.&lt;/li&gt;
&lt;/ul&gt;
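&lt;p&gt;The lazy option is the smaller diff. A self-contained sketch of it, with &lt;code&gt;collections.OrderedDict&lt;/code&gt; standing in for the hand-rolled list and an injectable clock so expiry is testable (illustrative, not a model answer):&lt;/p&gt;

```python
import time
from collections import OrderedDict

class LRUCacheWithTTL:
    def __init__(self, capacity, clock=time.monotonic):
        self.capacity = capacity
        self.clock = clock               # injectable so expiry can be simulated
        self.data = OrderedDict()        # key -> (value, expires_at); front = MRU

    def get(self, key):
        if key not in self.data:
            return -1
        value, expires_at = self.data[key]
        if expires_at <= self.clock():   # the added eviction predicate: expired?
            del self.data[key]           # lazy expiry: remove on touch
            return -1
        self.data.move_to_end(key, last=False)  # recency bump, unchanged from LRU
        return value

    def put(self, key, value, ttl):
        if key in self.data:
            del self.data[key]
        self.data[key] = (value, self.clock() + ttl)
        self.data.move_to_end(key, last=False)
        if len(self.data) > self.capacity:
            self.data.popitem()          # back of the dict is least recently used
```

&lt;p&gt;The recency invariant is untouched; only the miss check in &lt;code&gt;get&lt;/code&gt; grew a clause, which is exactly the articulation the round rewards.&lt;/p&gt;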

&lt;p&gt;The Bar Raiser doesn't necessarily care which one you pick. They care whether you can articulate which invariant you're preserving and which one you're modifying. The candidate who answers "the recency ordering still holds, I'm adding an eviction predicate that checks expiry first" has demonstrated something about reasoning that solving twenty more problems wouldn't have.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three signals the round is graded on
&lt;/h2&gt;

&lt;p&gt;Three threads run through what separates a "hire" rating from a "no hire" in a Bar Raiser coding round, and they correspond to the three places the round lives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clean reasoning under loose constraints.&lt;/strong&gt; The problem statement is often deliberately under specified. Inputs might or might not be sorted. Sizes might be zero. Negative numbers might appear. The candidate who asks clarifying questions up front, writes the assumptions down, and only then starts coding signals the kind of structured thinking Amazon values. Jumping into code without defining the problem space reads as someone operating from pattern memory rather than from understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptation across the follow up.&lt;/strong&gt; This is the LRU + TTL example above. The follow up isn't a new problem. It's the same problem with a constraint flipped. The candidate who can isolate which part of the previous solution depended on the changed constraint passes. The candidate who scraps and restarts shows that the previous solution was lookup, not construction.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The follow-up is usually testing whether your first solution was constructed or recalled.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Articulation of why.&lt;/strong&gt; Bar Raisers listen for the explanation that goes past "I'm using a hash map here." They want to hear what invariant the hash map maintains, what would break with an array, what tradeoff you accepted. A candidate who can talk through the load bearing reasoning is a candidate who would raise the team's average performance, which is the literal evaluation Amazon writes down.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four mistakes the round punishes
&lt;/h2&gt;

&lt;p&gt;These are not coding mistakes. They're prep mistakes that compound under the format.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solving silently.&lt;/strong&gt; You finish in fifteen minutes and present the answer. The Bar Raiser has nothing to evaluate beyond the output. Your reasoning made no observable trace. Narrate the structural choices as you go: why a hash map and not a sorted array, why an &lt;code&gt;O(n)&lt;/code&gt; first pass instead of trying for &lt;code&gt;O(log n)&lt;/code&gt;, what edge case you'll come back to. The round is graded on observable reasoning, not just output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chasing the optimal solution before the working one.&lt;/strong&gt; Some candidates spend twenty minutes thinking before writing a line, hunting for the optimal solution out of the gate. Then the follow up arrives at minute thirty five and there's no time for it. Get a correct &lt;code&gt;O(n^2)&lt;/code&gt; or &lt;code&gt;O(n log n)&lt;/code&gt; working first, confirm edge cases, then improve when the Bar Raiser asks. The follow up is part of the round's grade. Running out of time on it is a soft fail you didn't have to take.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treating the follow up as a new problem.&lt;/strong&gt; When the constraint changes, the instinct is to discard the previous solution and start fresh. That's almost always wrong. The follow up is testing whether you understand the structural relationship between the constraint and your approach. Ask which part of your solution depended on the constraint that just changed, then modify only that part. If the original used binary search because the input was sorted and the follow up removes the sort guarantee, you swap out the search mechanism (sort first, or fall back to a linear scan), not the rest of the algorithm.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skipping edge cases until prompted.&lt;/strong&gt; Empty input, single element, duplicate values, integer overflow boundaries. Walking through two of these unprompted before the interviewer asks signals thoroughness and pre empts the follow up that targets the case you skipped. Reactive edge case handling reads as a candidate who optimises for the example and not for the input space.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What this changes about how to prep
&lt;/h2&gt;

&lt;p&gt;The standard FAANG prep approach is to go deep on two or three pattern families and hope the interview lands in their zone. That works at companies with narrower coverage. For Amazon's breadth it leaves too many uncovered surfaces, and for the Bar Raiser in particular it leaves you with no recovery path if the chosen pattern isn't one you went deep on.&lt;/p&gt;

&lt;p&gt;Two adjustments help.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;interleaving over blocking&lt;/strong&gt;. Block practice is a week on sliding window, a week on dynamic programming, a week on graphs. Interleaved practice rotates between pattern families inside the same study session. Cognitive science research on interleaved practice consistently shows that mixing problem types during study improves your ability to identify which pattern applies to a new problem. Mixed practice is harder while you're doing it (you can't lean on the priming of "this week is sliding window, every problem is sliding window") and that's the point. The harder identification work during practice is the same identification work the Bar Raiser's follow up will require.&lt;/p&gt;
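&lt;p&gt;In practice, interleaving is nothing more than a shuffled queue instead of a sorted one. A trivial sketch (the function name and parameters are mine, assumed for illustration):&lt;/p&gt;

```python
import random

def interleaved_session(families, per_family=2, seed=None):
    # Block practice would hand you the families in order, one week each.
    # Shuffling the queue forces fresh pattern identification on every problem.
    queue = [f for f in families for _ in range(per_family)]
    random.Random(seed).shuffle(queue)
    return queue
```

&lt;p&gt;&lt;code&gt;interleaved_session(["sliding window", "graphs", "dp"])&lt;/code&gt; yields six problems mixed across three families, so no problem's family is primed by the one before it.&lt;/p&gt;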

&lt;p&gt;The second is &lt;strong&gt;practising the follow up explicitly&lt;/strong&gt;. After every problem you solve, ask: what's the constraint change that would invalidate this solution most? Implement the modified version. The constraint change is the rep that builds the adaptation skill, and adaptation is what the Bar Raiser scores. If LRU Cache is the practice problem, "add TTL" is the rep. If sliding window with at most K distinct characters is the problem, "K varies based on the substring contents" is the rep.&lt;/p&gt;

&lt;p&gt;You can do both of these without any platform: a pattern list, a notebook of constraint flips per problem, and a habit of running the second pass before moving on. The reason most prep skips it is that there's no immediate signal it's working. The signal arrives in the round when the Bar Raiser shifts a constraint and you reach for the invariant instead of the template.&lt;/p&gt;

&lt;p&gt;If you're preparing for Amazon and your practice still stops after finding the optimal solution, you're missing the part the Bar Raiser actually evaluates.&lt;/p&gt;

&lt;p&gt;I wrote a longer breakdown covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon's highest frequency pattern families&lt;/li&gt;
&lt;li&gt;how the Bar Raiser evaluates coding rounds&lt;/li&gt;
&lt;li&gt;and the follow up drills that build adaptation under constraint shifts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.codeintuition.io/blogs/amazon-bar-raiser-coding-interview" rel="noopener noreferrer"&gt;Full breakdown here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The day of the loop
&lt;/h2&gt;

&lt;p&gt;A few logistical things candidates often get wrong, worth noting because they're outside your technical preparation but inside the Bar Raiser's evaluation surface.&lt;/p&gt;

&lt;p&gt;The onsite loop is typically four or five rounds across a single day or consecutive virtual sessions. One is system design for senior candidates. Two or three are coding rounds. One is behavioural, focused on the Leadership Principles. The Bar Raiser participates in at least one of these, and you won't know which until after the process ends.&lt;/p&gt;

&lt;p&gt;Coding rounds are about forty five minutes. Five to ten minutes go to introductions and the problem statement. Twenty five to thirty minutes go to the initial solution. The last ten to fifteen are where the follow up happens. If your first solution takes thirty five minutes because you were reaching for the optimal answer from the start, you've already lost the follow up window.&lt;/p&gt;

&lt;p&gt;Each round is evaluated independently. A weak round one doesn't doom you if rounds two and three are strong, and the Bar Raiser considers performance across the loop. Don't spend the breaks replaying what just happened. Reset and walk into the next round.&lt;/p&gt;

&lt;p&gt;The debrief happens without you. All interviewers, including the Bar Raiser, meet afterward. If the hiring manager wants to extend an offer and the Bar Raiser objects, the Bar Raiser's veto stands.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three months from now
&lt;/h2&gt;

&lt;p&gt;The Bar Raiser round isn't a fundamentally different format. It's the same problem solving test with a higher evaluation standard and broader pattern coverage. Two things move the needle for it: pattern breadth across families that gets exercised by interleaved practice, and the ability to adapt your solution when the constraint shifts in the follow up.&lt;/p&gt;

&lt;p&gt;I wrote a longer version with the full pattern breakdown across Amazon's tested families on &lt;a href="https://www.codeintuition.io/blogs/amazon-bar-raiser-coding-interview" rel="noopener noreferrer"&gt;my own blog&lt;/a&gt;, including how the loop's other rounds factor into the Bar Raiser's overall judgement.&lt;/p&gt;

&lt;p&gt;What's a follow up question from a real interview that changed how you think about the problem itself, not just the solution?&lt;/p&gt;

</description>
      <category>career</category>
      <category>leetcode</category>
      <category>codinginterview</category>
      <category>aws</category>
    </item>
    <item>
      <title>Arrays vs Linked Lists: Why Big-O Alone Doesn't Explain the Tradeoff</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Sun, 10 May 2026 10:58:45 +0000</pubDate>
      <link>https://dev.to/codeintuition/arrays-vs-linked-lists-why-big-o-alone-doesnt-explain-the-tradeoff-foa</link>
      <guid>https://dev.to/codeintuition/arrays-vs-linked-lists-why-big-o-alone-doesnt-explain-the-tradeoff-foa</guid>
      <description>&lt;p&gt;The complexity table for arrays and linked lists is the most memorised, least understood reference in interview prep. Arrays give &lt;code&gt;O(1)&lt;/code&gt; access. Linked lists give &lt;code&gt;O(1)&lt;/code&gt; insertion. That's the answer on every flashcard. The table doesn't lie. It's incomplete. The missing piece, once you have it, makes every entry in that table feel obvious.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Arrays and linked lists differ for one reason: contiguous memory vs scattered memory.&lt;/li&gt;
&lt;li&gt;Every entry in the complexity table follows from that single fact.&lt;/li&gt;
&lt;li&gt;Two &lt;code&gt;O(n)&lt;/code&gt; operations aren't equal in practice. A million element array scan can often run an order of magnitude faster than the equivalent linked list traversal because of cache locality.&lt;/li&gt;
&lt;li&gt;Linked lists win when your algorithm already holds a pointer to the mutation point. They lose when you have to traverse to find it first.&lt;/li&gt;
&lt;li&gt;The right question on any problem isn't "which has better Big-O?" It's "which operation does this algorithm perform most?"&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What the table tells you, and where it stops
&lt;/h2&gt;

&lt;p&gt;The standard table has six or seven rows: access by index, search, insert at start, insert at end, insert at known position, delete at known position, memory overhead. Each row has two cells. Most engineers memorise the cells and stop there.&lt;/p&gt;

&lt;p&gt;The table is correct. Arrays really do give you &lt;code&gt;O(1)&lt;/code&gt; index access. Linked lists really do give you &lt;code&gt;O(1)&lt;/code&gt; insertion at a known node. The cells aren't wrong. They describe outcomes without explaining why those outcomes happen, which means you can't generalise to a problem you haven't seen.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Memorising complexity tables teaches conclusions. Understanding memory layout teaches reasoning.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If your interviewer asks "which would you use here?" on an unfamiliar problem, a memorised table doesn't help. You need the mechanism underneath.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory layout is the root cause
&lt;/h2&gt;

&lt;p&gt;Both data structures store the same thing: a sequence of values. They differ in exactly one way: where those values live.&lt;/p&gt;

&lt;p&gt;An array is a single block of contiguous memory. Element &lt;code&gt;0&lt;/code&gt; sits at the base address. Element &lt;code&gt;1&lt;/code&gt; sits at &lt;code&gt;base + element_size&lt;/code&gt;. Element &lt;code&gt;i&lt;/code&gt; sits at &lt;code&gt;base + (i * element_size)&lt;/code&gt;. The address of any element is a single arithmetic expression. That arithmetic is what makes index access &lt;code&gt;O(1)&lt;/code&gt;. It's also what makes insertion in the middle &lt;code&gt;O(n)&lt;/code&gt;: to make room at index &lt;code&gt;2&lt;/code&gt; of a 5 element array, every element from index &lt;code&gt;2&lt;/code&gt; onward has to shift right.&lt;/p&gt;
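&lt;p&gt;The shift cost can be made concrete in a few lines (a sketch, using a Python list to stand in for a contiguous block):&lt;/p&gt;

```python
def insert_at(arr, i, value):
    # Make room at the end, then shift every element from index i
    # onward one slot right: O(n) moves for a single insert.
    arr.append(None)
    for j in range(len(arr) - 1, i, -1):
        arr[j] = arr[j - 1]
    arr[i] = value
    return arr
```

&lt;p&gt;&lt;code&gt;insert_at([1, 2, 3, 4, 5], 2, 99)&lt;/code&gt; shifts three elements to produce &lt;code&gt;[1, 2, 99, 3, 4, 5]&lt;/code&gt;; the closer to the front you insert, the more elements move.&lt;/p&gt;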

&lt;p&gt;A linked list is a chain of nodes that can live anywhere in memory. Node &lt;code&gt;0&lt;/code&gt; might be at address 1000. Node &lt;code&gt;1&lt;/code&gt; at address 7500. Node &lt;code&gt;2&lt;/code&gt; at address 3200. The only way to find node &lt;code&gt;2&lt;/code&gt; is to start at node &lt;code&gt;0&lt;/code&gt; and follow the next pointers. That traversal is what makes index access &lt;code&gt;O(n)&lt;/code&gt;. It's also why insertion at a known node is &lt;code&gt;O(1)&lt;/code&gt;: you allocate a new node, point it at the next node, and update the previous node's pointer. Two pointer assignments. No shifting.&lt;/p&gt;
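&lt;p&gt;And the two pointer assignments, sketched with a minimal illustrative &lt;code&gt;Node&lt;/code&gt; class of my own:&lt;/p&gt;

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # O(1): allocate a node pointing at node's old successor,
    # then repoint node at it. No element moves, no shifting.
    node.next = Node(value, node.next)
    return node.next
```

&lt;p&gt;The catch, which the rest of this piece returns to, is the phrase "at a known node": the &lt;code&gt;O(1)&lt;/code&gt; holds only if you already hold &lt;code&gt;node&lt;/code&gt;.&lt;/p&gt;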

&lt;p&gt;Every entry in the complexity table is a consequence of these two facts. The math gives you the cells. The layout tells you why.&lt;/p&gt;

&lt;h2&gt;
  
  
  When the same Big-O isn't the same: cache locality
&lt;/h2&gt;

&lt;p&gt;This is what the table can't show. Both arrays and linked lists are technically &lt;code&gt;O(n)&lt;/code&gt; for a full traversal. In practice, an array traversal of a million integers can often run an order of magnitude faster than the equivalent linked list traversal. Same Big-O. Very different wall clock time.&lt;/p&gt;

&lt;p&gt;The reason is the CPU cache.&lt;/p&gt;

&lt;p&gt;When the CPU reads element &lt;code&gt;0&lt;/code&gt; from an array, it doesn't fetch just that one integer. It loads an entire cache line, typically 64 bytes, into the L1 cache. For a 4 byte integer array, that single fetch pulls in the next 15 elements alongside &lt;code&gt;arr[0]&lt;/code&gt;. Iterating through them costs nanoseconds per access because they're already on chip. The CPU prefetcher notices the access pattern and starts loading the next cache line before you ask for it.&lt;/p&gt;

&lt;p&gt;Linked list nodes are scattered across the heap. Each &lt;code&gt;node.next&lt;/code&gt; dereference points at an unrelated address. Each pointer chase is a potential cache miss, and a cache miss is roughly a 100 nanosecond round trip to main memory. Multiply that by a million nodes and the gap is enormous.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Both loops are O(n). The constants differ by ~10x in practice.
&lt;/span&gt;
&lt;span class="c1"&gt;# Array: cache friendly contiguous reads
&lt;/span&gt;&lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# next 15 ints already in L1 after the first read
&lt;/span&gt;    &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;          

&lt;span class="c1"&gt;# Linked list: cache hostile pointer chases
&lt;/span&gt;&lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;head&lt;/span&gt;
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;

    &lt;span class="c1"&gt;# node.next probably triggers a cache miss
&lt;/span&gt;    &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;next&lt;/span&gt;    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is the part that surprises engineers most when they first see it. They've quoted &lt;code&gt;O(n)&lt;/code&gt; for traversal in both cases for years. The constant factor never came up because Big-O hides it.&lt;/p&gt;

&lt;p&gt;This is the reason &lt;code&gt;ArrayList&lt;/code&gt; in Java and Python's built in &lt;code&gt;list&lt;/code&gt; are array backed. You're paying the occasional &lt;code&gt;O(n)&lt;/code&gt; resize cost in exchange for cache friendly iteration on every operation between resizes. That trade is almost always worth it on real workloads, even ones that involve frequent appends.&lt;/p&gt;

&lt;p&gt;If you mention this in an interview, you've shown that you understand the hardware level, not just the math. Two candidates can both quote &lt;code&gt;O(n)&lt;/code&gt; for traversal. Only one knows that one of those &lt;code&gt;O(n)&lt;/code&gt;s is dramatically slower in practice.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Big-O tells you how growth behaves. Hardware tells you what actually feels fast.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  When linked lists actually win
&lt;/h2&gt;

&lt;p&gt;Honest credit where it's due: linked lists are the right structure in a narrow set of cases. Don't read the cache argument as "always pick arrays."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mutation at a known position:&lt;/strong&gt; if your algorithm already holds a pointer to the node where you're inserting or deleting, linked lists are &lt;code&gt;O(1)&lt;/code&gt; and arrays are &lt;code&gt;O(n)&lt;/code&gt;. The keyword is "already holds." The standard LeetCode example is &lt;code&gt;LRU Cache&lt;/code&gt;, where a hash map gives you &lt;code&gt;O(1)&lt;/code&gt; access to the relevant node and a doubly linked list gives you &lt;code&gt;O(1)&lt;/code&gt; reordering on every access. Neither structure alone solves the problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pointer only manipulation:&lt;/strong&gt; reversing a segment, splitting a list at a node, merging two sorted sequences. None of these require moving data, they're pure pointer reassignment. The &lt;a href="https://www.codeintuition.io/courses/singly-linked-list" rel="noopener noreferrer"&gt;singly linked list course&lt;/a&gt; on my own site covers seven distinct patterns built on this property.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No reallocation spikes:&lt;/strong&gt; dynamic arrays double their capacity when full, which involves copying every element. The copy is amortised &lt;code&gt;O(1)&lt;/code&gt; per append, but each individual resize is an &lt;code&gt;O(n)&lt;/code&gt; stall. Linked lists grow one node at a time, so no single insert ever pays that spike, which can matter in latency sensitive streaming workloads.&lt;/li&gt;
&lt;/ul&gt;
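&lt;p&gt;Reversal is the cleanest demonstration of pointer only manipulation. A sketch, with a minimal &lt;code&gt;Node&lt;/code&gt; class assumed for illustration:&lt;/p&gt;

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse_list(head):
    # Walk the list once, flipping each next pointer backwards.
    # No values move: O(n) time, O(1) extra space, pure relinking.
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev
```

&lt;p&gt;An array can only "reverse" by moving data; here the data never moves, only the links do.&lt;/p&gt;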

&lt;p&gt;The most frequent mistake I watch beginners make is reaching for a linked list the moment they see "insert" or "delete" in a problem, without checking whether they already hold a pointer to the position. Without that pointer, you traverse first (&lt;code&gt;O(n)&lt;/code&gt;) and then insert (&lt;code&gt;O(1)&lt;/code&gt;). The traversal eats the advantage. Total cost is &lt;code&gt;O(n)&lt;/code&gt;, same as the array shift, with worse constants because of cache misses.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;An &lt;code&gt;O(1)&lt;/code&gt; operation that requires &lt;code&gt;O(n)&lt;/code&gt; setup is still &lt;code&gt;O(n)&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
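&lt;p&gt;To make that concrete, here is the cost split when you insert by index rather than by a held pointer (a sketch; &lt;code&gt;Node&lt;/code&gt; is a minimal illustrative class, and &lt;code&gt;i&lt;/code&gt; is assumed to be at least 1 and in range):&lt;/p&gt;

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_at_index(head, i, value):
    # The traversal is the hidden cost: O(n) steps to reach index i - 1...
    node = head
    for _ in range(i - 1):
        node = node.next
    # ...before the O(1) pointer splice. Total: O(n), same as an array shift.
    node.next = Node(value, node.next)
    return head
```

&lt;p&gt;The splice itself is still two pointer assignments; the walk to get there is what eats the advantage.&lt;/p&gt;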

&lt;h2&gt;
  
  
  How to choose without memorising the table
&lt;/h2&gt;

&lt;p&gt;Once memory layout is your mental model, the choice becomes a small set of questions you can ask on any problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the algorithm need random access by index, or binary search? Pick an array. Linked lists can't do &lt;code&gt;O(log n)&lt;/code&gt; binary search because finding the middle requires traversal.&lt;/li&gt;
&lt;li&gt;Does the algorithm scan the whole structure repeatedly, where speed matters? Pick an array. Cache locality is doing real work in your favour.&lt;/li&gt;
&lt;li&gt;Does the algorithm hold direct pointers to the nodes it mutates (head, tail, or nodes reachable through a hash map)? A linked list might genuinely win.&lt;/li&gt;
&lt;li&gt;Does the algorithm reverse, split, or merge segments by relinking? Linked lists are the natural fit.&lt;/li&gt;
&lt;li&gt;Does the algorithm need both random access and frequent reordering at known positions? You probably want both: a hash map of keys to linked list nodes. &lt;code&gt;LRU Cache&lt;/code&gt; is the canonical example.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Almost every interview problem involving a sequence of mutations is either an array problem (because access dominates) or a hash map plus linked list problem (because both operations matter). Pure linked list only problems are surprisingly rare in interview banks. They mostly show up when the problem itself is testing whether you can do pointer manipulation cleanly.&lt;/p&gt;

&lt;p&gt;An aside on the table itself: the complexity tables you see everywhere aren't useless, even if they're incomplete. They're a fast lookup once you know what's underneath. The point isn't to throw the table out. It's to stop treating the table as the explanation when it's only the output of one.&lt;/p&gt;

&lt;p&gt;If Big-O tables still feel like facts you memorise rather than intuitions you can derive, I wrote a longer breakdown covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stacks and queues from the same memory model&lt;/li&gt;
&lt;li&gt;adjacency lists vs matrices&lt;/li&gt;
&lt;li&gt;and why some &lt;code&gt;O(n)&lt;/code&gt; operations feel dramatically faster in practice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.codeintuition.io/blogs/arrays-vs-linked-lists-difference" rel="noopener noreferrer"&gt;Full breakdown here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's a problem where you reached for a linked list and only realised mid implementation that an array would have been cleaner?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>career</category>
      <category>computerscience</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Am I Ready for FAANG? A Better Test Than Solving More LeetCode</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Sat, 09 May 2026 12:40:03 +0000</pubDate>
      <link>https://dev.to/codeintuition/am-i-ready-for-faang-a-better-test-than-solving-more-leetcode-im9</link>
      <guid>https://dev.to/codeintuition/am-i-ready-for-faang-a-better-test-than-solving-more-leetcode-im9</guid>
      <description>&lt;p&gt;You've solved 200 problems. Mediums you've already seen take fifteen minutes. The next one you haven't seen freezes you cold inside of five. And every time you ask yourself if you're ready for a FAANG loop, the honest answer is "I don't know."&lt;/p&gt;

&lt;p&gt;The "I don't know" isn't a feeling problem. It's that you've been asking a feelings question about a performance outcome.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self assessed readiness is unreliable because confidence swings with every session.&lt;/li&gt;
&lt;li&gt;Solving a familiar problem and constructing a solution to a novel one are different skills.&lt;/li&gt;
&lt;li&gt;The fix is a measurable test: an unseen medium in a pattern family you've studied, under timed conditions, with a hard cap on execution attempts.&lt;/li&gt;
&lt;li&gt;Pass three of those across three different families and you likely have the level of transfer real interviews actually reward. Fail one and you know exactly where to work.&lt;/li&gt;
&lt;li&gt;The signal isn't the count. It's whether the recognition holds when the problem name is hidden.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why "do I feel ready" is the wrong question
&lt;/h2&gt;

&lt;p&gt;Solving a problem you've seen before and reasoning your way through one you haven't are different skills. Practice on the same patterns repeatedly builds recognition. Recognition only fires on shapes you've encountered. The interview gives you a shape you haven't, and asks you to construct the approach from the constraints alone.&lt;/p&gt;

&lt;p&gt;That's the gap between &lt;a href="https://en.wikipedia.org/wiki/Transfer_of_learning" rel="noopener noreferrer"&gt;near transfer and far transfer&lt;/a&gt;. Near transfer is what 300 problems will buy you, applying what you've practised to similar setups. Far transfer is what FAANG selects for, applying what you've understood to genuinely new ones.&lt;/p&gt;

&lt;p&gt;There's a second issue with self assessed confidence. It swings with your last session. Crush five tree problems on Saturday morning and you feel ready. Freeze on an unfamiliar graph problem on Saturday afternoon and the confidence evaporates. Neither data point reflects your stable ability across the families an interview actually draws from.&lt;/p&gt;

&lt;p&gt;The result is a loop. You prep, you feel uncertain, you prep more, you still feel uncertain. The method of evaluation is wrong. You're asking a feelings question about a performance outcome.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Interview readiness isn't confidence. It's repeatable performance under unfamiliar conditions.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The three pattern family test
&lt;/h2&gt;

&lt;p&gt;Readiness is a performance threshold you can measure. The protocol takes about two hours and produces a binary answer.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pick an unseen medium from a family you've studied.&lt;/strong&gt; Sliding window, tree traversal, graph BFS, DP subsequence. The problem has to be genuinely novel. You haven't solved it, browsed its discussion thread, or read hints for it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solve it under real interview constraints.&lt;/strong&gt; Twenty minute timer. The problem name covered or aliased so you can't reverse engineer the family from the title. No hints. A hard cap on code execution attempts so you can't trial and error your way through. You have to identify the approach, build it, and trace it before running code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeat across two more families you've studied but haven't over practised.&lt;/strong&gt; One pass isn't signal. Three passes across different families confirms the readiness is broad, not narrow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Pass means: solve within twenty minutes with fewer than two failed execution attempts. Anything else is useful data.&lt;/p&gt;

&lt;p&gt;If you pass three across different families, you likely have the level of transfer real interviews actually reward. If you fail one, you know precisely where to work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What that looks like on graph BFS
&lt;/h2&gt;

&lt;p&gt;Pick the worst case for this exercise: a medium graph BFS problem you haven't seen. The constraints describe a grid, an adjacency list, or some traversal where shortest distance is the answer.&lt;/p&gt;

&lt;p&gt;Two minutes in, you've identified the family. Not from the problem title because that's hidden. From the constraints: shortest distance, unweighted edges, layered exploration. That recognition came from training what makes BFS the right approach, not from spotting "BFS" in the problem name.&lt;/p&gt;

&lt;p&gt;The solution builds from BFS's invariant. At any moment, the queue contains every node whose shortest distance from the source is exactly the distance you're currently processing. You aren't recalling "this one used a deque." You're reasoning: enqueue the start at distance zero, expand neighbours level by level, return as soon as you pop the target.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;collections&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;deque&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;shortest_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;queue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;deque&lt;/span&gt;&lt;span class="p"&gt;([(&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
    &lt;span class="n"&gt;visited&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dist&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;popleft&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;dist&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;nbr&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;nbr&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;visited&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;continue&lt;/span&gt;
            &lt;span class="n"&gt;visited&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nbr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;nbr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dist&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before you run anything, you trace it on a four-node example. Walk the queue. Check &lt;code&gt;visited&lt;/code&gt;. Verify the returned distance matches what you'd expect for a path you can see in your head. The mental dry run catches bugs the random test-and-submit loop misses, and it's the exact behaviour interviewers watch for: verifying correctness without leaning on the compiler.&lt;/p&gt;
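If you want to sanity-check that dry run outside your head, here's a self-contained version of the same loop (graph shape and function name are mine, chosen for illustration), traced on exactly the four-node shape described above:

```python
from collections import deque

def bfs_distance(graph, start, target):
    # Same loop as above: BFS over an unweighted graph,
    # returning the shortest hop count or -1 if unreachable.
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nbr in graph[node]:
            if nbr in visited:
                continue
            visited.add(nbr)
            queue.append((nbr, dist + 1))
    return -1

# Four-node dry run: two routes from A to D, both two hops.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_distance(graph, "A", "D"))  # 2
```

Running it confirms the path you saw in your head; flipping start and target (D has no outgoing edges here) confirms the -1 branch too.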

&lt;p&gt;You submit. It passes. Eight minutes still on the timer. Not because you rushed, but because identification took two minutes instead of fifteen, and the construction followed the invariant rather than trial and error.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Recognition under pressure matters more than recall in comfort.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is what FAANG ready looks like. Not "I feel confident." A repeatable, observable performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  A second pass on a different family
&lt;/h2&gt;

&lt;p&gt;Now you change family. Pick a variable sliding window problem you haven't seen. The constraint shape: a contiguous range over an array or string, a flexible boundary that grows and shrinks, an objective that asks for the longest, shortest, or maximum window meeting some condition.&lt;/p&gt;

&lt;p&gt;The recognition again happens within the first three minutes, before any code. The constraints match the variable sliding window's three triggers, you can name the invariant the window has to maintain, and you write the same expand-then-contract skeleton you'd write for any problem in the family.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;variable_window&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;valid&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;left&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;best&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;init_state&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;right&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;include&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;right&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;valid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;exclude&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;left&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
            &lt;span class="n"&gt;left&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;best&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;right&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;left&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;best&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You fill in &lt;code&gt;init_state&lt;/code&gt;, &lt;code&gt;include&lt;/code&gt;, &lt;code&gt;exclude&lt;/code&gt;, and &lt;code&gt;valid&lt;/code&gt; for the specific problem. The skeleton stays the same. That's the marker of a pattern that's actually generalised in your head: you write the skeleton first, then specialise.&lt;/p&gt;
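As one concrete specialisation (problem chosen here for illustration, not from the article's bank): longest subarray with at most &lt;code&gt;k&lt;/code&gt; distinct values. The state is a count map, &lt;code&gt;valid&lt;/code&gt; means the map holds at most &lt;code&gt;k&lt;/code&gt; keys, and the skeleton is otherwise untouched:

```python
from collections import defaultdict

def longest_at_most_k_distinct(arr, k):
    # Specialising the skeleton: state = value counts in the window,
    # valid = at most k distinct values present.
    counts = defaultdict(int)
    left = 0
    best = 0
    for right in range(len(arr)):
        counts[arr[right]] += 1          # include(state, arr[right])
        while len(counts) > k:           # while not valid(state)
            counts[arr[left]] -= 1       # exclude(state, arr[left])
            if counts[arr[left]] == 0:
                del counts[arr[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

print(longest_at_most_k_distinct([1, 2, 1, 2, 3], 2))  # 4
```

The four plug-in points map one-to-one onto the skeleton's named helpers; nothing about the loop structure changed.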

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;When a pattern generalises, you stop memorising solutions and start specialising frameworks.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you pass this one too, you've got two of three. One more, on a third family you haven't over practised, decides it.&lt;/p&gt;

&lt;h2&gt;
  
  
  When you fail the test
&lt;/h2&gt;

&lt;p&gt;Most engineers don't pass all three on the first attempt. That's expected. A clean three for three on the first try usually means the families were too comfortable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One family failed.&lt;/strong&gt; You know the pattern at a surface level but haven't internalised the identification triggers or the construction skeleton. Go back to the foundational material for that family. Don't just grind more problems in it. Study what makes the pattern applicable, the constraint combinations that point to it, the invariant every problem in the family shares. Once you can articulate that without notes, retest with a different unseen problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two families failed.&lt;/strong&gt; You likely have one strong area where you've over practised and shallow gaps everywhere else. Common for engineers who spent months on arrays or trees because the work felt productive. Broaden the coverage. Spend focused time on the families where the understanding is thin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All three failed.&lt;/strong&gt; The preparation has been building near transfer without building far transfer. That's a method gap, not a talent gap. Shift from solving high volumes to studying fewer problems more deeply. Focus on identification and constraint analysis rather than just reaching a correct solution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One catch. Don't retake the test with the same problems. A retest on a problem you've already seen, even if you failed it, measures recall instead of reasoning. Find a different unseen problem in the same family.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four signals most engineers use instead
&lt;/h2&gt;

&lt;p&gt;Before the test, it helps to name the signals you've probably been using, and why each one lies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem count.&lt;/strong&gt; Tells you nothing about how the problems were solved. Someone at 120 problems with genuine pattern depth outperforms someone at 400 who relied on hints for half of them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topic completion.&lt;/strong&gt; You finished sliding window two months ago and haven't touched it since. Completion isn't retention. Spacing matters. The performance you had on week three doesn't survive without revisits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed on familiar problems.&lt;/strong&gt; Two Sum in two minutes feels like fluency. It's actually retrieval of a stored solution. The moment a novel problem looks similar but has different constraints, the speed evaporates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Peer comparison.&lt;/strong&gt; Your friend got into Google in six months. That ignores their background, their pattern coverage, how they practised, and what level they interviewed for.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The three family test bypasses all four. It doesn't care about the count, the completion checkmarks, the recall speed, or anyone else's timeline. It measures one thing: whether you can construct a solution to a novel problem, under pressure, across families.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the conditions yourself
&lt;/h2&gt;

&lt;p&gt;The hardest part of the test is replicating real interview conditions. Solving at your desk, documentation a tab away, with the timer optional, doesn't replicate a forty-five-minute FAANG round.&lt;/p&gt;

&lt;p&gt;What you actually need: a source of unseen mediums in the families you've studied (the &lt;a href="https://www.codeintuition.io/courses/array/tFob_VIXJQJ8bUN3hlSMB" rel="noopener noreferrer"&gt;variable sliding window lesson&lt;/a&gt; covers one family if you haven't been through it before), a way to hide the problem name (a friend covering it, or a browser extension that aliases the title), a kitchen timer set to twenty minutes, and the discipline to stop after two failed runs. The conditions matter. The test fails the moment you peek at hints or let the timer slide.&lt;/p&gt;

&lt;p&gt;I keep noticing the same two things across engineers who run this test for the first time. The ones who fail one family and immediately know why aren't far from ready; they're a couple of weeks of focused study away. The ones who fail all three and panic into more volume usually need to step away from the problem bank for a week and re-read the foundational material on identification and invariants. The diagnostic is more useful than the score.&lt;/p&gt;

&lt;p&gt;If you're stuck in the “I've solved a lot but still don't know if I'm ready” phase, the problem usually isn't effort.&lt;/p&gt;

&lt;p&gt;It's measurement.&lt;/p&gt;

&lt;p&gt;I wrote a longer breakdown covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;per family readiness signals&lt;/li&gt;
&lt;li&gt;common failure patterns&lt;/li&gt;
&lt;li&gt;and what to fix when one family collapses under pressure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.codeintuition.io/blogs/am-i-ready-for-faang" rel="noopener noreferrer"&gt;Full breakdown here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's the specific moment you knew you weren't ready yet? A particular problem, a frozen minute in a mock, or the cumulative shape of practice that just felt off?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>leetcode</category>
      <category>career</category>
      <category>interview</category>
    </item>
    <item>
      <title>Algorithmic intuition matters more than problem count in coding interviews</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Thu, 07 May 2026 12:31:30 +0000</pubDate>
      <link>https://dev.to/codeintuition/algorithmic-intuition-matters-more-than-problem-count-in-coding-interviews-pep</link>
      <guid>https://dev.to/codeintuition/algorithmic-intuition-matters-more-than-problem-count-in-coding-interviews-pep</guid>
      <description>&lt;p&gt;Two engineers prep for the same cycle. One solves more than 500 problems and freezes when the medium doesn't look like anything from the practice list. The other solves a fraction of that and works the problem out from scratch on the screen. The variable that decides which way it goes isn't volume.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Volume practice builds memory of specific problems. It builds little of the skill that recognises which technique applies to a problem you've never seen.&lt;/li&gt;
&lt;li&gt;Learning science calls these near transfer (familiar problems) and far transfer (unfamiliar ones). Volume practice mostly trains near transfer.&lt;/li&gt;
&lt;li&gt;Real interviews test far transfer because the problem isn't labelled and won't match anything in the practice bank.&lt;/li&gt;
&lt;li&gt;Recognition is trainable. The training is explicit: read the problem for triggers, name the pattern, then write code.&lt;/li&gt;
&lt;li&gt;On the next problem you face, the recognition pass takes 30 seconds before you touch the keyboard. That pass is the gap.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Disclosure up front: I built Codeintuition, a structured learning platform for coding interviews. This post is about the recognition skill that decides whether volume practice converts to interview readiness, not about the product. The closing link goes to the longer version on my own blog.&lt;/p&gt;

&lt;h2&gt;
  
  
  What 500 problems actually trains you to do
&lt;/h2&gt;

&lt;p&gt;Most coding interview advice tells you to solve more problems. There's a real reason this advice exists. Volume builds fluency: you stop being confused by syntax, you stop misreading constraints, and the interview minutes that used to go to typo-hunting start going to the actual problem. Past a few hundred problems, those skills are usually solid.&lt;/p&gt;

&lt;p&gt;What volume doesn't reliably build is the ability to read a problem you've never seen and decide which technique applies. That's a different skill, and the way most engineers practise actively trains around it. You attempt the problem, get stuck, glance at the LeetCode tags, notice it says &lt;code&gt;stack&lt;/code&gt;, and read a solution. The next time a stack problem shows up, you might recognise it. The time after, you might not. Recognition is being built by accident, not by design.&lt;/p&gt;

&lt;p&gt;In an interview the tag is gone. The problem statement isn't labelled &lt;code&gt;stack&lt;/code&gt; or &lt;code&gt;sliding window&lt;/code&gt; or &lt;code&gt;dynamic programming&lt;/code&gt;. So the skill that's been built by accident has to do work it's never been explicitly trained for. That's where the freeze comes from.&lt;/p&gt;

&lt;h2&gt;
  
  
  Near transfer vs far transfer
&lt;/h2&gt;

&lt;p&gt;There's a useful piece of language for this from the learning sciences, and it's worth borrowing because it predicts the failure mode precisely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Near transfer&lt;/strong&gt; is when you can solve a problem because it resembles one you've practised. You solved Two Sum with a hash map. The interviewer hands you Two Sum II with a sorted array. The visible details changed but the underlying idea is close enough that recognition fires.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Far transfer&lt;/strong&gt; is when you can solve a problem that doesn't resemble anything in your practice set, by reading the structure of the problem and constructing the approach from first principles. You see "minimum window of an array containing all characters of a target string" and you've never solved a window problem with a character constraint. You read the structure and construct a variable sliding window from scratch.&lt;/p&gt;
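To make the near-transfer pair concrete, here's a sketch of both variants (0-indexed here for simplicity; the LeetCode version of Two Sum II returns 1-based indices): the hash map idea, and the two-pointer idea it transfers to once the input is sorted.

```python
def two_sum_hash(nums, target):
    # Unsorted input: one pass with a hash map of seen values.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

def two_sum_sorted(nums, target):
    # Sorted input: same goal, but sortedness unlocks two
    # pointers moving inward from the ends.
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return [lo, hi]
        if s < target:
            lo += 1
        else:
            hi -= 1
    return []
```

The visible details changed (extra memory vs extra structure in the input), but the recognition distance between the two is short; that's what makes it near transfer.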

&lt;p&gt;Grinding 500 problems builds near transfer well. It does much less for far transfer. Whether far transfer is reliably teachable is a debate the &lt;a href="https://en.wikipedia.org/wiki/Transfer_of_learning" rel="noopener noreferrer"&gt;research on transfer of learning&lt;/a&gt; hasn't fully settled, but the evidence does support one specific intervention. Explicit instruction in &lt;strong&gt;when&lt;/strong&gt; and &lt;strong&gt;why&lt;/strong&gt; a method applies, not just &lt;strong&gt;how&lt;/strong&gt; to execute it, produces more transfer than practice alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  A worked example: monotonic stack on stock span
&lt;/h2&gt;

&lt;p&gt;Here's the kind of problem where the gap shows up. You're given an array of stock prices. For each day, find how many consecutive days before it had a price less than or equal to that day. The constraints don't mention "stack" anywhere.&lt;/p&gt;

&lt;p&gt;If you've practised the technique without practising the recognition, you'll probably try a nested loop first, hit &lt;code&gt;O(n^2)&lt;/code&gt;, and try to remember which problems you've seen with this shape. If recognition is trained, you read the problem's structure and three triggers fire:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"For each day" means you need an answer per element. Element-wise, not aggregate.&lt;/li&gt;
&lt;li&gt;"Consecutive days before it" means a directional search to the left.&lt;/li&gt;
&lt;li&gt;"A price less than or equal to" means a comparison condition between the current element and what's to its left.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three triggers point at one technique: the previous closest occurrence pattern, implemented with a monotonic stack. The triggers are what you read off the problem statement. The pattern is what you write in code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;stock_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;stack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="n"&gt;spans&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;price&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;stack&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;stack&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;spans&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;spans&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The implementation is short and well documented across the internet. The bottleneck isn't writing the loop. It's noticing that "for each element, find the previous element satisfying a comparison condition" is one recognisable shape that always uses this technique. That's the recognition skill, and it's the part most prep doesn't train.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a recognition drill looks like
&lt;/h2&gt;

&lt;p&gt;Before solving a problem, do a 30 to 60 second pass on the statement alone. No code. The output of the pass is a name: which pattern, and why.&lt;/p&gt;

&lt;p&gt;For each technique you've learned, you should be able to name the two or three observable features of a problem statement that signal it applies. A few common ones from the techniques most coding interviews lean on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Variable sliding window: contiguous subarray or substring, plus a condition that holds across the window, plus an optimisation on window length. When all three appear, it's almost always this technique.&lt;/li&gt;
&lt;li&gt;Two pointers: a sorted array (or one you can sort) and a search for a pair or triple satisfying a target. The pointers move from the ends inward.&lt;/li&gt;
&lt;li&gt;Monotonic stack: per element answer, plus a directional search left or right, plus a comparison condition. Stock span fits. Next greater element fits.&lt;/li&gt;
&lt;li&gt;Backtracking: enumerate combinations or paths under a constraint, with the option to abandon a partial candidate when the constraint is violated.&lt;/li&gt;
&lt;/ul&gt;
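As one concrete instance of the last trigger set, here's a minimal backtracking sketch (problem and names invented for illustration): enumerate subsets of positive numbers that sum to a target, abandoning a partial candidate the moment it overshoots.

```python
def combinations_summing_to(nums, target):
    # Backtracking: extend a partial candidate, and abandon the
    # branch as soon as the running sum exceeds the target
    # (assumes positive numbers, each used at most once).
    results = []

    def backtrack(start, path, total):
        if total == target:
            results.append(path[:])
            return
        for i in range(start, len(nums)):
            if total + nums[i] > target:
                continue  # prune: this extension already violates the constraint
            path.append(nums[i])
            backtrack(i + 1, path, total + nums[i])
            path.pop()  # undo the choice, try the next candidate

    backtrack(0, [], 0)
    return results

print(combinations_summing_to([1, 2, 3], 3))  # [[1, 2], [3]]
```

The `continue` on the overshoot is the "option to abandon a partial candidate" named in the trigger; without positivity you'd have to keep exploring instead of pruning.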

&lt;p&gt;When you start a problem, the drill is: read the constraints, list the features you see, name the technique, then start coding. If you can't name a technique in 60 seconds, you don't start coding. You re-read the constraints with the trigger checklists in front of you and try again. If you still can't, mark the problem and learn the missing feature before moving on.&lt;/p&gt;

&lt;p&gt;The first time, this feels artificial. After 30 problems, it stops feeling artificial because the features start firing automatically as you read.&lt;/p&gt;

&lt;h2&gt;
  
  
  The other thing prep tends to skip
&lt;/h2&gt;

&lt;p&gt;A bit of honesty before the close. Recognition isn't the only thing prep tends to skip. The conditions you practise under matter at least as much, and the default LeetCode loop is much friendlier than an interview.&lt;/p&gt;

&lt;p&gt;The default loop has the title visible, the difficulty visible, the company tags visible, the discussion section a click away, and no clock. None of those are present in the actual interview. If your reads of the problem have always been informed by the tag, the moment the tag disappears the read gets harder.&lt;/p&gt;

&lt;p&gt;A reasonable practice protocol for the last few weeks before an interview cycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cover the problem name with a sticky note before reading the constraints. Many problem names give away the technique.&lt;/li&gt;
&lt;li&gt;Set a 25-minute clock per medium. If you blow past it, that's data: the problem went to "still working" rather than "solved", and the next pass focuses on what slowed you down.&lt;/li&gt;
&lt;li&gt;Skip the discussion section on the first attempt. Read it after, only if your attempt didn't converge.&lt;/li&gt;
&lt;li&gt;Mix techniques. Three sliding window problems in a row trains nothing about recognition because the third is obvious by inertia. Three problems from different techniques force you to read the constraints.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can run all of this against any problem bank. It's a practice protocol, not a feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  When grinding more is the right move
&lt;/h2&gt;

&lt;p&gt;Volume practice is the right call in two cases. First, when your fundamentals on a specific data structure or algorithm are weak enough that you can't implement it cleanly even with the technique handed to you. There, more reps fix the bottleneck directly. Second, when you're a few weeks out from an interview cycle and the goal is speed, not depth. The features you've already internalised get faster with reps.&lt;/p&gt;

&lt;p&gt;The cases where volume stops working are different. When implementation is solid but recognition under unfamiliar problems is the bottleneck, more reps don't move the needle because they aren't training what's broken.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the next problem looks like with this trained
&lt;/h2&gt;

&lt;p&gt;Six months out, you open an unfamiliar problem on a phone screen. The description mentions "for each element, count how many elements to the right are strictly greater before reaching one that's less or equal." Nothing in your practice list looks like it.&lt;/p&gt;

&lt;p&gt;Reading for features gets you somewhere. "For each element" is element-wise. "To the right" is directional. "Strictly greater" is a comparison condition. All three match a monotonic stack, written from the right toward the left. You're 30 seconds in and writing the loop.&lt;/p&gt;
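That read can be sketched directly (function name and details are mine, not from a specific problem): scan right to left keeping a stack of indices, pop while the top holds a strictly greater value, and whatever survives on top is the first stopper; the gap in between is the count.

```python
def count_strictly_greater_to_right(arr):
    # Monotonic stack from the right. For each element, find the
    # first index to its right with a value <= it; everything
    # between the two is strictly greater, so the gap is the answer.
    n = len(arr)
    ans = [0] * n
    stack = []  # indices of candidate stoppers to the right
    for i in range(n - 1, -1, -1):
        while stack and arr[stack[-1]] > arr[i]:
            stack.pop()
        nxt = stack[-1] if stack else n  # first value <= arr[i], or none
        ans[i] = nxt - i - 1
        stack.append(i)
    return ans

print(count_strictly_greater_to_right([4, 6, 7, 5, 3]))  # [3, 1, 0, 0, 0]
```

Each index is pushed and popped at most once, so the whole pass is O(n) despite the nested-looking loop.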

&lt;p&gt;That's a trained skill. Volume helped, structure helped more, recognition is what closed the gap.&lt;/p&gt;

&lt;p&gt;If unfamiliar mediums still freeze you even after hundreds of problems, the bottleneck usually isn't implementation.&lt;/p&gt;

&lt;p&gt;It's recognition.&lt;/p&gt;

&lt;p&gt;I wrote a longer breakdown covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;feature checklists for six major interview patterns&lt;/li&gt;
&lt;li&gt;near transfer vs far transfer&lt;/li&gt;
&lt;li&gt;and the exact recognition drills that made unfamiliar problems feel solvable again&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.codeintuition.io/blogs/algorithmic-intuition-dsa" rel="noopener noreferrer"&gt;Full breakdown here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which technique's features finally clicked for you only after seeing it on a problem the explanations had skipped?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>leetcode</category>
      <category>career</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>How to Prove Your Algorithm Works in a Coding Interview</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Wed, 06 May 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/codeintuition/how-to-prove-your-algorithm-works-in-a-coding-interview-38mh</link>
      <guid>https://dev.to/codeintuition/how-to-prove-your-algorithm-works-in-a-coding-interview-38mh</guid>
      <description>&lt;p&gt;An interviewer asks you to convince them your algorithm works. You walk through &lt;code&gt;[2, 1, 5, 1, 3, 2]&lt;/code&gt; with &lt;code&gt;K=3&lt;/code&gt;, get &lt;code&gt;9&lt;/code&gt;, and stop. The interviewer waits. You realise that what you did was test one input, not prove correctness, and you don't have anything else to say.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Proving your algorithm works in interviews isn't a formal induction. It's two things: state what property your loop maintains at each step (the invariant), then check initialization, termination, and edge cases. Tracing one example is testing, not proving. The invariant is what bridges one input to all of them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What "prove your algorithm is correct" actually means in an interview
&lt;/h2&gt;

&lt;p&gt;If you took a discrete math course, you spent weeks on induction proofs with formal notation. That isn't what an interviewer wants. They want what a senior engineer does in code review: read the loop, ask "what's true at each iteration?", and check whether that property forces the right answer.&lt;/p&gt;

&lt;p&gt;The two are related. They share the invariant idea. But the academic version is a paper proof, and the interview version is a 90-second verbal walkthrough. Mixing them up is why most candidates either freeze or write something that looks like exam notation.&lt;/p&gt;

&lt;p&gt;When the interviewer says "convince me this works," they want three things in order: the property your loop holds, evidence it holds at every step, and a check that the final state gives the right answer. Doing this clearly is rare, and interviewers notice when you can.&lt;/p&gt;

&lt;h2&gt;
  
  
  The two checks
&lt;/h2&gt;

&lt;p&gt;Two checks carry the whole method.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;State and verify the loop invariant.&lt;/strong&gt; Before tracing any code, write down what's supposed to be true at the start of each iteration. Then trace 3 to 4 iterations, confirming the property after each one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check boundaries and termination.&lt;/strong&gt; The invariant covers the loop body. Boundaries cover what happens before the loop starts, after it ends, and on degenerate inputs (empty array, single element, all identical values, maximum size).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most candidates skip step 1 and jump straight into tracing a specific input. That's testing. Testing tells you it works for &lt;em&gt;that&lt;/em&gt; input. The invariant tells you it works for every input.&lt;/p&gt;

&lt;p&gt;The difference matters when an interviewer asks about an input you didn't trace. "My sliding window maintains the sum of exactly &lt;code&gt;K&lt;/code&gt; elements at each step, and the update adds the new element and subtracts the leaving one" lets the interviewer check the mechanism. "I ran it on &lt;code&gt;[2, 1, 5, 1, 3, 2]&lt;/code&gt; and got 9" leaves them wondering about everything else.&lt;/p&gt;

&lt;h2&gt;
  
  
  A worked example: fixed sliding window
&lt;/h2&gt;

&lt;p&gt;Take the classic problem: given an array and a window size &lt;code&gt;K&lt;/code&gt;, find the maximum sum of any contiguous subarray of length &lt;code&gt;K&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;max_sum_subarray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;window_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;max_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;window_sum&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="n"&gt;window_sum&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;max_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_sum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;window_sum&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;max_sum&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: State and verify the invariant
&lt;/h3&gt;

&lt;p&gt;The invariant: at the start of each iteration, &lt;code&gt;window_sum&lt;/code&gt; equals the sum of elements in the current window of &lt;code&gt;K&lt;/code&gt; elements.&lt;/p&gt;

&lt;p&gt;Trace it on &lt;code&gt;arr = [2, 1, 5, 1, 3, 2]&lt;/code&gt; with &lt;code&gt;K = 3&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before the loop: &lt;code&gt;window_sum = 2 + 1 + 5 = 8&lt;/code&gt;. The window covers indices &lt;code&gt;0..2&lt;/code&gt;. The invariant holds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;i = 3&lt;/code&gt;: &lt;code&gt;window_sum = 8 + arr[3] - arr[0] = 8 + 1 - 2 = 7&lt;/code&gt;. The window is now &lt;code&gt;[1, 5, 1]&lt;/code&gt;, and &lt;code&gt;1 + 5 + 1 = 7&lt;/code&gt;. Holds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;i = 4&lt;/code&gt;: &lt;code&gt;window_sum = 7 + arr[4] - arr[1] = 7 + 3 - 1 = 9&lt;/code&gt;. Window is &lt;code&gt;[5, 1, 3]&lt;/code&gt;, sums to 9. Holds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;i = 5&lt;/code&gt;: &lt;code&gt;window_sum = 9 + arr[5] - arr[2] = 9 + 2 - 5 = 6&lt;/code&gt;. Window is &lt;code&gt;[1, 3, 2]&lt;/code&gt;, sums to 6. Holds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reason the invariant is &lt;em&gt;maintained&lt;/em&gt; matters more than the trace itself. The update adds the element entering the window and subtracts the one leaving. That's the mechanism, not just the result.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Check boundaries and termination
&lt;/h3&gt;

&lt;p&gt;Initialization: the code computes the sum of the first &lt;code&gt;K&lt;/code&gt; elements directly, so the invariant holds before the loop runs.&lt;/p&gt;

&lt;p&gt;Termination: the loop runs from &lt;code&gt;K&lt;/code&gt; to &lt;code&gt;len(arr) - 1&lt;/code&gt;, so the final window covers the last &lt;code&gt;K&lt;/code&gt; elements. &lt;code&gt;max_sum&lt;/code&gt; has tracked the maximum across every window the loop visited.&lt;/p&gt;

&lt;p&gt;Edge cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;K == len(arr)&lt;/code&gt;: the loop body never runs, and the initial &lt;code&gt;window_sum&lt;/code&gt; is already the answer.&lt;/li&gt;
&lt;li&gt;All negative numbers: the invariant doesn't depend on sign, so it still holds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;K == 1&lt;/code&gt;: each window is one element. The update shifts by one position correctly.&lt;/li&gt;
&lt;/ul&gt;
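
&lt;p&gt;Those edge cases can be checked directly. Below is a self-contained restatement of the function above (the name &lt;code&gt;max_sum_subarray&lt;/code&gt; is illustrative, since the original signature isn't reproduced here), with each edge case written as an assertion:&lt;/p&gt;

```python
def max_sum_subarray(arr, k):
    # Invariant: at the start of each iteration, window_sum equals the
    # sum of the current window of k elements.
    window_sum = sum(arr[:k])
    max_sum = window_sum
    for i in range(k, len(arr)):
        # Add the element entering the window, drop the one leaving.
        window_sum += arr[i] - arr[i - k]
        max_sum = max(max_sum, window_sum)
    return max_sum

# K == len(arr): the loop body never runs; the initial sum is the answer.
assert max_sum_subarray([2, 1, 5], 3) == 8

# All negative numbers: the invariant doesn't depend on sign.
assert max_sum_subarray([-4, -2, -7, -1], 2) == -6

# K == 1: each window is a single element.
assert max_sum_subarray([2, 1, 5, 1, 3, 2], 1) == 5

# The worked trace above: windows sum to 8, 7, 9, 6.
assert max_sum_subarray([2, 1, 5, 1, 3, 2], 3) == 9
```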

&lt;p&gt;That's a complete interview-grade correctness argument. You stated what's true at every step, you explained why the update preserves it, and you confirmed the final state gives the right answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where most candidates lose the argument
&lt;/h2&gt;

&lt;p&gt;Five ways correctness reasoning falls apart, all avoidable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"It works on my example" fallacy.&lt;/strong&gt; Tracing one input is a single data point. You haven't proven anything about the inputs you didn't trace. The invariant is what generalises.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not articulating the invariant.&lt;/strong&gt; You probably have an intuitive sense of what each iteration does. Without putting that into a checkable sentence, the interviewer hears handwaving at the code, not a claim.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skipping boundary checks.&lt;/strong&gt; The invariant covers the loop body. It doesn't tell you what happens on an empty array, a single element, or when &lt;code&gt;K&lt;/code&gt; equals the array length. Each of those needs separate verification.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proving termination, not correctness.&lt;/strong&gt; "The loop runs &lt;code&gt;n&lt;/code&gt; times and exits" tells the interviewer the algorithm halts. It says nothing about whether the output is right. Termination and correctness are independent properties.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Overcomplicating the argument.&lt;/strong&gt; If your reasoning takes three paragraphs, your algorithm might be too complex. Simpler algorithms have simpler invariants. That's part of why interviewers prefer elegant solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Invariants change shape across patterns
&lt;/h2&gt;

&lt;p&gt;The sliding window invariant tracks one running variable. Other patterns produce invariants with different shapes, and recognising those shapes is half of what makes the skill transfer.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Invariant shape&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Sorted two pointers&lt;/td&gt;
&lt;td&gt;The answer, if it exists, lies within &lt;code&gt;[left, right]&lt;/code&gt;. Each step shrinks the search space without skipping a valid pair.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binary search&lt;/td&gt;
&lt;td&gt;The target, if present, lies within &lt;code&gt;arr[low..high]&lt;/code&gt;. Every iteration halves the range while preserving that guarantee.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BFS on unweighted graph&lt;/td&gt;
&lt;td&gt;When a node is dequeued, its recorded distance equals the shortest path from the source. This depends on FIFO ordering and equal edge weights.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Topological sort via DFS&lt;/td&gt;
&lt;td&gt;A node is added to the result only after all its descendants have been added. Reversing the post-order produces a valid topological ordering.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0/1 knapsack DP&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;dp[i][w]&lt;/code&gt; equals the maximum value using items &lt;code&gt;0..i&lt;/code&gt; with capacity &lt;code&gt;w&lt;/code&gt;. Each cell depends only on previously computed cells.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The two step structure is the same across all of them. State the property, verify the boundaries. What changes is what the invariant looks like and how many variables it tracks.&lt;/p&gt;
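
&lt;p&gt;To make one of those shapes concrete, here is a standard binary search with its invariant written out as comments. This is a generic sketch, not tied to any particular problem from the table:&lt;/p&gt;

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    # Invariant: if target is present, its index lies in [low, high].
    while high >= low:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        if target > arr[mid]:
            # target is larger, so it can't be in [low, mid].
            low = mid + 1
        else:
            # target is smaller, so it can't be in [mid, high].
            high = mid - 1
    # The range [low, high] is now empty; by the invariant, target is absent.
    return -1
```

&lt;p&gt;Each branch preserves the invariant while shrinking the range, and termination plus the invariant together give correctness: the two-step argument in miniature.&lt;/p&gt;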

&lt;h2&gt;
  
  
  Practising under time pressure
&lt;/h2&gt;

&lt;p&gt;Knowing the method and producing one in 45 minutes are different. The bottleneck isn't understanding what an invariant is. It's stating one quickly for an algorithm you just designed.&lt;/p&gt;

&lt;p&gt;A drill that works: pick a problem you've already solved. Set a timer for 60 seconds. Try to state the loop invariant in one sentence. If you can't, that's a signal you solved the problem by pattern matching, not by understanding what made it work. Re-read the code and figure out what property the algorithm depends on.&lt;/p&gt;

&lt;p&gt;Run that drill on five problems across five different patterns each week. After a month, two things change. First, articulating invariants becomes reflexive. Second, your debugging gets faster, because checking whether the invariant still holds at each step pinpoints the broken iteration faster than tracing the full execution.&lt;/p&gt;

&lt;p&gt;A second drill is wrong invariant spotting. Take a correct algorithm. Write down an invariant that &lt;em&gt;sounds&lt;/em&gt; plausible but is subtly wrong. Find the iteration where it breaks. This trains the gap between invariants that describe what the code does and invariants that guarantee correctness. Most interview mistakes live in that gap.&lt;/p&gt;

&lt;p&gt;I keep watching this play out among the engineers I've seen prep. The ones who can articulate invariants under pressure are usually the ones who solved fewer problems but understood each one more deeply. The ones who froze had solved more but couldn't say what made any single algorithm work.&lt;/p&gt;

&lt;p&gt;If you want a worked walkthrough of why the variable sliding window update preserves the invariant before you ever solve a problem with it, &lt;a href="https://www.codeintuition.io/courses/array/wAvrphLQKKOxZazp-fg6X" rel="noopener noreferrer"&gt;the variable sliding window lesson&lt;/a&gt; walks through it step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two halves of the same skill
&lt;/h2&gt;

&lt;p&gt;Correctness reasoning and mental dry running pair together. The dry run gives you the trace. The invariant gives you the claim to verify against the trace. Practising them apart leaves a hole that interview pressure exposes. Practising them together is what makes the reasoning hold up when the algorithm is one you just designed.&lt;/p&gt;

&lt;p&gt;The same idea shows up in dynamic programming. When you derive a recurrence, you're implicitly stating an invariant: &lt;code&gt;dp[i]&lt;/code&gt; represents the optimal answer for the first &lt;code&gt;i&lt;/code&gt; elements. Proving that recurrence correct uses the same two step structure, just with a table-shaped invariant instead of a scalar one.&lt;/p&gt;
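
&lt;p&gt;A minimal illustration of that DP claim, using climbing stairs as the example (my choice of problem, picked only because its invariant is short): the recurrence is exactly the invariant applied to &lt;code&gt;dp[i]&lt;/code&gt;.&lt;/p&gt;

```python
def climb_stairs(n):
    # Invariant: dp[i] is the number of distinct ways to reach step i
    # using steps of size 1 or 2.
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to be at the bottom: take no steps
    for i in range(1, n + 1):
        # The last move into step i came from i - 1 or i - 2.
        dp[i] = dp[i - 1] + (dp[i - 2] if i >= 2 else 0)
    return dp[n]
```

&lt;p&gt;Proving the recurrence correct is the same two-step argument: state what &lt;code&gt;dp[i]&lt;/code&gt; means, then check that the update and the base case preserve it.&lt;/p&gt;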

&lt;p&gt;If you’ve ever frozen when an interviewer said “prove your algorithm works,” the missing piece usually isn’t more problems.&lt;/p&gt;

&lt;p&gt;It’s learning to articulate the invariant.&lt;/p&gt;

&lt;p&gt;I wrote a &lt;a href="https://www.codeintuition.io/blogs/algorithm-correctness-proof" rel="noopener noreferrer"&gt;longer breakdown&lt;/a&gt; covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;nested loop invariants&lt;/li&gt;
&lt;li&gt;wrong invariant drills&lt;/li&gt;
&lt;li&gt;DP correctness reasoning&lt;/li&gt;
&lt;li&gt;and how to structure interview-grade proofs without formal notation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which algorithm did you finally feel confident in only after you could state its invariant out loud?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>leetcode</category>
      <category>datastructures</category>
      <category>career</category>
    </item>
    <item>
      <title>AlgoExpert vs NeetCode: The Interview Skill Neither One Actually Trains</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Tue, 05 May 2026 22:31:58 +0000</pubDate>
      <link>https://dev.to/codeintuition/algoexpert-vs-neetcode-the-interview-skill-neither-one-actually-trains-567f</link>
      <guid>https://dev.to/codeintuition/algoexpert-vs-neetcode-the-interview-skill-neither-one-actually-trains-567f</guid>
      <description>&lt;p&gt;A few years back I worked through both AlgoExpert and NeetCode while preparing for interviews. The 100 polished videos and the 400+ free walkthroughs were useful for what they covered. The interview round that broke me wasn't a problem either platform had skipped. It was a problem they both had a clean walkthrough for, where I could read either solution after the round and recognise the technique, but in 25 minutes against a whiteboard I couldn't see it.&lt;/p&gt;

&lt;p&gt;The problem was Longest Palindromic Substring. Both platforms have a video. Both derive expand around centre cleanly. After watching either, the technique makes sense. The issue was the interviewer didn't say "this is a palindrome expansion problem." The prompt said "find the longest substring that reads the same forwards and backwards." That gap, between the technique I could follow on a video and the technique I could spot from a description, is what neither platform set out to train.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; AlgoExpert wins on video depth per problem. NeetCode wins on breadth and free access. Both teach how techniques work after you've named the technique. Neither teaches the recognition before the technique gets named, and that recognition is what an unseen medium problem tests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What AlgoExpert is good at
&lt;/h2&gt;

&lt;p&gt;Clement's videos are clean. There's no other word for them. He picks 100 problems, records each one, walks brute force to optimal in a consistent visual style, and the editing makes the reasoning easy to follow. The single instructor consistency is underrated. After three videos you've adapted to his vocabulary, his pacing, and his shorthand for tradeoffs. From video four onward you spend less time adapting and more time absorbing.&lt;/p&gt;

&lt;p&gt;The browser IDE inside AlgoExpert is also the best of any platform I tried that leads with video. After watching the walkthrough you have a workspace already loaded with the function signature, the tests, and the language template. The transition from passive watching to active solving costs you no setup, which matters more than it sounds.&lt;/p&gt;

&lt;p&gt;The bundle is the other thing the platform gets right. SystemsExpert and FrontendExpert sit beside AlgoExpert under one account. If you're prepping across DSA, system design, and frontend rounds, paying once for three coordinated curriculums is a real saving in cognitive load alone. The argument that "100 problems is too few" misses what AlgoExpert is going for. The platform aims for depth on a small set, not coverage of everything. On that goal it lands.&lt;/p&gt;

&lt;h2&gt;
  
  
  What NeetCode does well
&lt;/h2&gt;

&lt;p&gt;NeetCode's free tier is the best reason to start there. The YouTube channel covers more problems than most paid platforms, and the production quality has improved year over year. NeetCode 150 is the most widely shared curated list in the prep community for a reason. You get a defensible problem ordering without any commitment.&lt;/p&gt;

&lt;p&gt;The community momentum is the second thing NeetCode does that almost nobody else matches. Engineers swap solutions in the YouTube comments, swap timelines on the subreddit, swap notes in Discord. Studying alone is the failure mode for most preparation. A platform that pulls you into a group of people preparing in parallel is doing real work, even if the work isn't a feature on a comparison sheet.&lt;/p&gt;

&lt;p&gt;The mapping to LeetCode is also worth naming. The problems you practise on NeetCode are the same problems Big Tech screening tools serve, which means your practice environment matches your screening environment. AlgoExpert's IDE is more polished, but NeetCode's environment matches what you'll actually face when a recruiter sends you a Codility or HackerRank link.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shared gap, made concrete
&lt;/h2&gt;

&lt;p&gt;Take Longest Palindromic Substring. The interview prompt is one sentence: given a string, return the longest substring that reads the same forwards and backwards. AlgoExpert's video derives expand around centre cleanly. NeetCode's video does the same with a slightly different style. Watch either and the solution makes sense.&lt;/p&gt;

&lt;p&gt;What neither video sets you up to do is the move that happens before the technique gets named. The move is reading the prompt, noticing that a substring is being asked for, noticing the symmetry constraint, and reaching for expand around centre rather than DP because of those two visible features. In an interview, no one labels the problem for you. Your search for a technique starts from the description, not from a category tag.&lt;/p&gt;

&lt;p&gt;The pattern repeats across topics. Take a tree problem like Binary Tree Maximum Path Sum. Both platforms have walkthroughs. After watching either you can reproduce the postorder helper that returns the local max while updating a global. What neither walkthrough installs is the recognition that "any node, any path, any direction" signals the helper-with-side-effects pattern. The walkthrough teaches the pattern. The recognition is a separate skill the walkthrough assumes you'll pick up by exposure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What recognition is, mechanically
&lt;/h2&gt;

&lt;p&gt;Recognition is a small set of features you can name on the prompt before any code is written. For expand around centre, the features are roughly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The output is a substring of a string.&lt;/li&gt;
&lt;li&gt;The constraint involves symmetry, palindromes, or some "reads the same in both directions" property.&lt;/li&gt;
&lt;li&gt;The brute force of checking every substring is cubic, so the technique you're looking for lands at &lt;code&gt;O(n^2)&lt;/code&gt; or better, not necessarily &lt;code&gt;O(n)&lt;/code&gt; or sub-linear.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When all three apply, expand around centre is the candidate technique to try. Longest Palindromic Substring matches all three. Palindromic Substrings (count, not length) matches the first two. Both fall under the same recognition rule.&lt;/p&gt;
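
&lt;p&gt;For reference, here is a minimal expand around centre sketch. It's one common way to write the technique, not a transcription of either platform's solution:&lt;/p&gt;

```python
def longest_palindromic_substring(s):
    # Every palindrome has a centre: a character (odd length) or a gap
    # between two characters (even length), giving 2n - 1 centres total.
    best = ""
    for centre in range(2 * len(s) - 1):
        left = centre // 2
        right = left + centre % 2
        # Expand while both ends stay in bounds and the characters match.
        while left >= 0 and len(s) > right and s[left] == s[right]:
            left -= 1
            right += 1
        # s[left + 1 : right] is the widest palindrome at this centre.
        if right - left - 1 > len(best):
            best = s[left + 1:right]
    return best
```

&lt;p&gt;The trigger features show up directly in the code: the output is a substring, the expansion step is the symmetry constraint, and the nested expansion over &lt;code&gt;2n - 1&lt;/code&gt; centres is the &lt;code&gt;O(n^2)&lt;/code&gt; bound.&lt;/p&gt;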

&lt;p&gt;This is the part neither AlgoExpert nor NeetCode targets directly. Their walkthroughs assume you've already arrived at the technique, then explain it. The arriving is the work no one is helping you do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why volume alone doesn't close it
&lt;/h2&gt;

&lt;p&gt;The cognitive science name for this is the &lt;a href="https://en.wikipedia.org/wiki/Generation_effect" rel="noopener noreferrer"&gt;generation effect&lt;/a&gt;. Producing an answer from first principles, even imperfectly, builds stronger memory and stronger transfer than recognising a familiar solution. Watching a walkthrough builds recognition of an answer you've been shown. Interviews ask for generation, where you produce the technique from a description you haven't seen.&lt;/p&gt;

&lt;p&gt;What you watch and what you generate aren't the same skill. Watching ten variants of expand around centre gives you a strong recognition memory for variants close to what you watched. The interview problem is rarely close enough. It tends to look just different enough that the recognition memory misses, and you stall on what to try next.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/Transfer_of_learning" rel="noopener noreferrer"&gt;transfer of learning&lt;/a&gt; literature splits this into near transfer and far transfer. Near transfer is solving problems that look like ones you've already solved. Far transfer is reasoning through problems that look different but follow the same underlying pattern. Volume in the AlgoExpert / NeetCode mode produces near transfer reliably. Far transfer comes from a different practice shape, where the recognition gets trained on its own.&lt;/p&gt;

&lt;h2&gt;
  
  
  A recognition protocol you can run this week
&lt;/h2&gt;

&lt;p&gt;If you've worked through either platform's content and unfamiliar mediums still freeze you, the move isn't another fifty walkthroughs. It's a tighter loop on recognition first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick one technique a week. Expand around centre, sliding window, two pointer, monotonic stack, postorder with side effects.&lt;/li&gt;
&lt;li&gt;Write down the trigger features for that technique. Three or four specific features you can see in a problem statement that, when they all apply, make this technique the candidate. Write them in your own words, not from a cheat sheet.&lt;/li&gt;
&lt;li&gt;Read five problems that use this technique without solving them. For each, name the triggers you can see in the prompt before reading any constraints in detail.&lt;/li&gt;
&lt;li&gt;Solve three or four problems on the technique with the problem name and category tag hidden. The cover-the-name move is what forces you to recognise the technique before coding starts.&lt;/li&gt;
&lt;li&gt;Once a week, do one pressure session. Cover the title, set 25 minutes, talk through your reasoning out loud, and don't open the IDE until you've named the technique you intend to use.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A technique a week, eight or ten weeks of focused work. That replaces the hundred-plus mediums where the signal you actually need gets buried under details that don't matter.&lt;/p&gt;

&lt;p&gt;Both AlgoExpert and NeetCode are reasonable choices for the watching part of this loop. Once you've watched the technique once or twice, the work that compounds is the recognition reps the walkthroughs don't include.&lt;/p&gt;

&lt;p&gt;If you’ve already watched hundreds of walkthroughs but still freeze on unfamiliar mediums, the problem usually isn’t knowledge.&lt;/p&gt;

&lt;p&gt;It’s recognition.&lt;/p&gt;

&lt;p&gt;I wrote a &lt;a href="https://www.codeintuition.io/blogs/algoexpert-vs-neetcode" rel="noopener noreferrer"&gt;longer breakdown&lt;/a&gt; comparing AlgoExpert and NeetCode on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;depth vs breadth&lt;/li&gt;
&lt;li&gt;passive watching vs active recall&lt;/li&gt;
&lt;li&gt;and the recognition drills that finally fixed this for me&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When did the recognition click for a pattern you used to watch and re-watch without spotting on a fresh problem, and what was the specific problem that broke it open?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>career</category>
      <category>leetcode</category>
      <category>learning</category>
    </item>
    <item>
      <title>The 90 Day FAANG Prep Plan That Actually Works</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Mon, 04 May 2026 20:30:50 +0000</pubDate>
      <link>https://dev.to/codeintuition/the-90-day-faang-prep-plan-that-actually-works-stop-studying-in-the-wrong-order-4he7</link>
      <guid>https://dev.to/codeintuition/the-90-day-faang-prep-plan-that-actually-works-stop-studying-in-the-wrong-order-4he7</guid>
      <description>&lt;p&gt;You count the weeks between today and your on site. Twelve. You pull up a 90 day FAANG prep plan and the structure looks reasonable: easy problems for two weeks, mediums for six, hards for the rest. Six weeks in, you hit binary trees and realise your recursion is shaky. Two weeks later you try a DP problem and can't formulate the recurrence. Suddenly the 12 week plan is a 6 week plan with 6 weeks of rework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Difficulty isn’t the problem. Order is.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; A 90 day FAANG plan works when topics are ordered by dependency, not by difficulty or popularity. The typical "easy then medium then hard" plan ignores that DP needs recursion, recursion needs the call stack, and the call stack needs array fundamentals. Sequence by what each topic &lt;em&gt;requires&lt;/em&gt;, not by how hard it feels.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What "wrong order" actually looks like
&lt;/h2&gt;

&lt;p&gt;The failure mode in most 90 day plans is the same shape every time. The plan groups topics by surface difficulty: arrays first because they are easy, DP last because it is hard. Or the plan groups by frequency: arrays first because they appear most, then strings, then trees, then DP. Both shapes ignore that DSA topics have hard prerequisites.&lt;/p&gt;

&lt;p&gt;You can't reason about binary tree traversal without recursion. You can't build recursive intuition without the call stack. You can't pattern-match on hash table problems until you have internalised what &lt;code&gt;O(1)&lt;/code&gt; amortised lookup actually buys you. These gaps don't show up in Week 2. They show up around Week 5 or Week 7, deep enough into the plan that you've already built a lot on top of unstable ground.&lt;/p&gt;

&lt;p&gt;When that happens, the plan quietly stops working. You either skip the prerequisite to "stay on schedule," which compounds the gap into every later topic, or you stop and rebuild, which eats two of the remaining weeks. Either way, the timeline collapses.&lt;/p&gt;

&lt;h2&gt;
  
  
  The order that actually works
&lt;/h2&gt;

&lt;p&gt;Here's the dependency order that emerges after watching dozens of engineers prep:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Weeks 1 to 2: Arrays and linked lists.&lt;/strong&gt; Two pointers, sliding window, interval merging on arrays. Reversal, fast and slow pointers, merge patterns on linked lists. About fifteen patterns. Everything else builds on these.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weeks 3 to 4: Hash tables and stacks.&lt;/strong&gt; Hash tables unlock counting, prefix sum, and the harder variants of sliding window. Stacks unlock monotonic stack, sequence validation, expression evaluation. Both are prerequisites for tree traversal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weeks 5 to 6: Queues, binary trees, BSTs.&lt;/strong&gt; Queues for level order BFS. Trees for preorder, postorder, root to leaf paths. BSTs for sorted traversal and range operations. Pattern density picks up fast here.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weeks 7 to 8: Heaps, recursion, backtracking.&lt;/strong&gt; Heaps for top K and comparator patterns. Recursion formalises head recursion, tail recursion, and multiple recursion. Backtracking introduces enumeration and constraint search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weeks 9 to 10: Graphs, sorting, searching.&lt;/strong&gt; DFS, BFS, Dijkstra, topological sort. Quickselect and custom compare. Binary search variants including predicate search on the answer space.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weeks 11 to 12: Dynamic programming and review.&lt;/strong&gt; DP last, because DP is what every previous topic was preparing you for.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Wikipedia article on the &lt;a href="https://en.wikipedia.org/wiki/Spacing_effect" rel="noopener noreferrer"&gt;spacing effect&lt;/a&gt; summarises why this order also helps retention: practice distributed across weeks, with each week building on the last, produces stronger durable recall than the same volume crammed late. The dependency order isn't only logical, it's also how you keep the early weeks from fading by Week 12.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why DP belongs at Week 11, not Week 3
&lt;/h2&gt;

&lt;p&gt;Take Coin Change, a standard DP interview problem. To derive the solution rather than memorise it, you need three things lined up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A recurrence relation. Built on top of recursion fluency.&lt;/li&gt;
&lt;li&gt;Recognition of overlapping subproblems. Built on top of seeing how multiple recursive calls share state.&lt;/li&gt;
&lt;li&gt;Memoization. Built on top of understanding how cached results map to subproblem states.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now imagine an engineer who reaches DP in Week 3. They've covered arrays, maybe linked lists. They have not internalised the call stack, they have not seen recursion expressed as a recurrence, they have not seen overlapping subproblems show up anywhere. The Coin Change solution they study isn't reasoning, it's a block of code they're supposed to remember:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;coinChange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coins&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;dp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;dp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;coins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;dp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;dp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;dp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;dp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nf"&gt;float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The engineer who reaches the same problem in Week 11 sees this differently. They've spent four weeks with recursion. They've built backtracking on top of multiple-call recursion. They've seen the same &lt;code&gt;dp[i]&lt;/code&gt; shape applied to climbing stairs, then to longest increasing subsequence, then to Coin Change. The recurrence &lt;code&gt;dp[a] = min(dp[a], dp[a - c] + 1)&lt;/code&gt; doesn't have to be remembered, because they can re-derive it from the invariant: &lt;code&gt;dp[a]&lt;/code&gt; is the minimum number of coins to make amount &lt;code&gt;a&lt;/code&gt;, and the only way to make &lt;code&gt;a&lt;/code&gt; is to make &lt;code&gt;a - c&lt;/code&gt; for some coin &lt;code&gt;c&lt;/code&gt; and add one.&lt;/p&gt;
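
&lt;p&gt;For contrast, here is the same recurrence written top-down with memoization, the form that falls out most naturally from recursion-first prep. This is a sketch, with &lt;code&gt;lru_cache&lt;/code&gt; standing in for a hand-rolled memo table:&lt;/p&gt;

```python
from functools import lru_cache

def coin_change(coins, amount):
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def min_coins(a):
        # Invariant: min_coins(a) is the fewest coins making amount a,
        # or infinity if a can't be made from these coins.
        if a == 0:
            return 0
        # The only way to make a: make a - c for some coin c, add one coin.
        return min((min_coins(a - c) + 1 for c in coins if a >= c),
                   default=float('inf'))

    result = min_coins(amount)
    return result if result != float('inf') else -1
```

&lt;p&gt;For large amounts the recursion depth becomes the practical limit, which is one reason to flatten this into the bottom-up table shown earlier. The point is that both versions are the same invariant; the Week 11 engineer can move between them.&lt;/p&gt;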

&lt;p&gt;That's the difference dependency order produces. Not more knowledge. Knowledge that holds together under interview pressure because every piece supports the one above it.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to add timed pressure (load-bearing)
&lt;/h2&gt;

&lt;p&gt;It is easy to practise in comfort conditions for eleven weeks and then panic in your first mock interview. The gap between "I solved this at my desk with no timer" and "I solved this in 20 minutes with no hints" is wider than most engineers expect. The right time to introduce timed practice is after the foundations are solid but before the advanced topics start. Around Week 6.&lt;/p&gt;

&lt;p&gt;By Week 6, you've covered arrays, linked lists, hash tables, stacks, queues, and started binary trees. That's enough pattern coverage to attempt timed problems honestly. You won't know every pattern yet, but you know enough to practise the skill of solving under pressure separately from the skill of learning new patterns.&lt;/p&gt;

&lt;p&gt;From Week 6 to Week 8, attempt previously completed problems under a timer. Twenty minutes for mediums, thirty for hards. No hints, no notes. The goal isn't a clean pass rate, it's training the feeling of working without the safety of references while the material is still fresh in your mind.&lt;/p&gt;

&lt;p&gt;From Week 9 to Week 12, switch to multi problem assessments under a 50 minute timer, the way actual rounds work. You'll have covered enough of the patterns by then to simulate realistic interview coverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Productive struggle vs spinning
&lt;/h2&gt;

&lt;p&gt;Plans break. You'll have a week where work blows up, or you'll hit a topic that takes longer than expected. The better question isn't "how do I avoid falling behind," it's "how do I tell whether I'm stuck productively or just spinning."&lt;/p&gt;

&lt;p&gt;The two look similar from the inside but feel different in one specific way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Productive struggle.&lt;/strong&gt; You can name what you don't understand. "Merge sort is &lt;code&gt;O(n log n)&lt;/code&gt;, but the reason the divide step doesn't add extra cost isn't clicking." That's a specific gap you can target with a specific lesson, problem, or question.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spinning.&lt;/strong&gt; You're re-reading the same material without forming new questions. You've watched the same walkthrough three times and still can't reproduce the logic from a blank screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're spinning, change the input. Try solving a strictly simpler variant. Try writing the solution from memory without any reference, even if you produce something wrong. Try explaining the concept out loud as if to a junior. The act of generating an attempt, even a broken one, surfaces the missing piece more clearly than another round of passive review. This is the &lt;a href="https://en.wikipedia.org/wiki/Generation_effect" rel="noopener noreferrer"&gt;generation effect&lt;/a&gt; doing real work for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concrete checkpoints (because vague progress isn't progress)
&lt;/h2&gt;

&lt;p&gt;Vague progress signals like "things are starting to click" don't survive interview-week stress. Use concrete checkpoints at each month boundary. If three or more items at any checkpoint don't describe you, don't move to the next month. Spend an extra week on the weak areas. A 13-week plan that's solid beats a 12-week plan with holes.&lt;/p&gt;

&lt;p&gt;By Week 4, you should be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Solve array two pointer and sliding window problems without looking up the pattern.&lt;/li&gt;
&lt;li&gt;Trace linked list reversal mentally, tracking pointer state at each step.&lt;/li&gt;
&lt;li&gt;Explain &lt;em&gt;why&lt;/em&gt; a hash table lookup is &lt;code&gt;O(1)&lt;/code&gt; amortised, not just that it is.&lt;/li&gt;
&lt;li&gt;Identify which stack pattern applies from the problem statement alone.&lt;/li&gt;
&lt;/ul&gt;
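&lt;p&gt;For the amortised-&lt;code&gt;O(1)&lt;/code&gt; checkpoint, the usual argument is that resizes move many keys but happen geometrically rarely. A toy sketch that makes the arithmetic visible (&lt;code&gt;ToyHashMap&lt;/code&gt; is illustrative, not a real library):&lt;/p&gt;

```python
class ToyHashMap:
    """Separate-chaining hash map that doubles capacity at 0.75 load."""

    def __init__(self):
        self.capacity = 8
        self.size = 0
        self.buckets = [[] for _ in range(self.capacity)]
        self.rehash_moves = 0  # total keys moved across all resizes

    def put(self, key, value):
        if self.size >= 0.75 * self.capacity:
            self._resize()
        bucket = self.buckets[hash(key) % self.capacity]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))
        self.size += 1

    def get(self, key):
        for k, v in self.buckets[hash(key) % self.capacity]:
            if k == key:
                return v
        raise KeyError(key)

    def _resize(self):
        old_buckets = self.buckets
        self.capacity *= 2
        self.buckets = [[] for _ in range(self.capacity)]
        for bucket in old_buckets:
            for key, value in bucket:
                self.buckets[hash(key) % self.capacity].append((key, value))
                self.rehash_moves += 1
```

&lt;p&gt;Inserting &lt;code&gt;n&lt;/code&gt; keys triggers resizes at sizes 6, 12, 24, and so on, so the total rehash work stays under &lt;code&gt;2n&lt;/code&gt; moves: linear overall, constant amortised per insert. Being able to walk through that doubling argument is what the checkpoint asks for.&lt;/p&gt;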

&lt;p&gt;By Week 8:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trace a binary tree postorder traversal on paper, tracking state at each node.&lt;/li&gt;
&lt;li&gt;Reason about BST problems through the sorted traversal invariant.&lt;/li&gt;
&lt;li&gt;Write a recursive solution and walk through how the call stack unwinds.&lt;/li&gt;
&lt;li&gt;Have attempted ten or more problems under timed conditions, with a pass rate above 50%.&lt;/li&gt;
&lt;/ul&gt;
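&lt;p&gt;The sorted traversal invariant in the second item can be exercised directly: an inorder walk of a BST visits keys in ascending order, and many BST mediums reduce to that one fact. A small sketch (the &lt;code&gt;Node&lt;/code&gt; class and helpers are illustrative):&lt;/p&gt;

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(root):
    # Left subtree, then node, then right subtree.
    # For a valid BST this yields values in sorted order.
    if root is None:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

def is_bst(root):
    vals = inorder(root)
    return all(a < b for a, b in zip(vals, vals[1:]))

def kth_smallest(root, k):
    # "Kth smallest in a BST" falls out of the invariant for free.
    return inorder(root)[k - 1]
```

&lt;p&gt;Once the invariant is internalised, "validate BST" and "kth smallest element" stop being separate things to memorise.&lt;/p&gt;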

&lt;p&gt;By Week 12:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify DP subproblems and formulate recurrences for problems you haven't seen.&lt;/li&gt;
&lt;li&gt;Choose between BFS and DFS for a graph problem and explain why.&lt;/li&gt;
&lt;li&gt;Complete a 50-minute multi-problem assessment with at least one correct solution.&lt;/li&gt;
&lt;li&gt;Hit the timer without reaching for the solution tab first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can't honestly check most of these, your plan needs an extra week, not a heroic weekend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your final week
&lt;/h2&gt;

&lt;p&gt;Week 12 isn't for new material. If you're still covering fresh topics with seven days to go, something earlier broke. The final week is for consolidation under realistic conditions.&lt;/p&gt;

&lt;p&gt;Days 1 to 3 are targeted weak-spot practice. Look back at the timed results from Weeks 9 to 11. Which topics had the lowest pass rate? Which patterns did you fail to identify within the first five minutes? Spend three days isolating those specific patterns and attempting two or three problems that test exactly that pattern. Don't re-study the entire topic. Isolate the gap.&lt;/p&gt;

&lt;p&gt;Days 4 to 5 are full simulation rounds. Two complete mock interview simulations. Forty-five to fifty minutes, two problems, no IDE autocompletion, talk out loud as you write. If you don't have a partner, record yourself. The recording forces the same "someone is watching" pressure that slows people down in real interviews. Pay attention to time allocation. The most common failure pattern is spending thirty minutes on the first problem and having fifteen for the second. Practise the discipline of checking your progress at the 20-minute mark and making a deliberate decision about whether to keep debugging or move on.&lt;/p&gt;

&lt;p&gt;Days 6 to 7 are rest and review. Stop solving new problems. Review your notes on the three or four patterns you found hardest. Get enough sleep. The difference between a well-rested engineer who's covered 90% of the material and an exhausted engineer who's crammed 100% is measurable in interview performance. Fatigue degrades pattern recognition speed before it degrades anything else, and recognition speed is exactly what interviews test.&lt;/p&gt;

&lt;h2&gt;
  
  
  The order is the advantage
&lt;/h2&gt;

&lt;p&gt;Compressing the whole roadmap into one rule: study what each topic requires, not what feels hardest. The Coin Change recurrence at Week 11 holds together because every layer below it (recursion in Week 7, stack mechanics in Week 4, array fundamentals in Week 1) was built before it was needed. An engineer who jumps to DP in Week 3 isn't ahead, they're standing on nothing.&lt;/p&gt;




&lt;p&gt;If you want the full version of this roadmap with exact problem lists, pattern breakdowns, and weekly schedules, I’ve written it &lt;a href="https://www.codeintuition.io/blogs/90-day-faang-preparation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;This post gives you the structure.&lt;br&gt;&lt;br&gt;
The full guide gives you the execution.&lt;/p&gt;




&lt;p&gt;What’s the topic in your prep where the dependency chain finally clicked—and which week did it happen?&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>career</category>
      <category>learning</category>
      <category>faang</category>
    </item>
    <item>
      <title>I Solved 200 LeetCode Problems and Still Froze in Interviews</title>
      <dc:creator>Prakhar Srivastava</dc:creator>
      <pubDate>Sun, 03 May 2026 16:27:58 +0000</pubDate>
      <link>https://dev.to/codeintuition/i-solved-200-leetcode-problems-and-still-froze-in-interviews-1hoj</link>
      <guid>https://dev.to/codeintuition/i-solved-200-leetcode-problems-and-still-froze-in-interviews-1hoj</guid>
      <description>&lt;p&gt;A few years ago I solved 200 LeetCode problems and still froze on Mediums I hadn't seen. The breakthrough wasn't another hundred problems. It was a different loop.&lt;/p&gt;

&lt;p&gt;A problem asks for the longest substring with at most K distinct characters. You've solved sliding window before. Maximum sum subarray of size K, done. Longest substring without repeating characters, done. This third one stalls you. Twenty minutes pass. Discuss says sliding window. You'd already solved sliding window problems. The recognition didn't work on this one.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Practice volume builds memory of specific problems, not the recognition you have to do before you start coding. The skill that transfers to unfamiliar interview mediums is identifying which technique applies from what the problem looks like. That recognition is what most LeetCode-style prep doesn't train, and that gap is what this post is about.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What LeetCode actually does well
&lt;/h2&gt;

&lt;p&gt;LeetCode is the default coding interview prep platform for a reason. The problem bank is the strongest available: 3,000+ problems across every common data structure, tagged by company and frequency. The company tagging is the most reliable public signal you can get for "what does Amazon currently ask." Nothing else is close on that dimension.&lt;/p&gt;

&lt;p&gt;The free tier is generous, the discuss forum is open, and the weekly contests are free. If you already understand the underlying patterns and you need volume to drill them, LeetCode is the right platform. The contest system is genuinely underrated for building speed under time. Top voted discuss replies on popular problems also hold up. They often explain the reasoning more clearly than the official editorials.&lt;/p&gt;

&lt;p&gt;So the question isn't whether LeetCode is good. It is good. The real question is whether the thing holding you back is the kind of thing LeetCode's design actually fixes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap volume doesn't close
&lt;/h2&gt;

&lt;p&gt;The earlier sliding window problems taught implementation on those specific problems, not the rule for when sliding window applies. The connection between "K distinct characters" and "variable sliding window" was never made explicit anywhere in your prep. You learned an implementation by example, not a recognition rule. When the visible parts of the problem change, the implementation memory doesn't carry over.&lt;/p&gt;

&lt;p&gt;Learning research calls this near transfer versus far transfer. Near transfer means solving problems that look like ones you've already solved. Far transfer means reasoning through problems that look different but follow the same underlying pattern. The Wikipedia article on &lt;a href="https://en.wikipedia.org/wiki/Transfer_of_learning" rel="noopener noreferrer"&gt;transfer of learning&lt;/a&gt; covers it. The short version: practice volume produces near transfer reliably and far transfer unreliably. You can grind 500 problems and still not generalise across the differences in description if no one ever taught you which features signal which pattern.&lt;/p&gt;

&lt;p&gt;That isn't a talent gap. It's a gap in the method.&lt;/p&gt;

&lt;h2&gt;
  
  
  What recognition actually looks like
&lt;/h2&gt;

&lt;p&gt;Recognition is reading a problem statement, naming the visible features, and matching them to a technique before any code is written. Every technique has a small set of trigger conditions, usually three to five. When all of them match, you've identified the technique.&lt;/p&gt;

&lt;p&gt;For variable sliding window, the trigger conditions are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The input is a contiguous range (a substring, a subarray, a window of consecutive elements).&lt;/li&gt;
&lt;li&gt;The optimisation target is the length of that range (longest, shortest, smallest valid).&lt;/li&gt;
&lt;li&gt;There is a condition you can check incrementally as the window expands or contracts (count of distinct characters, sum within bounds, set membership).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When all three apply, variable sliding window is the technique. K distinct characters hits all three. So does longest substring without repeating characters. So does minimum window substring. The triggers are the same. The descriptions differ.&lt;/p&gt;

&lt;p&gt;The skill is matching problems to triggers before coding. That's what doesn't get built by grinding problems sequentially.&lt;/p&gt;

&lt;h2&gt;
  
  
  A code template you can adapt
&lt;/h2&gt;

&lt;p&gt;Here's the variable sliding window template in Python. Once you recognise the technique, the implementation has very few moving parts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;variable_window&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;left&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;init_state&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;best&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;right&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;expand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;right&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;shrink&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;left&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
            &lt;span class="n"&gt;left&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;best&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;right&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;left&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;best&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For K distinct characters, &lt;code&gt;state&lt;/code&gt; is a hash map of character counts within the current window, &lt;code&gt;condition&lt;/code&gt; is &lt;code&gt;len(state) &amp;lt;= K&lt;/code&gt;, and &lt;code&gt;shrink&lt;/code&gt; decrements counts and removes keys whose count drops to zero. The recognition step (this is variable sliding window) is the decision that matters. The code follows.&lt;/p&gt;

&lt;p&gt;For longest substring without repeating characters, &lt;code&gt;state&lt;/code&gt; becomes a hash set and &lt;code&gt;condition&lt;/code&gt; is "no duplicate." Same skeleton. Same invariant.&lt;/p&gt;
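&lt;p&gt;Instantiating the skeleton for both problems makes the "two or three lines of adaptation" claim checkable. A sketch with the helpers folded inline (the no-repeat variant shrinks before expanding, since a set can't hold the duplicate):&lt;/p&gt;

```python
from collections import defaultdict

def longest_k_distinct(s, k):
    # state: counts of characters inside the current window.
    counts = defaultdict(int)
    left = best = 0
    for right, ch in enumerate(s):
        counts[ch] += 1                 # expand
        while len(counts) > k:          # condition violated: > k distinct
            counts[s[left]] -= 1        # shrink from the left
            if counts[s[left]] == 0:
                del counts[s[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

def longest_no_repeat(s):
    # state: a set of characters inside the current window.
    window = set()
    left = best = 0
    for right, ch in enumerate(s):
        while ch in window:             # condition violated: duplicate
            window.remove(s[left])      # shrink until ch can enter
            left += 1
        window.add(ch)                  # expand
        best = max(best, right - left + 1)
    return best
```

&lt;p&gt;Both functions are the same loop with a different &lt;code&gt;state&lt;/code&gt; and a different violation check, which is the point of training the recognition step first.&lt;/p&gt;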

&lt;p&gt;That's the payoff of recognition trained explicitly. Write the template once. Adapt per problem in two or three lines. Stop staring at problems hoping the right approach surfaces.&lt;/p&gt;

&lt;h2&gt;
  
  
  How practice and interview conditions diverge
&lt;/h2&gt;

&lt;p&gt;Recognition is half the gap. The other half is the conditions you practise under. LeetCode's practice environment shows you the problem name (which often hints at the technique), gives unlimited code executions, and doesn't penalise failed attempts. Real interviews give none of that.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Practice (LeetCode default)&lt;/th&gt;
&lt;th&gt;Real interview&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Problem name&lt;/td&gt;
&lt;td&gt;Visible, often hints at category&lt;/td&gt;
&lt;td&gt;Not given&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code executions&lt;/td&gt;
&lt;td&gt;Unlimited, no penalty&lt;/td&gt;
&lt;td&gt;Limited, every failure costs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hints / discuss&lt;/td&gt;
&lt;td&gt;One click away&lt;/td&gt;
&lt;td&gt;Not available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time limit&lt;/td&gt;
&lt;td&gt;Self set, often skipped&lt;/td&gt;
&lt;td&gt;20 to 45 minutes hard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reasoning&lt;/td&gt;
&lt;td&gt;Internal&lt;/td&gt;
&lt;td&gt;Spoken aloud&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The middle column is fine for the early learning phase. Open exploration helps when you're still building the model. The problem comes when those conditions are the only conditions you ever rehearse under, and the actual test wants something the rehearsal never asked for.&lt;/p&gt;

&lt;p&gt;Replicating interview pressure on your own takes one friend and a kitchen timer. The friend covers the problem name and reads the constraints aloud. You set the timer to 20 or 30 minutes. You don't open the IDE before stating the technique you intend to use. Two sessions a week against a friend who isn't shy about silence will do more for interview readiness than another fifty problems on the comfort setting.&lt;/p&gt;

&lt;h2&gt;
  
  
  A protocol you can run this week
&lt;/h2&gt;

&lt;p&gt;If the K distinct characters story sounded like your last three failed problems, the move isn't more volume. It's a tighter loop on recognition first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick one technique a week (variable sliding window, two pointers, prefix sum, monotonic stack, and so on).&lt;/li&gt;
&lt;li&gt;Write down the trigger conditions for that technique. Three to five features of the problem that, when they all apply, make this technique the right tool. Write them in your own words, not from a cheat sheet.&lt;/li&gt;
&lt;li&gt;Read five problems that use this technique without solving them. For each, name the triggers you can see in the problem statement before reading the constraints in detail.&lt;/li&gt;
&lt;li&gt;Solve three to five problems on this technique with the problem name and tag hidden. Force the recognition step.&lt;/li&gt;
&lt;li&gt;Once a week, do one timed pressure session: friend covers the title, you talk through your reasoning, no IDE until you've named the technique. If you can't name it inside five minutes, the problem rewinds to the recognition queue, not the implementation queue.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A pattern a week, eight to ten patterns covered, roughly ten weeks of focused work. That replaces the hundreds of mediums where the signal gets buried under details that don't matter.&lt;/p&gt;

&lt;p&gt;The longer write up of this is on &lt;a href="https://www.codeintuition.io/blogs/codeintuition-vs-leetcode" rel="noopener noreferrer"&gt;my own blog&lt;/a&gt;, with trigger conditions for prefix sum, two pointers, and monotonic stack alongside the variable sliding window walkthrough above, plus a worked example on minimum window substring that uses the same template.&lt;/p&gt;

&lt;p&gt;When did the recognition click for a pattern you used to grind without seeing, and what was the specific problem that broke it open?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Disclosure: I built Codeintuition, a structured learning platform for coding interviews. The post above is about the technique, not the platform.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>leetcode</category>
      <category>career</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
