
Alex Hunter

Posted on • Originally published at leetcopilot.dev

LRU Cache (LeetCode 146): The Complete Guide to Solving It




LRU Cache (LeetCode #146) is one of the most frequently asked interview questions at Google, Meta, Amazon, and Microsoft. It combines data structure design with practical caching concepts.

This guide covers everything you need to know to solve LRU Cache confidently in interviews.

TL;DR: Quick Solution

Data Structures: Hash Map + Doubly Linked List

Time Complexity: O(1) for both get and put

Key Insight: The hash map gives O(1) lookup. The doubly linked list gives O(1) insertion/deletion at both ends.


The Problem

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

  • LRUCache(int capacity) — Initialize with positive capacity
  • int get(int key) — Return value if exists, else -1
  • void put(int key, int value) — Update or insert. Evict LRU if at capacity.

Constraint: Both operations must run in O(1) average time.


Why This Problem is Tricky

The challenge is achieving O(1) for both operations:

  • get: needs O(1) key lookup
  • put (insert): needs O(1) insert at the front
  • put (evict): needs O(1) removal of the LRU item
  • Updating "recently used": needs O(1) move to the front

A single data structure won't work on its own:

  • Hash Map: O(1) lookup, but no ordering
  • Linked List: O(1) insert/delete, but O(n) lookup
  • Array: O(n) for removal
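To see the bottleneck concretely, here is a hypothetical single-structure attempt using a plain Python list (the class and names are illustrative, not part of the accepted solution). It is correct, but every operation pays O(n) for the scan and the reordering:

```python
class NaiveLRUCache:
    """Illustrative only: a list of (key, value) pairs, most recent first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # [(key, value), ...], front = most recently used

    def get(self, key):
        for i, (k, v) in enumerate(self.items):  # O(n) scan
            if k == key:
                self.items.pop(i)                # O(n) removal
                self.items.insert(0, (k, v))     # O(n) insert at front
                return v
        return -1

    def put(self, key, value):
        for i, (k, _) in enumerate(self.items):  # O(n) scan for existing key
            if k == key:
                self.items.pop(i)
                break
        self.items.insert(0, (key, value))
        if len(self.items) > self.capacity:
            self.items.pop()                     # evict least recent (back)
```

The hash map + doubly linked list combination removes exactly these scans: the map replaces the O(n) lookup, and the list's pointers replace the O(n) reordering.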

Solution: Combine both!


The Key Insight

Use a Hash Map + Doubly Linked List together:

  • Hash Map: key → Node (O(1) lookup)
  • Doubly Linked List: tracks recency order (O(1) move/remove)

How It Works:

  1. Most recently used → Front of list
  2. Least recently used → Back of list
  3. Hash Map → Direct access to any node

Step-by-Step Solution

Step 1: Define the Node

class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None

Each node stores:

  • key — needed for eviction (to remove from hash map)
  • val — the actual value
  • prev, next — doubly linked list pointers

Step 2: Initialize the Cache

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> Node

        # Dummy head and tail (simplifies edge cases)
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

Why dummy nodes?

  • No null checks needed
  • Insert/remove code is cleaner
  • head.next = most recent
  • tail.prev = least recent
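To see what the dummies buy you, compare unlinking a node with and without them. The guarded version below is a hypothetical sketch of what `_remove` would have to look like if `head` and `tail` pointed at real nodes:

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

# With dummy head/tail, every real node always has both neighbors,
# so unlinking is two unconditional pointer updates:
def remove_with_dummies(node):
    node.prev.next = node.next
    node.next.prev = node.prev

# Without dummies (hypothetical), every unlink needs None checks
# plus head/tail bookkeeping on the owning list object:
def remove_without_dummies(lst, node):
    if node.prev:
        node.prev.next = node.next
    else:
        lst.head = node.next   # we just removed the head
    if node.next:
        node.next.prev = node.prev
    else:
        lst.tail = node.prev   # we just removed the tail
```

Two straight-line assignments versus four branches: that is the entire argument for sentinel nodes.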

Step 3: Helper Methods

def _remove(self, node):
    """Remove node from linked list"""
    prev, nxt = node.prev, node.next
    prev.next = nxt
    nxt.prev = prev

def _insert(self, node):
    """Insert node at front (most recent)"""
    prev, nxt = self.head, self.head.next
    prev.next = node
    nxt.prev = node
    node.prev = prev
    node.next = nxt

Step 4: Implement get()

def get(self, key):
    if key in self.cache:
        node = self.cache[key]
        # Move to front (mark as recently used)
        self._remove(node)
        self._insert(node)
        return node.val
    return -1

Logic:

  1. If key exists: move to front, return value
  2. If key doesn't exist: return -1

Step 5: Implement put()

def put(self, key, value):
    if key in self.cache:
        # Remove old node
        self._remove(self.cache[key])

    # Create and insert new node
    node = Node(key, value)
    self.cache[key] = node
    self._insert(node)

    # Evict if over capacity
    if len(self.cache) > self.capacity:
        # Remove LRU (node before tail)
        lru = self.tail.prev
        self._remove(lru)
        del self.cache[lru.key]

Logic:

  1. If key exists: remove old node
  2. Create new node, add to cache and list
  3. If over capacity: evict LRU (tail.prev)

Complete Solution

class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> Node

        # Dummy head and tail
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        prev, nxt = node.prev, node.next
        prev.next = nxt
        nxt.prev = prev

    def _insert(self, node):
        prev, nxt = self.head, self.head.next
        prev.next = node
        nxt.prev = node
        node.prev = prev
        node.next = nxt

    def get(self, key):
        if key in self.cache:
            node = self.cache[key]
            self._remove(node)
            self._insert(node)
            return node.val
        return -1

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])

        node = Node(key, value)
        self.cache[key] = node
        self._insert(node)

        if len(self.cache) > self.capacity:
            lru = self.tail.prev
            self._remove(lru)
            del self.cache[lru.key]

Complexity Analysis

  • get: O(1) time
  • put: O(1) time
  • Space: O(capacity) total

Visual Walkthrough

Initial State (capacity = 2):

head <-> tail
cache = {}

After put(1, 1):

head <-> [1:1] <-> tail
cache = {1: Node(1,1)}

After put(2, 2):

head <-> [2:2] <-> [1:1] <-> tail
cache = {1: Node(1,1), 2: Node(2,2)}

After get(1):

head <-> [1:1] <-> [2:2] <-> tail  (1 moved to front)

After put(3, 3) — evicts 2:

head <-> [3:3] <-> [1:1] <-> tail
cache = {1: Node(1,1), 3: Node(3,3)}  (2 evicted)
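The sequence above is easy to verify in code. This self-contained sketch condenses the class from the Complete Solution (same logic, tighter formatting) and replays the walkthrough with assertions:

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}                       # key -> Node
        self.head, self.tail = Node(0, 0), Node(0, 0)
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _insert(self, node):                  # insert right after head
        nxt = self.head.next
        self.head.next = nxt.prev = node
        node.prev, node.next = self.head, nxt

    def get(self, key):
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._remove(node)
        self._insert(node)
        return node.val

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])
        node = self.cache[key] = Node(key, value)
        self._insert(node)
        if len(self.cache) > self.capacity:
            lru = self.tail.prev              # least recently used
            self._remove(lru)
            del self.cache[lru.key]

# Replay the walkthrough (capacity = 2):
lru = LRUCache(2)
lru.put(1, 1)                   # head <-> [1:1] <-> tail
lru.put(2, 2)                   # head <-> [2:2] <-> [1:1] <-> tail
assert lru.get(1) == 1          # 1 moves to front
lru.put(3, 3)                   # evicts 2 (least recently used)
assert lru.get(2) == -1         # 2 is gone
assert lru.get(3) == 3
```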

Common Mistakes

  • Forgetting to store the key in the node → the key is needed to delete from the hash map on eviction
  • Using a singly linked list → a doubly linked list is needed for O(1) removal
  • Not updating the cache on put → update the hash map even if the key already exists
  • Off-by-one in the capacity check → check after inserting, then evict

Python Shortcut: OrderedDict

Python's OrderedDict combines hash map + ordering:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)
            return self.cache[key]
        return -1

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)

Note: Know the manual solution first—interviewers may ask you to implement without built-ins.
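A related stdlib note: when you only need to memoize function results (rather than implement the class interface this problem asks for), Python also ships `functools.lru_cache`, which applies the same eviction policy as a decorator. The function below is just a toy example:

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # keep results for the 2 most recently used arguments
def square(x):
    return x * x

square(1)   # miss
square(2)   # miss
square(1)   # hit: served from the cache
square(3)   # miss: evicts the entry for 2 (least recently used)

print(square.cache_info())  # named tuple: hits, misses, maxsize, currsize
```

It won't satisfy the LeetCode problem, but mentioning it in an interview signals you know the pattern exists in the standard library.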


Interview Tips

When Explaining:

  1. Start with the O(1) requirement
  2. Explain why you need two data structures
  3. Draw the linked list with dummy nodes
  4. Walk through an example

Follow-Up Questions:

  • "How would you handle concurrency?" → Use locks or concurrent data structures
  • "What if capacity is 0?" → Handle as edge case
  • "Can you implement LFU instead?" → LeetCode 460

Why This Problem is Asked

LRU Cache tests:

  • Data structure design — combining structures
  • Time complexity awareness — achieving O(1)
  • Edge case handling — eviction, updates
  • Practical knowledge — caching is everywhere

Real-world uses:

  • Database query caches
  • Web browser caches
  • Operating system page replacement
  • CDN caching

Related Problems

  • LFU Cache (#460), Hard: frequency-based eviction
  • Design HashMap (#706), Easy: hash table basics
  • Design Linked List (#707), Medium: linked list implementation
  • All O'one Data Structure (#432), Hard: max/min tracking

Practice Strategy

  1. Understand the concept — Draw the linked list
  2. Code from scratch — Don't memorize, understand
  3. Test with examples — Walk through operations
  4. Use LeetCopilot — Get hints if stuck

FAQ

Why a doubly linked list, not singly?
A singly linked list needs an O(n) traversal to find a node's predecessor before removal. A doubly linked list stores a prev pointer, so removal is O(1).

Why store key in the node?
When evicting, you need to remove from hash map too. The key is needed for that deletion.

Can I use a deque?
A deque only gives O(1) appends and pops at the two ends; removing an arbitrary element is O(n), and you need O(1) removal whenever an existing key is accessed.


Conclusion

LRU Cache is a classic interview problem that tests your ability to combine data structures for optimal performance.

Key takeaways:

  1. Hash Map → O(1) key lookup
  2. Doubly Linked List → O(1) ordering operations
  3. Dummy nodes → Simplify edge cases
  4. Store key in node → Needed for eviction

Practice until you can implement it from scratch in under 15 minutes.

Good luck!


If you're looking for an AI assistant to help you master LeetCode patterns and prepare for coding interviews, check out LeetCopilot.
