The Library Method: Understanding @cache

Episode 1 - The Secret Life of Python


Timothy arrives at the library, laptop open, wearing a frustrated expression that Margaret recognizes immediately.

Timothy: "Margaret, can I ask you something about Python decorators?"

Margaret: looks up from her desk, closing the book she was reading "Of course. Show me what you're working with."

Timothy: turns his laptop around "It's this @cache decorator. I mean, I know what it does - it makes recursive functions faster. But I don't understand how it does it. Where does it store the cache? How does it know what to cache? It just feels like... magic."

Margaret: nods thoughtfully "And you don't like magic."

Timothy: "Not in code, no. I can use it, but I can't explain it. That bothers me."

Margaret: smiles "Good. That means you're ready for deeper understanding." She pulls out a worn notebook from her desk drawer "Timothy, when I encounter Python features that seem magical, I have a process. It's rigorous, but it works every time. Would you like to see it?"

Timothy: "Absolutely."

Margaret: "We're going to understand @cache in four steps. First, we'll articulate what's happening in plain English. Then we'll implement it in Pascal to prove the algorithm works. Next, we'll translate it to manual Python so you see the mechanism explicitly. Finally, we'll return to the original Pythonic code - and you'll see there's no magic at all."

Timothy: raises an eyebrow "Pascal? Really? From the 1980s?"

Margaret: "That's precisely why it's useful. Pascal has no hidden features, no shortcuts, no magic. If we can implement caching in Pascal, we prove we understand the algorithm - not just Python's syntax. Think of it as executable pseudocode." She walks to her bookshelf and pulls down an old Pascal textbook "These books have taught me more about algorithms than any modern framework."

Timothy: "And it actually runs?"

Margaret: "Of course. We'll run all four versions, and they'll all produce identical output. That's our proof of understanding. Ready?"

Timothy: leans forward "Let's do it."


The Confusion

Margaret: "Show me the code that's confusing you."

Timothy: types

from functools import cache

@cache
def fibonacci(n):
    """Returns the nth Fibonacci number with automatic caching."""
    return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)

if __name__ == "__main__":
    result = fibonacci(10)
    print(f"Fibonacci(10) = {result}")
    print(f"Cache performance: {fibonacci.cache_info()}")

Timothy: runs it

Fibonacci(10) = 55
Cache performance: CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)

Timothy: "See? Eight cache hits. Eleven misses. But where's the dictionary? Where's the cache logic? It's just three lines of code."

Margaret: "Excellent questions. Let's answer them systematically."


The Concept

Margaret: picks up a piece of paper "Before we write any code, let's think about what caching means in plain English."

She writes:

What is @cache doing?

  • When you call fibonacci(5) for the first time, Python computes it and remembers the result
  • If you call fibonacci(5) again, Python looks up the stored result instead of recomputing
  • This lookup happens automatically - you don't see it in the code
  • The cache is like a notebook where Python writes: "fibonacci(5) = 5" so it doesn't have to recalculate

Why does fibonacci need this?

  • fibonacci(10) calls fibonacci(9) and fibonacci(8)
  • But fibonacci(9) ALSO calls fibonacci(8) - that's duplicate work!
  • Without caching: fibonacci(10) would make 177 function calls (see the quick check below)
  • With caching: fibonacci(10) makes only 19 calls
  • Every time we reuse a cached result, that's a "cache hit"
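
For readers who want to verify the 177 figure themselves, here is a quick throwaway counter - nothing from the chapter's code, just an uncached Fibonacci with a global call_count added for illustration:

call_count = 0

def fib_naive(n):
    """Uncached Fibonacci that tallies every call."""
    global call_count
    call_count += 1
    return n if n < 2 else fib_naive(n-1) + fib_naive(n-2)

fib_naive(10)
print(call_count)  # 177 calls without caching; the cached version makes only 19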

Timothy: "So it's just... storing previous results in a dictionary somewhere?"

Margaret: "Exactly. Now let's prove it by building that dictionary ourselves."


The Blueprint

Margaret: opens her Pascal compiler "In Pascal, we have to be explicit about everything. No hidden dictionaries. We'll manage the cache manually."

Timothy: "And this will produce the same output?"

Margaret: "Watch."

program FibonacciCacheDemo;

{$mode objfpc}  { Free Pascal mode: enables dynamic arrays, initialized variables, and Result }

type
  TCache = array of Int64;

var
  fibCache: TCache;
  totalCalls: Integer = 0;
  cacheHits: Integer = 0;

function Fibonacci(n: Integer): Int64;
begin
  Inc(totalCalls);

  { Check cache first - this is the key optimization }
  if fibCache[n] <> -1 then
  begin
    Inc(cacheHits);
    Result := fibCache[n];
    Exit;
  end;

  { If not in cache, compute it }
  if n < 2 then
    Result := n
  else
    Result := Fibonacci(n-1) + Fibonacci(n-2);

  { Store result in cache for next time }
  fibCache[n] := Result;
end;

procedure InitializeCache(size: Integer);
var
  i: Integer;
begin
  SetLength(fibCache, size);
  for i := 0 to size-1 do
    fibCache[i] := -1;  { -1 means "not computed yet" }
end;

begin
  InitializeCache(20);

  WriteLn('Fibonacci(10) = ', Fibonacci(10));
  WriteLn('Cache performance: CacheInfo(hits=', cacheHits, 
          ', misses=', totalCalls - cacheHits, ')');
end.

Margaret compiles and runs the Pascal code

Fibonacci(10) = 55
Cache performance: CacheInfo(hits=8, misses=11)

Timothy: stares at the screen "It's... identical to the Python output."

Margaret: "Exactly. Look at the algorithm: check cache first, compute if not found, store result. That's all @cache is doing - Python just hides these steps."

Timothy: "So the cache is just an array - a list of previous results."

Margaret: "Precisely. In Python it's a dictionary, but the concept is identical. Now let's translate this logic directly to Python."


The Manual

Margaret: "Your turn. Take the Pascal logic and write it in Python - but no decorators, no magic. Just explicit cache management."

Timothy: types slowly, thinking through each line

def fibonacci_manual(n, cache=None, stats=None):
    """Manual cache implementation to reveal the mechanism."""
    # Initialize cache and statistics if the caller didn't provide them
    if cache is None:
        cache = {}
    if stats is None:
        stats = {'calls': 0, 'hits': 0}

    stats['calls'] += 1

    # Check cache first (just like Pascal)
    if n in cache:
        stats['hits'] += 1
        return cache[n]

    # Compute result if not in cache
    if n < 2:
        result = n
    else:
        result = (fibonacci_manual(n-1, cache, stats) + 
                 fibonacci_manual(n-2, cache, stats))

    # Store result in cache (just like Pascal)
    cache[n] = result
    return result

if __name__ == "__main__":
    cache = {}
    stats = {'calls': 0, 'hits': 0}
    result = fibonacci_manual(10, cache, stats)

    print(f"Fibonacci(10) = {result}")
    print(f"Cache performance: CacheInfo(hits={stats['hits']}, "
          f"misses={stats['calls'] - stats['hits']})")

Timothy runs it

Fibonacci(10) = 55
Cache performance: CacheInfo(hits=8, misses=11)

Timothy: leans back "Same output again. So I just... built what @cache does."

Margaret: "You did. Every piece of the mechanism. Now look at your original code again."


The Revelation

Margaret pulls up the original Python code

from functools import cache

@cache  # ← This decorator wraps your function with the exact
        #    cache logic you just wrote manually!
def fibonacci(n):
    return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)
    # ↑ Python checks the cache before calling this
    # ↑ Python stores the result after computing it
    # ↑ You don't see it, but it's doing what you built in Pascal

if __name__ == "__main__":
    result = fibonacci(10)
    print(f"Fibonacci(10) = {result}")
    print(f"Cache performance: {fibonacci.cache_info()}")

Output:

Fibonacci(10) = 55
Cache performance: CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)

Timothy: staring at the screen "So @cache is just... automating the cache dictionary, the lookup, the storage..."

Margaret: "Everything we did manually. It's elegant automation, not magic."

Timothy: "The 8 cache hits are because we call the same numbers multiple times."

Margaret: "Right. fibonacci(10) needs fibonacci(8), but so does fibonacci(9). Instead of computing it twice, we use the cached result."

Timothy: grins "I could implement my own caching decorator now."

Margaret: "You could. That's how you know you truly understand it."


The Principle

Margaret: closes the Pascal textbook "This is what I mean by understanding versus using. You can use @cache without understanding it - many developers do. But when you understand the mechanism, you know:"

  • When to use it (expensive pure functions with repeated inputs)
  • When NOT to use it (functions with side effects, or unique inputs)
  • How to debug it (check cache_info() for hit rates - see the snippet after this list)
  • Why it works (dictionary lookup is O(1), recomputation is expensive)
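
On the debugging point: the wrapper that @cache installs exposes cache_info() and cache_clear(). Here is a short sketch of inspecting and resetting the cache between runs:

from functools import cache

@cache
def fibonacci(n):
    return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)

fibonacci(10)
print(fibonacci.cache_info())   # CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)

fibonacci.cache_clear()         # start fresh, e.g. between timing runs
print(fibonacci.cache_info())   # CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)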

Timothy: "And the Pascal forces you to think through the algorithm."

Margaret: "Exactly. No shortcuts, no hidden features. Just pure logic. If you can write it in Pascal, you understand it well enough."

Timothy: starts packing up his laptop "Can we do this with other Python features? Like generators? Or context managers?"

Margaret: smiles "Absolutely. Come back anytime you encounter magic. We'll demystify it together."


Summary

🎯 What We Proved:
All three code versions produced the same result and the same hit/miss counts:

  • Fibonacci(10) = 55
  • Cache hits: 8
  • Cache misses: 11

This proves the mechanism is the same across all three approaches.

💡 What We Learned:

  • @cache uses a dictionary to store function results
  • A result already in the dictionary is returned immediately (a cache hit)
  • A result not yet stored is computed and then saved (a cache miss)
  • This transforms O(2^n) complexity into O(n)
  • 177 calls without caching → 19 calls with caching

🔧 The Library Method:

  1. Structured English - Understand what's happening conceptually
  2. Pascal Blueprint - Prove the algorithm with executable pseudocode
  3. Manual Python - Translate the logic explicitly
  4. Pythonic Revelation - Appreciate the elegant abstraction

Next in The Library Method: Understanding __slots__ - Why some Python objects can't have new attributes


Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.
