Ratnesh Maurya

Posted on • Originally published at blog.ratnesh-maurya.com

Five Caching Strategies Every Backend Dev Should Know

If you build backend systems, you eventually run into caching.

Someone says “just put Redis in front of it” and suddenly your app is faster… until you start seeing stale data, cache misses, and weird bugs when you deploy.

This post explains five core caching strategies in plain language:

  • Cache-Aside
  • Read-Through
  • Write-Through
  • Write-Back
  • Write-Around

You’ll see how each one works, what can go wrong, and when to use it.

A cache is just a faster, smaller copy of some data. The hard part is keeping that copy useful: when to fill it, when to read from it, and when to update or skip it.


1. Cache-Aside (Lazy Loading)

Idea: The application talks to the cache and the database. It “checks the cache first, then falls back to the DB” and fills the cache on a miss.

If the key is in the cache (a hit), just return it. Super fast.

If the key is missing, the app queries the database instead.

The app stores the result in the cache and returns it to the user.

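Here’s a minimal sketch of that flow in Python, using plain dicts to stand in for the cache (e.g. Redis) and the database:

```python
# Cache-aside sketch: plain dicts stand in for Redis and the DB.
cache = {}
db = {"user:1": {"name": "Ada"}}

def get_user(key):
    # 1. Check the cache first.
    if key in cache:
        return cache[key]          # hit: fast path
    # 2. Miss: fall back to the database.
    value = db.get(key)
    # 3. Populate the cache so the next read is a hit.
    if value is not None:
        cache[key] = value
    return value

user = get_user("user:1")   # first call misses, reads the DB, fills the cache
user = get_user("user:1")   # second call is a cache hit
```

Note that the application owns all the logic here: nothing happens in the cache layer itself.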

Why people love it

  • Simple and flexible: logic lives in your code; you control what goes in the cache.
  • Resilient: if the cache dies, the app can still read from the DB (just slower).
  • Memory-efficient: only data that is actually read gets cached.

What can go wrong

  • The first request for a key is slow (cold cache).
  • If the DB is updated outside your code path (e.g. admin tool, another service), the cache can serve stale data until TTL expires or you manually invalidate.

When to use

  • Great default for read-heavy apps (blogs, product catalogs, many dashboards).
  • When you’re just adding caching to an existing codebase and want full control.

2. Read-Through

Idea: The application pretends the cache is the main database.

It calls the cache for every read. On a miss, the cache layer itself fetches from the DB, stores the result, and returns it.

With Cache-Aside, your app code calls both cache and DB. With Read-Through, your app only calls the cache; a loader behind the cache talks to the DB.
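A small sketch of that difference, with a hypothetical `ReadThroughCache` wrapping a loader function (real providers like a cache library or managed service supply this layer for you):

```python
# Read-through sketch: the app only calls cache.get();
# the loader behind the cache talks to the DB.
class ReadThroughCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader   # function that knows how to fetch from the DB

    def get(self, key):
        if key not in self._store:
            # The cache layer itself loads and stores on a miss.
            self._store[key] = self._loader(key)
        return self._store[key]

db = {"product:42": "mechanical keyboard"}
cache = ReadThroughCache(loader=lambda k: db.get(k))

product = cache.get("product:42")   # app code never touches db directly
```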

Pros

  • Cleaner application code: your code only knows “get from cache”.
  • Cache libraries or infrastructure can handle loading, retries, and metrics.

Cons

  • If some writes go directly to the DB (bypassing the cache API), the cache can easily become stale.
  • You need a cache provider or library that supports read-through loading.

When to use

  • Larger teams where you want centralized cache logic, not ad-hoc caching scattered everywhere.
  • Situations where infra/platform team owns the cache layer.

3. Write-Through

So far we’ve talked about reads. Now let’s talk about writes: how we keep cache and DB in sync.

Write-Through means:

On a write, update the cache and the database synchronously. Only when both succeed is the write considered “done”.

Flow

  1. User updates something (e.g. changes quantity in a cart).
  2. App writes the new value into the cache.
  3. App also writes the same value into the database.
  4. Only if both succeed do we return “OK” to the user.
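The steps above can be sketched like this (dicts stand in for the cache and the DB; the rollback is deliberately simplistic):

```python
cache = {}
db = {}

def write_through(key, value):
    # Write to cache and DB synchronously; only when both succeed
    # is the write considered "done".
    cache[key] = value
    try:
        db[key] = value        # in real life: a DB call that can fail
    except Exception:
        cache.pop(key, None)   # undo the cache write so the two stay in sync
        raise
    return "OK"

write_through("cart:7:qty", 3)
```

In a real system the failure handling is the hard part; many teams wrap the two writes in a transaction-like pattern or invalidate the cache key on DB failure rather than trying to roll back.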

Pros

  • Strong consistency: cache and DB always match for that key.
  • Reads from the cache always see the latest value.

Cons

  • Slower writes: every write has to hit both cache and DB before returning.
  • If DB is slow, your API write latency is also slow.

When to use

  • When correctness matters more than raw speed:
    • balances in a wallet / bank,
    • inventory in an e‑commerce site,
    • data where a stale read would be really bad.

4. Write-Back (Write-Behind)

Write-Back trades strict consistency for speed.

On a write, update only the cache and return success quickly. Later, an async process flushes those changes to the database.

Flow

  1. User sends a write.
  2. App writes to cache and returns 200 OK immediately.
  3. A background worker or the cache itself periodically writes batched updates into the DB.
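A minimal sketch of this flow, with a dirty-key queue and a `flush()` standing in for the background worker:

```python
from collections import deque

cache = {}
db = {}
dirty = deque()   # keys written to cache but not yet flushed to the DB

def write_back(key, value):
    # Write only to the cache and acknowledge immediately.
    cache[key] = value
    dirty.append(key)
    return "OK"

def flush():
    # Background worker: batch-write dirty keys into the DB later.
    while dirty:
        key = dirty.popleft()
        db[key] = cache[key]

write_back("views:post:9", 101)   # fast: the DB is not touched yet
assert "views:post:9" not in db
flush()                           # async flush catches the DB up
```

The failure mode is visible in the sketch: anything sitting in `dirty` when the cache process dies is lost.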

Pros

  • Very fast writes: the app isn’t blocked waiting for the DB.
  • Great for high-throughput workloads:
    • counters (views, likes),
    • logs / telemetry,
    • social feed events.

Cons

  • Risk of data loss: if the cache crashes before flushing to DB, those changes are gone.
  • DB is eventually updated, not immediately.

When to use

  • Write-heavy workloads where losing a tiny fraction of data is acceptable:
    • metrics, dashboards, click-streams,
    • non-critical logs.

5. Write-Around

Write-Around takes a different approach:

Writes skip the cache and go straight to the DB. The cache is only filled later when data is read.

Flow

  1. A write goes directly into the database.
  2. The cache is not updated.
  3. On the next read, the app (or cache) sees a miss:
    • loads from DB,
    • populates the cache,
    • and returns the value.
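The same flow as a sketch (note the read side is just cache-aside; the difference is that writes never touch the cache):

```python
cache = {}
db = {}

def write_around(key, value):
    # Writes skip the cache and go straight to the DB.
    db[key] = value

def read(key):
    if key in cache:
        return cache[key]
    value = db.get(key)      # the first read after a write is always a miss
    if value is not None:
        cache[key] = value   # the cache is filled lazily, on read
    return value

write_around("log:123", "imported row")
assert "log:123" not in cache   # the write did not touch the cache
read("log:123")                 # the miss populates it
```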

Pros

  • Prevents “polluting” the cache with data that is written but rarely read.
  • Good when you have big write batches (imports, ETL jobs) that users won’t read immediately.

Cons

  • The first read after a write is always a cache miss (slower).
  • More frequent DB reads if many keys are only read once.

When to use

  • Large imports, archival data, logs that might be read only occasionally.
  • Systems where memory is tight and you want the cache to focus on truly hot data.

Side-by-side comparison

Here’s a quick comparison of the five strategies:

  • Cache-Aside: the app manages cache and DB; reads fill the cache lazily. Simple and resilient, but can serve stale data.
  • Read-Through: the cache loads from the DB on a miss. Cleanest app code, but needs provider or library support.
  • Write-Through: writes hit cache and DB synchronously. Strongest consistency, slowest writes.
  • Write-Back: writes hit only the cache and are flushed asynchronously. Fastest writes, but risks data loss.
  • Write-Around: writes skip the cache entirely. Keeps the cache focused on hot data, but the first read after a write always misses.


How teams typically choose

Two patterns stand out in practice:

  • Cache-Aside dominates in real systems because it’s simple and resilient.
  • Write-Back gives the fastest writes but carries the most risk; Write-Through is safest but slower on each write.

Which caching strategy should you pick? (a practical rule-set)

If you only remember one thing from this post, remember this: you pick a caching strategy based on the cost of a stale read and the cost of a slow write.

Here’s a fast, real-world decision guide:

  • Most read-heavy apps (catalogs, blogs, dashboards): start with Cache-Aside.

  • You want a clean abstraction and centralized caching (platform/infra owned): use Read-Through.

  • A stale read would be painful (wallet balances, inventory, “did my payment go through?”): use Write-Through.

  • Writes are hot and you can tolerate some eventual consistency (counters, analytics, clickstream): use Write-Back.

  • You write lots of data that is rarely read (imports, ETL, logs): use Write-Around.


Two failure modes teams hit in production (and what to do)

Caching rarely fails in obvious ways. It fails with “the DB is melting” or “why is data stale?”. Two patterns show up everywhere.

1) Cache stampede (thundering herd)

Symptom: a hot key expires (or cache restarts), and suddenly thousands of requests miss at once and slam the DB.

Mitigations:

  • Request coalescing / single-flight: only one request recomputes the value; others wait and reuse it.
  • Stale-while-revalidate: keep serving a slightly stale value while one background refresh updates it.
  • Jittered TTLs: add randomization to expiration times so many keys don’t expire at the same second.
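The first and third mitigations can be combined in a small sketch: a per-key lock gives single-flight behavior, and a random offset jitters the TTL (names like `get_with_singleflight` are illustrative, not from any library):

```python
import random
import threading
import time

cache = {}        # key -> (value, expires_at)
_locks = {}
_lock_guard = threading.Lock()

def _lock_for(key):
    # One lock per key, created lazily.
    with _lock_guard:
        return _locks.setdefault(key, threading.Lock())

def get_with_singleflight(key, compute, ttl=60):
    entry = cache.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0]
    # Only one caller recomputes; concurrent callers block here
    # briefly and then reuse the freshly cached value.
    with _lock_for(key):
        entry = cache.get(key)            # re-check after acquiring the lock
        if entry and entry[1] > time.monotonic():
            return entry[0]
        value = compute()
        jitter = random.uniform(0, ttl * 0.1)   # spread out expirations
        cache[key] = (value, time.monotonic() + ttl + jitter)
        return value

feed = get_with_singleflight("feed:home", lambda: "recomputed feed")
```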

2) Hot key / hotspotting

Symptom: one key becomes so popular (home feed, trending item, auth config) that a single cache shard or a single DB row gets hammered.

Mitigations:

  • Shard the key (e.g. trending:v1:0..N) and merge results.
  • Use a two-layer cache: per-instance in-memory (tiny TTL) + shared Redis.
  • Precompute or cache at a higher level (edge/CDN) if the data is public.
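For a read-hot key, one simple variant of sharding is to replicate the value under N shard suffixes so reads spread across cache nodes (a sketch under that assumption; write-side sharding with a merge step is the analogous fix for write-hot keys):

```python
import random

cache = {}
N = 4  # number of shard copies for the hot key

def write_trending(items):
    # Store the same value under N shard keys (trending:v1:0..N-1)
    # so reads fan out instead of hammering one key.
    for i in range(N):
        cache[f"trending:v1:{i}"] = items

def read_trending():
    # Each reader picks a random shard, spreading load across shards.
    return cache.get(f"trending:v1:{random.randrange(N)}")

write_trending(["item-a", "item-b"])
```

The cost is N copies of the value and N writes per update, which is usually cheap compared to a melted shard.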

What to monitor so caching doesn’t become “magic”

Whatever strategy you choose, watch these:

  • Cache hit rate (overall and by endpoint)
  • DB QPS (does caching actually reduce it?)
  • p95/p99 latency (cache helps only if tail latency improves)
  • Eviction rate / memory pressure (are you constantly evicting hot data?)
  • Stale reads / correctness signals (app-specific: inventory mismatches, stale dashboards, etc.)

If you can’t measure these, caching will eventually surprise you.


Putting it all together (for a newbie)

If you’re just starting with caching:

  1. Start with Cache-Aside for reads. It’s simple, safe, and easy to reason about.

  2. Add Read-Through if you want a cleaner abstraction and your cache provider supports it.

  3. For writes: use Write-Through when correctness matters most, Write-Back when throughput matters and a little loss is tolerable, and Write-Around for data that is rarely read.

  4. Whatever you choose, always: set TTLs, add jitter to expirations, and monitor hit rate, DB load, and tail latency.
With just these five strategies, you can handle most caching problems you’ll run into as a backend engineer.
