ali ehab algmass
Write-Through Pattern: Never Serve Stale Data Again

How to keep your cache and database in perfect sync — on every single write.


If you read the Cache-Aside pattern article, you already know about lazy caching: only populate the cache when data is actually requested. Write-Through flips the philosophy.

With Write-Through, every write goes to both the cache and the database as part of the same operation. The cache is never out of date because it's always updated alongside the source of truth.

Let's dig in.


The Core Idea

Write-Through enforces a simple rule: you never write to the database without also writing to the cache.

  1. Write → Update the database and the cache together, in the same operation (in application code, typically the database first so you have the generated ID, then the cache).
  2. Read → The cache is always warm. Just read from it.

No invalidation. No stale windows. No cold misses on recently written data.


The Write Flow

Client → Write to Database
              │
              └──→ Write to Cache ✅
                        │
                   Both updated in the same operation

Compare that to Cache-Aside, where the write path is:

Client → Write to Database
              │
              └──→ Invalidate Cache key
                        │
                   Next read triggers a miss

Write-Through is eager: the cache is always ready.


Show Me the Code

Here's a practical example in Laravel using the Cache facade and Eloquent:

<?php

namespace App\Services;

use App\Models\User;
use Illuminate\Support\Facades\Cache;

class UserService
{
    private const CACHE_TTL = 3600; // 1 hour — longer TTLs make sense here

    public function getUserById(int $userId): ?User
    {
        // Cache is always warm after a write, so reads are simple
        return Cache::get("user:{$userId}") ?? User::find($userId);
    }

    public function createUser(array $data): User
    {
        // 1. Write to DB first to get the generated ID
        $user = User::create($data);

        // 2. Immediately populate the cache
        Cache::put("user:{$user->id}", $user, self::CACHE_TTL);

        return $user;
    }

    public function updateUser(int $userId, array $updates): ?User
    {
        $user = User::find($userId);

        if (!$user) {
            return null;
        }

        // 1. Update the database
        $user->update($updates);
        $user->refresh();

        // 2. Write updated model to cache immediately — no stale window
        Cache::put("user:{$userId}", $user, self::CACHE_TTL);

        return $user;
    }

    public function deleteUser(int $userId): void
    {
        User::destroy($userId);

        // Remove from cache on delete
        Cache::forget("user:{$userId}");
    }
}

Notice that reads are now trivially simple — there's no remember() fallback needed because the cache is guaranteed to be warm after any write.
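If you want reads to self-heal after a flush or restart, one optional hardening is to layer a lazy backfill under the write-through reads. This is a sketch only (it reintroduces a touch of Cache-Aside, and mirrors the getUserById method above):

```php
public function getUserById(int $userId): ?User
{
    // Normally a hit: every write warms this key. On a miss
    // (cold start, eviction, flush), fall back to the database
    // and repopulate so the next read hits again.
    return Cache::remember(
        "user:{$userId}",
        self::CACHE_TTL,
        fn () => User::find($userId)
    );
}
```

This costs nothing on the happy path, since Cache::remember returns immediately on a hit.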


Taking It Further: A Dedicated Repository

For cleaner architecture, encapsulate the Write-Through logic in a repository so the service layer never has to think about caching at all:

<?php

namespace App\Repositories;

use App\Models\User;
use Illuminate\Support\Facades\Cache;

class UserRepository
{
    private const TTL = 3600;

    private function key(int $id): string
    {
        return "user:{$id}";
    }

    public function find(int $id): ?User
    {
        return Cache::get($this->key($id)) ?? User::find($id);
    }

    public function save(User $user): User
    {
        $user->save();
        Cache::put($this->key($user->id), $user->fresh(), self::TTL);
        return $user;
    }

    public function delete(int $id): void
    {
        User::destroy($id);
        Cache::forget($this->key($id));
    }
}

Now your controllers and services just call $this->users->save($user) and caching is completely invisible.
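For illustration, a caller might look like this. The controller, route, and field names are hypothetical; only find() and save() come from the repository above:

```php
<?php

namespace App\Http\Controllers;

use App\Repositories\UserRepository;
use Illuminate\Http\Request;

class UserController extends Controller
{
    public function __construct(private UserRepository $users)
    {
    }

    public function update(Request $request, int $id)
    {
        $user = $this->users->find($id);
        abort_if(!$user, 404);

        // Hypothetical fields; validate properly in real code
        $user->fill($request->only(['name', 'email']));

        // save() performs both the DB write and the cache write
        return $this->users->save($user);
    }
}
```

The controller never mentions the cache at all; the repository owns that concern.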


Why Not Just Use Cache-Aside?

Good question. The two patterns solve different problems:

| | Cache-Aside | Write-Through |
| --- | --- | --- |
| Cache population | Lazy (on first read) | Eager (on every write) |
| Stale data risk | Yes, briefly after writes | No |
| Cold start problem | Yes | No — writes always warm the cache |
| Wasted cache space | Low (only hot data cached) | Higher (all written data cached) |
| Write complexity | Simple | Slightly higher |
| Best for | Read-heavy, uneven access | Write-heavy, consistency-critical |

Write-Through shines when you cannot tolerate stale reads — user profiles, pricing data, inventory counts, anything where serving outdated data has a real cost.


When Write-Through Shines ✨

| Scenario | Why it works |
| --- | --- |
| Consistency-critical data | Cache is always in sync with the DB |
| Write-then-read patterns | Data is hot immediately after creation |
| Session or profile data | Updated frequently, read constantly |
| High-traffic reads after writes | No thundering herd on first read |

The Tradeoffs (Be Honest About Them)

No pattern is a silver bullet. Write-Through has its own gotchas:

1. Write Latency

Every write now touches two systems. If Redis is slow or unavailable, your write path suffers. Design accordingly — and consider whether a write failure should roll back the DB write too.

public function updateUser(int $userId, array $updates): ?User
{
    return DB::transaction(function () use ($userId, $updates) {
        $user = User::lockForUpdate()->find($userId);

        if (!$user) {
            return null;
        }

        $user->update($updates);
        $user->refresh();

        // Write to cache inside the transaction boundary
        Cache::put("user:{$userId}", $user, self::CACHE_TTL);

        return $user;
    });
}

2. Cache Bloat

Unlike Cache-Aside, Write-Through caches everything you write — even records that will never be read again. For high-volume write workloads with sparse reads, you're wasting memory.

Fix: Use shorter TTLs, or apply Write-Through selectively only to your most-read resources.
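One way to apply it selectively is to gate the eager cache write inside the repository's save(). This is a hypothetical sketch: isHot() and the is_active attribute are stand-ins for whatever "this record is frequently read" means in your system:

```php
public function save(User $user): User
{
    $user->save();

    if ($this->isHot($user)) {
        // Hot records get the full write-through treatment
        Cache::put($this->key($user->id), $user->fresh(), self::TTL);
    } else {
        // Cold records: skip the eager write, but make sure no
        // stale copy survives. A lazy read can cache them later
        // if they turn out to be hot after all.
        Cache::forget($this->key($user->id));
    }

    return $user;
}

private function isHot(User $user): bool
{
    // Placeholder heuristic; replace with real access statistics
    return (bool) ($user->is_active ?? false);
}
```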

3. Cold Cache on Startup

Write-Through only populates the cache on writes. If you restart with an empty cache and no writes happen, reads will still miss. For pre-existing data, consider a warm-up job:

// In a seeder, scheduled command, or queue job
User::chunk(200, function ($users) {
    foreach ($users as $user) {
        Cache::put("user:{$user->id}", $user, 3600);
    }
});

4. Consistency on Failure

If the DB write succeeds but the cache write fails (or vice versa), you have a split-brain problem. Wrapping both in a transaction helps for the DB side, but Redis has no native transaction rollback. Use monitoring and a fallback read path.
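One sketch of that fallback path, assuming (as in the examples above) that reads fall back to the database on a miss: treat the cache write as best-effort, evict the key when the put fails, and log the failure so it shows up in monitoring:

```php
public function updateUser(int $userId, array $updates): ?User
{
    $user = User::find($userId);

    if (!$user) {
        return null;
    }

    // The database remains the source of truth
    $user->update($updates);
    $user->refresh();

    try {
        Cache::put("user:{$userId}", $user, self::CACHE_TTL);
    } catch (\Throwable $e) {
        // DB write succeeded but the cache write failed. Try to
        // evict so no stale copy survives, and surface the failure.
        Log::warning('Write-through cache update failed', [
            'key'   => "user:{$userId}",
            'error' => $e->getMessage(),
        ]);

        try {
            Cache::forget("user:{$userId}");
        } catch (\Throwable $ignored) {
            // Cache is unreachable; reads will fall back to the DB
        }
    }

    return $user;
}
```

(This assumes the Log facade, Illuminate\Support\Facades\Log, is imported alongside Cache.)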


Write-Through vs. Other Patterns

| Pattern | Who manages cache | When data loads | Stale risk |
| --- | --- | --- | --- |
| Cache-Aside | Application | On first read (lazy) | Yes, briefly |
| Read-Through | Cache layer | On first read (lazy) | Yes, briefly |
| Write-Through | Application | On every write (eager) | No |
| Write-Behind | Cache layer | Async after write | Yes (the DB briefly lags the cache) |

A Real-World Architecture

             WRITE PATH
┌─────────┐   update    ┌─────────────┐
│  Client │ ───────────▶│  App Layer  │
└─────────┘             └──────┬──────┘
                               │  write to both
               ┌───────────────┴───────────────┐
               ▼                               ▼
  ┌────────────────────┐          ┌────────────────────┐
  │   Cache (Redis)    │          │  Database (MySQL)  │
  │   always current   │          │  source of truth   │
  └────────────────────┘          └────────────────────┘

             READ PATH
┌─────────┐    read     ┌─────────────┐
│  Client │ ───────────▶│  App Layer  │
└─────────┘             └──────┬──────┘
                               │
                  ┌────────────▼────────────┐
                  │      Cache (Redis)      │
                  │   almost always a HIT   │
                  └─────────────────────────┘

The read path becomes nearly trivial — you're just serving from Redis on almost every request.


Key Takeaways

  • Write-Through is eager: the cache is updated on every write, not lazily on the first read.
  • No stale windows: reads after writes always get fresh data.
  • Write latency is the tradeoff: two systems touched per write — design for it.
  • Combine with TTLs: even "always fresh" caches should expire eventually.
  • Best paired with Cache-Aside: use Write-Through for your most critical, frequently-read data, and Cache-Aside for everything else.

Wrapping Up

Write-Through is the right tool when consistency matters more than cache efficiency. If your users expect to see their changes reflected immediately — and they usually do — Write-Through is the pattern that makes that guarantee reliable.

It asks a bit more of your write path, but it gives you something valuable in return: a cache you can actually trust.


Building a caching strategy and not sure which pattern fits where? Drop a comment — always happy to talk through the tradeoffs.
