How to build a smarter caching layer that only fetches what it needs — and when it needs it.
If you've ever watched a database slowly buckle under repetitive reads for the same data, you've already felt the problem that the Cache-Aside pattern (also called Lazy Loading) was born to solve.
Let's break it down.
## The Core Idea
With Cache-Aside, your application manages the cache directly. There's no magic middleware automatically syncing your database to Redis. Instead, you follow a simple contract:
- Read → Check the cache first. Cache hit? Return the data. Cache miss? Fetch from the DB, store it in the cache, then return it.
- Write → Update the database, then invalidate (or update) the cache entry.
That's it. Simple, powerful, and widely used in production systems.
## The Read Flow
```
Client → Check Cache
   ├── HIT  → Return cached data ✅
   └── MISS → Fetch from DB
               → Write to cache
               → Return data
```
## Show Me the Code
Here's a practical example in Laravel using its built-in Cache facade and Eloquent ORM:
```php
<?php

namespace App\Services;

use App\Models\User;
use Illuminate\Support\Facades\Cache;

class UserService
{
    private const CACHE_TTL = 300; // 5 minutes

    public function getUserById(int $userId): ?User
    {
        $cacheKey = "user:{$userId}";

        // 1. Check cache — on miss, fetch from DB and store automatically
        return Cache::remember($cacheKey, self::CACHE_TTL, function () use ($userId) {
            logger('Cache MISS — querying DB');
            return User::find($userId);
        });
    }

    public function updateUser(int $userId, array $updates): void
    {
        // 1. Update the source of truth
        User::whereId($userId)->update($updates);

        // 2. Invalidate stale cache entry
        Cache::forget("user:{$userId}");
    }
}
```
Laravel's Cache::remember() handles the check-fetch-store cycle for you in one clean call. Configure your cache driver to Redis in config/cache.php:
```php
'default' => env('CACHE_STORE', 'redis'),
```
And in your .env:
```
CACHE_STORE=redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
```
Clean, predictable, and completely under your control.
## Why Not Just Cache Everything Upfront?
That's the Write-Through approach (or upfront cache warming), and it has its place, but it comes with a cost: you load data into the cache whether it's ever requested or not.
Cache-Aside is lazy: it only caches what actually gets requested. For systems with large datasets where only a fraction of records are "hot," this is far more memory-efficient.
## When Cache-Aside Shines ✨
| Scenario | Why it works |
|---|---|
| Read-heavy workloads | Repeated reads hit cache instead of DB |
| Uneven access patterns | Only popular data gets cached |
| Resilience requirements | App still works if cache goes down |
| Microservices | Each service owns its own caching logic |
## The Tradeoffs (Be Honest About Them)
No pattern is a silver bullet. Cache-Aside has real gotchas:
### 1. Cache Stampede (Thundering Herd)
When a popular cache key expires, dozens of concurrent requests can all get a cache miss simultaneously and hammer the database.
Fix: Use a distributed lock (mutex) via Laravel's Cache::lock():
```php
public function getUserWithLock(int $userId): ?User
{
    $cacheKey = "user:{$userId}";
    $lockKey  = "lock:user:{$userId}";

    $cached = Cache::get($cacheKey);
    if ($cached !== null) {
        return $cached;
    }

    // Acquire a short-lived atomic lock
    $lock = Cache::lock($lockKey, 5);

    if (!$lock->get()) {
        // Another process is fetching — wait briefly and retry
        usleep(100_000); // 100ms
        return $this->getUserWithLock($userId);
    }

    try {
        // Double-check: another process may have populated the cache
        // while we were acquiring the lock
        $cached = Cache::get($cacheKey);
        if ($cached !== null) {
            return $cached;
        }

        $user = User::find($userId);
        if ($user) {
            Cache::put($cacheKey, $user, self::CACHE_TTL);
        }
        return $user;
    } finally {
        $lock->release();
    }
}
```
### 2. Stale Data
Between a write and a cache invalidation, a window exists where users might read stale data. For most use cases this is acceptable — but know your tolerance.
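If that window matters for your domain, the write path can update the cache in place instead of deleting it — the "or update" branch of the contract above. A minimal sketch, assuming the same `UserService` from earlier (`updateUserAndRefresh` is a hypothetical name, not a Laravel API):

```php
public function updateUserAndRefresh(int $userId, array $updates): ?User
{
    // 1. Update the source of truth
    User::whereId($userId)->update($updates);

    // 2. Re-read the fresh row and overwrite the cache entry,
    //    so the next reader sees the new value immediately
    $user = User::find($userId);

    if ($user) {
        Cache::put("user:{$userId}", $user, self::CACHE_TTL);
    } else {
        // Row no longer exists — make sure the cache agrees
        Cache::forget("user:{$userId}");
    }

    return $user;
}
```

The trade-off: an extra read per write, and under concurrent writes the last `Cache::put` wins, so plain invalidation remains the safer default.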
### 3. Cold Start Penalty
A fresh cache (after a restart or first deployment) means every request is a cache miss. Plan for a warm-up period or pre-populate critical keys.
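One way to pre-populate critical keys in Laravel is a console command you run after deploy. A sketch under assumptions: `WarmUserCache` is a hypothetical command, and "recently updated" stands in for whatever "hot" means in your system:

```php
<?php

namespace App\Console\Commands;

use App\Models\User;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Cache;

class WarmUserCache extends Command
{
    protected $signature = 'cache:warm-users {--limit=100}';
    protected $description = 'Pre-populate the cache with likely-hot users';

    public function handle(): int
    {
        // Heuristic: treat the most recently updated rows as "hot".
        // Replace with your real access-pattern signal if you have one.
        User::orderByDesc('updated_at')
            ->limit((int) $this->option('limit'))
            ->get()
            ->each(fn (User $user) => Cache::put("user:{$user->id}", $user, 300));

        $this->info('User cache warmed.');

        return self::SUCCESS;
    }
}
```

Run it as `php artisan cache:warm-users --limit=500` in your deployment pipeline, after the app boots but before it takes traffic.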
### 4. Cache Invalidation Complexity
Phil Karlton famously said:
"There are only two hard things in Computer Science: cache invalidation and naming things."
With Cache-Aside, invalidation is your responsibility. Miss an invalidation path during a write, and stale data lingers.
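One way to reduce missed invalidation paths is to build every cache key through a single helper, so reads and writes can never drift out of sync. A minimal sketch (`CacheKeys` is a hypothetical class, not part of Laravel):

```php
<?php

namespace App\Support;

/**
 * Single source of truth for cache key construction.
 * Every read and every invalidation goes through here,
 * so a renamed or reformatted key can't be missed in one path.
 */
final class CacheKeys
{
    public static function user(int $id): string
    {
        return "user:{$id}";
    }

    public static function userLock(int $id): string
    {
        return 'lock:' . self::user($id);
    }
}
```

With this in place, `Cache::remember(CacheKeys::user($id), ...)` and `Cache::forget(CacheKeys::user($id))` are guaranteed to target the same key, and a grep for `CacheKeys::user` lists every invalidation path you need to audit.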
## Cache-Aside vs. Other Patterns
| Pattern | Who manages cache | When data loads |
|---|---|---|
| Cache-Aside | Application | On first read (lazy) |
| Read-Through | Cache layer | On first read (lazy) |
| Write-Through | Cache layer | On every write (eager) |
| Write-Behind | Cache layer | Async after write |
Cache-Aside gives you the most control and the most flexibility — ideal when you want to keep your caching logic close to the application and avoid tight coupling to a caching middleware.
## A Real-World Architecture
Here's how this looks in a typical web service:
```
┌─────────┐   read    ┌─────────────┐
│ Client  │ ─────────▶│  API Layer  │
└─────────┘           └──────┬──────┘
                             │
                ┌────────────▼────────────┐
                │      Cache (Redis)      │
                │   HIT → return early    │
                └────────────┬────────────┘
                             │ MISS
                ┌────────────▼────────────┐
                │   Database (Postgres)   │
                │  fetch + populate cache │
                └─────────────────────────┘
```
On writes, the API updates the DB and deletes the affected cache keys. Simple and auditable.
## Key Takeaways
- Cache-Aside is lazy: only data that's actually requested gets cached.
- Your app owns the cache: no hidden magic, full transparency.
- Handle the edge cases: stampedes, cold starts, and stale reads are real — plan for them.
- TTL is your friend: always set an expiration. Caches should never be permanent sources of truth.
- Combine with monitoring: track your cache hit ratio. Below ~80% in a read-heavy system is a signal to investigate.
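A lightweight way to track the hit ratio is to count hits and misses alongside the reads themselves. A sketch, assuming the same `UserService` (`getUserTracked` and the `stats:*` keys are hypothetical names):

```php
public function getUserTracked(int $userId): ?User
{
    $cacheKey = "user:{$userId}";

    $user = Cache::get($cacheKey);
    if ($user !== null) {
        // Counter keys live in the same store; read them periodically
        // to compute hit ratio = hits / (hits + misses)
        Cache::increment('stats:user_cache:hits');
        return $user;
    }

    Cache::increment('stats:user_cache:misses');

    $user = User::find($userId);
    if ($user) {
        Cache::put($cacheKey, $user, self::CACHE_TTL);
    }

    return $user;
}
```

In production you'd likely push these counters to your metrics system (StatsD, Prometheus, etc.) instead, but even two Redis counters are enough to spot a collapsing hit ratio after a deploy.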
## Wrapping Up
The Cache-Aside pattern is one of those foundational building blocks that, once you internalize it, you'll reach for constantly. It's transparent, resilient (the app degrades gracefully if the cache is unavailable), and gives you precise control over what lives in memory.
Start simple: add it to your most expensive, most repeated reads. Measure the difference. Then iterate.
Did this help? Drop a ❤️ or share a caching horror story in the comments — I'd love to hear how you've handled cache invalidation in the wild.