Redis is the de facto standard for application-layer caching, used by companies from startups to hyperscalers to reduce database load, cut API response times from hundreds of milliseconds to single digits, and absorb traffic spikes that would otherwise overwhelm backend infrastructure. But caching is deceptively subtle — the wrong strategy leads to stale data, cache stampedes, or a cache that provides no benefit at all.
This guide covers the essential Redis caching patterns for Node.js applications: when to use each one, how to implement it correctly, and how to avoid the common pitfalls that cause production incidents.
Redis Fundamentals for Caching
Redis is an in-memory data structure store. Its primary caching advantages over Memcached are richer data types (strings, hashes, lists, sets, sorted sets) and atomic operations. For caching in Node.js, the most widely used client is ioredis:
npm install ioredis
npm install -D @types/ioredis # For older versions; ioredis v5+ ships types
// src/lib/redis.ts
import Redis from 'ioredis';
import { env } from '../config/env';
import { logger } from './logger'; // adjust to your logger module

// Singleton pattern — reuse connections
let redis: Redis | null = null;

export function getRedis(): Redis {
  if (!redis) {
    redis = new Redis({
      host: env.REDIS_HOST,
      port: env.REDIS_PORT,
      password: env.REDIS_PASSWORD,
      db: env.REDIS_DB,
      // Connection resilience
      retryStrategy: (times) => Math.min(times * 100, 3000),
      maxRetriesPerRequest: 3,
      enableReadyCheck: true,
      lazyConnect: true,
    });
    // Raise the EventEmitter listener cap to prevent leak warnings
    // (setMaxListeners is the EventEmitter API; it is not an ioredis option)
    redis.setMaxListeners(20);

    redis.on('error', (err) => logger.error({ msg: 'Redis error', err }));
    redis.on('connect', () => logger.info('Redis connected'));
  }
  return redis;
}
Pattern 1: Cache-Aside (Lazy Loading)
Cache-aside (also called lazy loading) is the most common caching pattern. The application code manages the cache explicitly: check the cache first, if miss fetch from the database and populate the cache, then return the result.
// The cache-aside pattern
async function getUserById(userId: string): Promise<User> {
  const cacheKey = `user:${userId}`;
  const redis = getRedis();

  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached) as User;
  }

  // 2. Cache miss — fetch from DB
  const user = await db.user.findUnique({ where: { id: userId } });
  if (!user) throw new NotFoundError('User', userId);

  // 3. Populate cache with TTL
  await redis.setex(cacheKey, 3600, JSON.stringify(user)); // TTL: 1 hour

  return user;
}
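One subtlety with this pattern: JSON round-tripping loses type information, so a `Date` stored in the cache comes back as a string. A minimal sketch of a reviver that restores dates; the regex-based detection and `parseWithDates` helper are my additions, not part of the pattern itself:

```typescript
// Dates serialize to ISO strings; a plain JSON.parse won't restore them.
const ISO_DATE = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

function parseWithDates(json: string): unknown {
  return JSON.parse(json, (_key, value) =>
    typeof value === 'string' && ISO_DATE.test(value) ? new Date(value) : value
  );
}

const cachedJson = JSON.stringify({ id: 'abc123', createdAt: new Date() });
const plain = JSON.parse(cachedJson) as { createdAt: unknown };
const revived = parseWithDates(cachedJson) as { createdAt: unknown };
// plain.createdAt is a string; revived.createdAt is a Date again
```

The heuristic can misfire on fields that legitimately hold ISO-formatted strings, so an explicit per-model deserializer is safer when you control the schema.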
Cache Key Design
Cache keys should be deterministic, descriptive, and namespaced to avoid collisions:
// Good key patterns:
'user:abc123' // Single resource
'user:abc123:profile' // Sub-resource
'posts:list:page:2:limit:20' // Paginated list
'posts:list:userId:abc123:page:1' // Filtered list
'search:query:hashofparams' // Search results (hash the params)
// Cache key builder helper
function cacheKey(...parts: (string | number)[]): string {
  return parts.join(':');
}

// Version prefix — lets you invalidate all cache by incrementing version
const CACHE_VERSION = 'v1';

function versionedKey(...parts: (string | number)[]): string {
  return `${CACHE_VERSION}:${parts.join(':')}`;
}
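For the "hash the params" search key above, the hash must be deterministic: the same filters in any order should yield the same key. A sketch for flat parameter objects using node:crypto (the `searchKey` helper name is illustrative; nested objects would need a canonical JSON serializer):

```typescript
import { createHash } from 'node:crypto';

// Sort keys before hashing so property order doesn't change the cache key
function searchKey(params: Record<string, string | number>): string {
  const canonical = Object.keys(params)
    .sort()
    .map((k) => `${k}=${params[k]}`)
    .join('&');
  const hash = createHash('sha256').update(canonical).digest('hex').slice(0, 16);
  return `search:query:${hash}`;
}

const a = searchKey({ q: 'redis', page: 2 });
const b = searchKey({ page: 2, q: 'redis' });
// a === b: property order no longer matters
```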
Abstracting with a Cache Service
// src/lib/cache.ts
import { getRedis } from './redis';

export class CacheService {
  private redis = getRedis();

  async get<T>(key: string): Promise<T | null> {
    const value = await this.redis.get(key);
    return value ? (JSON.parse(value) as T) : null;
  }

  async set<T>(key: string, value: T, ttlSeconds: number): Promise<void> {
    await this.redis.setex(key, ttlSeconds, JSON.stringify(value));
  }

  async del(key: string): Promise<void> {
    await this.redis.del(key);
  }

  async getOrSet<T>(
    key: string,
    fetcher: () => Promise<T>,
    ttlSeconds: number
  ): Promise<T> {
    const cached = await this.get<T>(key);
    if (cached !== null) return cached;
    const value = await fetcher();
    await this.set(key, value, ttlSeconds);
    return value;
  }

  // Delete all keys matching a pattern. Uses SCAN (via ioredis scanStream),
  // which iterates the keyspace incrementally instead of blocking Redis
  // the way KEYS does.
  async delByPattern(pattern: string): Promise<number> {
    let deleted = 0;
    const stream = this.redis.scanStream({ match: pattern, count: 100 });
    for await (const keys of stream) {
      if (keys.length > 0) {
        deleted += await this.redis.del(...keys);
      }
    }
    return deleted;
  }
}

export const cache = new CacheService();

// Usage:
const user = await cache.getOrSet(
  `user:${userId}`,
  () => db.user.findUniqueOrThrow({ where: { id: userId } }),
  3600
);
Pattern 2: Write-Through Cache
In write-through caching, every write to the database also updates the cache. This keeps the cache always consistent with the database, at the cost of write latency.
async function updateUser(userId: string, data: UpdateUserInput): Promise<User> {
  // Write to DB
  const user = await db.user.update({
    where: { id: userId },
    data,
    omit: { password: true },
  });

  // Immediately update cache (write-through)
  await cache.set(`user:${userId}`, user, 3600);
  // Invalidate any list caches that included this user
  await cache.del(`users:list`);

  return user;
}

async function deleteUser(userId: string): Promise<void> {
  await db.user.delete({ where: { id: userId } });
  // Invalidate cache entry
  await cache.del(`user:${userId}`);
  await cache.del(`users:list`);
}
Write-through is better than cache-aside when you have multiple readers of a resource and need consistency. The trade-off: writes are slower because they must update both the database and the cache atomically (or accept brief inconsistency).
Pattern 3: Preventing Cache Stampedes
A cache stampede (also called "thundering herd") happens when a popular cache key expires and many concurrent requests simultaneously find a cache miss — all of them hit the database at once. On high-traffic sites, this can bring down your database.
Solution 1: Mutex Lock (Single Refill)
import { Mutex } from 'async-mutex'; // npm install async-mutex

// Note: a Mutex only serializes refills within this Node.js process.
// With several app instances a stampede is still possible across processes;
// preventing that requires a distributed lock. The map also grows one entry
// per key; evict entries if your keyspace is large.
const mutexMap = new Map<string, Mutex>();

function getMutex(key: string): Mutex {
  if (!mutexMap.has(key)) {
    mutexMap.set(key, new Mutex());
  }
  return mutexMap.get(key)!;
}

async function getWithStampedeProtection<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number
): Promise<T> {
  // First check — no lock needed
  const cached = await cache.get<T>(key);
  if (cached !== null) return cached;

  // Acquire mutex for this cache key
  const mutex = getMutex(key);
  const release = await mutex.acquire();
  try {
    // Double-check after acquiring lock (another request may have filled the cache)
    const cachedAfterLock = await cache.get<T>(key);
    if (cachedAfterLock !== null) return cachedAfterLock;

    // We're the "winner" — fetch from DB and populate cache
    const value = await fetcher();
    await cache.set(key, value, ttlSeconds);
    return value;
  } finally {
    release();
  }
}
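A lighter single-process variant of the same idea is request coalescing: concurrent misses for one key share a single in-flight promise, so the fetcher runs once. A minimal sketch, not from the original article; in practice you would combine it with the cache check from the mutex version:

```typescript
// Share one in-flight fetch per key; concurrent callers await the same promise
const inflight = new Map<string, Promise<unknown>>();

function coalesce<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inflight.get(key);
  if (existing) return existing as Promise<T>;

  // First caller starts the fetch; the entry is removed once it settles
  const promise = fetcher().finally(() => inflight.delete(key));
  inflight.set(key, promise);
  return promise;
}

let dbCalls = 0;
const fetchUser = () => {
  dbCalls++;
  return Promise.resolve({ id: 'abc123' });
};

// Ten concurrent misses for the same key trigger a single fetch
const results = Promise.all(
  Array.from({ length: 10 }, () => coalesce('user:abc123', fetchUser))
);
```

Unlike the mutex approach, late arrivals get the winner's result directly instead of re-reading the cache, but a rejection is shared by every waiting caller.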
Solution 2: Probabilistic Early Expiration (XFetch)
The XFetch algorithm proactively refreshes cache entries slightly before they expire, avoiding the stampede window entirely:
interface CacheEntry<T> {
  value: T;
  expiresAt: number; // Unix timestamp, in seconds
  delta: number;     // Time it took to compute, in seconds
}

async function xfetchGet<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number,
  beta = 1.0 // Higher = more eager recomputation
): Promise<T> {
  const redis = getRedis();
  const raw = await redis.get(key);

  if (raw) {
    const entry = JSON.parse(raw) as CacheEntry<T>;
    const now = Date.now() / 1000;
    const remaining = entry.expiresAt - now;
    // Early recomputation probability increases as TTL decreases
    const shouldRecompute = -entry.delta * beta * Math.log(Math.random()) >= remaining;
    if (!shouldRecompute) return entry.value;
  }

  // Fetch and store with metadata
  const start = Date.now();
  const value = await fetcher();
  const delta = (Date.now() - start) / 1000;

  const entry: CacheEntry<T> = {
    value,
    expiresAt: Date.now() / 1000 + ttlSeconds,
    delta,
  };
  // Real TTL is padded so the entry survives past its logical expiry
  await redis.setex(key, ttlSeconds + Math.ceil(delta * 5), JSON.stringify(entry));
  return value;
}
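To see the decision in isolation, the condition above can be factored into a small pure function: recompute when delta × beta × −ln(rand) exceeds the remaining TTL, so slow-to-compute values refresh earlier. The injectable `rand` parameter is my addition, purely to make the behavior deterministic for testing:

```typescript
// XFetch early-expiration decision (all times in seconds)
function shouldRecompute(
  deltaSeconds: number,
  beta: number,
  remainingSeconds: number,
  rand: () => number = Math.random
): boolean {
  return -deltaSeconds * beta * Math.log(rand()) >= remainingSeconds;
}

// A fresh entry (1000s left) that took 0.1s to compute: -0.1 * ln(0.5) ≈ 0.069,
// far below 1000, so no early refresh
const fresh = shouldRecompute(0.1, 1, 1000, () => 0.5);

// The same entry with only 10ms left: 0.069 >= 0.01, so refresh now
const nearExpiry = shouldRecompute(0.1, 1, 0.01, () => 0.5);
```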
Pattern 4: Cache Invalidation Strategies
"There are only two hard things in computer science: cache invalidation and naming things." — Phil Karlton
TTL-Based Expiration
The simplest strategy: cache entries expire after a fixed time. Use when some staleness is acceptable:
// Product catalog: update every hour (acceptable staleness)
await cache.set(`product:${id}`, product, 3600);

// Stock prices: update every second (very low tolerance for staleness)
await cache.set(`price:${symbol}`, price, 1);

// User session: extend TTL on each access (sliding expiration)
async function getUserSession(sessionId: string) {
  const redis = getRedis();
  const key = `session:${sessionId}`;
  const session = await cache.get(key);
  if (session) {
    await redis.expire(key, 1800); // Extend TTL on access
    return session;
  }
  return null;
}
Event-Driven Invalidation
Invalidate cache entries when the underlying data changes, rather than waiting for TTL:
// After any write operation, invalidate related cache keys
class UserService {
  async update(id: string, data: UpdateUserInput): Promise<User> {
    const user = await db.user.update({ where: { id }, data });
    await this.invalidateUserCache(id);
    return user;
  }

  async invalidateUserCache(id: string): Promise<void> {
    await Promise.all([
      cache.del(`user:${id}`),
      cache.del(`user:${id}:profile`),
      cache.del(`user:${id}:posts`),
      // Wildcard invalidation for list caches
      cache.delByPattern(`users:list:*`),
    ]);
  }
}
Cache Tags (Dependency Tracking)
Tag-based invalidation lets you invalidate all cache entries associated with a resource:
async function setWithTags<T>(
  key: string,
  value: T,
  ttl: number,
  tags: string[]
): Promise<void> {
  const redis = getRedis();
  const pipeline = redis.pipeline();

  // Store the value
  pipeline.setex(key, ttl, JSON.stringify(value));

  // Add the key to each tag set
  tags.forEach((tag) => {
    pipeline.sadd(`tag:${tag}`, key);
    pipeline.expire(`tag:${tag}`, ttl + 60); // Tag lives slightly longer
  });

  await pipeline.exec();
}

async function invalidateTag(tag: string): Promise<void> {
  const redis = getRedis();
  const tagKey = `tag:${tag}`;

  // Get all cache keys for this tag
  const keys = await redis.smembers(tagKey);
  if (keys.length === 0) return;

  // Delete all tagged keys and the tag itself
  await redis.del(...keys, tagKey);
}
// Usage:
await setWithTags(`post:${postId}`, post, 3600, [`post:${postId}`, `user:${post.userId}:posts`]);
// Invalidate all of a user's post caches at once
await invalidateTag(`user:${userId}:posts`);
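The Redis sets above implement a simple many-to-many index: each tag set lists the cache keys that depend on it. The semantics can be sketched with plain Maps and Sets; this is an in-memory model for illustration, not the Redis implementation:

```typescript
// In-memory model of tag-based invalidation
const store = new Map<string, unknown>();        // cache key -> value
const tagIndex = new Map<string, Set<string>>(); // tag -> dependent keys

function setWithTagsLocal(key: string, value: unknown, tags: string[]): void {
  store.set(key, value);
  for (const tag of tags) {
    if (!tagIndex.has(tag)) tagIndex.set(tag, new Set());
    tagIndex.get(tag)!.add(key);
  }
}

function invalidateTagLocal(tag: string): void {
  // Drop every key that was tagged, then the tag itself
  for (const key of tagIndex.get(tag) ?? []) store.delete(key);
  tagIndex.delete(tag);
}

setWithTagsLocal('post:1', { title: 'A' }, ['user:u1:posts']);
setWithTagsLocal('post:2', { title: 'B' }, ['user:u1:posts']);
invalidateTagLocal('user:u1:posts');
// store no longer contains post:1 or post:2
```

In the Redis version, a tag set can briefly reference keys whose values have already expired; deleting those is harmless, which is why the pattern tolerates the slightly longer tag TTL.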
Pattern 5: Caching at Different Layers
Full Response Caching (HTTP Layer)
// Cache complete HTTP responses for public, read-heavy endpoints
import { Request, Response, NextFunction } from 'express';

export function httpCache(ttlSeconds: number) {
  return async (req: Request, res: Response, next: NextFunction) => {
    // Only cache GET requests
    if (req.method !== 'GET') return next();

    const key = `http:${req.originalUrl}`;
    const cached = await cache.get<{ body: unknown; headers: Record<string, string> }>(key);
    if (cached) {
      res.set(cached.headers);
      res.set('X-Cache', 'HIT');
      return res.json(cached.body);
    }

    // Intercept the response
    const originalJson = res.json.bind(res);
    res.json = (body: unknown) => {
      res.set('X-Cache', 'MISS');
      // Only cache successful responses; never cache errors
      if (res.statusCode === 200) {
        const headers = { 'Content-Type': 'application/json' };
        cache.set(key, { body, headers }, ttlSeconds).catch(console.error);
      }
      return originalJson(body);
    };
    next();
  };
}

// Apply to specific routes
router.get('/products', httpCache(300), productsController.list);
// Apply to specific routes
router.get('/products', httpCache(300), productsController.list);
Database Query Result Caching
// Cache expensive aggregations
async function getDashboardStats(userId: string) {
  const key = `dashboard:${userId}:stats`;

  return cache.getOrSet(key, async () => {
    // Expensive multi-table aggregation
    const [postCount, commentCount, viewTotal] = await Promise.all([
      db.post.count({ where: { authorId: userId } }),
      db.comment.count({ where: { authorId: userId } }),
      db.post.aggregate({
        where: { authorId: userId },
        _sum: { viewCount: true },
      }),
    ]);

    return {
      postCount,
      commentCount,
      totalViews: viewTotal._sum.viewCount ?? 0,
    };
  }, 300); // 5-minute cache for dashboard stats
}
Monitoring Cache Performance
Track hit rate, latency, and memory usage to ensure your cache is actually helping:
// Wrap cache operations to emit metrics
class InstrumentedCache extends CacheService {
  private hits = 0;
  private misses = 0;

  async get<T>(key: string): Promise<T | null> {
    const value = await super.get<T>(key);
    if (value !== null) {
      this.hits++;
    } else {
      this.misses++;
    }
    return value;
  }

  getHitRate(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}

// Log Redis INFO stats periodically
async function logRedisMetrics() {
  const redis = getRedis();
  const info = await redis.info('stats');
  // Parse keyspace_hits, keyspace_misses, instantaneous_ops_per_sec
  logger.info({ type: 'redis_metrics', info });
}
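The INFO output mentioned above is plain text made of `field:value` lines. A small parser that extracts a server-side hit rate from it; the sample string mimics Redis's format, and `keyspace_hits`/`keyspace_misses` are real INFO fields:

```typescript
// Parse Redis INFO output ("field:value" lines) into a hit rate
function parseHitRate(info: string): number {
  const stats: Record<string, string> = {};
  for (const line of info.split(/\r?\n/)) {
    const idx = line.indexOf(':');
    if (idx > 0) stats[line.slice(0, idx)] = line.slice(idx + 1);
  }
  const hits = Number(stats['keyspace_hits'] ?? 0);
  const misses = Number(stats['keyspace_misses'] ?? 0);
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

// Simulated INFO "Stats" section
const sample = '# Stats\r\nkeyspace_hits:900\r\nkeyspace_misses:100\r\n';
const rate = parseHitRate(sample); // 0.9
```

Tracking this over time matters more than any single reading: a falling hit rate is an early signal that TTLs or key design need revisiting.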
Common Mistakes to Avoid
- Not setting TTLs: Always set TTLs. Eternal cache entries fill memory and serve stale data forever.
- Caching everything: Cache selectively — only expensive operations that are read frequently. Caching rarely-read data wastes memory with no benefit.
- Using KEYS in production: KEYS pattern scans the entire keyspace and blocks Redis while it runs. Use SCAN instead when you need to match keys by pattern in production.
- Not handling cache failures gracefully: If Redis is unavailable, fall through to the database rather than failing the entire request.
- Serializing non-serializable objects: Functions, circular references, and class instances don't serialize. Always test your JSON serialization of cached objects.
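The serialization pitfall is easy to demonstrate: circular references make JSON.stringify throw, and class instances come back as plain objects with their methods gone. A quick illustration (the `User` class here is a throwaway example):

```typescript
// Circular references make JSON.stringify throw
const a: { self?: unknown } = {};
a.self = a;
let circularFailed = false;
try {
  JSON.stringify(a);
} catch {
  circularFailed = true; // TypeError: Converting circular structure to JSON
}

// Class instances lose their prototype (and methods) on a round trip
class User {
  constructor(public name: string) {}
  greet() { return `hi ${this.name}`; }
}
const roundTripped = JSON.parse(JSON.stringify(new User('ada')));
// roundTripped.name === 'ada', but roundTripped.greet is undefined
```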
// Graceful degradation when Redis is unavailable
async function getWithFallback<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttl: number
): Promise<T> {
  try {
    return await cache.getOrSet(key, fetcher, ttl);
  } catch (redisError) {
    // Note: an error thrown by fetcher itself also lands here and gets
    // retried once; distinguish the two cases if your fetcher can fail.
    logger.warn({ msg: 'Cache unavailable, falling through to DB', key, err: redisError });
    return fetcher(); // Always returns data, just slower
  }
}
Conclusion
Effective Redis caching can reduce database load by 90%+ and cut response times from hundreds of milliseconds to single digits. The key patterns to master are:
- Cache-aside: The default. Check cache → on miss, fetch DB → populate cache.
- Write-through: Consistency-first. Update cache on every write.
- Stampede prevention: Use mutex locks or probabilistic early expiration for popular keys.
- Smart invalidation: TTL for tolerable staleness, event-driven for critical consistency, cache tags for bulk invalidation.
For more on backend performance, see our PostgreSQL performance tuning checklist and REST API testing guide.
Free Developer Tools
If you found this article helpful, check out DevToolkit — 40+ free browser-based developer tools with no signup required.
Popular tools: JSON Formatter · Regex Tester · JWT Decoder · Base64 Encoder
🛒 Get the DevToolkit Starter Kit on Gumroad — source code, deployment guide, and customization templates.