Atlas Whoff

Caching Strategies That Actually Work in Production

The Cache Taxonomy

Not all caches are the same. Choosing the wrong one is worse than no cache at all.

1. In-Process Memory Cache

Fastest possible. Zero network hops. Lives in your Node.js process.

import NodeCache from 'node-cache';

const cache = new NodeCache({ 
  stdTTL: 300,      // 5 minutes default
  checkperiod: 60,  // cleanup every 60s
  maxKeys: 1000,    // prevent unbounded growth
});

async function getUser(userId: string): Promise<User | null> {
  const cacheKey = `user:${userId}`;
  const cached = cache.get<User>(cacheKey);
  if (cached !== undefined) return cached;

  const user = await db.users.findUnique({ where: { id: userId } });
  if (user) cache.set(cacheKey, user); // don't cache misses
  return user;
}

Good for: Config data, reference data, per-instance computation.

Bad for: Multi-instance deployments (each instance has different state), large datasets.
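One failure mode worth guarding against even in-process: a burst of concurrent misses for the same key triggers one DB query per request. A sketch of request coalescing, where concurrent callers share a single in-flight promise (`inflight` and `coalesced` are names I'm introducing here, not part of node-cache):

```typescript
// Concurrent callers for the same key share one in-flight promise,
// so a burst of cache misses triggers a single fetch.
const inflight = new Map<string, Promise<unknown>>();

async function coalesced<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const pending = inflight.get(key);
  if (pending) return pending as Promise<T>;

  const p = fetcher().finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}
```

Combine this with the cache lookup above and a cold key costs exactly one DB round trip no matter how many requests arrive at once.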

2. Redis Cache

Shared across all instances. Survives deploys (usually).

import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

async function getUser(userId: string): Promise<User | null> {
  const cacheKey = `user:${userId}`;

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  const user = await db.users.findUnique({ where: { id: userId } });

  // Only cache real rows: JSON.stringify(null) would store the string
  // "null", which is truthy and would be served until the TTL expires
  if (user) {
    await redis.setEx(
      cacheKey,
      300, // TTL in seconds
      JSON.stringify(user)
    );
  }

  return user;
}

// Invalidate on update
async function updateUser(userId: string, data: Partial<User>) {
  const updated = await db.users.update({ where: { id: userId }, data });
  await redis.del(`user:${userId}`);
  return updated;
}
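Both `getUser` variants follow the same lookup-fetch-store shape, which can be factored into one generic helper. A sketch, where `CacheLike` is a hypothetical minimal interface (not a real library type) so the same code works against a Redis wrapper or an in-memory Map:

```typescript
// Minimal cache interface: a thin Redis wrapper or a Map adapter
// can both satisfy it (hypothetical, defined for this sketch).
interface CacheLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// Generic cache-aside: look up, fetch on miss, store, return.
async function getOrSet<T>(
  cache: CacheLike,
  key: string,
  ttlSeconds: number,
  fetcher: () => Promise<T>
): Promise<T> {
  const hit = await cache.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  const value = await fetcher();
  // Skip caching null/undefined so a missing row doesn't mask later inserts
  if (value != null) await cache.set(key, JSON.stringify(value), ttlSeconds);
  return value;
}
```

With this in place, each data-access function shrinks to a one-line `getOrSet` call with its key and fetcher.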

3. Stale-While-Revalidate

Serve stale data immediately, refresh in background. Users never wait.

async function getWithSWR<T>(
  key: string,
  fetcher: () => Promise<T>,
  staleTTL: number,
  freshTTL: number
): Promise<T> {
  const cached = await redis.hGetAll(key);

  if (cached.data) {
    const age = Date.now() - parseInt(cached.timestamp);
    const data = JSON.parse(cached.data) as T;

    if (age < freshTTL * 1000) {
      return data; // Fresh, return immediately
    }

    if (age < staleTTL * 1000) {
      // Stale but usable - return immediately AND refresh in background
      // Refresh in the background; catch errors so an unhandled
      // rejection can't crash the process
      setImmediate(() => {
        fetcher()
          .then(async (fresh) => {
            await redis.hSet(key, {
              data: JSON.stringify(fresh),
              timestamp: Date.now().toString(),
            });
            await redis.expire(key, staleTTL);
          })
          .catch(() => {
            // Keep serving stale data until the next refresh attempt
          });
      });
      return data;
    }
  }

  // Cache miss or expired - fetch synchronously
  const fresh = await fetcher();
  await redis.hSet(key, {
    data: JSON.stringify(fresh),
    timestamp: Date.now().toString(),
  });
  await redis.expire(key, staleTTL);
  return fresh;
}

// Usage
const leaderboard = await getWithSWR(
  'leaderboard:global',
  fetchLeaderboard,
  3600, // serve stale for up to 1 hour
  60    // refresh if older than 1 minute
);
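One gap in the SWR sketch above: every request that sees a stale entry schedules its own background refetch. A per-process guard deduplicates them (`refreshOnce` is a name introduced here; for multi-instance dedup you'd use a Redis `SET NX` lock instead, which this sketch doesn't show):

```typescript
// Track keys with a refresh already in flight, per process.
const refreshing = new Set<string>();

// Run at most one background refresh per key at a time.
function refreshOnce(key: string, refresh: () => Promise<void>): void {
  if (refreshing.has(key)) return;
  refreshing.add(key);
  refresh()
    .catch(() => { /* swallow; stale data is still being served */ })
    .finally(() => refreshing.delete(key));
}
```

Inside `getWithSWR`, the `setImmediate` body would call `refreshOnce(key, ...)` instead of fetching unconditionally.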

4. Cache-Aside vs Read-Through

Cache-Aside (manual):

// You manage the cache explicitly
let data = await cache.get(key);
if (!data) {
  data = await db.query(...);
  await cache.set(key, data, ttl);
}
return data;

Read-Through (automatic):

// Cache fetches from DB automatically on miss.
// (ReadThroughCache is an illustrative wrapper, not a specific library.)
const readthrough = new ReadThroughCache({
  async fetch(key: string) {
    return db.query(extractId(key));
  },
  ttl: 300,
});

const data = await readthrough.get(key); // handles miss automatically

Cache-aside is more flexible. Read-through reduces boilerplate.
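Since `ReadThroughCache` above is illustrative rather than a real package, here is a minimal in-memory version of what such a class could look like. TTL eviction here is lazy (checked on read); a production version would also cap size and share state via Redis:

```typescript
// Minimal read-through cache: get() transparently fetches on miss.
class ReadThroughCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(
    private opts: { fetch(key: string): Promise<T>; ttl: number }
  ) {}

  async get(key: string): Promise<T> {
    const entry = this.store.get(key);
    // Lazy expiry: an entry past its deadline is treated as a miss
    if (entry && entry.expiresAt > Date.now()) return entry.value;

    const value = await this.opts.fetch(key);
    this.store.set(key, { value, expiresAt: Date.now() + this.opts.ttl * 1000 });
    return value;
  }
}
```

The caller never sees a miss; the trade-off is that the fetch logic is fixed at construction time, which is exactly why cache-aside stays more flexible.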

5. Cache Invalidation Strategies

TTL-based: Simple, eventually consistent.

await redis.setEx(key, 300, JSON.stringify(data));

Event-based: Invalidate on mutation.

async function updateProduct(id: string, data: Partial<Product>) {
  const product = await db.products.update({ where: { id }, data });

  // Invalidate all related cache keys
  await redis.del(`product:${id}`);
  await redis.del(`category:${product.categoryId}:products`);
  await redis.del('featured-products');

  return product;
}

Version-based: Writers bump a version counter on mutation; readers build keys from the current version, so old entries are never read again and simply age out via TTL.

// On mutation: bump the version
await redis.incr('product-catalog-version');

// On read: include the current version in the key
const version = await redis.get('product-catalog-version');
const cacheKey = `products:v${version}`;
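The mechanics are easy to see with the version counter held in process memory (standing in for the Redis counter; `productKey` and `invalidateCatalog` are names introduced for this sketch):

```typescript
// Stand-in for the Redis 'product-catalog-version' counter.
let catalogVersion = 1;

// Readers build keys from the current version.
function productKey(page: number): string {
  return `products:v${catalogVersion}:page${page}`;
}

// Writers bump the version; every old key becomes unreachable at once.
function invalidateCatalog(): void {
  catalogVersion++;
}
```

One `invalidateCatalog()` call invalidates every paginated, sorted, and filtered variant of the catalog in a single operation, with no need to enumerate keys.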

Cache Key Design

// Bad: too broad
cache.set('users', allUsers);

// Good: granular
cache.set(`user:${id}`, user);
cache.set(`user:${id}:orders`, orders);
cache.set(`org:${orgId}:users`, members);

// Include query params in key
const key = `products:page${page}:limit${limit}:sort${sort}`;

// Use hash tags for Redis Cluster
const key = `{user:${id}}:profile`; // keeps user data in same slot
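Scattering template strings like these across a codebase invites typos and inconsistent formats. One option is to centralize key construction in a single object so every format lives in one place and invalidation patterns stay greppable (the names below mirror the examples above, not any specific library):

```typescript
// Single source of truth for cache key formats.
const keys = {
  user: (id: string) => `user:${id}`,
  userOrders: (id: string) => `user:${id}:orders`,
  productPage: (page: number, limit: number, sort: string) =>
    `products:page${page}:limit${limit}:sort${sort}`,
  // Redis Cluster hash tag: all of one user's keys land in the same slot
  userProfile: (id: string) => `{user:${id}}:profile`,
};
```

Invalidation code then calls `redis.del(keys.user(id))` instead of rebuilding the string by hand.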

The Hard Truth

Caching introduces bugs. Stale data that users can't explain. Updates that don't appear immediately.

Principles that help:

  1. Cache reads. Don't cache writes.
  2. Always set a TTL. Never cache forever.
  3. Invalidate aggressively on mutation.
  4. Log cache hit rates. A rate below roughly 80% usually means your key design is too granular or your TTLs are too short.
  5. Cache at the edge of your data layer, not in business logic.
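Principle 4 needs almost no machinery. A minimal in-process counter (in production you'd export these as metrics to something like Prometheus instead):

```typescript
// Tracks cache hit ratio in-process.
class HitRate {
  private hits = 0;
  private misses = 0;

  record(hit: boolean): void {
    if (hit) this.hits++;
    else this.misses++;
  }

  ratio(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```

Call `record(cached !== undefined)` at every lookup and log `ratio()` periodically; a falling ratio is usually the first sign that keys or TTLs need rethinking.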

A cache isn't a performance fix. It's a performance tool. Use it deliberately.


Redis caching, SWR patterns, and cache invalidation logic built in: Whoff Agents AI SaaS Starter Kit.
