## Introduction
If you’re a full‑stack engineer responsible for a high‑traffic Node.js API, you’ve probably felt the sting of latency spikes. The good news is that a well‑planned caching layer can shave milliseconds off every request, reduce database load, and improve overall user experience. This practical tutorial walks you through a step‑by‑step performance‑tuning workflow that leverages Redis, Docker, and a few async patterns you can drop into any existing codebase.
## 1. Map Your Hot Paths
Before you add any cache, you need to know what to cache.
- Identify high‑frequency endpoints (e.g., `/products`, `/user/profile`).
- Measure average response time with a simple `curl` loop or a tool like `hey`.
- Check DB query cost using `EXPLAIN` in Postgres or `SHOW PROFILE` in MySQL.
```bash
hey -n 10000 -c 50 https://api.example.com/products
```
If the median latency sits above 200 ms and the DB query shows a full table scan, you’ve found a candidate.
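If you would rather measure from inside the app, a tiny Express middleware can flag slow routes as they happen. This is a minimal sketch; the 200 ms threshold and the `latencyLogger` name are my own choices:

```js
// latencyLogger.js – warn about any request slower than 200 ms
module.exports = function latencyLogger(req, res, next) {
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    if (ms > 200) {
      console.warn(`[slow] ${req.method} ${req.originalUrl} ${ms.toFixed(1)} ms`);
    }
  });
  next();
};
```

Mount it with `app.use(require('./latencyLogger'))` before your routes and watch the log under load.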
## 2. Choose the Right Caching Strategy
| Strategy | When to Use | Pros | Cons |
|---|---|---|---|
| Cache‑Aside (Lazy) | Read‑heavy, occasional writes | Simple; no stale data if you invalidate on write | First request still hits DB |
| Write‑Through | Frequent updates, strict consistency | DB always up‑to‑date | Slightly higher write latency |
| Time‑Based TTL | Data that changes on a schedule | Automatic expiration | May serve stale data until TTL expires |
| Cache Invalidation via Events | Real‑time updates needed | Near‑zero staleness | Requires extra pub/sub infrastructure |
For most public APIs, cache‑aside with a short TTL is the sweet spot. You keep the implementation lightweight while still gaining massive read‑speed improvements.
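For contrast, here is what the write‑through row of the table looks like in code. This is a sketch only; `db.update` and the per‑product key scheme are hypothetical stand‑ins for your own data layer:

```js
// Write-through: every write updates the DB and the cache together,
// so subsequent reads never miss, at the cost of extra write latency.
async function updateProduct(id, fields) {
  const product = await db.update('products', id, fields); // hypothetical DB helper
  await cache.set(`products:${id}`, JSON.stringify(product), 300);
  return product;
}
```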
## 3. Wire Up Redis in Your Node.js Service
### 3.1 Add the client library
```bash
npm install ioredis@5
```
### 3.2 Create a reusable Redis wrapper
```js
// redisClient.js
const Redis = require('ioredis');

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD,
});

module.exports = {
  async get(key) {
    return redis.get(key);
  },
  async set(key, value, ttlSec = 300) {
    await redis.set(key, value, 'EX', ttlSec);
  },
  async del(key) {
    await redis.del(key);
  },
};
```
### 3.3 Apply cache‑aside to an endpoint
```js
// routes/products.js
const express = require('express');
const router = express.Router();
const db = require('../db'); // your DB abstraction
const cache = require('../redisClient');

router.get('/', async (req, res) => {
  const cacheKey = 'products:all';

  const cached = await cache.get(cacheKey);
  if (cached) {
    return res.json(JSON.parse(cached));
  }

  const products = await db.query('SELECT * FROM products WHERE active = $1', [true]);

  // Store result for 5 minutes
  await cache.set(cacheKey, JSON.stringify(products), 300);
  res.json(products);
});

module.exports = router;
```
Notice the short TTL (300 seconds). If a product changes, you can manually invalidate the key:
```js
await cache.del('products:all');
```
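In practice that call belongs in the write path. A sketch of a create endpoint that invalidates the list right after the insert (the SQL and route shape mirror the read endpoint above and are assumptions about your schema):

```js
// routes/products.js – invalidate the list cache whenever it changes
router.post('/', async (req, res) => {
  const product = await db.query(
    'INSERT INTO products (name, active) VALUES ($1, $2) RETURNING *',
    [req.body.name, true]
  );
  await cache.del('products:all'); // the next GET repopulates the cache
  res.status(201).json(product);
});
```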
## 4. Run Redis in Docker for Local Development
A reproducible environment eliminates “it works on my machine” surprises.
```dockerfile
# Dockerfile for Redis (dev only)
FROM redis:7-alpine
EXPOSE 6379
CMD ["redis-server", "--appendonly", "yes"]
```
```yaml
# docker-compose.yml snippet
services:
  redis:
    build: ./docker/redis
    ports:
      - "6379:6379"
    # The official Redis image ignores a REDIS_PASSWORD env var;
    # pass --requirepass on the command line instead.
    command: ["redis-server", "--appendonly", "yes", "--requirepass", "devsecret"]
```
Start it with `docker compose up -d redis`. Your Node.js app can now point to `redis://:devsecret@localhost:6379`.
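Before pointing the app at it, a quick sanity check that the container is reachable and the password works; the `ping.js` filename is arbitrary:

```js
// ping.js – verify the app can reach the Dockerized Redis
const Redis = require('ioredis');

const redis = new Redis('redis://:devsecret@localhost:6379');

redis.ping().then(pong => {
  console.log('Redis says:', pong); // prints "Redis says: PONG" on success
  redis.disconnect();
});
```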
## 5. Guard Against Cache Stampedes
When a TTL expires, a flood of requests can hammer the DB simultaneously. Mitigate this with stale‑while‑revalidate:
```js
// redisClient.js – extended get with stale-while-revalidate
async function getOrStale(key, fetchFn, ttlSec = 300, staleSec = 30) {
  const raw = await redis.get(key);
  if (raw) return JSON.parse(raw);

  const staleKey = `${key}:stale`;
  const stale = await redis.get(staleKey);
  if (stale) {
    // Let exactly one caller refresh; everyone else returns the stale
    // copy, so the DB sees a single query instead of a stampede.
    const lock = await redis.set(`${key}:lock`, '1', 'EX', staleSec, 'NX');
    if (lock) {
      fetchFn()
        .then(fresh => setBoth(key, fresh, ttlSec, staleSec))
        .catch(() => {}); // background refresh failed; next request retries
    }
    return JSON.parse(stale);
  }

  // Cold cache: fetch synchronously and populate both copies.
  const fresh = await fetchFn();
  await setBoth(key, fresh, ttlSec, staleSec);
  return fresh;
}

// Write the fresh value plus a longer-lived stale copy. The stale copy
// outlives the primary key by staleSec, covering the refresh window.
async function setBoth(key, value, ttlSec, staleSec) {
  const json = JSON.stringify(value);
  await redis.set(key, json, 'EX', ttlSec);
  await redis.set(`${key}:stale`, json, 'EX', ttlSec + staleSec);
}
```
Now requests that land after the primary key expires are served the stale copy while a single locked background refresh repopulates the cache. Note that the stale copy is written alongside the fresh value with a slightly longer TTL, which is what makes it available during the refresh window.
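Wired into the earlier products route (assuming `getOrStale` is exported from `redisClient.js`), the handler collapses to a few lines:

```js
// routes/products.js – same endpoint, now stampede-safe
const { getOrStale } = require('../redisClient');

router.get('/', async (req, res) => {
  const products = await getOrStale('products:all', () =>
    db.query('SELECT * FROM products WHERE active = $1', [true])
  );
  res.json(products);
});
```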
## 6. Monitor Cache Health
A cache that silently fails can be worse than no cache at all.
- **Redis INFO:** `redis-cli INFO memory` shows used memory and fragmentation; `redis-cli INFO stats` exposes hits, misses, and evictions.
- **Prometheus Exporter:** use `redis_exporter` to scrape metrics into your dashboards.
- **Alerting:** alert on a low hit ratio (< 80 %).
```bash
redis-cli INFO stats | grep keyspace_hits
redis-cli INFO stats | grep keyspace_misses
```
If misses start trending upward, you may need to adjust TTLs or add more granular keys.
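If you want the ratio as a single number (e.g., to expose on a health endpoint), you can parse it straight out of `INFO stats`. A minimal sketch, assuming the `redis` instance from the wrapper above; `hitRatio` is a name of my own:

```js
// Compute the cache hit ratio from Redis INFO stats
async function hitRatio() {
  const info = await redis.info('stats'); // raw "field:value" lines
  const num = field => Number((info.match(new RegExp(`${field}:(\\d+)`)) || [])[1] || 0);
  const hits = num('keyspace_hits');
  const misses = num('keyspace_misses');
  return hits / (hits + misses || 1); // guard divide-by-zero on a cold cache
}
```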
## 7. Combine with a CDN for Edge Caching
For public GET endpoints that return JSON, a CDN (e.g., Cloudflare) can cache responses at the edge, often bringing latency down to single‑digit milliseconds for users near a point of presence. Set the following HTTP headers from your Express app:
```js
app.use((req, res, next) => {
  if (req.method === 'GET' && req.path.startsWith('/public')) {
    res.set('Cache-Control', 'public, max-age=60, stale-while-revalidate=30');
  }
  next();
});
```
A CDN that honors `stale‑while‑revalidate` gives you a second layer of protection against stampedes.
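To confirm the header actually goes out on the wire (so the CDN can act on it), a quick check from Node 18+, where `fetch` is global; the URL is a placeholder:

```js
// Print the Cache-Control header returned by a public GET endpoint
fetch('https://api.example.com/public/products')
  .then(res => console.log(res.headers.get('cache-control')))
  .catch(err => console.error('request failed:', err.message));
```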
## 8. Benchmark the Improvements
Run the same `hey` test you used earlier, now with caching enabled.

```bash
hey -n 10000 -c 50 https://api.example.com/products
```
Typical results:

- Before caching: median 240 ms, ~30 % DB CPU.
- After caching: median 45 ms, DB CPU below 5 %.
Document the numbers in a markdown table and share them with the team – data‑driven decisions win.
## Conclusion
By mapping hot endpoints, picking a cache‑aside strategy, wiring Redis with a thin wrapper, protecting against stampedes, and layering a CDN, you can turn a sluggish Node.js API into a lightning‑fast service without major architectural changes. Remember to monitor hit‑ratios, keep TTLs sensible, and automate invalidation on writes.
If you need help shipping this, the team at https://ramerlabs.com can help.