Most Node.js apps hit the database on every request. That works until traffic grows and your API starts returning responses in seconds instead of milliseconds.
These Redis caching patterns remove most database reads and turn slow endpoints into memory lookups.
## 1. Cache database queries instead of running them on every request
Most APIs query the database even when the same data was requested seconds ago.
**Before (PostgreSQL on every request)**

```javascript
export async function getJobs(filters) {
  const jobs = await prisma.jobPosting.findMany({
    where: {
      ...(filters.remote && { remote: true })
    },
    include: {
      company: { select: { name: true, website: true } }
    },
    orderBy: { createdAt: "desc" },
    take: 20
  })
  return jobs
}
```
**After (Redis cache-aside pattern)**

```javascript
import redis from "../lib/redis"

export async function getJobs(filters) {
  const cacheKey = `jobs:${JSON.stringify(filters)}`
  const cached = await redis.get(cacheKey)
  if (cached) {
    return JSON.parse(cached)
  }

  const jobs = await prisma.jobPosting.findMany({
    where: {
      ...(filters.remote && { remote: true })
    },
    include: {
      company: { select: { name: true, website: true } }
    },
    orderBy: { createdAt: "desc" },
    take: 20
  })

  await redis.set(cacheKey, JSON.stringify(jobs), "EX", 300)
  return jobs
}
```
The first request queries PostgreSQL. Every request after that, until the TTL expires, is served from Redis in microseconds.
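Cache-aside also needs an invalidation path: without one, a new or edited job posting stays invisible for the full five-minute TTL. A minimal sketch, assuming an ioredis-compatible client is injected; `invalidateJobListings` is an illustrative name, and the key format mirrors the `jobs:` prefix used above:

```javascript
// Mirrors the cache key format used in getJobs above.
function jobsCacheKey(filters) {
  return `jobs:${JSON.stringify(filters)}`
}

// After a write, drop every cached listing so the next read repopulates it.
// SCAN iterates the keyspace incrementally instead of blocking the server
// the way KEYS would on a large dataset.
async function invalidateJobListings(redis) {
  let cursor = "0"
  do {
    const [next, keys] = await redis.scan(
      cursor, "MATCH", "jobs:*", "COUNT", 100
    )
    cursor = next
    if (keys.length > 0) {
      await redis.del(...keys)
    }
  } while (cursor !== "0")
}
```

Call it from your create/update handlers after the database write commits. Deleting rather than rewriting the cached value keeps invalidation simple: the next read rebuilds it.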
## 2. Prevent cache stampede when popular keys expire
When a cache key expires, hundreds of requests can hit the database simultaneously.
**Before (every request rebuilds the cache)**

```javascript
const cached = await redis.get(cacheKey)
if (!cached) {
  // Every request that misses runs this rebuild at the same time.
  const data = await fetchJobsFromDatabase()
  await redis.set(cacheKey, JSON.stringify(data), "EX", 300)
  return data
}
return JSON.parse(cached)
```
**After (Redis mutex lock)**

```javascript
const cached = await redis.get(cacheKey)
if (cached) {
  return JSON.parse(cached)
}

// Only the request that wins this atomic SET NX gets to rebuild the cache.
const lockKey = `lock:${cacheKey}`
const lock = await redis.set(lockKey, "1", "NX", "EX", 10)

if (lock) {
  try {
    const data = await fetchJobsFromDatabase()
    await redis.set(cacheKey, JSON.stringify(data), "EX", 300)
    return data
  } finally {
    // Release the lock even if the rebuild throws, so other
    // requests are not stuck waiting for the 10-second lock TTL.
    await redis.del(lockKey)
  }
}

// Everyone else backs off briefly, then retries and hits the fresh cache.
await new Promise(r => setTimeout(r, 100))
return getJobs(filters)
```
Only one request rebuilds the cache. All others wait and read the fresh result.
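A complementary mitigation, not shown in the original code, is to stagger expirations so hot keys written at the same moment do not all expire in the same instant. A small sketch; the jitter range is an arbitrary choice:

```javascript
// Spread TTLs over [base, base + spread) seconds so keys created together
// do not expire together and trigger a synchronized rebuild.
function ttlWithJitter(base, spread) {
  return base + Math.floor(Math.random() * spread)
}

// Usage with the cache-aside write from pattern 1:
// await redis.set(cacheKey, JSON.stringify(data), "EX", ttlWithJitter(300, 60))
```

Jitter does not replace the mutex: the lock protects a single hot key, while jitter keeps many keys from stampeding at once.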
## 3. Cache expensive joins instead of repeating them
Joins are one of the most expensive operations in relational databases.
**Before**

```javascript
const jobs = await prisma.jobPosting.findMany({
  include: {
    company: true
  }
})
```
**After**

```javascript
const cacheKey = "jobs:latest"
const cached = await redis.get(cacheKey)
if (cached) {
  return JSON.parse(cached)
}

const jobs = await prisma.jobPosting.findMany({
  include: {
    company: true
  }
})

await redis.set(cacheKey, JSON.stringify(jobs), "EX", 300)
return jobs
```
Instead of repeating joins thousands of times per hour, the database runs the query once.
This pattern compounds with techniques used in the Node.js memory leak detection and resolution guide because memory pressure and database load often appear together in production systems.
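One subtlety with caching Prisma results: `JSON.stringify` turns `Date` fields like `createdAt` into ISO strings, so cache hits return strings where the uncached path returned `Date` objects. A hedged sketch of a `JSON.parse` reviver that restores them; the regex approach is one option among several:

```javascript
// ISO 8601 timestamps as produced by Date#toJSON, e.g. "2024-05-01T12:00:00.000Z"
const ISO_DATE = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z$/

// JSON.parse reviver: convert ISO timestamp strings back into Date objects
// so cached and uncached results have the same shape.
function reviveDates(key, value) {
  return typeof value === "string" && ISO_DATE.test(value)
    ? new Date(value)
    : value
}

// const jobs = JSON.parse(cached, reviveDates)
```

Without this, code that calls `job.createdAt.getTime()` works on a cache miss and throws on a cache hit.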
## 4. Use Redis INCR for rate limiting APIs
Many APIs need rate limiting to prevent abuse.
**Before (database counters)**

```javascript
// One database read per request, plus another write to increment the counter.
const record = await prisma.rateLimit.findUnique({
  where: { ip }
})

if (record && record.requests > 100) {
  return res.status(429).send("Too many requests")
}
```
**After (Redis atomic counters)**

```javascript
const key = `ratelimit:${ip}`
const current = await redis.incr(key)

if (current === 1) {
  // First request in this window: start the 60-second countdown.
  await redis.expire(key, 60)
}

if (current > 100) {
  return res.status(429).json({
    error: "Too many requests"
  })
}
```
Redis increments counters atomically in memory. No race conditions. No database load.
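Packaged as Express-style middleware the counter logic stays in one place. A sketch with the limit, window, and client injection as illustrative choices, not part of the original code:

```javascript
// Fixed-window limiter: `limit` requests per `windowSeconds` per client IP.
// The Redis client is injected so the factory is easy to test and reuse.
function rateLimiter(redis, { limit = 100, windowSeconds = 60 } = {}) {
  return async (req, res, next) => {
    const key = `ratelimit:${req.ip}`
    const current = await redis.incr(key)
    if (current === 1) {
      // First hit in this window: start the countdown.
      await redis.expire(key, windowSeconds)
    }
    if (current > limit) {
      return res.status(429).json({ error: "Too many requests" })
    }
    next()
  }
}

// app.use(rateLimiter(redis, { limit: 100, windowSeconds: 60 }))
```

A fixed window allows a brief burst of up to 2× the limit at the window boundary; if that matters, a sliding-window or token-bucket variant is the usual next step.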
## 5. Track trending content with Redis sorted sets
Ranking data with SQL requires aggregation and sorting.
**Before**

```javascript
const trending = await prisma.jobViews.groupBy({
  by: ["jobId"],
  _count: { jobId: true },
  orderBy: { _count: { jobId: "desc" } },
  take: 10
})
```
**After**

```javascript
await redis.zincrby("trending:jobs", 1, jobId)

const trending = await redis.zrevrange("trending:jobs", 0, 9)
```
Redis keeps the leaderboard sorted automatically. Retrieving the top results takes microseconds.
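A single sorted set measures all-time popularity. To rank what is trending *today*, one common approach is a sorted set per calendar day with a TTL so old days clean themselves up; a sketch, with the key format and two-day retention as assumptions:

```javascript
// One sorted set per calendar day, e.g. "trending:jobs:2024-05-01".
function trendingKey(date = new Date()) {
  return `trending:jobs:${date.toISOString().slice(0, 10)}`
}

// Record a view and make sure the day's set eventually expires.
async function recordView(redis, jobId) {
  const key = trendingKey()
  await redis.zincrby(key, 1, jobId)
  await redis.expire(key, 60 * 60 * 48) // keep two days, then let it vanish
}

// Top 10 for today, highest score first.
async function topToday(redis) {
  return redis.zrevrange(trendingKey(), 0, 9)
}
```

Daily keys also make "trending this week" cheap: `ZUNIONSTORE` over the last seven day-keys merges them server-side.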
## 6. Cache function results with a reusable wrapper
Adding caching logic everywhere quickly becomes messy.
**Before**

```javascript
async function fetchCompany(id) {
  return prisma.company.findUnique({
    where: { id }
  })
}
```
**After**

```javascript
export function withCache(fn, { ttl, prefix }) {
  return async (...args) => {
    const key = `${prefix}:${JSON.stringify(args)}`
    const cached = await redis.get(key)
    if (cached) {
      return JSON.parse(cached)
    }

    const result = await fn(...args)
    if (result) {
      await redis.set(key, JSON.stringify(result), "EX", ttl)
    }
    return result
  }
}

const getCompany = withCache(fetchCompany, {
  ttl: 3600,
  prefix: "company"
})
```
The business logic stays clean while caching remains centralized.
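A companion helper can invalidate one wrapped entry when the underlying row changes. The key construction must match `withCache` exactly; the helper names here are illustrative:

```javascript
// Must build keys exactly the way withCache does:
// `${prefix}:${JSON.stringify(args)}`
function cacheKeyFor(prefix, args) {
  return `${prefix}:${JSON.stringify(args)}`
}

// Drop the cached entry for one specific call, e.g. after updating company 42:
// await invalidateCached(redis, "company", 42)
async function invalidateCached(redis, prefix, ...args) {
  await redis.del(cacheKeyFor(prefix, args))
}
```

Keeping key construction in one shared function is the safest way to guarantee the cache writer and the invalidator never drift apart.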
Most Node.js applications do not need complex infrastructure to become fast. A Redis instance and a few cache keys often remove 80 to 95 percent of database queries.
Pick your most frequently requested endpoint, add a cache layer, and measure the response time before and after. Reducing an API from 1.5 seconds to 150 milliseconds usually takes less than a day of work, and it is one of the highest-impact optimizations you can make in a Node.js system.