Redis is often added to a stack as an afterthought — usually when caching becomes an obvious bottleneck. Used deliberately from the start, it solves five distinct problems: caching, sessions, rate limiting, pub/sub, and job queues.
## Connection Setup

```typescript
// lib/redis.ts
import { Redis } from 'ioredis'

const redis = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: 3,
  enableReadyCheck: false,
  lazyConnect: true,
})

redis.on('error', (err) => console.error('Redis error:', err))

export default redis
```
Exporting a singleton prevents connection storms in serverless environments, where each invocation would otherwise open a fresh connection.
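The pattern can be sketched generically: cache the instance on `globalThis` so module re-evaluation (hot reload, bundler duplication, warm serverless invocations in the same runtime) keeps returning the same client. This is a sketch; `create` stands in for `() => new Redis(...)`:

```typescript
// Cache the instance on globalThis so re-running this module
// (hot reload, duplicated bundles) still yields one client
function getSingleton<T>(key: string, create: () => T): T {
  const g = globalThis as Record<string, unknown>
  if (g[key] === undefined) g[key] = create()
  return g[key] as T
}

// Usage sketch: const redis = getSingleton('__redis', () => new Redis(url))
```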
## Caching API Responses

Cache expensive computations or external API calls:
```typescript
async function getCachedData<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds = 300
): Promise<T> {
  const cached = await redis.get(key)
  // Compare against null: GET returns null on a miss, and a truthiness
  // check would treat a cached empty string as a miss
  if (cached !== null) return JSON.parse(cached) as T

  const data = await fetcher()
  await redis.setex(key, ttlSeconds, JSON.stringify(data))
  return data
}

// Usage
const products = await getCachedData(
  'products:featured',
  () => db.product.findMany({ where: { featured: true } }),
  60 * 5 // 5 minute TTL
)
```
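One caveat with fixed TTLs: keys cached in the same burst all expire together, and their fetchers then stampede the database at once. Adding jitter to the TTL spreads expirations out. A small sketch; the 10% spread is an arbitrary choice:

```typescript
// Spread expirations by up to `spread` (a fraction of the TTL) so keys
// cached at the same moment don't all expire at the same moment
function jitteredTtl(ttlSeconds: number, spread = 0.1): number {
  const jitter = Math.floor(Math.random() * ttlSeconds * spread)
  return ttlSeconds + jitter
}

// Usage sketch: redis.setex(key, jitteredTtl(300), payload)
```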
## Rate Limiting

Fixed-window rate limiting with an atomic Redis counter (note: this is a fixed-window counter, not a token bucket; it allows bursts at window boundaries but is simple and cheap):
```typescript
async function rateLimit(
  identifier: string,
  limit = 10,
  windowSeconds = 60
): Promise<{ success: boolean; remaining: number }> {
  const key = `rate_limit:${identifier}`
  const count = await redis.incr(key)
  // Set the expiry only when the key is first created; calling EXPIRE on
  // every request would keep pushing the window forward, so a steady
  // stream of traffic would never reset the counter
  if (count === 1) await redis.expire(key, windowSeconds)
  return {
    success: count <= limit,
    remaining: Math.max(0, limit - count),
  }
}
```
```typescript
// In your API route
export async function POST(req: Request) {
  const ip = req.headers.get('x-forwarded-for') ?? 'unknown'
  const { success, remaining } = await rateLimit(ip, 100, 3600) // 100/hr

  if (!success) {
    return Response.json(
      { error: 'Rate limit exceeded' },
      { status: 429, headers: { 'X-RateLimit-Remaining': '0' } }
    )
  }
  // ...
}
```
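A variant worth knowing: instead of relying on EXPIRE at all, embed the window number in the key itself, so each window gets a fresh counter. A sketch; the `nowMs` parameter exists only for testability:

```typescript
// Key for the current fixed window; a new bucket starts every windowSeconds
function windowKey(identifier: string, windowSeconds: number, nowMs = Date.now()): string {
  const bucket = Math.floor(nowMs / 1000 / windowSeconds)
  return `rate_limit:${identifier}:${bucket}`
}

// Usage sketch: const count = await redis.incr(windowKey(ip, 60))
// (still set a TTL on the key so stale buckets get cleaned up)
```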
## Session Storage

Store session data in Redis instead of stateless JWTs so sessions can be revoked instantly:

```typescript
// With next-auth (Auth.js). Note: the officially published adapter is
// @auth/upstash-redis-adapter, which targets Upstash's Redis client
// rather than ioredis
import { UpstashRedisAdapter } from '@auth/upstash-redis-adapter'
import { Redis } from '@upstash/redis'

export const authOptions = {
  adapter: UpstashRedisAdapter(Redis.fromEnv()),
  // sessions stored in Redis, revocable by deleting the key
}
```
## Pub/Sub for Real-Time Features

```typescript
// Publisher
async function publishEvent(channel: string, data: unknown) {
  await redis.publish(channel, JSON.stringify(data))
}

// Subscriber: a connection in subscriber mode can't issue other
// commands, so duplicate the client for a dedicated connection
const subscriber = redis.duplicate()
await subscriber.subscribe('order:updates')

subscriber.on('message', (channel, message) => {
  const event = JSON.parse(message)
  // broadcast to WebSocket clients
  io.to(event.orderId).emit('order:updated', event)
})

// Trigger from order processing
await publishEvent('order:updates', { orderId: '123', status: 'shipped' })
```
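Pub/sub delivers opaque strings, so it's worth validating messages before acting on them rather than calling `JSON.parse` bare in the handler. A small sketch, with an illustrative `OrderEvent` shape matching the example above:

```typescript
// Illustrative event shape for the 'order:updates' channel
interface OrderEvent {
  orderId: string
  status: string
}

// Parse and shape-check a raw pub/sub message; returns null on anything
// malformed instead of throwing inside the message handler
function parseOrderEvent(message: string): OrderEvent | null {
  try {
    const e = JSON.parse(message)
    if (typeof e?.orderId === 'string' && typeof e?.status === 'string') {
      return { orderId: e.orderId, status: e.status }
    }
    return null
  } catch {
    return null
  }
}
```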
## Job Queues with BullMQ

```typescript
import { Queue, Worker } from 'bullmq'
import { Redis } from 'ioredis'

// BullMQ requires maxRetriesPerRequest: null on its connection (workers
// use blocking commands), so give it its own client instead of reusing
// the app singleton configured above
const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null,
})

const emailQueue = new Queue('email', { connection })

// Enqueue a job
await emailQueue.add('welcome', {
  to: user.email,
  name: user.name,
}, {
  delay: 1000 * 60 * 5, // send after 5 minutes
  attempts: 3,
  backoff: { type: 'exponential', delay: 2000 },
})

// Process jobs
const worker = new Worker('email', async (job) => {
  await sendEmail(job.data)
}, { connection })

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err)
})
```
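For intuition on the retry settings above: with exponential backoff, BullMQ waits roughly `delay * 2^(n-1)` ms before the nth retry (per the BullMQ backoff docs; treat the exact timing as approximate). A sketch of that schedule:

```typescript
// Delay before retry n (1-based) for exponential backoff with a base delay
function retryDelayMs(baseDelayMs: number, attempt: number): number {
  return baseDelayMs * 2 ** (attempt - 1)
}

// With delay: 2000 and attempts: 3, the two retries come after
// retryDelayMs(2000, 1) and retryDelayMs(2000, 2) ms respectively
```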
## Key Naming Conventions

Structure your keys with colons as separators:

```
user:123:profile        -- user profile
user:123:sessions       -- user sessions set
products:featured       -- featured product list
rate_limit:192.168.1.1  -- rate limit counter
queue:email:waiting     -- BullMQ internal key
```
Consistent naming lets you find every key belonging to a user with a pattern like `user:123:*`. In production, iterate with `SCAN` rather than `KEYS`: `KEYS` blocks the server while it walks the entire keyspace.
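A tiny helper (purely illustrative) keeps the convention consistent across a codebase instead of hand-assembling strings at every call site:

```typescript
// Join key parts with the colon convention used throughout this post
const makeKey = (...parts: (string | number)[]): string => parts.join(':')

// Usage sketch: redis.get(makeKey('user', 123, 'profile'))
```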
The AI SaaS Starter at whoffagents.com ships with Redis preconfigured for rate limiting and session storage, plus an ioredis singleton and a BullMQ email queue ready to use. $99 one-time.