cache-manager is the default caching solution in NestJS. It works. It gets you started. And then, at some point, you hit a wall.
Maybe it's the moment you realize CacheInterceptor only works on controller methods, not service methods. Maybe it's when two pods simultaneously rebuild the same expensive cache entry after it expires. Maybe it's when you need to invalidate all caches related to a specific user and discover there's no built-in way to do tag-based invalidation.
This article walks through the specific limitations you'll hit with cache-manager in production and how to migrate to a caching layer that handles them.
## Where cache-manager breaks down

### 1. Interceptors only work on controllers
NestJS's CacheInterceptor decorates HTTP endpoints:
```typescript
@Controller('users')
@UseInterceptors(CacheInterceptor)
export class UserController {
  @Get(':id')
  @CacheTTL(300)
  async getUser(@Param('id') id: string) {
    return this.userService.findById(id);
  }
}
```
This works for the controller layer. But what about caching inside services? What about a ReportService.generateMonthlyReport() that's called by both an HTTP endpoint and a cron job?
```typescript
// This does NOT work: CacheInterceptor requires an HTTP execution context
@Injectable()
export class ReportService {
  @UseInterceptors(CacheInterceptor) // ← no effect on service methods
  async generateMonthlyReport(month: string) {
    // expensive computation...
  }
}
```
You end up manually calling cacheManager.get() and cacheManager.set() with boilerplate in every method.
### 2. No stampede protection
When a cached value expires, the next request triggers a cache miss and rebuilds the value. If 100 requests arrive during the rebuild, all 100 hit the database simultaneously. This is the cache stampede problem.
cache-manager has no built-in protection. You'd need to implement locking or request coalescing yourself.
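To make the gap concrete, here is a minimal request-coalescing sketch in plain TypeScript: concurrent callers for the same key share one in-flight rebuild promise, so only the first caller hits the database. The `coalesce` helper is hypothetical, not part of cache-manager or any library mentioned here.

```typescript
// Request coalescing: all concurrent misses for a key await one rebuild.
const inflight = new Map<string, Promise<unknown>>();

async function coalesce<T>(key: string, rebuild: () => Promise<T>): Promise<T> {
  const pending = inflight.get(key);
  if (pending) return pending as Promise<T>; // join the in-flight rebuild

  const promise = rebuild().finally(() => inflight.delete(key));
  inflight.set(key, promise);
  return promise;
}

// Demo: 100 concurrent callers trigger exactly one rebuild.
let rebuilds = 0;
async function demo(): Promise<number> {
  const calls = Array.from({ length: 100 }, () =>
    coalesce('report:2024-01', async () => {
      rebuilds++;
      await new Promise((r) => setTimeout(r, 10)); // simulate a slow query
      return 42;
    }),
  );
  await Promise.all(calls);
  return rebuilds;
}
```

A production version also needs a distributed lock (or per-key lease in Redis) so coalescing works across pods, not just within one process.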
### 3. Single-tier cache only
cache-manager stores values in one place. If you want in-memory caching for the hottest keys (avoiding Redis round-trips entirely) with Redis as a fallback for the long tail, you need to build that two-tier system manually.
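The two-tier lookup itself is simple; the work is in wiring it everywhere and keeping the tiers consistent. A sketch of the read path, with a plain `Map` standing in for Redis as the L2 tier (all names here are illustrative):

```typescript
// Two-tier read path: check the in-process L1 first, fall back to L2,
// and promote L2 hits into L1 so repeat reads skip the network entirely.
const l1 = new Map<string, string>();
const l2 = new Map<string, string>(); // stand-in for Redis

async function tieredGet(key: string): Promise<string | undefined> {
  const hot = l1.get(key);
  if (hot !== undefined) return hot;   // L1 hit: no round-trip

  const value = l2.get(key);           // L2 lookup (Redis in reality)
  if (value !== undefined) l1.set(key, value); // promote into L1
  return value;
}

l2.set('user:1', 'Ada'); // value already in the shared tier
```

A real implementation also needs L1 size/TTL bounds and cross-pod invalidation (e.g. via pub/sub), which is exactly the part that is tedious to build by hand.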
### 4. No tag-based invalidation
An e-commerce app caches product listings, user carts, category pages, and search results. A product price changes. Which caches need invalidating?
With cache-manager, you either know every specific cache key to delete (brittle) or flush everything (wasteful). There's no way to say "invalidate everything tagged with product:123."
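The mechanism behind tag-based invalidation is a reverse index from tag to keys. In Redis this is typically a SET per tag; the sketch below simulates it with in-memory Maps (`setWithTags` and `invalidateTag` are hypothetical helpers, shown only to illustrate the idea):

```typescript
// Tag index: each tag maps to the set of cache keys it covers.
const store = new Map<string, unknown>();
const tagIndex = new Map<string, Set<string>>();

function setWithTags(key: string, value: unknown, tags: string[]): void {
  store.set(key, value);
  for (const tag of tags) {
    if (!tagIndex.has(tag)) tagIndex.set(tag, new Set());
    tagIndex.get(tag)!.add(key); // record key under each tag
  }
}

function invalidateTag(tag: string): void {
  for (const key of tagIndex.get(tag) ?? []) store.delete(key);
  tagIndex.delete(tag);
}

// The e-commerce scenario: two caches depend on product 123, one does not.
setWithTags('listing:electronics', ['...listing...'], ['product:123']);
setWithTags('search:phones', ['...results...'], ['product:123']);
setWithTags('cart:42', { items: [] }, ['user:42']);

invalidateTag('product:123'); // listing and search gone, cart untouched
```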
### 5. No stale-while-revalidate
Modern caching returns the stale value immediately while refreshing in the background. This keeps response times consistent even during cache rebuilds. cache-manager doesn't support this pattern.
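The core of the pattern fits in a few lines. This sketch uses an injectable clock for determinism and a `refreshing` guard so only one background refresh runs per key; `makeSwrCache` is a hypothetical helper, not an API of either library:

```typescript
// Stale-while-revalidate: after `ttlMs` an entry is stale; it is still
// returned immediately, and a background refresh replaces it for later callers.
interface Entry<T> { value: T; expiresAt: number; }

function makeSwrCache<T>(ttlMs: number, now: () => number) {
  const entries = new Map<string, Entry<T>>();
  const refreshing = new Set<string>();

  return async function get(key: string, load: () => Promise<T>): Promise<T> {
    const entry = entries.get(key);
    if (!entry) {
      const value = await load(); // cold miss: block this once
      entries.set(key, { value, expiresAt: now() + ttlMs });
      return value;
    }
    if (now() >= entry.expiresAt && !refreshing.has(key)) {
      refreshing.add(key); // stale: kick off a background refresh
      load().then((value) => {
        entries.set(key, { value, expiresAt: now() + ttlMs });
        refreshing.delete(key);
      });
    }
    return entry.value; // fresh or stale: returned without waiting
  };
}
```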
## Migration: step by step

### Step 1: Replace the module registration

Before:
```typescript
import { CacheModule } from '@nestjs/cache-manager';
import { redisStore } from 'cache-manager-redis-yet';

@Module({
  imports: [
    CacheModule.registerAsync({
      useFactory: async () => ({
        store: await redisStore({
          socket: {
            host: process.env.REDIS_HOST,
            port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
          },
          password: process.env.REDIS_PASSWORD,
          ttl: 300000, // milliseconds
        }),
      }),
    }),
  ],
})
export class AppModule {}
```
After:
```typescript
import { RedisModule } from '@nestjs-redisx/core';
import { CachePlugin } from '@nestjs-redisx/cache';

@Module({
  imports: [
    RedisModule.forRootAsync({
      useFactory: () => ({
        clients: {
          host: process.env.REDIS_HOST,
          port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
          password: process.env.REDIS_PASSWORD,
        },
      }),
      plugins: [
        CachePlugin.registerAsync({
          useFactory: () => ({
            l1: { max: 1000, ttl: 30 }, // in-memory tier: 1000 entries, 30s
            l2: { defaultTtl: 300 },    // Redis tier: 300s default TTL
            antiStampede: true,
          }),
        }),
      ],
    }),
  ],
})
export class AppModule {}
```
You immediately get L1 (in-memory) + L2 (Redis) caching and stampede protection, without changing any application code yet.
### Step 2: Replace manual cache calls with @Cached

Before:
```typescript
@Injectable()
export class UserService {
  constructor(@Inject(CACHE_MANAGER) private cache: Cache) {}

  async getUser(id: string): Promise<User> {
    const cacheKey = `user:${id}`;
    const cached = await this.cache.get<User>(cacheKey);
    if (cached !== undefined) return cached; // `if (cached)` would also skip falsy hits
    const user = await this.userRepo.findById(id);
    await this.cache.set(cacheKey, user, 300000);
    return user;
  }
}
```
After:
```typescript
import { Cached } from '@nestjs-redisx/cache';

@Injectable()
export class UserService {
  @Cached({ ttl: 300, tags: ['users'] })
  async getUser(id: string): Promise<User> {
    return this.userRepo.findById(id);
  }
}
```
The @Cached decorator handles the cache-aside pattern: check cache, return if hit, execute method if miss, store result, return. It works on any method in any @Injectable() class, not just controller endpoints.
The tags: ['users'] parameter enables tag-based invalidation later.
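Conceptually, such a decorator is just a cache-aside wrapper applied to the method. A minimal standalone sketch of that wrapper (hypothetical `withCacheAside` helper, in-memory `Map` instead of Redis, key derived from the arguments):

```typescript
// Cache-aside wrapper: check cache, execute on miss, store, return.
const results = new Map<string, unknown>();

function withCacheAside<A extends unknown[], R>(
  keyFor: (...args: A) => string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A): Promise<R> => {
    const key = keyFor(...args);
    if (results.has(key)) return results.get(key) as R; // hit
    const value = await fn(...args);                    // miss: run the method
    results.set(key, value);                            // store the result
    return value;
  };
}

// Simulated repository call, wrapped the way a decorator would wrap it.
let dbHits = 0;
const getUser = withCacheAside(
  (id: string) => `user:${id}`,
  async (id: string) => {
    dbHits++; // stands in for userRepo.findById(id)
    return { id, name: `user-${id}` };
  },
);
```

The real decorator additionally handles TTLs, tags, serialization, and the L1/L2 tiers; the control flow above is the essential part.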
### Step 3: Replace CacheInterceptor with @Cached

Before:
```typescript
@Controller('users')
export class UserController {
  @Get(':id')
  @UseInterceptors(CacheInterceptor)
  @CacheTTL(300)
  async getUser(@Param('id') id: string) {
    return this.userService.findById(id);
  }
}
```
After — move caching to the service layer:
```typescript
@Controller('users')
export class UserController {
  @Get(':id')
  async getUser(@Param('id') id: string) {
    return this.userService.getUser(id); // caching handled in the service
  }
}
```
Now the same caching applies whether getUser is called from an HTTP request, a cron job, a queue worker, or another service.
### Step 4: Add tag-based invalidation
```typescript
import { Cached, CacheEvict } from '@nestjs-redisx/cache';

@Injectable()
export class UserService {
  @Cached({ ttl: 300, tags: (id: string) => ['users', `user:${id}`] })
  async getUser(id: string): Promise<User> {
    return this.userRepo.findById(id);
  }

  @CacheEvict({ tags: (id: string) => [`user:${id}`] })
  async updateUser(id: string, data: UpdateUserDto): Promise<User> {
    return this.userRepo.update(id, data);
  }

  @CacheEvict({ tags: ['users'] })
  async importUsers(users: User[]): Promise<void> {
    await this.userRepo.bulkInsert(users);
    // Evicting the broad 'users' tag invalidates ALL user-related caches at once
  }
}
```
When updateUser('123') is called, everything tagged with user:123 is invalidated. When importUsers() is called, everything tagged with users is invalidated. No manual key management.
### Step 5: Enable stale-while-revalidate
For endpoints where slightly stale data is acceptable but consistent response times are critical:
```typescript
@Cached({
  ttl: 300,
  swr: 600, // serve stale for up to 600s while refreshing in the background
  tags: ['dashboard'],
})
async getDashboardData(): Promise<DashboardData> {
  // Expensive aggregation query
  return this.analyticsRepo.aggregate();
}
```
After 300 seconds, the cached value is considered stale. The next request gets the stale value instantly, with no rebuild latency, while a background refresh fetches fresh data. The stale value is served for up to 600 seconds. This eliminates the latency spike that normally occurs when a popular cache entry expires.
## What you gain
| Feature | cache-manager | After migration |
|---|---|---|
| Controller caching | Yes | Yes |
| Service method caching | Manual | `@Cached` decorator |
| Stampede protection | No | Built-in |
| Two-tier (memory + Redis) | Manual | Built-in L1/L2 |
| Tag-based invalidation | No | `@CacheEvict` with tags |
| Stale-while-revalidate | No | Built-in SWR |
| Cache metrics | No | Via MetricsPlugin |
The migration can be done incrementally. Start with the module swap and stampede protection (zero code changes). Then migrate service methods one at a time from manual cache.get/set to @Cached. Add tags and SWR as you need them.
GitHub: https://github.com/nestjs-redisx/nestjs-redisx
Documentation: https://nestjs-redisx.dev