A Complete Guide with Workers, Rate Limiting, Retries & Dead-Letter Queues (DLQs)
When your NestJS application grows, running cron jobs becomes… tricky.
Running multiple instances means your cron jobs will run multiple times, which is usually very bad.
To avoid duplicate jobs and chaos, we'll build a robust distributed cron system using:
- NestJS
- BullMQ (modern queue system)
- Redis
- Worker processes
- Retry logic
- Dead-letter queues (DLQs)
- Rate-limiting
And all wrapped in a multi-instance setup using Docker Compose.
Let's go.
Architecture Overview
We will build two separate NestJS applications:
┌─────────────────────┐
│  Scheduler Service  │  ← runs cron jobs
│   (NestJS App #1)   │
└──────────┬──────────┘
           │ pushes jobs
           ▼
     ┌───────────┐
     │  BullMQ   │
     │  (Redis)  │
     └─────┬─────┘
           │ workers pull jobs
           ▼
┌─────────────────────┐
│   Worker Service    │  ← processes jobs
│   (NestJS App #2)   │
└─────────────────────┘
- Only the Scheduler runs cron jobs
- Only the Worker processes jobs
- Both communicate through Redis
- All instances stay synchronized
- No duplicated tasks
Step 1 – Install Redis
The easiest method is Docker:
docker run -d --name redis -p 6379:6379 redis:7
Step 2 – Install Dependencies
Install BullMQ & the NestJS integration (BullMQ ships with its own TypeScript types, so no separate @types package is needed):
npm i bullmq @nestjs/bullmq
Install scheduling support:
npm i @nestjs/schedule
(Optional) install Bull Dashboard:
npm i @bull-board/api @bull-board/express
Step 3 – Create a Shared Redis Config
Create src/bull-config.ts:
export const redisConfig = {
  connection: {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: Number(process.env.REDIS_PORT ?? 6379),
  },
};
You'll reuse this in both apps.
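As a quick sanity check (plain TypeScript, no NestJS or Redis needed), here is how those `??` fallbacks resolve. `makeRedisConfig` is an illustrative helper, not part of the app; it takes the environment as a parameter so the logic is easy to exercise:

```typescript
// Illustrative helper: same fallback logic as redisConfig,
// with the environment passed in explicitly.
const makeRedisConfig = (env: Record<string, string | undefined>) => ({
  connection: {
    host: env.REDIS_HOST ?? 'localhost',
    port: Number(env.REDIS_PORT ?? 6379),
  },
});

// No env vars set: falls back to localhost:6379
console.log(makeRedisConfig({}).connection);
// Docker Compose sets REDIS_HOST=redis, so both apps reach the redis service
console.log(makeRedisConfig({ REDIS_HOST: 'redis' }).connection.host);
```

Note that `REDIS_PORT` arrives as a string, which is why the config wraps it in `Number(...)`.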
Step 4 – Build the Scheduler App
(The app that runs cron jobs and pushes them into queues)
Enable ScheduleModule:
// scheduler/app.module.ts
import { Module } from '@nestjs/common';
import { ScheduleModule } from '@nestjs/schedule';
import { BullModule } from '@nestjs/bullmq';
import { redisConfig } from './bull-config'; // adjust the path to wherever bull-config.ts lives

@Module({
  imports: [
    ScheduleModule.forRoot(),
    BullModule.forRoot(redisConfig),
    BullModule.registerQueue({
      name: 'email_queue',
    }),
  ],
})
export class AppModule {}
Create a Cron Job
// scheduler/email-cron.service.ts
import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';
import { InjectQueue } from '@nestjs/bullmq';
import { Queue } from 'bullmq';

@Injectable()
export class EmailCronService {
  constructor(@InjectQueue('email_queue') private emailQueue: Queue) {}

  @Cron(CronExpression.EVERY_10_SECONDS)
  async handleCron() {
    console.log('Adding job to email queue...');
    await this.emailQueue.add(
      'send-welcome-email',
      { userId: Math.floor(Math.random() * 1000) },
      {
        attempts: 3,
        backoff: { type: 'exponential', delay: 3000 },
        removeOnComplete: true,
        removeOnFail: false,
      }
    );
  }
}
What we added:
- Retries (3 attempts)
- Exponential backoff
- Cleanup of completed jobs
Note: in BullMQ, rate limiting (e.g. 5 jobs per second) is a Worker option (limiter: { max, duration }), not a per-job option, so it belongs in the worker app, not here.
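To make the backoff settings concrete, here is the delay schedule they produce. This is a sketch of BullMQ's documented exponential strategy (delay × 2^(attemptsMade − 1)), not the library source:

```typescript
// Sketch of BullMQ's built-in 'exponential' backoff strategy:
// retry delay = baseDelay * 2^(attemptsMade - 1)
function exponentialDelay(baseDelayMs: number, attemptsMade: number): number {
  return Math.round(baseDelayMs * Math.pow(2, attemptsMade - 1));
}

// With backoff: { type: 'exponential', delay: 3000 } and attempts: 3,
// a consistently failing job waits 3s, then 6s, before failing for good.
const schedule = [1, 2].map((attempt) => exponentialDelay(3000, attempt));
console.log(schedule); // [ 3000, 6000 ]
```

With attempts: 3 there are two retries (the first run plus two re-runs), which is why only two delays appear in the schedule.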
Step 5 – Build the Worker App
(This app consumes & executes queue jobs)
// worker/app.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { redisConfig } from './bull-config';
import { EmailProcessor } from './email.processor';

@Module({
  imports: [
    BullModule.forRoot(redisConfig),
    BullModule.registerQueue({
      name: 'email_queue',
    }),
  ],
  providers: [EmailProcessor],
})
export class AppModule {}
Create the Job Processor
// worker/email.processor.ts
import { Processor, WorkerHost } from '@nestjs/bullmq';
import { Job } from 'bullmq';

// Rate limiting is a worker option in BullMQ: at most 5 jobs per second.
@Processor('email_queue', { limiter: { max: 5, duration: 1000 } })
export class EmailProcessor extends WorkerHost {
  async process(job: Job<any>): Promise<any> {
    try {
      console.log(`Processing email job for User ${job.data.userId}`);
      if (Math.random() < 0.3) {
        throw new Error('Simulated email failure!');
      }
      return { success: true };
    } catch (err) {
      console.error('Job failed:', err);
      throw err; // rethrow so BullMQ counts the failure and schedules a retry
    }
  }
}
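Conceptually, the processor above and the attempts: 3 job option combine into a retry loop like the following. This is a pure-TypeScript illustration of the semantics, not BullMQ internals; the real worker also waits out the backoff delay between attempts:

```typescript
// Illustration of retry semantics: run a job function up to `attempts` times,
// rethrowing the last error only once every attempt has failed.
async function runWithRetries<T>(fn: () => Promise<T>, attempts: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // BullMQ would wait for the backoff delay here
    }
  }
  throw lastError;
}

// A job that fails twice and then succeeds still completes within 3 attempts:
let calls = 0;
runWithRetries(async () => {
  calls += 1;
  if (calls < 3) throw new Error('Simulated email failure!');
  return { success: true };
}, 3).then((result) => console.log(result)); // { success: true }
```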
Step 6 – Dead-Letter Queue (DLQ)
Create a secondary queue called email_dlq to hold jobs that have exhausted their retries.
Modify worker:
// worker/email.processor.ts
import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Job, Queue } from 'bullmq';
import { redisConfig } from './bull-config';

@Processor('email_queue')
export class EmailProcessor extends WorkerHost {
  // One long-lived queue instance instead of a new connection per failure
  private readonly dlq = new Queue('email_dlq', redisConfig);

  // ... process() stays exactly as in Step 5 ...

  @OnWorkerEvent('failed')
  async onFailed(job: Job, error: Error) {
    // The 'failed' event fires on every attempt; only dead-letter
    // once BullMQ has exhausted the configured retries.
    if (job.attemptsMade < (job.opts.attempts ?? 1)) return;
    await this.dlq.add('dlq-job', {
      failedJob: job.id,
      data: job.data,
      error: error.message,
    });
    console.log(`Moved job ${job.id} to DLQ`);
  }
}
Now any job that fails after max retries goes into the dead-letter queue,
ready for manual review or reprocessing.
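The "after max retries" condition is worth spelling out, because the failed event also fires for attempts that will still be retried. A small helper (hypothetical name, not a BullMQ API) captures the guard:

```typescript
// Guard for the failed-event handler: only dead-letter a job once BullMQ
// has used up all configured attempts; earlier failures will be retried.
function shouldDeadLetter(attemptsMade: number, maxAttempts: number): boolean {
  return attemptsMade >= maxAttempts;
}

console.log(shouldDeadLetter(1, 3)); // false: a retry is still coming
console.log(shouldDeadLetter(3, 3)); // true: move the job to email_dlq
```

Without this check, a job that succeeds on its second attempt would still leave a spurious entry in the DLQ.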
Step 7 – Dockerize Everything
Here is the full working docker-compose.yml:
version: "3.9"
services:
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  scheduler:
    build: ./scheduler
    depends_on:
      - redis
    environment:
      REDIS_HOST: redis
    deploy:
      replicas: 1
  worker:
    build: ./worker
    depends_on:
      - redis
    environment:
      REDIS_HOST: redis
    deploy:
      replicas: 3
What this achieves:
- Scheduler has 1 replica → cron runs once
- Worker can have as many replicas as needed → job processing scales horizontally
Optional: Add BullMQ Dashboard
Inside the worker app:
// worker/main.ts
import { NestFactory } from '@nestjs/core';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { Queue } from 'bullmq';
import { AppModule } from './app.module';
import { redisConfig } from './bull-config';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const serverAdapter = new ExpressAdapter();
  serverAdapter.setBasePath('/dashboard');

  createBullBoard({
    queues: [new BullMQAdapter(new Queue('email_queue', redisConfig))],
    serverAdapter,
  });

  // Mount bull-board's router on the underlying Express instance
  app.use('/dashboard', serverAdapter.getRouter());
  await app.listen(3000);
}
bootstrap();
Open dashboard:
http://localhost:3000/dashboard
Final Result: A Rock-Solid Cron Job System
You now have:
- A Scheduler app that runs cron jobs
- A Worker app that processes tasks
- No duplicated cron executions
- Rate limiting
- Retry logic with exponential backoff
- Dead-letter queues
- Horizontal scaling with Docker
- An optional admin dashboard
All production-ready.
All beautifully decoupled.
All powered by Redis & BullMQ.