Picture this: your app is blowing up. Hundreds of appointment confirmations need to go out, 48-hour reminders are piling up, and your server is sweating. You could process everything in one big, synchronous pile — but that's like trying to run a restaurant where one chef cooks, serves, cleans, and takes reservations all at once.
Enter Bull — NestJS's battle-tested job queue library. And today, we're going to talk about the unsung heroes of Bull: consumers and concurrency.
Grab a coffee. Let's fix that restaurant. ☕
## The Problem: Jobs Are Piling Up
Every non-trivial backend eventually faces this moment. You need to:
- Send an order confirmation the second someone books an appointment
- Fire off a reminder 48 hours before that appointment
- Not block your main thread doing any of it
Bull solves this beautifully — but how you set up your consumers determines whether your queue hums like a well-oiled machine or groans under pressure.
There are three patterns worth knowing. Each has its moment to shine.
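Before any consumer pattern matters, the jobs have to reach the queue in the first place. Here's a minimal producer sketch using the queue and job names from this post — the `BookingService` wrapper and its method signature are hypothetical, but `queue.add()` and its `delay` option are standard Bull:

```typescript
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';
import { Injectable } from '@nestjs/common';

// Hypothetical service: enqueues both jobs when an appointment is booked.
@Injectable()
export class BookingService {
  constructor(
    @InjectQueue('appointmentQueue') private appointmentQueue: Queue,
  ) {}

  async onAppointmentBooked(appointmentId: string, appointmentAt: Date) {
    // Confirmation goes out as soon as a worker picks it up
    await this.appointmentQueue.add('sendOrderConfirmationMessageJob', {
      appointmentId,
    });

    // Schedule the reminder to fire 48 hours before the appointment,
    // using Bull's `delay` option (milliseconds from now)
    const delay = Math.max(
      0,
      appointmentAt.getTime() - 48 * 60 * 60 * 1000 - Date.now(),
    );
    await this.appointmentQueue.add(
      'send48HourReminderForAppointmentJob',
      { appointmentId },
      { delay },
    );
  }
}
```

Everything that follows is about what happens on the other side of that queue.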
## Pattern 1: The One-Chef Kitchen (Single Consumer)
Sometimes, simplicity wins. One processor class. Multiple job handlers. Clean, contained, easy to reason about.
Think of it as a single skilled chef who knows how to make every dish — they just work on a few orders at a time.
```typescript
import { Process, Processor } from '@nestjs/bull';
import { Job } from 'bull';

@Processor('appointmentQueue')
export class AppointmentProcessor {
  // Chef's special #1: Order Confirmations
  @Process({ name: 'sendOrderConfirmationMessageJob', concurrency: 3 })
  async handleOrderConfirmationMessage(job: Job) {
    console.log('Processing order confirmation job:', job.id);
    // Your magic here
  }

  // Chef's special #2: 48-Hour Reminders
  @Process({ name: 'send48HourReminderForAppointmentJob', concurrency: 2 })
  async handle48HourReminder(job: Job) {
    console.log('Processing 48-hour reminder job:', job.id);
    // Your magic here
  }
}
```
**What's happening here?**

- **One class, two jobs** — `AppointmentProcessor` owns everything related to appointments.
- **Concurrency per job** — notice how `sendOrderConfirmationMessageJob` gets `3` workers while the reminder job gets `2`. You're not guessing; you're deliberately allocating processing power based on expected load.
- The `concurrency` option inside `@Process()` is your dial — turn it up for high-traffic jobs, keep it lower for jobs that are resource-heavy or rate-limited by external APIs.
**When to use this:** Small-to-medium queues, tightly related job types, or when you want everything in one place. Great for getting started fast.
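For the processor to receive anything, the queue also has to be registered in a NestJS module. A sketch of that wiring, assuming a local Redis and an `appointment.processor` file — the module name, file path, and connection details are illustrative:

```typescript
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';
// Hypothetical path to the AppointmentProcessor class from the snippet above
import { AppointmentProcessor } from './appointment.processor';

@Module({
  imports: [
    // Register the queue the processor listens on;
    // Redis connection details here are illustrative
    BullModule.registerQueue({
      name: 'appointmentQueue',
      redis: { host: 'localhost', port: 6379 },
    }),
  ],
  providers: [AppointmentProcessor],
})
export class AppointmentModule {}
```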
## Pattern 2: The Brigade Kitchen (Multiple Consumers)
Now imagine your restaurant got a Michelin star. One chef can't cut it anymore. You hire specialists — a pasta chef, a pastry chef, a grill master.
This is the multi-consumer pattern: each job type gets its own dedicated processor class.
```typescript
import { Process, Processor } from '@nestjs/bull';
import { Job } from 'bull';

// Specialist #1: The Confirmation Pro
@Processor('appointmentQueue')
export class OrderConfirmationProcessor {
  @Process({ name: 'sendOrderConfirmationMessageJob', concurrency: 3 })
  async handleOrderConfirmation(job: Job) {
    console.log('Processing order confirmation job:', job.id);
    // Confirmation logic
  }
}

// Specialist #2: The Reminder Guru
@Processor('appointmentQueue')
export class ReminderProcessor {
  @Process({ name: 'send48HourReminderForAppointmentJob', concurrency: 2 })
  async handle48HourReminder(job: Job) {
    console.log('Processing 48-hour reminder job:', job.id);
    // Reminder logic
  }
}
```
**Why split them up?**

- **Separation of concerns** — each class does one thing and does it well. When the reminder logic needs to change, you open one small file, not a sprawling processor.
- **Independent scaling** — need more confirmation workers? Crank up the concurrency on `OrderConfirmationProcessor` without touching anything else.
- **Easier testing** — unit testing a focused class is a joy compared to wrestling with a monolith.
**When to use this:** Growing codebases, jobs with very different logic or dependencies, or teams where different developers own different job types.
## Pattern 3: The Ghost Kitchen (Dynamic Processing with onModuleInit)
Here's where things get interesting. What if you don't want to use decorators at all? What if you want to wire everything up programmatically, maybe based on config, environment variables, or runtime conditions?
Meet the ghost kitchen — fully functional, no storefront, completely dynamic.
```typescript
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';
import { Injectable, OnModuleInit } from '@nestjs/common';

@Injectable()
export class JobProcessingService implements OnModuleInit {
  constructor(
    @InjectQueue('appointmentQueue') private appointmentQueue: Queue,
  ) {}

  async onModuleInit() {
    // Wire up confirmation jobs with 3 concurrent workers
    await this.appointmentQueue.process(
      'sendOrderConfirmationMessageJob',
      3,
      async (job) => {
        console.log('Processing order confirmation job:', job.id);
        // Confirmation logic
      },
    );

    // Wire up reminder jobs with 2 concurrent workers
    await this.appointmentQueue.process(
      'send48HourReminderForAppointmentJob',
      2,
      async (job) => {
        console.log('Processing 48-hour reminder job:', job.id);
        // Reminder logic
      },
    );
  }
}
```
**The trick here?**

- `onModuleInit()` fires when the NestJS module boots up — the perfect moment to register your workers before any jobs start flowing.
- **Concurrency as the second argument** — in `queue.process(jobName, concurrency, handler)`, that middle number is your power lever. No decorators, just plain code.
- **Maximum flexibility** — you can read concurrency values from a config service, toggle jobs on/off per environment, or even spin up workers conditionally. Decorators can't do that.
**When to use this:** When you need runtime flexibility, config-driven behaviour, or you're working in a service-first architecture where decorators feel out of place.
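The "read concurrency values from a config service" idea might look like this with Nest's `ConfigService` — the environment variable names and default values are assumptions for the sketch:

```typescript
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';
import { Injectable, OnModuleInit } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';

@Injectable()
export class ConfigurableJobProcessingService implements OnModuleInit {
  constructor(
    @InjectQueue('appointmentQueue') private appointmentQueue: Queue,
    private config: ConfigService,
  ) {}

  async onModuleInit() {
    // Env var names are illustrative; fall back to a modest default
    const confirmationConcurrency = this.config.get<number>(
      'CONFIRMATION_CONCURRENCY',
      3,
    );

    await this.appointmentQueue.process(
      'sendOrderConfirmationMessageJob',
      confirmationConcurrency,
      async (job) => {
        console.log('Processing order confirmation job:', job.id);
      },
    );

    // Toggle the reminder worker off entirely in some environments
    if (this.config.get<boolean>('REMINDERS_ENABLED', true)) {
      await this.appointmentQueue.process(
        'send48HourReminderForAppointmentJob',
        2,
        async (job) => {
          console.log('Processing 48-hour reminder job:', job.id);
        },
      );
    }
  }
}
```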
## The Secret Ingredient: Concurrency
All three patterns orbit the same concept — concurrency. Let's demystify it.
Concurrency in Bull is simply: "how many workers can chew through this job type at the same time?"
| Concurrency | What it means |
|---|---|
| `1` | One job at a time. Safe, slow, sequential. |
| `3` | Three jobs running simultaneously. Faster, but uses more resources. |
| `10` | High throughput. Great for I/O-bound tasks; risky for CPU-heavy ones. |
A few rules of thumb:
- I/O-bound jobs (sending emails, hitting APIs) → higher concurrency is fine; your workers spend most time waiting, not computing.
- CPU-bound jobs (image processing, PDF generation) → keep concurrency low; too many parallel workers will starve your CPU.
- Rate-limited external APIs → concurrency should respect the API's limits, not just your server's capacity.
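Those rules of thumb can be captured in a tiny helper. This is not a Bull API — just a hypothetical starting point for choosing the number deliberately instead of guessing; the specific values are illustrative:

```typescript
type WorkloadProfile = 'io-bound' | 'cpu-bound' | 'rate-limited';

// Hypothetical helper: turn the rules of thumb into a concrete number.
// `maxParallelCalls` is whatever the external API's limits allow.
function pickConcurrency(
  profile: WorkloadProfile,
  maxParallelCalls?: number,
): number {
  switch (profile) {
    case 'io-bound':
      return 10; // workers mostly wait on the network, so go wide
    case 'cpu-bound':
      return 2; // stay low to avoid starving the CPU
    case 'rate-limited':
      // respect the API's budget, but always keep at least one worker
      return Math.max(1, maxParallelCalls ?? 1);
  }
}

console.log(pickConcurrency('io-bound'));        // → 10
console.log(pickConcurrency('rate-limited', 5)); // → 5
```

Whatever number comes out, it plugs straight into the `concurrency` dial from any of the three patterns above.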
## Putting It All Together
Here's a quick cheat sheet for picking your pattern:
| Scenario | Best Pattern |
|---|---|
| Small app, few job types | Single Consumer |
| Separate teams or complex logic | Multiple Consumers |
| Config-driven or dynamic setup | onModuleInit() |
The beauty of Bull is that none of these patterns are mutually exclusive. You can use a single consumer for simple jobs and a dedicated processor for a particularly complex one — mix and match as your needs evolve.
## Wrapping Up
Job queues aren't glamorous. They live in the background, quietly doing the work that keeps your users happy — sending that confirmation email before they start to wonder, firing that reminder just in time.
But how you architect your consumers? That's where the craft lives.
Whether you go with a one-chef kitchen, a full brigade, or a ghost kitchen running on vibes and `onModuleInit` — now you know the trade-offs, and you can choose with confidence.
Go build something that scales. 🚀