Ronak Navadia

Stop Confusing Workers with Concurrency

A conversation with every developer who's ever asked "but aren't they the same thing?"


Let me tell you about a conversation I've had at least a dozen times.

A developer comes to me, frustrated. They've set up Bull for background jobs. They've even set the concurrency option to 5, feeling confident. But their API is still grinding to a halt whenever heavy jobs kick in. And when they try to "scale their workers", they end up with five copies of their entire API running — which wasn't the plan at all.

Sound familiar? Let's fix this, once and for all — not with theory, but with the clear mental model you actually need.


First, what actually happens when you run your app?

When you run node dist/main.js, something very specific happens on your machine:

💡 The truth: You start one Node.js process. One event loop. One chunk of memory. One running instance of your entire application.

That's it. Everything you've built — your controllers, your services, your Bull job processors — all of it lives inside that single process.

This isn't a problem at first. But as your system grows, that single process starts to become a bottleneck. And this is exactly where the confusion about "workers" begins.


So what is a "worker", really?

Here's the thing nobody says clearly enough: a worker is just another Node.js process.

Not a thread. Not a magic Bull setting. Not a special NestJS construct. It's a process — the same kind of process you start when you run node dist/main.js. The only difference is in what it does once it starts.

| | API Process | Worker Process |
| --- | --- | --- |
| Starts with | app.listen(3000) | createApplicationContext() |
| Does | Waits for HTTP requests | Polls a queue for jobs |
| Handles | Validation, auth, routing | Sending emails, processing files, etc. |
| Visibility | Has a port the world can reach | No port, invisible from outside |

A process becomes an API server only because it calls app.listen(). A process becomes a worker only because it processes background jobs instead.

"API and Worker are not different code — they are different ways of running your app."

Same code. Different entry point. Different role.


Then what does Bull's concurrency option actually do?

This is the question that trips everyone up. And it's a completely fair question.

When you write something like this:

email.processor.ts

import { Process, Processor } from '@nestjs/bull';
import { Job } from 'bull';

@Processor('email')
export class EmailProcessor {
  @Process({ concurrency: 5 })
  async handle(job: Job) {
    // send the email for this job's payload
  }
}

You are not creating 5 workers. You are telling the one worker process that's already running to handle up to 5 jobs at the same time — all within its single event loop.

| Term | What it means |
| --- | --- |
| Concurrency | How many jobs one process handles simultaneously |
| Multiple workers | How many separate processes are doing the handling |

Think of it this way:

  • 🍳 Concurrency = one chef trying to watch 5 pots at once
  • 👨‍🍳👨‍🍳👨‍🍳 Multiple workers = hiring 5 separate chefs, each at their own stove

For I/O-heavy jobs (like sending emails or making API calls), concurrency works beautifully.
For CPU-heavy jobs (like image processing or video encoding), you want separate processes — because CPU work actually blocks the event loop.
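To make "one process, many in-flight jobs" concrete, here is a minimal sketch of a concurrency limit in plain TypeScript. There is no Bull here; `runWithConcurrency` is a made-up helper standing in for the job loop, not a Bull API.

```typescript
// A minimal sketch of what a concurrency limit means inside ONE process.
async function runWithConcurrency<T>(
  jobs: (() => Promise<T>)[],
  limit: number,
): Promise<T[]> {
  const results: T[] = [];
  let next = 0;

  // Start `limit` "lanes"; each lane pulls the next job when it finishes one.
  const lanes = Array.from({ length: Math.min(limit, jobs.length) }, async () => {
    while (next < jobs.length) {
      const i = next++; // safe: the increment is synchronous, one event loop
      results[i] = await jobs[i]();
    }
  });

  await Promise.all(lanes);
  return results;
}

// Five pending jobs, at most two in flight at any moment.
const jobs = [1, 2, 3, 4, 5].map(
  (n) => () => new Promise<number>((res) => setTimeout(() => res(n * 10), 10)),
);
runWithConcurrency(jobs, 2).then((r) => console.log(r)); // [10, 20, 30, 40, 50]
```

One chef, two pots: the process never holds more than two promises in flight, yet all five jobs complete in order.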


Why mixing API and Worker in one process causes pain

Here's how most NestJS apps start out. Everything in one process:

app.module.ts (the problematic way)

@Module({
  imports: [
    BullModule.forRoot({ ... }),
    BullModule.registerQueue({ name: 'email' }),
    EmailModule, // contains the @Processor
  ],
})
export class AppModule {}

When you start this app with node dist/main.js, you get:

  • ✅ HTTP server listening on port 3000
  • ✅ Bull processor actively picking up jobs

Fine for small scale. But three ugly problems emerge as you grow:


⚠️ Problem 1 · Slow API

A heavy background job — resizing images, crunching data — hogs the event loop. Your API users start seeing timeouts. They didn't change anything. Your background jobs strangled the API.
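You can feel this problem in a dozen lines of plain Node, no NestJS required. A synchronous loop stands in for the heavy job; watch what it does to a timer that was due almost immediately:

```typescript
// Simulates a CPU-heavy job sharing a process with "API" work.
function blockFor(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // synchronous busy loop: the event loop can run nothing else
  }
}

const start = Date.now();

// Stand-in for an incoming HTTP request: due in 10 ms.
setTimeout(() => {
  console.log(`"request" handled after ${Date.now() - start} ms`);
}, 10);

// Stand-in for the heavy background job, in the SAME process.
blockFor(200);
// The timer cannot fire until the loop is free: it reports ~200 ms, not 10.
```

Replace the busy loop with image resizing and the timer with a real route handler, and this is exactly the timeout your users see.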


⚠️ Problem 2 · Wasteful Scaling

You want 3 more job processors, so you spin up 3 more instances of your app. Surprise — you now have 3 extra API servers too, all competing on the same port or behind a load balancer.

You wanted kitchen staff. You hired three more waiters who also cook.


⚠️ Problem 3 · No Isolation

A runaway job crashes the process. Down goes your API too. Two unrelated concerns, one shared fate.


The fix: two entry points, two roles

The solution is elegant. You write two entry point files — one for each role. They both import AppModule, but they start it differently.

main.ts — the API process

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000); // gives it a port — makes it an API
}
bootstrap();

worker.ts — the worker process

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  await NestFactory.createApplicationContext(AppModule);
  // no port — just loads modules and starts processing
}
bootstrap();

Now you can start them independently:

node dist/main.js    # starts the API
node dist/worker.js  # starts a worker
node dist/worker.js  # starts another worker

Three separate processes. One handles HTTP. Two process jobs. They never step on each other.


The hidden gotcha you will hit

Even after splitting your entry points, there's a trap. Both files still import AppModule — and AppModule still registers your Bull processor. So your "API process" will also start picking up jobs. You've separated the files but not the behaviour.

Fix it with an environment variable:

app.module.ts — now with role awareness

@Module({
  imports: [
    // Only load the job processor in worker processes
    ...(process.env.RUN_WORKER === 'true'
      ? [EmailProcessorModule]
      : []),
  ],
})
export class AppModule {}

Then start them like this:

# API — no job processing
RUN_WORKER=false node dist/main.js

# Worker — only job processing
RUN_WORKER=true node dist/worker.js

Now your API process has zero awareness of job processors. It queues jobs and walks away. The worker processes pick those jobs up.
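A small quality-of-life step: bake the flag into your npm scripts so nobody forgets to set it. The script names here are just a suggestion, not something the post's repo defines.

```json
{
  "scripts": {
    "start:api": "RUN_WORKER=false node dist/main.js",
    "start:worker": "RUN_WORKER=true node dist/worker.js"
  }
}
```

Note that inline variables like this work in POSIX shells; on Windows you'd reach for a package such as cross-env.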


How they talk to each other (they don't, directly)

Your API and your workers never call each other. There are no inter-process function calls. Instead, they communicate through Redis via Bull's queue:

┌────────────┐       ┌─────────────────────────┐       ┌─────────────┐       ┌──────────────────────────┐
│   Client   │  ───► │      API Process        │  ───► │ Redis Queue │  ───► │      Worker Process      │
│            │       │   adds job to queue     │       │             │       │  picks up & executes     │
└────────────┘       └─────────────────────────┘       └─────────────┘       └──────────────────────────┘

In your API controller:

await this.emailQueue.add('sendWelcome', { userId: 42 });
// API's job is done. It forgets about this immediately.

In your worker processor:

@Process('sendWelcome')
async handleWelcome(job: Job) {
  await this.mailer.send(job.data.userId);
  // Worker handles it whenever it's free.
}

The queue is the contract between them. Durable, decoupled, and fault-tolerant.
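The contract is easy to see if you shrink Redis down to an array. This toy sketch is not Bull's API — just the shape of it — with producer and consumer in one file so you can watch the hand-off:

```typescript
// A toy stand-in for the Redis queue. In production this is Bull on top of
// Redis; here it's just an in-memory array, so the "API" and "worker" sides
// can share a file for the demo.
type QueuedJob = { name: string; data: Record<string, unknown> };

const queue: QueuedJob[] = [];

// API side: enqueue and forget.
function addJob(name: string, data: Record<string, unknown>): void {
  queue.push({ name, data });
}

// Worker side: poll, pick up, execute. Returns false when the queue is empty.
function processNext(handlers: Record<string, (job: QueuedJob) => void>): boolean {
  const job = queue.shift();
  if (!job) return false;
  handlers[job.name]?.(job);
  return true;
}

// "API process" queues a job…
addJob('sendWelcome', { userId: 42 });

// …"worker process" later picks it up. Neither side ever called the other.
processNext({
  sendWelcome: (job) => console.log(`sending welcome to user ${job.data.userId}`),
});
```

The real version adds durability, retries, and cross-process visibility, but the relationship is the same: the producer only knows the queue, and the consumer only knows the queue.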


Managing it all with PM2

Manually running terminal commands gets old fast. PM2 is the standard solution — it manages your processes, restarts them on crash, and lets you scale workers with a single flag.

# Start the API (1 instance); PM2 inherits the inline variable
RUN_WORKER=false pm2 start dist/main.js --name api

# Start 3 worker processes
RUN_WORKER=true pm2 start dist/worker.js --name worker -i 3

# See what's running
pm2 list

# Scale workers to 5 without restarting anything
pm2 scale worker 5

This gives you independent control. Need to handle a spike in emails? Scale the workers. Getting more API traffic? Scale the API. They move independently because they are independent.


The restaurant analogy (because it genuinely helps)

Your system is a restaurant. Here's the full cast:

| Restaurant Role | Your System Equivalent | Technical Term |
| --- | --- | --- |
| Customer walks in | HTTP request arrives | Client request |
| Waiter takes the order | API receives request, validates, queues a job | API Process |
| Order ticket goes to kitchen | Job lands in Redis | Bull Queue |
| Chef cooks the meal | Worker picks up the job and executes it | Worker Process |
| Chef can multitask on 3 orders | One worker with concurrency: 3 | Bull concurrency |
| Hiring a second chef | Starting a second worker process | Process scaling |

A great restaurant doesn't make the waiter cook and manage inventory at the same time. Separation of concerns isn't just an architectural principle — it's common sense.


The complete picture, in one place

Let's bring it all together with one final summary. The key ideas, no fluff:

| Concept | What it means |
| --- | --- |
| A process | A running Node.js app with its own memory and event loop |
| An API process | A process that called app.listen() |
| A worker process | A process that didn't — it just processes queue jobs |
| Concurrency | How many jobs one worker handles at once, not separate processes |
| Separation | Different entry points + env flags to control which modules load |
| PM2 | The thing that manages, monitors, and scales all of it |

"Your API handles requests. Your workers handle work.
They share a codebase — but never a process."


Once this clicks, a whole category of "why is my API slow?" and "why are my jobs not running where I expect?" questions just… disappear. You start thinking in processes instead of in files, and that changes how you architect everything.

If you have questions, or if something in here is still fuzzy — that's what the comments are for. I've had this conversation many times, and each version of it has made this explanation a little sharper. Your confusion is probably someone else's confusion too.
