Message Queues: When and How to Stop Doing Work Inside HTTP Requests

There's a habit in early-stage backend development where everything happens inside a request handler. User signs up → create the record, send the welcome email, notify Slack, sync to the CRM, trigger the analytics event — all before the response goes back.
This works until it doesn't. The response takes 3 seconds. One of those services goes down and the whole signup breaks. Traffic spikes and the queue of requests backs up.
Message queues are the fix. Here's when to use them and how to actually set one up.

The Mental Model
An HTTP request is synchronous — the client waits for your response. A message queue decouples the work: the request handler publishes a message and responds immediately. A separate worker process picks up the message and does the actual work.
Client → POST /signup →
[Create user record]
[Publish "user.created" message] ← takes milliseconds
[Respond 201 Created]

Background worker picks up "user.created" →
[Send welcome email]
[Notify Slack]
[Sync to CRM]
[Fire analytics event]
The client gets a fast response. The background work happens reliably, with retries if anything fails.
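The decoupling can be sketched with a toy in-memory queue — no persistence, no retries, purely to show the shape. Real brokers (like the Redis-backed BullMQ below) add durability and delivery guarantees on top of this idea:

```javascript
// A toy in-memory queue: publishing and processing are separate steps.
const queue = [];

function publish(message) {
  queue.push(message);  // the request handler does only this, then responds
}

function runWorker(handler) {
  while (queue.length > 0) {
    handler(queue.shift());  // the worker drains messages on its own schedule
  }
}

// The "handler" enqueues; the "worker" processes later, independently
publish({ type: 'user.created', userId: 1 });
publish({ type: 'user.created', userId: 2 });
const processed = [];
runWorker((msg) => processed.push(msg.userId));
```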

When to Use a Queue
Use a queue when the work:

Doesn't need to be done before the response (email, notifications, analytics)
Could take variable time (third-party API calls, file processing)
Should be retried automatically if it fails
Needs to be rate-limited (bulk email sends, external API rate limits)
Could overwhelm a downstream service if done in parallel (sending 10,000 emails at once)

Don't use a queue when:

The client needs the result immediately (payment processing, inventory check)
The work needs to be transactional with the database operation

Setting Up BullMQ with Redis
BullMQ is the current standard for job queues in Node.js. It's built on Redis.
```bash
npm install bullmq ioredis
```

```javascript
// queues/index.js
const { Queue } = require('bullmq');
const Redis = require('ioredis');

const connection = new Redis(process.env.REDIS_URL, {
  maxRetriesPerRequest: null  // Required by BullMQ
});

const emailQueue = new Queue('emails', { connection });
const analyticsQueue = new Queue('analytics', { connection });

module.exports = { emailQueue, analyticsQueue, connection };
```
```javascript
// In your route handler
const bcrypt = require('bcrypt');
const { emailQueue } = require('./queues');

app.post('/signup', async (req, res) => {
  const user = await User.create({
    email: req.body.email,
    password: await bcrypt.hash(req.body.password, 12)
  });

  // Queue the follow-up work — non-blocking
  await emailQueue.add('welcome', {
    userId: user.id,
    email: user.email,
    name: user.name
  }, {
    attempts: 3,           // retry up to 3 times on failure
    backoff: {
      type: 'exponential',
      delay: 2000          // wait 2s, then 4s, then 8s between retries
    }
  });

  res.status(201).json({ id: user.id, email: user.email });
});
```
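The `attempts`/`backoff` options above produce a doubling retry schedule: the delay before retry n is `baseDelay * 2^(n-1)`. A tiny helper — not part of BullMQ, just to make the arithmetic concrete:

```javascript
// Delay before each retry under exponential backoff: delay, 2*delay, 4*delay, ...
function backoffSchedule(retries, baseDelayMs) {
  const delays = [];
  for (let i = 0; i < retries; i++) {
    delays.push(baseDelayMs * 2 ** i);
  }
  return delays;
}

console.log(backoffSchedule(3, 2000));  // [ 2000, 4000, 8000 ]
```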


The Worker Process
This runs separately from your web server — either a different file/process or a different container:
```javascript
// workers/email.worker.js

const { Worker } = require('bullmq');
const { connection } = require('../queues');
// Wherever your mail helpers live — path is illustrative
const { sendWelcomeEmail, sendPasswordResetEmail } = require('../lib/mailer');

const worker = new Worker('emails', async (job) => {
  // job.name is the string passed to queue.add(); job.data is the payload
  const { email, name } = job.data;

  if (job.name === 'welcome') {
    await sendWelcomeEmail({ to: email, name });
    console.log(`Welcome email sent to ${email}`);
  }

  if (job.name === 'password-reset') {
    await sendPasswordResetEmail({ to: email, token: job.data.token });
  }

}, {
  connection,
  concurrency: 5  // process 5 jobs at once
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed: ${err.message}`);
  // Send to error tracking (Sentry, etc.)
});
```
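One operational detail worth adding: when the worker process restarts (deploys, scale-downs), let in-flight jobs finish before exiting. A minimal sketch using BullMQ's `worker.close()`:

```javascript
// Graceful shutdown: stop picking up new jobs, wait for active ones to finish
async function shutdown(worker) {
  await worker.close();  // resolves once in-flight jobs have completed
}

// In the worker entry point:
// process.on('SIGTERM', () => shutdown(worker).then(() => process.exit(0)));
```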

Running Both Together in Development
Add scripts to package.json (dev:all assumes the concurrently package is installed as a dev dependency):

```json
{
  "scripts": {
    "dev": "nodemon src/index.js",
    "worker": "nodemon workers/email.worker.js",
    "dev:all": "concurrently \"npm run dev\" \"npm run worker\""
  }
}
```

In production, run the worker as a separate process or container. This means you can scale them independently — if email sending falls behind, add more worker containers without scaling the web server.
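As a sketch, a docker-compose layout for this split might look like the following (service names and commands are illustrative):

```yaml
services:
  redis:
    image: redis:7
  web:
    build: .
    command: node src/index.js
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on: [redis]
  worker:
    build: .
    command: node workers/email.worker.js
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on: [redis]
```

With this shape, `docker compose up --scale worker=3` adds worker capacity without touching the web service.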

Monitoring Your Queues
Add BullBoard for a visual dashboard:
```javascript
const { createBullBoard } = require('@bull-board/api');
const { BullMQAdapter } = require('@bull-board/api/bullMQAdapter');
const { ExpressAdapter } = require('@bull-board/express');
const { emailQueue } = require('./queues');

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [new BullMQAdapter(emailQueue)],
  serverAdapter
});

app.use('/admin/queues', serverAdapter.getRouter());
```
This gives you a UI at /admin/queues showing active jobs, failed jobs, completed jobs, and the ability to retry failures manually.
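The dashboard exposes job payloads, so don't ship it unprotected. A minimal guard sketch — the `x-admin-token` header and `ADMIN_TOKEN` env var are assumptions for illustration, not part of BullBoard; use your app's real auth in practice:

```javascript
// Hypothetical token check; swap in your real authentication middleware
function requireAdmin(req, res, next) {
  if (req.headers['x-admin-token'] === process.env.ADMIN_TOKEN) {
    return next();
  }
  res.status(401).json({ error: 'unauthorized' });
}

// app.use('/admin/queues', requireAdmin, serverAdapter.getRouter());
```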

Dead Letter Queues for Jobs That Keep Failing
Some jobs will fail permanently (invalid email address, deleted user record). After the retry limit is exhausted, move them to a dead letter queue for inspection rather than losing them:
```javascript
const { Queue } = require('bullmq');
const deadLetterQueue = new Queue('dead-letters', { connection });

emailQueue.add('welcome', jobData, {
  attempts: 3,
  backoff: { type: 'exponential', delay: 2000 }
});

worker.on('failed', async (job, err) => {
  if (job && job.attemptsMade >= job.opts.attempts) {
    // Out of retries — move to the dead letter queue for inspection
    await deadLetterQueue.add(job.name, {
      originalData: job.data,
      failureReason: err.message,
      failedAt: new Date().toISOString()
    });
  }
});
```
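Once the underlying cause is fixed, dead-letter entries can be replayed onto the original queue. `buildReplay` is a hypothetical helper, just to show the shape of that payload:

```javascript
// Rebuild an addable job from a dead-letter entry (name + original data)
function buildReplay(dlqJob) {
  return { name: dlqJob.name, data: dlqJob.originalData };
}

// Usage, e.g. from an admin script:
// const { name, data } = buildReplay(job);
// await emailQueue.add(name, data, { attempts: 3 });
```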
