DEV Community

1xApi

Posted on • Originally published at 1xapi.com

How to Offload Heavy API Work with BullMQ 5 Background Jobs in Node.js (2026 Guide)

Why Your API Should Never Do Heavy Work Inline

Every millisecond counts. When a user hits your API endpoint to process an image, send a welcome email, or generate a PDF report, they shouldn't have to wait for that work to finish before getting a response.

Yet most APIs do exactly this — and it quietly kills performance, scalability, and user experience.

The fix is a background job queue. And in Node.js, BullMQ is the production-grade solution. Currently on version 5.66.5 (January 2026), BullMQ is backed by Redis and trusted by companies processing millions of jobs per day.

In this guide you'll learn how to:

  • Set up BullMQ v5 with Redis in a Node.js API
  • Offload slow work (emails, image processing, reports) to background workers
  • Handle retries, backoff, and dead-letter patterns
  • Add OpenTelemetry observability to your job queues
  • Design a production-ready queue architecture

What Is BullMQ?

BullMQ is a robust, Redis-backed job queue library for Node.js (and Python, Elixir, PHP). It gives you:

  • Persistent jobs — jobs survive server restarts via Redis
  • At-least-once processing — no duplicates under normal conditions; only a stalled worker can trigger a reprocess
  • Priority queues — high-priority jobs skip the line
  • Delayed jobs — schedule work for the future
  • Retries with backoff — automatic failure recovery
  • Rate limiting — throttle worker throughput
  • Job progress tracking — real-time progress events
  • OpenTelemetry support — distributed tracing (added January 2026)

The current release is BullMQ 5.66.5 (January 12, 2026). v5 introduced improved queue markers for faster worker wake-up and cleaner attemptsMade vs attemptsStarted semantics.


The Problem: Blocking Your API Response

Here's a typical offender — an API endpoint that sends a welcome email inline:

// ❌ BAD: Blocks the response for 1–3 seconds
app.post('/api/users', async (req, res) => {
  const user = await db.users.create(req.body);

  // This blocks! Email sending takes 1-3s
  await emailService.sendWelcomeEmail(user.email);

  res.status(201).json(user); // User waits 3+ seconds
});

The user waits for email delivery. If the email service is slow or down, your API fails. This is a common antipattern.

The fix: Accept the request, persist the job, return immediately, and process in the background.


Setup: BullMQ v5 + Redis

Prerequisites

  • Node.js 20+ (LTS) or Bun 1.2+
  • Redis 7+ running locally or via managed service (Upstash, Redis Cloud)

Install

npm install bullmq ioredis
# For TypeScript
npm install -D @types/node typescript

Project structure

src/
  queues/
    index.ts         # Queue definitions
    workers/
      email.worker.ts
      image.worker.ts
  api/
    routes/users.ts
  redis.ts

Redis connection (shared)

// src/redis.ts
import { Redis } from 'ioredis';

// BullMQ v5: connection is MANDATORY (no longer optional)
export const redisConnection = new Redis({
  host: process.env.REDIS_HOST ?? 'localhost',
  port: Number(process.env.REDIS_PORT ?? 6379),
  maxRetriesPerRequest: null, // Required by BullMQ
});

BullMQ v5 breaking change: Passing a connection object is now mandatory. In v4 it showed a warning; v5 throws an error without it.


Define Your Queues

// src/queues/index.ts
import { Queue } from 'bullmq';
import { redisConnection } from '../redis';

// Email queue
export const emailQueue = new Queue('email', {
  connection: redisConnection,
  defaultJobOptions: {
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 2000, // Start at 2s, then 4s, 8s
    },
    removeOnComplete: { count: 100 }, // Keep last 100 completed
    removeOnFail: { count: 500 },     // Keep last 500 failed for debugging
  },
});

// Image processing queue (CPU-heavy, limit concurrency)
export const imageQueue = new Queue('image-processing', {
  connection: redisConnection,
  defaultJobOptions: {
    attempts: 2,
    backoff: { type: 'fixed', delay: 5000 },
    removeOnComplete: true,
    removeOnFail: { count: 200 },
  },
});

// Report generation (low priority, can wait)
export const reportQueue = new Queue('report-generation', {
  connection: redisConnection,
  defaultJobOptions: {
    attempts: 2,
    removeOnComplete: true,
    removeOnFail: true,
  },
});
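As a quick sanity check on the email queue's retry settings: with exponential backoff, the delay before retry N works out to roughly delay * 2^(N-1). A sketch of the arithmetic (the exact formula may vary slightly between BullMQ versions):

```typescript
// Approximate BullMQ exponential backoff: the Nth retry waits
// roughly baseDelayMs * 2^(N-1) milliseconds.
function backoffDelayMs(baseDelayMs: number, retryNumber: number): number {
  return baseDelayMs * 2 ** (retryNumber - 1);
}

// With delay: 2000, successive retries wait 2s, 4s, 8s, ...
console.log([1, 2, 3].map((n) => backoffDelayMs(2000, n))); // [2000, 4000, 8000]
```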

Refactoring the API Endpoint

Now your endpoint adds a job and returns instantly:

// src/api/routes/users.ts
import { Router } from 'express';
import { db } from '../../db'; // your database client
import { emailQueue } from '../queues';

const router = Router();

router.post('/', async (req, res) => {
  try {
    const user = await db.users.create(req.body);

    // ✅ GOOD: Add job and return immediately (< 5ms overhead)
    await emailQueue.add('welcome-email', {
      userId: user.id,
      email: user.email,
      name: user.name,
    });

    res.status(201).json({
      user,
      message: 'Account created. Welcome email is on its way!',
    });
  } catch (err) {
    res.status(500).json({ error: 'Failed to create user' });
  }
});

export default router;

Response time drops from 1–3 seconds to under 10 milliseconds.
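Because the endpoint now responds before the email is sent, clients sometimes need a way to check on the job later. One common pattern is a status-lookup route built on `queue.getJob()` and `job.getState()` (both real BullMQ methods). A sketch — the interfaces below are stand-ins so the handler logic can be unit-tested without Redis:

```typescript
// Minimal shapes standing in for BullMQ's Job and Queue.
interface JobLike {
  id?: string;
  progress: number | object;
  getState(): Promise<string>; // 'waiting' | 'active' | 'completed' | 'failed' | ...
}
interface QueueLike {
  getJob(id: string): Promise<JobLike | undefined>;
}

// Core logic for a hypothetical GET /jobs/:id route.
async function jobStatus(queue: QueueLike, id: string) {
  const job = await queue.getJob(id);
  if (!job) return { found: false as const };
  return {
    found: true as const,
    id: job.id,
    state: await job.getState(),
    progress: job.progress,
  };
}
```

Wire it into Express by calling `jobStatus(emailQueue, req.params.id)` and mapping `found: false` to a 404.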


Writing Workers

Workers are separate Node.js processes that pull jobs from the queue and process them.

Email Worker

// src/queues/workers/email.worker.ts
import { Worker, Job } from 'bullmq';
import { redisConnection } from '../../redis';
import { sendEmail } from '../../services/email';

interface EmailJobData {
  userId: string;
  email: string;
  name: string;
}

const emailWorker = new Worker<EmailJobData>(
  'email',
  async (job: Job<EmailJobData>) => {
    const { email, name, userId } = job.data;

    console.log(`Processing email job ${job.id} for user ${userId}`);

    // Update job progress (visible in dashboards)
    await job.updateProgress(10);

    await sendEmail({
      to: email,
      subject: `Welcome to 1xAPI, ${name}!`,
      html: `<h1>Hi ${name}, your account is ready.</h1>`,
    });

    await job.updateProgress(100);

    return { sent: true, email };
  },
  {
    connection: redisConnection,
    concurrency: 10, // Process 10 emails simultaneously
  }
);

emailWorker.on('completed', (job) => {
  console.log(`✅ Email sent: job ${job.id}`);
});

emailWorker.on('failed', (job, err) => {
  console.error(`❌ Email failed: job ${job?.id}`, err.message);
});

emailWorker.on('error', (err) => {
  console.error('Worker error:', err);
});

Image Processing Worker (with rate limiting)

// src/queues/workers/image.worker.ts
import { Worker, Job } from 'bullmq';
import { redisConnection } from '../../redis';
import sharp from 'sharp';
import path from 'path';

interface ImageJobData {
  inputPath: string;
  outputPath: string;
  width: number;
  height: number;
  userId: string;
}

const imageWorker = new Worker<ImageJobData>(
  'image-processing',
  async (job: Job<ImageJobData>) => {
    const { inputPath, outputPath, width, height } = job.data;

    await job.updateProgress(0);

    // Resize image with sharp
    await sharp(inputPath)
      .resize(width, height, { fit: 'cover' })
      .webp({ quality: 85 })
      .toFile(outputPath);

    await job.updateProgress(100);

    return { outputPath };
  },
  {
    connection: redisConnection,
    concurrency: 3, // CPU-bound: keep concurrency low
    limiter: {
      max: 50,
      duration: 60_000, // Max 50 images per minute
    },
  }
);

Delayed Jobs and Scheduled Work

BullMQ shines for time-based patterns:

// Send a reminder 24 hours after signup
await emailQueue.add(
  'signup-reminder',
  { userId: user.id, email: user.email },
  { delay: 24 * 60 * 60 * 1000 } // 24 hours in ms
);

// Recurring report every Monday at 9:00 AM
await reportQueue.add(
  'weekly-report',
  { type: 'weekly-summary' },
  {
    repeat: {
      pattern: '0 9 * * 1', // Cron: Monday 9am
    },
  }
);

// Retry failed payment after 1 hour
await paymentQueue.add(
  'retry-charge',
  { invoiceId: 'inv_123' },
  {
    delay: 60 * 60 * 1000,
    attempts: 5,
    backoff: { type: 'exponential', delay: 3600_000 },
  }
);
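If you'd rather compute a one-off `delay` than register a repeatable cron job, the offset is plain date arithmetic. A sketch (assumes server-local time; `msUntilNextMonday9` is a hypothetical helper, not a BullMQ API):

```typescript
// Milliseconds from `from` until the next Monday 09:00 local time.
function msUntilNextMonday9(from: Date): number {
  const next = new Date(from);
  // getDay(): 0 = Sunday, 1 = Monday, ...
  const daysAhead = (1 - from.getDay() + 7) % 7;
  next.setDate(from.getDate() + daysAhead);
  next.setHours(9, 0, 0, 0);
  if (next <= from) next.setDate(next.getDate() + 7); // already past Monday 9am
  return next.getTime() - from.getTime();
}

// e.g. await reportQueue.add('weekly-report', data, { delay: msUntilNextMonday9(new Date()) });
```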

BullMQ v5: The attemptsStarted vs attemptsMade Fix

This is one of the most important v5 improvements for reliability.

Before v5: attemptsMade incremented every time a job started, even if it was manually rate-limited mid-processing. This broke exponential backoff calculations.

v5 fix:

  • attemptsStarted — increments every time a job begins execution
  • attemptsMade — only increments when a job completes or fails

This means your backoff timers are now accurate:

// Exponential backoff now works correctly in v5
import { Worker, DelayedError } from 'bullmq';

const worker = new Worker('payment', async (job, token) => {
  // If you manually rate-limit by calling job.moveToDelayed() here,
  // attemptsMade does NOT increment — backoff stays correct
  if (await isRateLimited()) {
    await job.moveToDelayed(Date.now() + 5000, token);
    throw new DelayedError(); // Tells BullMQ "delayed, not failed" — no attempt burned
  }

  await chargeCard(job.data);
}, { connection: redisConnection });

OpenTelemetry Integration (January 2026)

BullMQ announced native OpenTelemetry support in January 2026 via the bullmq-otel package:

npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node
npm install bullmq-otel
// src/telemetry.ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { BullMQInstrumentation } from 'bullmq-otel';

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT,
  }),
  instrumentations: [
    new BullMQInstrumentation(), // Auto-trace all BullMQ queues/workers
  ],
});

sdk.start();

This gives you distributed traces across your API and workers — see exactly how long jobs take, which ones fail, and where bottlenecks are.


Production Checklist

Before shipping to production:

Redis:

  • Use a managed Redis (Upstash, Redis Cloud, AWS ElastiCache)
  • Set maxmemory-policy to noeviction — BullMQ needs its keys to persist
  • Set up Redis persistence (AOF + RDB)

Workers:

  • Run workers as separate processes (not in your API server)
  • Use PM2 or Docker to manage worker processes
  • Handle SIGTERM and await worker.close() so in-flight jobs finish before the process exits
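A minimal graceful-shutdown sketch — `worker.close()` is BullMQ's API for draining in-flight jobs; the `Closable` interface here is just a stand-in so the handler logic is easy to test without Redis:

```typescript
// Anything exposing close() — a BullMQ Worker qualifies.
interface Closable {
  close(): Promise<void>;
}

// Close all workers in parallel, letting active jobs run to completion.
async function shutdown(workers: Closable[]): Promise<void> {
  await Promise.all(workers.map((w) => w.close()));
}

// In the worker process entrypoint:
// process.once('SIGTERM', async () => {
//   await shutdown([emailWorker, imageWorker]);
//   process.exit(0);
// });
```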

Monitoring:

const worker = new Worker('email', processor, {
  connection: redisConnection,
  stalledInterval: 30_000, // Check for stalled jobs every 30s
  maxStalledCount: 2,      // Move to failed after 2 stalled checks
});

Bull Board dashboard (optional but recommended):

npm install @bull-board/express @bull-board/api
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [
    new BullMQAdapter(emailQueue),
    new BullMQAdapter(imageQueue),
    new BullMQAdapter(reportQueue),
  ],
  serverAdapter,
});

app.use('/admin/queues', serverAdapter.getRouter());

Performance Impact: Real Numbers

Moving heavy work to background queues typically delivers:

Operation        | Inline (before) | With BullMQ (after)
-----------------|-----------------|--------------------
Welcome email    | 1,200ms         | 8ms
Image resize     | 3,500ms         | 12ms
PDF generation   | 6,000ms         | 15ms
Webhook delivery | 800ms           | 6ms

Your P99 latency drops dramatically. Users get instant feedback. Workers handle the heavy lifting asynchronously.


Summary

Background job queues are one of the highest-leverage improvements you can make to any production API. With BullMQ v5.66.5 (2026):

  1. Install bullmq + ioredis, always pass a connection
  2. Define queues with sensible retry and cleanup defaults
  3. Offload heavy work — emails, images, reports, webhooks
  4. Write focused workers in separate processes
  5. Use delays and cron for scheduled/deferred work
  6. Add OpenTelemetry for full observability
  7. Monitor with Bull Board or your existing APM

The architectural principle is simple: your API's job is to accept work, not do it. BullMQ handles the rest.

