Your API hashes a password. The event loop blocks. Every other request waits. Node is single-threaded, but it does not have to be.
## The Problem
CPU-bound work (hashing, image processing, parsing large JSON payloads, compression) blocks the event loop. While Node is busy with that work, it cannot handle incoming requests, timers, or I/O callbacks.
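You can see the blocking directly: in this sketch a busy loop stands in for CPU-bound work, and a zero-millisecond timer cannot fire until the loop releases the event loop.

```typescript
// Demonstration: a synchronous CPU loop delays every pending timer and callback.
const start = Date.now();

setTimeout(() => {
  // Scheduled for 0 ms, but it cannot fire until the loop below finishes.
  console.log("timer fired after", Date.now() - start, "ms");
}, 0);

// Stand-in for CPU-bound work: busy-spin for ~200 ms on the main thread.
while (Date.now() - start < 200) {
  // the event loop is blocked for this entire loop
}

const blockedFor = Date.now() - start;
console.log("main thread was blocked for", blockedFor, "ms");
```

The timer logs well after its 0 ms deadline; every HTTP request arriving during the loop waits the same way.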
## Basic Worker Thread

```typescript
import { Worker, isMainThread, parentPort, workerData } from "worker_threads";
import { fileURLToPath } from "url";

// __filename does not exist in ES modules; derive the path from import.meta.url.
const selfPath = fileURLToPath(import.meta.url);

function heavyComputation(input: string): string {
  // Placeholder for real CPU-bound work (hashing, parsing, compression, ...)
  return input.toUpperCase();
}

if (isMainThread) {
  const worker = new Worker(selfPath, { workerData: { input: "heavy-task" } });
  worker.on("message", (result) => console.log("Result:", result));
  worker.on("error", (err) => console.error("Worker error:", err));
} else {
  // This runs in a separate thread
  const result = heavyComputation(workerData.input);
  parentPort?.postMessage(result);
}
```
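Worker scripts normally live in their own file, but for quick experiments you can pass the source inline with `{ eval: true }`. A minimal round trip, with the summation loop standing in for real CPU-bound work:

```typescript
import { Worker } from "worker_threads";

// Inline worker source for demonstration; a real app would use a separate file.
const workerSource = `
const { parentPort } = require("worker_threads");
parentPort.on("message", (n) => {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i;   // stand-in for CPU-bound work
  parentPort.postMessage(sum);
});
`;

const worker = new Worker(workerSource, { eval: true });

const result = await new Promise<number>((resolve, reject) => {
  worker.once("message", resolve);
  worker.once("error", reject);
  worker.postMessage(10);
});

console.log(result); // → 45 (sum of 0..9)
await worker.terminate();
```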
## Worker Pool
Creating a new worker per request is expensive. Pool them instead:
```typescript
import { Worker } from "worker_threads";
import os from "os";

type Task = {
  data: unknown;
  resolve: (value: unknown) => void;
  reject: (reason: unknown) => void;
};

class WorkerPool {
  private workers: Worker[] = [];
  private queue: Task[] = [];
  private available: Worker[] = [];

  constructor(script: string, size = os.cpus().length) {
    for (let i = 0; i < size; i++) {
      const w = new Worker(script);
      this.workers.push(w);
      this.available.push(w);
    }
  }

  run(data: unknown): Promise<unknown> {
    return new Promise((resolve, reject) => {
      const worker = this.available.pop();
      if (!worker) {
        // All workers busy: queue the task until one frees up.
        this.queue.push({ data, resolve, reject });
        return;
      }
      const onError = (err: Error) => {
        worker.off("message", onMessage);
        reject(err); // without this, a crashed worker leaves the promise hanging
      };
      const onMessage = (result: unknown) => {
        worker.off("error", onError);
        this.available.push(worker);
        const next = this.queue.shift();
        if (next) this.run(next.data).then(next.resolve, next.reject);
        resolve(result);
      };
      worker.once("message", onMessage);
      worker.once("error", onError);
      worker.postMessage(data);
    });
  }
}
```
## When to Use Worker Threads
- **Yes:** password hashing, image/video processing, CSV parsing, compression, encryption, CPU-intensive validation.
- **No:** database queries, HTTP requests, file I/O. These are already async and non-blocking; worker threads add overhead without benefit.
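Password hashing is a good concrete case. `crypto.scryptSync` blocks whichever thread runs it, so running it inside a worker keeps the main event loop free. A hedged sketch (the inline `eval` source is for brevity; a real app would use a pool and a separate worker file):

```typescript
import { Worker } from "worker_threads";

// The blocking scryptSync call runs on the worker thread, not the event loop.
const workerSource = `
const { parentPort } = require("worker_threads");
const { scryptSync } = require("crypto");
parentPort.on("message", ({ password, salt }) => {
  const hash = scryptSync(password, salt, 32).toString("hex");
  parentPort.postMessage(hash);
});
`;

const worker = new Worker(workerSource, { eval: true });

const hash = await new Promise<string>((resolve, reject) => {
  worker.once("message", resolve);
  worker.once("error", reject);
  worker.postMessage({ password: "hunter2", salt: "some-salt" });
});

console.log(hash.length); // 64 hex characters for a 32-byte derived key
await worker.terminate();
```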
## SharedArrayBuffer for Zero-Copy
For large payloads, postMessage serializes and copies the data between threads. A SharedArrayBuffer lets both threads read and write the same memory without copying; use Atomics for safe synchronization.
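A minimal sketch of the idea: the main thread and the worker hold views over the same SharedArrayBuffer, so the worker's write is visible on the main thread with no copy. The `Atomics` calls make the access safe across threads.

```typescript
import { Worker } from "worker_threads";

// Both threads will view the same 4 bytes of shared memory.
const sab = new SharedArrayBuffer(4);
const view = new Int32Array(sab);

const workerSource = `
const { parentPort, workerData } = require("worker_threads");
const view = new Int32Array(workerData);   // same memory, no copy
Atomics.add(view, 0, 42);                  // thread-safe write
parentPort.postMessage("done");
`;

// SharedArrayBuffer passed via workerData is shared, not cloned.
const worker = new Worker(workerSource, { eval: true, workerData: sab });

await new Promise((resolve, reject) => {
  worker.once("message", resolve);
  worker.once("error", reject);
});

console.log(Atomics.load(view, 0)); // → 42
await worker.terminate();
```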
## Common Mistakes
- **Using workers for I/O:** database calls are already non-blocking
- **Creating a worker per request:** spawning is expensive, use a pool
- **Not handling worker crashes:** always listen for error and exit events
- **Too many workers:** more workers than CPU cores causes context-switching overhead
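The crash-handling point deserves a sketch: attach both `error` and `exit` handlers, and restart a crashed worker only a bounded number of times. The names and restart policy here are illustrative, not a prescribed API.

```typescript
import { Worker } from "worker_threads";

// Inline source that always crashes, to exercise the handlers.
const crashySource = `throw new Error("boom");`;

let restarts = 0;

const finished = new Promise<void>((resolve) => {
  function spawn(restartsLeft: number): void {
    const worker = new Worker(crashySource, { eval: true });
    worker.on("error", (err) => console.error("worker failed:", err.message));
    worker.on("exit", (code) => {
      // A non-zero exit code means the worker crashed.
      if (code !== 0 && restartsLeft > 0) {
        restarts++;
        spawn(restartsLeft - 1); // real code should add exponential backoff
      } else {
        resolve();
      }
    });
  }
  spawn(2);
});

await finished;
console.log("restarts:", restarts); // → restarts: 2
```

Without the `error` listener a worker crash throws an uncaught exception in the parent; without the `exit` listener the crash goes unnoticed and the pool silently shrinks.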
Part of my Production Backend Patterns series. Follow for more practical backend engineering.
If this was useful, consider:
- Sponsoring on GitHub to support more open-source tools
- Buying me a coffee on Ko-fi
## You Might Also Like
- Graceful Shutdown in Node.js: Stop Dropping Requests (2026)
- BullMQ Job Queues in Node.js: Background Processing Done Right (2026 Guide)
- Scaling WebSocket Connections: From Single Server to Distributed Architecture (2026)