You've profiled your Node.js server. The main thread looks healthy. CPU usage is low. Event loop lag is minimal. But users are still reporting slow responses.
Then you remember: half your request processing happens in worker threads.
Node.js worker_threads have their own V8 instances, their own event loops, and their own performance characteristics. A worker thread can be completely blocked — running a tight loop, parsing a massive JSON payload, executing a catastrophic regex — and the main thread's profiler won't see any of it.
With node-loop-detective v1.9.0, you can now list all inspector targets and profile individual worker threads. Two new flags: --list-targets and --target.
The Invisible Threads
Worker threads are increasingly common in Node.js applications. They're used for:
- CPU-intensive computation (image processing, data transformation, ML inference)
- Parallel request processing in frameworks like Piscina
- Background jobs (report generation, data export, batch processing)
- Offloading blocking operations from the main thread
The irony is that worker threads are often created specifically to handle heavy work — which makes them the most likely threads to have performance problems. But until now, most diagnostic tools only profiled the main thread.
Here's what happens when you profile a Node.js app that uses worker threads:
loop-detective 12345 -d 30
────────────────────────────────────────────────────────────
Event Loop Detective Report
────────────────────────────────────────────────────────────
Duration: 30012ms
Samples: 13521
Hot funcs: 3
Diagnosis
────────────────────────────────────────────────────────────
LOW healthy
No obvious event loop blocking patterns detected
→ Try profiling for a longer duration or during peak load
The main thread is healthy. But the workers are drowning. You just can't see them.
How the V8 Inspector Handles Threads
When Node.js opens its inspector (via --inspect or SIGUSR1), it exposes a /json/list HTTP endpoint that returns all available debugging targets. Each target is a separate V8 isolate with its own WebSocket URL:
[
  {
    "id": "1a2b3c",
    "title": "Main thread",
    "url": "file:///app/server.js",
    "webSocketDebuggerUrl": "ws://127.0.0.1:9229/1a2b3c"
  },
  {
    "id": "4d5e6f",
    "title": "Worker #1",
    "url": "file:///app/worker.js",
    "webSocketDebuggerUrl": "ws://127.0.0.1:9229/4d5e6f"
  },
  {
    "id": "7g8h9i",
    "title": "Worker #2",
    "url": "file:///app/worker.js",
    "webSocketDebuggerUrl": "ws://127.0.0.1:9229/7g8h9i"
  }
]
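You can query that endpoint yourself. A small sketch, assuming Node 18+ (for the global fetch) and an inspector already listening on the given port:

```javascript
// Sketch: fetch the inspector's /json/list endpoint and print each target.
// Assumes Node 18+ (global fetch) and an inspector listening on host:port.
async function listInspectorTargets(host = '127.0.0.1', port = 9229) {
  const res = await fetch(`http://${host}:${port}/json/list`);
  const targets = await res.json();
  for (const t of targets) {
    console.log(`${t.title} -> ${t.webSocketDebuggerUrl}`);
  }
  return targets;
}
```

Each entry's webSocketDebuggerUrl is what a profiler dials to attach to that specific isolate.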
Previously, loop-detective always connected to targets[0] — the main thread. The worker threads were right there in the list, but we never looked at them.
Discovering Targets
The first step is knowing what's available:
loop-detective --port 9229 --list-targets
Available inspector targets:
[0] Main thread
file:///app/server.js
[1] Worker #1
file:///app/worker.js
[2] Worker #2
file:///app/worker.js
Use --target <index> to connect to a specific target.
Each target has an index, a title, and a URL (the script that created it). The main thread is always index 0.
For automation, --json gives you structured output:
loop-detective --port 9229 --list-targets --json
[
{ "index": 0, "id": "1a2b3c", "title": "Main thread", "url": "file:///app/server.js" },
{ "index": 1, "id": "4d5e6f", "title": "Worker #1", "url": "file:///app/worker.js" },
{ "index": 2, "id": "7g8h9i", "title": "Worker #2", "url": "file:///app/worker.js" }
]
Profiling a Worker Thread
Once you know the target index, profile it like any other process:
loop-detective --port 9229 --target 1 -d 30
✔ Connected to Node.js process
Profiling for 30s with 50ms lag threshold...
⚠ Event loop lag: 1245ms at 2025-03-15T14:23:45.123Z
────────────────────────────────────────────────────────────
Event Loop Detective Report
────────────────────────────────────────────────────────────
Duration: 30008ms
Samples: 8234
Hot funcs: 7
Diagnosis
────────────────────────────────────────────────────────────
HIGH cpu-hog
Function "transformData" consumed 72.1% of CPU time (21630ms)
at /app/worker.js:45
→ Consider breaking this into smaller async chunks
1. transformData
██████████████████░░ 21630ms (72.1%)
/app/worker.js:45:1
There it is. Worker #1 has a transformData function consuming 72% of CPU. The main thread never saw this because the work was offloaded to the worker.
A Real-World Debugging Session
Here's a typical workflow for diagnosing a worker thread issue:
# Step 1: Profile the main thread — looks healthy
loop-detective 12345 -d 10
# Result: "healthy" — no blocking detected
# Step 2: List all targets
loop-detective --port 9229 --list-targets
# Shows: [0] Main, [1] Worker #1, [2] Worker #2, [3] Worker #3
# Step 3: Profile each worker
loop-detective --port 9229 --target 1 -d 10
# Worker #1: healthy
loop-detective --port 9229 --target 2 -d 10
# Worker #2: cpu-hog on transformData — 72% CPU!
loop-detective --port 9229 --target 3 -d 10
# Worker #3: healthy
# Step 4: Deep dive on Worker #2
loop-detective --port 9229 --target 2 -d 60 --save-profile ./worker2.cpuprofile
# Open worker2.cpuprofile in Chrome DevTools for flame graph
In a script:
#!/bin/bash
# Profile every inspector target (the main thread and all workers)
TARGETS=$(loop-detective --port 9229 --list-targets --json)
COUNT=$(echo "$TARGETS" | node -e "console.log(JSON.parse(require('fs').readFileSync(0, 'utf8')).length)")

for i in $(seq 0 $((COUNT - 1))); do
  echo "=== Profiling target $i ==="
  loop-detective --port 9229 --target "$i" -d 10 --json > "target-${i}.json"
done
How It Works Internally
The implementation touches two layers:
Inspector: Target Selection
The Inspector class now accepts a targetIndex parameter and uses it to select the WebSocket URL:
class Inspector {
  constructor({ host, port, targetIndex = 0 }) {
    this.targetIndex = targetIndex;
    // ...
  }

  async getWebSocketUrl() {
    const targets = await this.getTargets();
    if (this.targetIndex >= targets.length) {
      throw new Error(`Target index ${this.targetIndex} out of range`);
    }
    return targets[this.targetIndex].webSocketDebuggerUrl;
  }
}
Detective: Target Discovery
The Detective class exposes a listTargets() method that uses the retry logic (same exponential backoff as regular connections) to handle the case where the inspector is still starting:
async listTargets() {
  this._activateInspector();
  const port = this._getInspectorPort();
  const host = this.config.inspectorHost || '127.0.0.1';
  const maxRetries = 5;
  // Retry with exponential backoff (inspector may still be starting)
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const inspector = new Inspector({ host, port });
      return await inspector.getTargets();
    } catch (err) {
      if (attempt === maxRetries) throw err;
      await this._sleep(100 * 2 ** (attempt - 1));
    }
  }
}
What Works and What Doesn't
Everything that works on the main thread works on worker threads:
- ✅ CPU profiling and heavy function detection
- ✅ Event loop lag detection
- ✅ Blocking pattern analysis (cpu-hog, json-heavy, regex, gc, sync-io, crypto)
- ✅ --save-profile for flame graphs
- ✅ --watch for continuous monitoring
- ✅ --json for structured output
I/O tracking (http, fetch, dns, net patches) also works, but with a caveat: worker threads typically don't make HTTP requests directly. They receive data from the main thread via postMessage and send results back. If a worker does make network calls, the I/O tracker will catch them.
When Workers Don't Show Up
A few situations where --list-targets might not show your workers:
- Workers created after inspector activation. If a worker is spawned after you send SIGUSR1, it may not appear in the target list. Re-run --list-targets to refresh.
- Workers that have already exited. Short-lived workers (process a task, then terminate) may not be visible when you list targets. Use --watch on the main thread to catch the pattern, then target long-lived workers.
- Workers without inspector support. Workers created with { execArgv: [] } explicitly disable the inspector. They won't appear in the target list.
Try It
npm install -g node-loop-detective@1.9.0
# List all targets
loop-detective --port 9229 --list-targets
# Profile a worker thread
loop-detective --port 9229 --target 1 -d 30
The main thread is just one thread. In modern Node.js applications, the interesting performance problems are often hiding in the workers.