楊東霖

Posted on • Originally published at devplaybook.cc

Node.js Memory Management and Profiling: Find and Fix Memory Leaks in 2026

Memory leaks in Node.js servers cause gradual performance degradation and eventual OOM crashes. Despite the common belief that leaks "can't happen" in garbage-collected languages, V8's garbage collector cannot collect objects that are still referenced, even accidentally. Here's how to find and fix them.

Understanding Node.js Memory

The V8 Heap Structure

Node.js Process Memory
├── Heap (V8 managed)
│   ├── New Space (Young generation) — small, collected frequently
│   ├── Old Space (Old generation) — large, collected less often
│   ├── Code Space — JIT-compiled code
│   └── Large Object Space — objects too large for the other spaces
├── Stack — call stack, not GC managed
├── External Memory — Buffer (native, outside V8)
└── OS Memory — process overhead

The Old Space is where memory leaks live. Objects that survive multiple GC cycles get promoted here. V8 only collects them in major GC cycles (expensive) or when heap pressure forces it.

Memory Metrics to Watch

// Check current memory usage
const memUsage = process.memoryUsage();
console.log({
  rss: `${Math.round(memUsage.rss / 1024 / 1024)}MB`,     // Total process memory
  heapUsed: `${Math.round(memUsage.heapUsed / 1024 / 1024)}MB`, // Used heap
  heapTotal: `${Math.round(memUsage.heapTotal / 1024 / 1024)}MB`, // Allocated heap
  external: `${Math.round(memUsage.external / 1024 / 1024)}MB`, // Buffer memory
});

// Output:
// { rss: '85MB', heapUsed: '45MB', heapTotal: '60MB', external: '2MB' }

Red flag: heapUsed growing monotonically over time without reaching a plateau = memory leak.


Detecting Memory Leaks

Pattern 1: Monotonically Growing Heap

// Monitor heap growth in production
setInterval(() => {
  const mem = process.memoryUsage();
  console.log(JSON.stringify({
    timestamp: new Date().toISOString(),
    heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
    rssMB: Math.round(mem.rss / 1024 / 1024),
  }));
}, 30_000); // Every 30 seconds

If heapUsedMB keeps climbing hour after hour and never drops back after a major GC, you almost certainly have a leak.
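Turning those samples into an automatic check is straightforward. A minimal sketch (the 10-sample window and 5MB threshold are arbitrary choices, not Node.js defaults):

```javascript
// Flag a leak when heapUsed rises across every recent sample
const samples = [];
const WINDOW = 10;       // keep the last 10 samples
const MIN_GROWTH_MB = 5; // total growth that triggers the alarm

function recordSample(heapUsedMB) {
  samples.push(heapUsedMB);
  if (samples.length > WINDOW) samples.shift();
  if (samples.length < WINDOW) return false; // not enough data yet

  const alwaysGrowing = samples.every((v, i) => i === 0 || v >= samples[i - 1]);
  const totalGrowth = samples[samples.length - 1] - samples[0];
  return alwaysGrowing && totalGrowth >= MIN_GROWTH_MB;
}

// Wire it to the interval above:
// if (recordSample(Math.round(mem.heapUsed / 1024 / 1024))) alertOps();
```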

Pattern 2: Heap Snapshot Comparison

The most reliable leak detection method: take heap snapshots before and after load, compare retained objects.

// Take heap snapshot
const v8 = require('v8');
const fs = require('fs');

function takeHeapSnapshot(filename) {
  const snapshot = v8.writeHeapSnapshot(filename);
  console.log(`Heap snapshot written to: ${snapshot}`);
  return snapshot;
}

// Endpoint to trigger snapshot (dev/staging only!)
app.get('/debug/heap-snapshot', (req, res) => {
  const path = takeHeapSnapshot(`/tmp/heap-${Date.now()}.heapsnapshot`);
  res.json({ path });
});

Compare snapshots in Chrome DevTools (Memory tab → Load profile):

  1. Baseline snapshot (before load)
  2. Apply load for 10 minutes
  3. Force GC (Chrome DevTools → Memory → collect garbage)
  4. Second snapshot
  5. Compare: look for "Delta" column — objects with large positive deltas are leaks

Common Memory Leak Patterns

1. Unbounded Caches

// ❌ Memory leak: Map grows without bound
const cache = new Map();

app.get('/user/:id', async (req, res) => {
  const { id } = req.params;
  if (cache.has(id)) return res.json(cache.get(id));

  const user = await db.getUser(id);
  cache.set(id, user); // Never evicted → grows forever
  res.json(user);
});

// ✅ Use an LRU cache with a size limit (lru-cache v7+ uses a named export)
import { LRUCache } from 'lru-cache';

const cache = new LRUCache({
  max: 1000,          // Maximum items
  ttl: 1000 * 60 * 5, // 5 minute TTL
});

app.get('/user/:id', async (req, res) => {
  const { id } = req.params;
  const cached = cache.get(id);
  if (cached) return res.json(cached);

  const user = await db.getUser(id);
  cache.set(id, user);
  res.json(user);
});
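If you'd rather not take a dependency, the core eviction idea fits in a few lines, because a Map iterates keys in insertion order. A simplified sketch (no TTL, just bounded size):

```javascript
// Minimal LRU: a Map's first key is always the least recently used
class SimpleLRU {
  constructor(max = 1000) {
    this.max = max;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // evict the least-recently-used entry (first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

For production use, lru-cache is still the better choice; it adds TTL, size-based limits, and stale-while-revalidate semantics.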

2. Event Listener Accumulation

// ❌ Adding listeners without removing them
class DataProcessor extends EventEmitter {
  processRequest(req) {
    // New listener added every request — never removed
    req.on('data', (chunk) => this.handleData(chunk));
    // If req goes out of scope but listener holds reference to this,
    // neither gets GC'd
  }
}

// ✅ Always clean up event listeners
class DataProcessor extends EventEmitter {
  processRequest(req) {
    const handleData = (chunk) => this.handleData(chunk);
    req.on('data', handleData);

    // Clean up on completion
    req.once('end', () => {
      req.removeListener('data', handleData);
    });
    req.once('error', () => {
      req.removeListener('data', handleData);
    });
  }
}

Check for listener accumulation:

// Node.js warns when > 10 listeners on one emitter
// Reduce threshold for early warning:
emitter.setMaxListeners(5);
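You can also audit counts directly with listenerCount, which is handy in a periodic health check. A sketch (the limit of 5 is an arbitrary choice for this example):

```javascript
// Report events whose listener count exceeds a threshold
function auditListeners(emitter, events, limit = 5) {
  const over = [];
  for (const event of events) {
    const count = emitter.listenerCount(event);
    if (count > limit) over.push({ event, count });
  }
  return over; // non-empty array = something is accumulating
}
```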

3. Closures Holding Large Objects

// ❌ Closure keeps large buffer in memory
function processData() {
  const largeBuffer = Buffer.alloc(100 * 1024 * 1024); // 100MB

  return function processChunk(chunk) {
    // This function references largeBuffer even if not used
    return chunk.length; // Never uses largeBuffer!
  };
}

const processor = processData();
// largeBuffer is now permanently in memory as long as processor exists

// ✅ Don't capture what you don't need
function processData() {
  const largeBuffer = Buffer.alloc(100 * 1024 * 1024);
  const result = computeFromBuffer(largeBuffer);
  // largeBuffer goes out of scope here
  return result; // Return computed value, not closure over buffer
}
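The same principle applies when you genuinely need to return a closure: derive the small value first so the large object is free to be collected. A sketch with a toy checksum standing in for whatever derived value you actually need:

```javascript
// Toy checksum stand-in; any cheap digest works here
function computeChecksum(buf) {
  let sum = 0;
  for (const byte of buf) sum = (sum + byte) % 0xffff;
  return sum;
}

// ✅ The closure captures only the small primitive
function makeValidator(largeBuffer) {
  const checksum = computeChecksum(largeBuffer);
  // largeBuffer is not referenced by the returned function,
  // so it can be collected once the caller drops its own reference
  return (chunk) => computeChecksum(chunk) === checksum;
}
```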

4. Async Operations Holding References

// ❌ Promise chain keeps objects alive until resolved
async function leakyOperation() {
  const users = await db.getAllUsers(); // Could be thousands of objects

  // Long-running operation with users in scope the whole time
  await delay(60_000); // Wait 60 seconds

  return users.length; // users held in memory for 60 seconds
}

// ✅ Process and release early
async function efficientOperation() {
  const userCount = await db.getUserCount(); // Only fetch what you need
  await delay(60_000);
  return userCount; // Primitive, not object array
}
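When you do need the rows themselves, streaming them one at a time keeps only the current row alive instead of the whole result set. A sketch assuming the driver exposes rows as an async iterable (as pg-query-stream and the mysql2 streaming API do); streamUsers here is a hypothetical factory:

```javascript
// ✅ Iterate instead of materializing the whole result set
async function countActiveUsers(streamUsers) {
  let count = 0;
  for await (const user of streamUsers()) {
    if (user.active) count++;
    // each `user` becomes unreachable after its iteration completes
  }
  return count;
}
```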

Profiling Tools

clinic.js — The Best Node.js Profiling Suite

npm install -g clinic

# Flame graph: find CPU hotspots
clinic flame -- node server.js
# Then load test: autocannon http://localhost:3000/

# Heap profiler: find memory leaks
clinic heapprofiler -- node server.js

# Doctor: overall health check
clinic doctor -- node server.js

Clinic.js generates visual HTML reports. The flame graph shows sampled call stacks during CPU activity; wide bars mean more time spent in that frame.

Node.js Built-in Inspector

# Start with inspector enabled
node --inspect server.js
# or for immediate break:
node --inspect-brk server.js

Open Chrome → chrome://inspect → configure target → inspect

In Chrome DevTools:

  • Memory tab: heap snapshots, allocation tracking
  • Performance tab: CPU profiling, long tasks
  • Profiler tab: V8 sampling profiler

0x: Flame Graphs Without clinic.js

npm install -g 0x

# Profile your server under load
0x server.js &
# Apply load
autocannon http://localhost:3000/api/users -d 30
kill %1

0x generates flame graphs showing where time is spent. Hot paths appear as wide, tall columns.


Production Memory Monitoring

Prometheus + Node.js

import { collectDefaultMetrics, Registry, Gauge } from 'prom-client';

const registry = new Registry();
collectDefaultMetrics({ register: registry });

// Custom memory gauges
const heapGauge = new Gauge({
  name: 'nodejs_heap_used_bytes',
  help: 'V8 heap used in bytes',
  registers: [registry],
});

setInterval(() => {
  heapGauge.set(process.memoryUsage().heapUsed);
}, 10_000);

// /metrics endpoint for Prometheus scraping
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', registry.contentType);
  res.end(await registry.metrics());
});

Set up a Grafana alert when nodejs_heap_used_bytes grows more than 50MB over 1 hour.
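That condition can also be expressed as a Prometheus alerting rule. A sketch (delta() fits here because the metric is a gauge, not a counter; tune the threshold and window to your service):

```yaml
groups:
  - name: nodejs-memory
    rules:
      - alert: NodeHeapGrowing
        # heap grew by more than 50MB over the last hour
        expr: delta(nodejs_heap_used_bytes[1h]) > 50 * 1024 * 1024
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Node.js heap grew >50MB in 1h — possible memory leak"
```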

PM2 Memory Restart + Monitoring

// ecosystem.config.js — a JS module, not JSON (PM2 requires module.exports)
module.exports = {
  apps: [{
    name: 'api',
    script: 'server.js',
    max_memory_restart: '512M',  // Restart when process memory exceeds 512MB
    node_args: '--max-old-space-size=512',
    instances: 'max',
    exec_mode: 'cluster',
  }],
};

This is a safety net, not a fix. If you're hitting the memory limit, investigate the leak.


Heap Dump Analysis Workflow

# 1. Take snapshots before and after load
curl http://localhost:3000/debug/heap-snapshot
ab -n 10000 -c 100 http://localhost:3000/api/users
curl http://localhost:3000/debug/heap-snapshot

# 2. Compare in Chrome DevTools
# Memory → Load profile (snapshot 1)
# Memory → Load profile (snapshot 2)
# Switch to "Comparison" view
# Sort by "# Delta" descending

# 3. Look for growing object types
# Common culprits: Buffer, String, Array, your custom classes

Interpreting Heap Snapshot Comparisons

| Object Type  | High Delta? | Likely Cause                            |
|--------------|-------------|-----------------------------------------|
| (closure)    | Yes         | Closures not released                   |
| String       | Yes         | String cache or response buffering      |
| Array        | Yes         | Unbounded arrays (event queues, caches) |
| Buffer       | Yes         | Stream buffers not consumed             |
| Custom class | Yes         | Objects in Maps/Sets not evicted        |

Memory Leak Fix Checklist

  • [ ] Audit all Map and Set usages — add size limits or TTL
  • [ ] Verify all event listeners are removed on component/request end
  • [ ] Check for closures capturing large objects unnecessarily
  • [ ] Add --max-old-space-size Node.js flag matching available RAM
  • [ ] Set up memory monitoring with Prometheus or similar
  • [ ] Configure PM2 max_memory_restart as a safety net
  • [ ] Add heap snapshot endpoint (protected) for production diagnostics

Summary

Node.js memory leaks follow predictable patterns: unbounded caches, orphaned event listeners, closures over large objects, and long-lived async operations. The detection method is always the same: take heap snapshots before and after load, compare object counts in Chrome DevTools.

Use clinic.js or 0x for CPU profiling and v8.writeHeapSnapshot() for memory profiling. Set up Prometheus memory metrics in production and configure max_memory_restart in PM2 as a safety net while you investigate.

The goal isn't to prevent GC — it's to stop accidentally preventing GC by holding references you don't need.


Level Up Your Dev Workflow

Found this useful? Explore DevPlaybook — cheat sheets, tool comparisons, and hands-on guides for modern developers.

🛒 Get the DevToolkit Starter Kit on Gumroad — 40+ browser-based dev tools, source code + deployment guide included.
