Node.js Out of Memory: How to Fix and Find the Leak (Real Commands)
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
This error kills your Node.js process instantly. Here's how to fix it fast and then find the actual cause.
Quick Fix: Get Back Online
```bash
# Start with more heap (here, 4 GB)
node --max-old-space-size=4096 server.js
```

```js
// In PM2 ecosystem.config.js
module.exports = {
  apps: [{
    name: 'myapp',
    script: 'server.js',
    node_args: '--max-old-space-size=4096',
    max_memory_restart: '3584M' // Auto-restart just below the 4 GB heap cap
  }]
}
```
This gets you back online. Now find the leak.
Is It a Leak or a One-Time Spike?
```js
// Add to your app to monitor heap growth over time
setInterval(() => {
  const mem = process.memoryUsage();
  console.log({
    heapUsed: Math.round(mem.heapUsed / 1024 / 1024) + 'MB',
    heapTotal: Math.round(mem.heapTotal / 1024 / 1024) + 'MB',
    rss: Math.round(mem.rss / 1024 / 1024) + 'MB'
  });
}, 5000);
```
- Heap grows steadily and never drops → Memory leak
- Heap spikes during specific operations → One-time allocation issue
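A quick way to tell the two apart is to force a full garbage collection and check whether `heapUsed` drops back down. This is a minimal sketch that assumes you start Node with `--expose-gc`; the `leaked` array just simulates an operation that retains memory.

```javascript
// Run with: node --expose-gc check-leak.js
// If heapUsed stays high even after a forced full GC, memory is
// genuinely retained (a leak); if it drops back, it was a spike.
function heapAfterGC() {
  if (global.gc) global.gc(); // only available with --expose-gc
  return process.memoryUsage().heapUsed;
}

const before = heapAfterGC();

// ...run the suspected operation here (simulated retention below)...
const leaked = [];
for (let i = 0; i < 1e5; i++) leaked.push({ i });

const after = heapAfterGC();
console.log(`retained ~${Math.round((after - before) / 1024)} KB`);
```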
Finding Memory Leaks
Method 1: Heap Snapshot (Best)
```js
const v8 = require('v8');

// Take a snapshot before the suspected leak
v8.writeHeapSnapshot('./heap-before.heapsnapshot');

// Do the operation
await runSuspectedLeakyCode();

// Take a snapshot after
v8.writeHeapSnapshot('./heap-after.heapsnapshot');
```
Open both in Chrome DevTools → Memory tab → Compare snapshots. Look for objects that increased.
Method 2: Clinic.js (Easiest)
```bash
npm install -g clinic
clinic heap -- node server.js
# Browse to http://localhost:3000, then press Ctrl+C
# Opens a flame graph showing memory allocations
```
The 5 Most Common Node.js Memory Leaks
1. Event Listeners Never Removed
```js
// BAD — a new listener is attached on every request and never removed
app.get('/jobs', (req, res) => {
  jobQueue.on('done', (job) => res.json(job)); // accumulates forever
});

// GOOD — remove the listener when done (or use once())
const handler = () => {};
emitter.on('event', handler);
// Later:
emitter.off('event', handler);

// Node warns once an emitter exceeds 10 listeners (the default limit)
require('events').EventEmitter.defaultMaxListeners = 15; // Raise only if the extra listeners are legitimate
process.on('warning', (warning) => {
  if (warning.name === 'MaxListenersExceededWarning') {
    console.log(warning); // You likely have a leak here
  }
});
```
2. Global Caches Without Size Limits
```js
// BAD — grows forever
const cache = {};
app.get('/data/:id', (req, res) => {
  cache[req.params.id] = heavyData; // nothing is ever evicted
  res.json(cache[req.params.id]);
});

// GOOD — use an LRU cache with a hard size limit
const { LRUCache } = require('lru-cache'); // v7+; older versions export the class directly
const cache = new LRUCache({ max: 500 }); // At most 500 items
```
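If you'd rather not add a dependency, the same idea fits in a few lines on top of `Map`, which iterates keys in insertion order. A minimal sketch (no TTLs and no size-by-bytes accounting, which lru-cache handles for you):

```javascript
// Bounded cache with simple LRU eviction. Map preserves insertion
// order, so the first key is always the least recently used one.
class BoundedCache {
  constructor(max = 500) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      this.map.delete(this.map.keys().next().value); // evict oldest
    }
  }
}
```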
3. Closures Holding Large Objects
```js
// BAD
function processLargeFile(data) {
  const processed = transformData(data); // 'processed' can be hundreds of MB
  return () => {
    return processed.summary; // closure keeps all of 'processed' alive
  };
}

// GOOD — capture only what you need
function processLargeFile(data) {
  const summary = transformData(data).summary;
  return () => summary; // only the small summary is retained
}
```
4. Timers Not Cleared
```js
const results = [];

// BAD — the interval runs forever and 'results' grows without bound
function startPolling() {
  setInterval(async () => {
    results.push(await fetchData());
  }, 1000);
}

// GOOD — cap the buffer and return a cleanup function
const MAX_RESULTS = 1000;
function startPolling() {
  const interval = setInterval(async () => {
    const data = await fetchData();
    if (results.length >= MAX_RESULTS) results.shift();
    results.push(data);
  }, 1000);
  return () => clearInterval(interval); // caller must call this when done
}
```
5. Streams Not Destroyed
```js
// BAD — no 'error' handler: a stream error crashes the process, and on
// older Node versions the file descriptor can be left open
const stream = fs.createReadStream(file);
stream.on('data', (chunk) => { /* ... */ });

// GOOD — handle errors and clean up explicitly
const stream = fs.createReadStream(file);
stream.on('data', (chunk) => { /* ... */ });
stream.on('error', (err) => {
  console.error(err);
  stream.destroy();
});
stream.on('close', () => { /* resources released */ });
```
Production Safety Net
```js
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'api',
    script: './src/server.js',
    instances: 'max',
    exec_mode: 'cluster',
    max_memory_restart: '400M', // Restart before the 512 MB heap cap is hit
    node_args: '--max-old-space-size=512',
    env: { NODE_ENV: 'production' }
  }]
};
```
Memory leaks in Node.js are subtle and hard to debug in production. Step2Dev analyzes your heap dumps and error logs to pinpoint the exact leak location — paste your error and get the fix in 60 seconds.