If you've ever deployed a Next.js app to Railway (or any memory-constrained host), you've probably hit this: the app builds fine, then gets OOM-killed in production — or vice versa, the build fails because you capped memory too low.
The root issue is that `next build` is memory-hungry (it happily eats 2–4 GB for large apps), but your production runtime needs far less. Setting `NODE_OPTIONS=--max-old-space-size=400` globally breaks builds. Setting it to 4096 globally leaves your dyno vulnerable to OOM.
Here's how to solve it cleanly.
## The Problem
You might be doing this:
```dockerfile
ENV NODE_OPTIONS="--max-old-space-size=4096"
RUN npm run build
CMD ["npm", "start"]
```
This tells Node to use up to 4 GB — great for the build, dangerous at runtime. On Railway's hobby plan (512 MB total), your app will get killed randomly when memory spikes.
Or you tried the opposite:
```dockerfile
ENV NODE_OPTIONS="--max-old-space-size=400"
RUN npm run build
CMD ["npm", "start"]
```
And now your builds are failing with `JavaScript heap out of memory`.
## The Fix: Set NODE_OPTIONS for Build, Override CMD at Runtime
The key insight: `ENV` in a Dockerfile sets a persistent environment variable that is visible at build time *and* at runtime, but flags passed directly on the `node` command line in `CMD` take precedence over `NODE_OPTIONS`.
```dockerfile
# Build stage: give next build the memory it needs
ENV NODE_OPTIONS="--max-old-space-size=4096"
RUN npm run build
RUN npm prune --omit=dev

EXPOSE 3000

# Runtime: cap to 400 MB (Railway hobby has 512 MB total)
CMD ["node", "--max-old-space-size=400", "node_modules/.bin/next", "start"]
```
When Node starts via `CMD`, flags passed directly to the binary take precedence over `NODE_OPTIONS` for the heap size. The `ENV` you set for the build is still present in the container's environment, but it no longer matters: the explicit `--max-old-space-size=400` on the command line wins. Note this works because `CMD` invokes `node` directly and constructs the full command itself, rather than going through `npm start`.
### Why not just unset NODE_OPTIONS?
You can add `ENV NODE_OPTIONS=""` before `CMD`, but that's fragile: if something else sets it later, you're exposed. Passing `--max-old-space-size` directly to the binary is explicit and unambiguous.
## Bonus: Cap Child Processes Separately
If your app spawns a child Node process (like a Remotion render, a worker script, or a sandboxed eval), that child does *not* inherit a cap you passed as a CLI flag; it starts with Node's default heap limit, which is sized from total system memory. On a 512 MB dyno, that default can easily blow past your remaining ~100 MB of headroom.
Pass the flag explicitly when spawning:
```javascript
const { execFile } = require("child_process");
const { promisify } = require("util");
const execFileAsync = promisify(execFile);

const { stdout, stderr } = await execFileAsync(
  process.execPath, // same node binary as parent
  ["--max-old-space-size=300", scriptPath, propsPath, outputPath],
  { timeout: 15 * 60 * 1000, maxBuffer: 50 * 1024 * 1024 } // 15 min, 50 MB output
);
```
Now the parent is capped at 400 MB and the child at 300 MB. The caps sum to 700 MB, more than the 512 MB dyno allows, but a heap cap is a ceiling, not a reservation: in practice the parent sits well below its cap while a child job runs, so typical combined usage stays clear of the OOM line. If both processes do peak simultaneously you can still be killed, so keep the child's cap as low as the workload allows.
## Summary
| Context | Memory limit | How |
|---|---|---|
| `next build` | 4096 MB | `ENV NODE_OPTIONS="--max-old-space-size=4096"` before `RUN npm run build` |
| Next.js runtime | 400 MB | `CMD ["node", "--max-old-space-size=400", ...]` |
| Spawned child process | 300 MB | Pass flag in `execFileAsync` args array |
This pattern works for any memory-constrained Node deployment — Railway, Fly.io, Render, or any Docker host with tight limits.
Tip: Railway shows memory usage graphs per deployment. Check the Memory panel — if you're regularly hitting >80% of your plan limit, it's time to either bump the plan or tune these caps. For most Next.js apps, 400 MB at runtime is plenty unless you're doing heavy in-process work (like video rendering).
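To cross-check Railway's graphs from inside the app, you could log a periodic snapshot (a hypothetical helper, not part of the setup above; `rss` is the number the platform's OOM killer actually sees):

```javascript
const MB = 1024 * 1024;

// Format current memory usage for log output.
function memorySnapshot() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  return `rss=${Math.round(rss / MB)}MB heapUsed=${Math.round(heapUsed / MB)}MB heapTotal=${Math.round(heapTotal / MB)}MB`;
}

// Log once a minute; unref() keeps the timer from holding the process open.
setInterval(() => console.log(memorySnapshot()), 60_000).unref();
```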