There is a moment every developer dreads. You finally dockerize your Node.js app, run docker images, and stare at a 1.2GB image wondering how a simple web server got that fat.
I have been there. And multi-stage builds changed everything.
This is the story of how I took a bloated Docker image from 1.2GB down to 180MB — without sacrificing developer experience or runtime functionality.
The Problem With the Naive Approach
Most tutorials teach you to write a Dockerfile like this:
```dockerfile
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
This works. It really does. But there is a hidden cost.
That node:20 base image weighs in at around 1.1GB on its own. Then you pile on your node_modules — including all the devDependencies you need to compile TypeScript, run tests, or bundle assets. By the time Docker finishes building, you have a production image carrying hundreds of megabytes of tools it will never use at runtime.
You are shipping your entire kitchen just to deliver a pizza.
Enter Multi-Stage Builds
Multi-stage builds let you use multiple FROM statements in a single Dockerfile. Each stage is isolated — it has its own filesystem and only keeps what you explicitly copy forward.
The pattern works like this: use a fat image to build, then copy only the artifacts you need into a lean runtime image.
Here is the before and after for a TypeScript Node.js app:
Before (single stage):
```dockerfile
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
Image size: 1.24GB
After (multi-stage):
```dockerfile
# Stage 1: Builder
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --include=dev
COPY . .
RUN npm run build

# Stage 2: Production runner
FROM node:20-alpine AS runner
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
Image size: 182MB
Same app. Same behavior. 85% smaller.
Breaking Down What Just Happened
Stage 1: Builder
The builder stage uses node:20-alpine — already a leaner base than the default node:20. We install all dependencies (including dev) and compile the TypeScript source to JavaScript in dist/.
Notice the AS builder label. That name is how we reference this stage later.
Stage 2: Runner
The runner starts fresh. A clean node:20-alpine slate. We:
- Install only production dependencies with `npm ci --omit=dev`
- Copy the compiled output from the builder stage using `COPY --from=builder`
The builder stage gets discarded. Its node_modules, TypeScript source, devDependencies — all gone. Docker does not include them in the final image at all.
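You can verify the discard yourself. A quick sketch, assuming the final image was tagged `myapp:prod` as in the build commands later in this post:

```shell
# List the layers in the final image: only the runner stage's
# instructions appear; the builder's layers are not present.
docker history myapp:prod

# Peek inside: the image holds dist/ and production node_modules,
# but no TypeScript source or devDependency tooling.
docker run --rm myapp:prod ls /app
```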
Going Further: The Distroless Approach
If 182MB still feels heavy, you can go even leaner with Google's Distroless images:
```dockerfile
# Stage 1: Builder
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --include=dev
COPY . .
RUN npm run build
# Prune devDependencies so only production node_modules get copied forward
RUN npm ci --omit=dev

# Stage 2: Distroless production image
FROM gcr.io/distroless/nodejs20-debian12 AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
# The distroless image's entrypoint is already node, so CMD is just the script
CMD ["dist/index.js"]
```
Distroless images contain only your application runtime — no shell, no package manager, no system utilities. This gets you:
- Smaller attack surface for security
- Smaller final image (often under 120MB for Node.js apps)
- No interactive shell means attackers cannot easily exec into a running container
The trade-off: debugging becomes harder without shell access. Use this for production, not local dev.
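That trade-off can be softened: Distroless publishes `:debug` variants of its images that add a busybox shell for troubleshooting. A sketch, assuming an image tagged `myapp:distroless` (the tag is illustrative):

```shell
# Temporarily swap the runtime base to the debug variant:
#   FROM gcr.io/distroless/nodejs20-debian12:debug AS runner
# then rebuild and open a busybox shell inside the container:
docker run --rm -it --entrypoint=sh myapp:distroless
```

Keep the debug variant out of your deployed image; use it only while investigating.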
A Real-World Pattern: Three Stages
In production projects, I often use three stages: dependencies, builder, and runner. This pattern improves layer caching dramatically.
```dockerfile
# Stage 1: Install all dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Stage 2: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: Production
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
Key things to notice here:
- `USER node` runs the process as a non-root user. By default, Docker containers run as root, which is a security risk.
- `ENV NODE_ENV=production` tells Node.js and many libraries to skip dev-only behavior and use production settings.
- The `deps` stage caches `node_modules` separately. If your source code changes but `package.json` does not, Docker reuses the cached deps layer and rebuilds only what changed.
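The caching win is easy to observe. A sketch, assuming the three-stage Dockerfile above and a TypeScript entry point at `src/index.ts` (the path is illustrative):

```shell
# First build: every stage runs, including npm ci in the deps stage.
docker build -t myapp:dev .

# Change a source file without touching package.json, then rebuild:
touch src/index.ts
docker build -t myapp:dev .
# BuildKit now reports the deps stage steps as CACHED, so npm ci is
# skipped and only the build and runner stages re-run.
```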
Practical Tips for Multi-Stage Builds
Use `.dockerignore` aggressively. Before building, make sure Docker is not copying unnecessary files into the build context:

```
node_modules
.git
.env
coverage
dist
*.log
.DS_Store
```
Name your stages semantically. Use `AS builder`, `AS runner`, `AS test` rather than relying on positional index numbers. Named stages make `COPY --from=` statements self-documenting.
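Named stages also make it cheap to bolt on extra stages. A sketch of a test stage, assuming the `builder` stage from the three-stage example and an `npm test` script in your package.json:

```dockerfile
# Stage: Test (reuses the builder's installed deps and compiled output)
FROM builder AS test
# The build fails if the suite fails; this stage never ships to production
RUN npm test
```

In CI you would build it with `docker build --target test .`; the production image is unaffected.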
Target specific stages during development. You can build up to a specific stage using `--target`:

```shell
# Build only up to the builder stage, useful for debugging build issues
docker build --target builder -t myapp:debug .

# Build the final production image
docker build --target runner -t myapp:prod .
```
Check your actual image size after every significant change:
```shell
docker images | grep myapp
```
It is easy to accidentally bloat the final image by copying the wrong directory. Measure, do not assume.
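If you want exact numbers rather than eyeballing `docker images` output, `docker image inspect` prints the size in bytes (the `myapp:prod` tag comes from the build commands above):

```shell
# Exact size in bytes of one image
docker image inspect myapp:prod --format='{{.Size}}'

# Repository, tag, and human-readable size for every myapp image
docker images myapp --format '{{.Repository}}:{{.Tag}}\t{{.Size}}'
```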
The Numbers Side by Side
| Approach | Base Image | Final Size |
|---|---|---|
| Single stage (node:20) | node:20 | ~1.24GB |
| Single stage (alpine) | node:20-alpine | ~420MB |
| Multi-stage (alpine) | node:20-alpine | ~182MB |
| Multi-stage (distroless) | distroless/nodejs20 | ~115MB |
Each step meaningfully shrinks what gets pushed to your registry, pulled by your CI/CD pipeline, and loaded into your Kubernetes nodes.
Why This Matters Beyond Disk Space
Smaller images are not just about saving storage costs (though that adds up in registries). They have real operational impact:
- Faster deployments — your CI/CD pipeline spends less time pushing and pulling layers
- Faster pod startup in Kubernetes when a node does not have the image cached
- Reduced attack surface — fewer packages in the final image means fewer potential vulnerabilities
- Lower egress costs if you pull images across regions or from external registries
A team I worked with reduced their average deployment time from 4 minutes to 90 seconds just by switching to multi-stage builds. That is not a micro-optimization — that is a meaningful quality-of-life improvement across dozens of daily deploys.
Wrapping Up
Multi-stage builds are one of those Docker features that feel like a revelation the first time you use them. The concept is simple: build in a fat container, run in a lean one, throw away everything in between.
If you are still shipping single-stage Docker images to production, today is a good day to change that. Your registry bill, your deploy times, and your security team will all thank you.
Start with the two-stage pattern. Add a dedicated deps stage once you want better caching. Switch to distroless when you are ready to get serious about security. Each step is incremental and reversible.
Go measure your current image size right now. I am willing to bet there is room to improve.