Most Docker tutorials end at the win.
"Look, smaller image! Ship it!" And then you're left alone at 11pm wondering why your perfectly optimized container is crashing in production doing something it did fine before.
This article doesn't do that. We're going through both sides: how I got from 1.58GB to 186MB, every error I hit along the way, and the honest conversation about what Alpine actually takes away from you. Because the shrink is real. But so are the trade-offs.
First, What Even Is a Docker Image?
Your app works on your machine because your machine has Node installed, the right OS, the right dependencies. Someone else's server has none of that. Docker fixes this by packaging your app together with everything it needs to run — the runtime, the OS slice, the dependencies — into a sealed portable unit called an image.
A Dockerfile is the recipe. docker build executes it and produces the image. That image can now run anywhere Docker is installed, identically.
The problem is most beginners write that recipe without thinking about what goes into the package. I learned this the hard way — and I want to save you the 11pm production surprise. So let's do this properly: the win, the errors, and everything the win quietly broke.
The Fat Build
Here's the Dockerfile I started with:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "app.js"]
Clean. Readable. Standard tutorial stuff.
When you build this and check the image size, the number that comes back stops you cold. 1.58 gigabytes. For a Node.js app that runs a simple HTTP server.
Every instruction bakes a layer into that image permanently. RUN npm install alone freezes the entire node_modules tree into one layer. COPY . . stacks your source on top. Every one of those layers is locked inside the image forever.
The problem is not the app. The app is tiny. The problem is node:18. That base image is built on Debian Linux — a full operating system — and ships with compilers, build tools, package managers, debugging utilities, and about 400MB of things you will never use in production. When your npm install runs on top of that, all of it bakes into the final image together.
You are shipping the construction site instead of the finished building.
The .dockerignore vs .gitignore Mistake
Before we go further: this caught me early, and it will catch you too.
.dockerignore and .gitignore are completely separate files.
- .dockerignore tells Docker what not to copy into the build context.
- .gitignore tells Git what not to track.
I had a .dockerignore but no .gitignore. When I pushed to GitHub, my entire node_modules folder went with it — hundreds of files committed to the repo. I had to go back and clean the git history.
Always create both. They often contain the same entries but they serve different tools entirely. Get this right before you build anything else.
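A minimal sketch of setting both up at once (these entries match this article's project; adjust for yours):

```shell
# Write both ignore files. The entries overlap, but each file is read
# by a different tool: Docker reads .dockerignore, Git reads .gitignore.
printf 'node_modules\n.git\n*.log\n.env\n' > .dockerignore
printf 'node_modules\n*.log\n.env\n' > .gitignore
```

printf writes plain UTF-8 bytes in any shell, so this form is safe on Windows (Git Bash or WSL) as well.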
Enter Multi-Stage Builds
The fix is separating your build environment from your runtime environment.
- Build environment needs everything: the full OS, npm, build tools, all of it.
- Runtime environment needs almost nothing: just Node and your app files.
Multi-stage builds let you use both in one Dockerfile, but only ship the second one.
# Stage 1: builder (does the work, never ships)
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Stage 2: runtime (only this becomes your image)
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/app.js ./app.js
COPY --from=builder /app/package.json ./package.json
CMD ["node", "app.js"]
The COPY --from=builder line is the bridge. It reaches back into Stage 1 and pulls only what you specify. Everything else in Stage 1 — the full Debian OS, the compiler tools, the cache — gets discarded and never touches the final image.
Simple idea. But getting there cost me three separate errors.
Error 1: The Empty Dockerfile
ERROR: failed to build: failed to solve: the Dockerfile cannot be empty
I ran docker build before writing anything in the file. The file existed but was empty. Not a deep error — but worth including because it's the kind of thing that makes you feel stupid for ten seconds before you realise it's just a file issue.
Fix: write something in the file before you build it.
Error 2: The NUL Character Ambush
After the fat build succeeded I set up my .dockerignore using PowerShell's echo command:
echo "node_modules" > .dockerignore
echo ".git" >> .dockerignore
echo "*.log" >> .dockerignore
echo ".env" >> .dockerignore
Built again. Got this:
<input>:1:1: invalid character NUL
<input>:1:3: invalid character NUL
<input>:1:5: invalid character NUL
Sixteen lines of it.
PowerShell's echo redirects through Out-File, which in Windows PowerShell 5.1 writes UTF-16 LE with a BOM by default (PowerShell 7 switched the default to UTF-8, but 5.1 is still what most Windows machines launch). Docker's parser expects UTF-8. The invisible encoding header and the null bytes between every character made the entire file unreadable to Docker.
The build still finished because Docker warned and continued — but my .dockerignore was being completely ignored. node_modules was getting copied into the build context on every single build, silently, without telling me.
The fix — always do this on Windows:
"node_modules`n.git`n*.log`n.env" | Out-File -FilePath .dockerignore -Encoding utf8
Or create the file in VS Code and confirm it saves as UTF-8. Never trust PowerShell echo for config files that other tools will read.
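If you want to verify the result, the file utility (available in Git Bash, WSL, and any Linux shell) reports the encoding directly. A quick check, recreating the file with printf first:

```shell
# printf writes raw bytes with no BOM, so Docker's UTF-8 parser is happy.
printf 'node_modules\n.git\n*.log\n.env\n' > .dockerignore

# Inspect the encoding; a broken file would report UTF-16 here.
file .dockerignore
```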
Error 3: The builder Name Collision (The Sneaky One)
This is the one that will catch most beginners.
I wrote my multi-stage Dockerfile but forgot AS builder on my first FROM statement:
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/app.js ./app.js
COPY --from=builder /app/package.json ./package.json
CMD ["node", "app.js"]
Built it. Got this:
ERROR: failed to build: failed to solve: builder: failed to resolve
source metadata for docker.io/library/builder:latest: pull access
denied, repository does not exist
Docker looked at --from=builder and thought I was referencing an external Docker Hub image called builder. It went to Docker Hub looking for library/builder:latest. That image does not exist.
--from=builder only works when builder is an alias defined with AS builder in an earlier FROM statement. Without it, Docker has nothing to reference locally and defaults to treating builder as an external image name.
The fix:
# AS builder here is not optional
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Stage 2: no alias needed, this is the final image
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/app.js ./app.js
COPY --from=builder /app/package.json ./package.json
CMD ["node", "app.js"]
AS builder on the first FROM gives Stage 1 a name. --from=builder references that name. Without it, Docker goes looking on the internet for something that doesn't exist.
The Result
| Image | Disk Usage | Content Size |
|---|---|---|
| myapp:fat | 1.58GB | 397MB |
| myapp:slim | 186MB | 45.6MB |
88% reduction. Same app.
The slim image history only contains COPY node_modules, COPY app.js, COPY package.json. That's it. The entire Debian OS, the build tools, the npm cache — none of it made it through. COPY --from=builder is surgical. You get exactly what you name and nothing else.
Now The Part Most Articles Skip
The slim image runs fine for a basic Node app. But "the app is the same" is only true if your app doesn't touch anything Alpine removed.
Both images produce the same output. Same server on port 3000. Good so far.
Now run this:
docker run --rm myapp:slim bash
Bash does not exist in Alpine. Alpine ships only BusyBox sh. Any script in your app or CI pipeline that calls bash will crash. And the error message isn't clean: the official Node image's entrypoint script can't find a bash executable, so it falls back to handing your argument to Node, and Node throws a module-not-found stack trace while trying to load bash as a script. That's a deeply confusing error if you don't know what you're looking at. When you need a shell in a slim container, run docker run --rm -it myapp:slim sh instead.
Here's what else is missing:
glibc: Alpine uses musl libc instead. This is the silent killer. Native npm packages like bcrypt, sharp, canvas, and sqlite3 compile against the libc of the image that builds them, which is glibc when Stage 1 runs on Debian. Copy those binaries into Alpine and they break, with no warning at build time. The error surfaces at runtime in production, the moment a user triggers that code path.
npm: You didn't copy it into Stage 2. You cannot run npm install inside a running slim container.
curl, wget, ps: Your standard debugging tools. When something goes wrong in a running Alpine container you have almost nothing to work with.
apt-get: Alpine uses apk instead, with a smaller package selection than Debian's.
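You can scan your own dependency tree for this risk. Native addons are compiled into .node files under node_modules, and those binaries are tied to the libc of the image that built them. A sketch, using a hypothetical bcrypt-style layout so it runs anywhere:

```shell
# Simulate what a native package leaves behind after npm install:
# a compiled .node binary somewhere under node_modules.
mkdir -p node_modules/bcrypt/lib/binding
touch node_modules/bcrypt/lib/binding/bcrypt_lib.node

# The audit itself: any hit means compiled code that may not load
# against Alpine's musl libc.
find node_modules -name '*.node' -type f
```

On a real project, skip the two setup lines and run only the find; an empty result is a point in Alpine's favour.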
So When Is Alpine Actually Safe?
Alpine is safe when:
- Your app is pure JavaScript with no native compiled dependencies
- You have no bash scripts in your startup or CI process
- You don't need to exec into running containers to debug
- Your node_modules are all JavaScript packages: run npm install and check for node-gyp in the output. That flags a native package.
Alpine is risky when:
- You use bcrypt for password hashing
- You use sharp for image processing
- You use canvas, sqlite3, puppeteer, or anything that compiles C++ bindings
- Your Dockerfile or startup scripts reference bash anywhere
If you need native packages but still want a smaller image, use node:18-slim instead of node:18-alpine. It's Debian-based so it keeps glibc, but strips out the heavy development tools. You'll land around 300–400MB — not as dramatic as Alpine, but safe for production.
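Only the runtime FROM line changes; a sketch of the slim variant of the same multi-stage file:

```dockerfile
# Stage 1 is unchanged: the full Debian image does the install
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Stage 2: a Debian-based slim runtime keeps glibc, so native addons
# compiled in Stage 1 still load
FROM node:18-slim
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/app.js ./app.js
COPY --from=builder /app/package.json ./package.json
CMD ["node", "app.js"]
```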
The Decision Framework Before You Slim Any Image
1. Do any of my npm packages use node-gyp?
npm install
Check the output for gyp. If it appears, do not use Alpine.
2. Do any of my scripts call bash?
grep -r "#!/bin/bash" .
If yes, switch to sh or do not use Alpine.
3. Do I need to exec into running containers for debugging?
If yes, use node:18-slim instead.
4. Is CI pipeline speed a priority?
Smaller images pull faster in every environment. If you're running 50 builds a day the difference between 1.58GB and 186MB compounds significantly.
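Checks 1 and 2 can be folded into one small audit script. A sketch, assuming it runs from the app root after npm install has populated node_modules:

```shell
# Pre-Alpine audit: warn on compiled native addons and on bash usage.
# No output means neither risk was detected.
if [ -n "$(find node_modules -name '*.node' -type f 2>/dev/null)" ]; then
  echo "WARN: compiled native addons found (glibc/musl risk on Alpine)"
fi
if [ -n "$(grep -rl '#!/bin/bash' --include='*.sh' . 2>/dev/null)" ]; then
  echo "WARN: bash shebangs found (Alpine ships only sh)"
fi
```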
The Full Working Dockerfile
# Stage 1: build environment (discarded after build)
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Stage 2: runtime environment (this is what ships)
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/app.js ./app.js
COPY --from=builder /app/package.json ./package.json
CMD ["node", "app.js"]
Build and verify:
# Build the slim image
docker build -f slim/Dockerfile -t myapp:slim .
# Compare sizes
docker images myapp
# Confirm the app runs
docker run --rm myapp:slim node app.js
# Confirm what is missing
docker run --rm myapp:slim bash
Full repo with both Dockerfiles, the app, and all screenshots:
github.com/Arbythecoder/docker-optimization
What I Actually Learned
Going from 1.58GB to 186MB felt like a win. It is a win — for the right app.
But the real skill isn't knowing how to shrink an image. It's knowing whether to shrink it, what you're trading away, and how to verify nothing broke before it reaches production.
Most tutorials give you the happy path. Production gives you everything else.
This article is part of my Docker for Production ebook series. Ebook 4 covers the complete pre-deployment checklist for containerized Node.js apps — including the full audit framework before you slim any production image. Follow me on DEV.to, LinkedIn and X to get notified when it drops.
Screenshots for reference are included in the repo linked above.