Hasan Ashab

From 1.2GB to 54MB: My Docker Image Went on a Diet

When I first containerized my Node.js app, I felt pretty good about myself. I had a Dockerfile, I built it, and it worked.

Then I checked the size.

1.2GB. For a single Node.js service.

That’s when reality hit me. My image wasn’t lean—it was obese. It slowed down builds, bloated my CI/CD pipeline, took forever to push to the registry, and ate storage like there was no tomorrow.

So, I put my Docker image on a strict diet. After a few rounds of optimizations, it went from 1.2GB → 250MB → 54MB.

Here’s the story of how I cut the fat—and how you can too.

Step 1: The Heavyweight Start

Here’s what my original Dockerfile looked like:

FROM node:16

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

CMD ["node", "server.js"]

Looks innocent, right? But it had several problems:

  • node:16 is Debian-based and heavy (~350MB compressed, closer to 900MB unpacked).
  • npm install pulled in everything: dev and production dependencies alike.
  • No .dockerignore, so logs, git history, and node_modules sneaked into the image.

The result? A 1.2GB monster that slowed everything down.
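
Want to see the damage on your own machine? docker images will show it (my-app below is just a placeholder for whatever you tagged your image as):

docker images my-app --format "{{.Repository}}:{{.Tag}}  {{.Size}}"
# my-app:latest  1.2GB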

Step 2: Choosing a Leaner Base

The first fix was swapping node:16 for node:16-alpine.

FROM node:16-alpine

That one-line change cut my image down to ~250MB.

Lesson: Your base image choice can make or break your build.

⚠️ Caveat: Alpine uses musl instead of glibc. If your app has native modules (sharp, bcrypt, canvas), you may need extra packages.
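
If you hit that, the usual fix is installing the node-gyp toolchain in the build stage. A minimal sketch (exact packages vary by module):

FROM node:16-alpine

# Only needed when npm has to compile native addons on Alpine
RUN apk add --no-cache python3 make g++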

Step 3: Multi-Stage Builds

My app uses TypeScript, so I had build tools sitting inside the final image. Big mistake. They added hundreds of MBs I didn’t need in production.

Enter multi-stage builds:

# Stage 1: Builder
FROM node:16-alpine AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Runtime
FROM node:16-alpine

WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --only=production

CMD ["node", "dist/server.js"]

Now, the final image contains only:

  • Compiled JavaScript (dist/)
  • Production dependencies

No dev dependencies. No build cache. No clutter.

This dropped my image to ~120MB.
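
A nice side effect of multi-stage builds: you can build and inspect just the first stage while debugging (the tag here is only an example):

docker build --target builder -t my-app:builder .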

Step 4: Prune and Ignore Junk

Another culprit: files that had no business being in production.

I added a .dockerignore:

node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
.gitignore
*.md
tests

And I cleaned up caches in the Dockerfile:

RUN npm ci --only=production \
    && npm cache clean --force \
    && rm -rf /tmp/*

End result: no accidental junk, no wasted MBs.
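
A quick way to sanity-check your .dockerignore: the classic builder prints the build context size at the start of every build (BuildKit phrases it as "transferring context" instead). The number below is illustrative:

docker build -t my-app .
# Sending build context to Docker daemon  2.048MB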

Step 5: Minimize Layers

At first, I had a Dockerfile with multiple RUN statements:

RUN apk add --no-cache python3
RUN npm ci --only=production
RUN npm cache clean --force

Each RUN adds a layer. I combined them into one:

RUN apk add --no-cache python3 \
    && npm ci --only=production \
    && npm cache clean --force

This small tweak shaved off ~15MB. Not huge, but every MB counts when you’re pulling images in production.
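
If long && chains bother you, newer Dockerfile syntax (BuildKit, with the docker/dockerfile:1 syntax directive) supports heredocs, which keep everything in a single layer while staying readable. A minimal sketch:

# syntax=docker/dockerfile:1
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN <<EOF
set -e  # heredocs don't fail fast by default
apk add --no-cache python3
npm ci --only=production
npm cache clean --force
EOF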

Step 6: Measuring and Iterating

The key to trimming images is measuring:

docker images
docker history <image>

With docker history, I saw exactly which layer was eating space and optimized from there.
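
The output is easier to scan with a format string that shows just the size and the instruction that created each layer (my-app is a placeholder):

docker history my-app --format "{{.Size}}\t{{.CreatedBy}}"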

Final Weight Check

  • Original: 1.2GB
  • After switching to Alpine: ~250MB
  • After multi-stage builds: ~120MB
  • After .dockerignore + cache cleanup: 54MB 🎉

That’s a ~95% reduction. Pulls went from minutes to seconds, and CI/CD pipelines stopped crawling.

Lessons Learned

  1. Pick the right base image – Defaults are rarely optimal.
  2. Multi-stage builds are gold – Keep dev tools out of production.
  3. Use .dockerignore religiously – Don’t ship junk.
  4. Prune aggressively – Caches, logs, temp files… delete them.
  5. Measure constantly – Know what’s eating space before fixing it.

Conclusion

Cutting Docker image size isn’t just about bragging rights—it’s about faster deploys, lower registry costs, and fewer headaches.

My Node.js image went on a diet and lost 1.1GB, and I’ll never go back to lazy Dockerfiles again.

If your containers are bloated, trust me: a few tweaks can make them featherweight.

So… is your Docker image on a healthy diet?


📬 Contact

If you’d like to connect, collaborate, or discuss DevOps, feel free to reach out.
