I built two Docker images for the same React app this week.
One was 760MB. The other was 94MB. Both loaded the exact same website in the browser.
That 87.6% difference is the story of this post.
The Setup
This is Week 14 of my DevOps Micro Internship. The project: containerize a React app two ways, compare the results, and explain what changed and why it matters.
I am running everything on an Azure VM (Ubuntu 24.04 LTS) with Docker installed automatically via cloud-init.
The React app: https://github.com/pravinmishraaws/my-react-app
First: The .dockerignore
Before writing a single Dockerfile, I created a .dockerignore to keep things that should never be in an image out of the build context:
node_modules
build
.dockerignore
.git
.gitignore
*.md
This is especially important for node_modules. If you do not exclude it, Docker copies your entire local node_modules directory into the build context, which slows every build and defeats the whole purpose of doing a clean npm ci inside the container.
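To build intuition for what those patterns exclude, here is a rough sketch using shell glob matching. It is an illustration only, not Docker's exact matcher (Docker uses Go-style patterns), and the file list is made up:

```shell
# Rough illustration of .dockerignore-style pruning using shell case patterns.
# NOT Docker's real matcher -- just shows which paths survive into the context.
is_ignored() {
  case "$1" in
    node_modules|node_modules/*|build|build/*|.git|.git/*|.dockerignore|.gitignore|*.md)
      return 0 ;;  # matched an ignore pattern: excluded from the context
    *)
      return 1 ;;  # no pattern matched: shipped to the Docker daemon
  esac
}

# Hypothetical project files, for demonstration:
for f in src/App.js node_modules/react/index.js README.md package.json; do
  is_ignored "$f" || echo "$f"
done
# prints:
# src/App.js
# package.json
```

Only the source and manifests make it into the context; node_modules and the markdown files never leave your machine.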
Approach 1: Single-Stage Baseline (Dockerfile.single)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm install -g serve
EXPOSE 3000
CMD ["serve", "-s", "build", "-l", "3000"]
This image does everything in one go. Install dependencies, build the app, serve it. Simple.
The problem is that everything stays. Node.js, npm, all 1,342 packages, build tools. None of that is needed to serve a built React app. But it is all sitting there in the image.
Result: 760MB
Build command:
docker build -f Dockerfile.single -t react-single:latest .
Run command:
docker run -d --name react-single -p 3000:3000 --restart unless-stopped react-single:latest
Approach 2: Multi-Stage Build (Dockerfile)
# Stage 1 - build React app
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2 - serve with nginx
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Two stages. Stage 1 builds the app. Stage 2 starts completely fresh with nginx:alpine and picks up only the finished build/ folder from Stage 1.
Node.js never makes it into the final image. Neither do npm or any of those 1,342 packages.
Result: 94MB
Build command:
docker build -t react-multi:latest .
Run command:
docker run -d --name react-multi -p 80:80 --restart unless-stopped react-multi:latest
The Comparison
docker images
| Image | Size |
|---|---|
| react-single:latest | 760 MB |
| react-multi:latest | 94 MB |
| Reduction | 87.6% |
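The reduction figure is easy to recompute from the two measured sizes:

```shell
# Recompute the size reduction from the two measured image sizes.
single=760   # MB, single-stage image
multi=94     # MB, multi-stage image
awk -v s="$single" -v m="$multi" 'BEGIN { printf "%.1f%%\n", (s - m) / s * 100 }'
# prints: 87.6%
```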
Both containers ran simultaneously. Both loaded the same React app. The only difference was what was inside each image.
Why This Matters Beyond the Numbers
Security: A package that never ships in the final image cannot be exploited. The multi-stage image has no Node.js, no npm, no build tools. An attacker who somehow gets into that container finds a bare nginx server. Nothing else.
CI/CD Speed: Smaller images push and pull faster. If your pipeline deploys 10 times a day and each deployment pulls a 760MB image instead of a 94MB one, that is a significant amount of wasted time over weeks and months.
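To put a rough number on that, here is a back-of-the-envelope calculation. The deployment count and the link speed are hypothetical (10 pulls a day over an effective 100 Mbit/s, i.e. 12.5 MB/s, registry connection), and it ignores registry-side layer caching:

```shell
# Back-of-the-envelope pull-time savings. Assumed numbers:
#   10 deployments/day, ~12.5 MB/s effective registry bandwidth.
awk 'BEGIN {
  mbps = 12.5; deploys = 10
  saved_mb = (760 - 94) * deploys            # MB not transferred per day
  printf "%.0f MB and about %.1f minutes saved per day\n", saved_mb, saved_mb / mbps / 60
}'
# prints: 6660 MB and about 8.9 minutes saved per day
```

Nearly nine minutes a day of pure transfer time, under these assumptions, just from image size.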
Layer Caching: Notice that both Dockerfiles copy package.json and package-lock.json (via package*.json) before the rest of the source code. This is intentional. Docker caches each layer and rebuilds it only when its inputs change. If your dependencies have not changed, Docker skips the npm ci step entirely on the next build and jumps straight to copying your source. This alone can shave minutes off build times.
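For contrast, here is the cache-unfriendly ordering, a hypothetical variant that is not in this repo:

```dockerfile
# Cache-unfriendly ordering (hypothetical anti-pattern): any change to any
# source file invalidates the COPY layer, so npm ci reruns on every build.
COPY . .
RUN npm ci
RUN npm run build
```

With package*.json copied first, only a change to the dependency manifests invalidates the npm ci layer; editing application code leaves the install step cached.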
Running Both Simultaneously
One of the most satisfying parts of this project was running both containers at the same time on the same VM:
docker ps
CONTAINER ID IMAGE PORTS NAMES
837fb6d5def2 react-multi:latest 0.0.0.0:80->80/tcp react-multi
66efa6b350bf react-single:latest 0.0.0.0:3000->3000/tcp react-single
Opening both in the browser showed the same React app on two different ports, proving the multi-stage approach produces an identical result in a fraction of the space.
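Beyond eyeballing the browser, a small script can confirm both containers respond. This helper is my own addition, not part of the repo; it uses python3's urllib so it works even where curl is not installed:

```shell
# Smoke-test helper: report whether each given port answers an HTTP request.
# Uses python3's urllib instead of curl for portability.
smoke_test() {
  for port in "$@"; do
    if python3 -c "import urllib.request, sys; urllib.request.urlopen('http://localhost:' + sys.argv[1], timeout=3)" "$port" >/dev/null 2>&1; then
      echo "port $port: OK"
    else
      echo "port $port: FAILED"
    fi
  done
}

# With both containers running, both ports should report OK:
smoke_test 80 3000
```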
Full Project
GitHub: https://github.com/vivianokose/cloud-vm-docker-deploy
See you in the next one.