A journey of turning a friend's local project into a production-ready container.


A friend of mine recently finished building an awesome portfolio website with React (the front end of the MERN stack). It looked great on their local machine, but they were struggling to deploy it efficiently.
As someone diving deep into DevOps and Cloud Engineering, I saw this as the perfect opportunity to get my hands dirty. I offered to containerize the application for them.
My goal? Create a Docker image that was secure, fast, and incredibly small. Here is how I went from a massive 1 GB+ image to a ~200 MB production-ready container using Multi-Stage Builds.
The "Naive" Approach
When I first started, my instinct was to just wrap the application in a standard Node.js environment.
Dockerfile
FROM node:22-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "run", "dev"]
The Problem: While this worked, it was terrible for production.
Size: It included the entire node_modules folder, the source code, and development tools. The image size was huge (over 1GB!).
Security: The source code was sitting right there in the container.
Performance: We were using the development server (npm run dev) to serve the site, which isn't optimized for traffic.
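Even before restructuring the build, one quick win is a .dockerignore file: with `COPY . .`, the local node_modules folder and other clutter get sent into the build context, bloating every build. A minimal sketch (these entries are typical for a React project; adjust to your repo):

```
node_modules
dist
.git
.env
npm-debug.log
```

Docker reads this file from the build context root and skips the listed paths during COPY.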
The Solution: Multi-Stage Builds
I decided to refactor the Dockerfile using a multi-stage approach. The concept is simple: use one heavy image to build the app, and a second, lighter image to serve it.
Stage 1: The Builder (Node.js)
I used the Node image strictly to install dependencies and run the build script. This compiles the React code into a static dist folder.
Stage 2: The Runner (Nginx)
For the final image, I ditched Node.js entirely and used Nginx. Nginx is an industry-standard web server that is incredibly lightweight and faster at serving static HTML/CSS/JS files than Node.
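One caveat when serving a React app from Nginx: if the site uses client-side routing (e.g. React Router), the default config returns 404 for deep links because those paths don't exist as files. A common fix is a small custom config; this is a sketch, and the file name nginx.conf is my assumption:

```
# nginx.conf: fall back to index.html so client-side routes resolve
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

You would copy it into the final stage with something like `COPY nginx.conf /etc/nginx/conf.d/default.conf`. For a portfolio without client-side routing, the stock config works fine.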
The Final Dockerfile
Here is the optimized code I ended up with:
Dockerfile
# Stage 1: Build the React Application
FROM node:22-alpine AS builder
WORKDIR /app
# Copy the manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Serve with Nginx
FROM nginx:alpine
# Copy only the build output to replace the default Nginx contents
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The Results
The difference was night and day.
Before: ~900 MB (Node image + Source Code + Node Modules)
After: ~200 MB (Alpine Nginx + Static Assets)
By using multi-stage builds, I cut the image size by nearly 80%, stripping away everything that wasn't strictly necessary for the user to see the website. We didn't ship the tools used to build the house; we just shipped the house.
Key Takeaways
If you are learning Docker, don't stop at "it works." Always ask "is this efficient?" Moving from single-stage to multi-stage builds is one of the easiest wins you can get in terms of performance and security.
Now, my friend's portfolio is ready for the cloud, and I've got another tool in my DevOps arsenal.
Linkedin: https://www.linkedin.com/in/dasari-jayanth-b32ab9367/