TL;DR: I spent 3 days trying to Dockerize my MERN app "the right way." It crashed on deployment, leaked env vars, had a 1.2GB image, and React couldn't talk to Express. This post is the complete story + the final setup that works. Copy-paste ready.
🧠 Why I even bothered with Docker
It was a Friday evening. My MERN app worked perfectly on my machine.
I pushed to the VPS. It broke immediately.
Node version mismatch. Then MongoDB connection string issues. Then React's VITE_API_URL was pointing to localhost in production. I spent 6 hours fixing things that had nothing to do with my actual app.
That Sunday, I decided: never again. Docker was the answer — one environment everywhere, no surprises on deployment.
What followed was 3 days of learning, breaking things, and eventually getting it right. Here's the complete story.
🏗️ What we're building
A production-ready Docker setup for a full MERN stack app with:
- MongoDB — running in a container (with a volume for data persistence)
- Express + Node.js — the API server
- React (Vite) — the frontend, built and served via Nginx
- docker-compose — orchestrating all three together
Here's what the final folder structure looks like:
my-mern-app/
├── client/ # React + Vite frontend
│ ├── src/
│ ├── Dockerfile
│ └── nginx.conf
├── server/ # Express + Node backend
│ ├── src/
│ ├── Dockerfile
│ └── .dockerignore
├── docker-compose.yml
├── docker-compose.prod.yml
└── .env
💥 Mistake #1 — My first Dockerfile was a disaster
My first attempt at a server Dockerfile looked like this:
# ❌ my first (terrible) attempt
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "src/index.js"]
Three problems with this:
- `node:latest` — pulls a different version every time you build. Your dev build and prod build can silently run different Node versions.
- No `.dockerignore` — I was copying `node_modules` (800MB+) into the image and then overwriting it with `npm install`. Wasteful and slow.
- Single stage — the final image contained dev dependencies, source maps, everything. My image was 1.2GB.
Here's what I replaced it with:
✅ The Server Dockerfile — multi-stage, lean, production-ready
# server/Dockerfile
# ---- Stage 1: Install dependencies ----
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
# ---- Stage 2: Production image ----
FROM node:20-alpine AS runner
WORKDIR /app
# Create non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=deps /app/node_modules ./node_modules
COPY src/ ./src/
COPY package.json ./
# Switch to non-root user
USER appuser
EXPOSE 5000
CMD ["node", "src/index.js"]
Key decisions here:
- `node:20-alpine` — pinned version; the Alpine base keeps the image tiny (~180MB vs 1.2GB)
- Multi-stage build — only production deps and source code end up in the final image
- Non-root user — running as `root` inside a container is a security risk, and a dedicated user is standard practice now
- `npm ci` instead of `npm install` — faster, deterministic, respects `package-lock.json` exactly
🔥 Mistake #2 — React was calling localhost in production
This one genuinely confused me for half a day.
My React code had this:
// ❌ hardcoded — breaks in every environment except local
const res = await fetch('http://localhost:5000/api/products');
Even after I "fixed" it with an env variable:
const res = await fetch(`${import.meta.env.VITE_API_URL}/api/products`);
...it still broke. Because VITE_API_URL was empty in the Docker build. Vite bakes env vars at build time, not runtime. The container didn't have access to my .env file during the build stage.
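To see what "baked at build time" actually means, here's a toy sketch — my own simplification, not Vite's real implementation — of the static replacement a bundler performs on `import.meta.env.VITE_*` references during `npm run build`:

```javascript
// Toy sketch (hypothetical) of build-time env baking: the bundler statically
// replaces each import.meta.env.VITE_* reference with a literal. If the var
// is unset during the build, "undefined" is baked in — no runtime env fixes it.
function bakeEnv(source, env) {
  return source.replace(
    /import\.meta\.env\.(VITE_\w+)/g,
    (_, name) => JSON.stringify(env[name])
  );
}

const source =
  "const res = await fetch(`${import.meta.env.VITE_API_URL}/api/products`);";

// Build WITH the var present → the URL is baked into the bundle:
console.log(bakeEnv(source, { VITE_API_URL: 'https://api.example.com' }));
// Build WITHOUT it → the literal undefined is baked in permanently:
console.log(bakeEnv(source, {}));
```

That second output is exactly what my container was shipping: the bundle had `undefined` frozen into it because the var was missing at build time.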
The fix was passing the build arg explicitly:
# client/Dockerfile
# ---- Stage 1: Build React app ----
FROM node:20-alpine AS builder
WORKDIR /app
# Accept build arg
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# ---- Stage 2: Serve with Nginx ----
FROM nginx:alpine AS runner
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And in docker-compose.yml, pass the arg at build time:
client:
build:
context: ./client
args:
VITE_API_URL: ${VITE_API_URL}
🌐 The Nginx config — the piece everyone forgets
React is a SPA. If you navigate to /dashboard and refresh, Nginx tries to find a file called dashboard — it doesn't exist, and you get a 404.
This tiny nginx.conf fixes that:
# client/nginx.conf
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# ✅ This is the critical line — sends all routes to React
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API calls to backend — no CORS issues
location /api {
proxy_pass http://server:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
The /api proxy block is the real win here. React calls /api/products — Nginx forwards it to the Express container internally. No CORS headers needed. No http://localhost:5000 in your React code.
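Once that proxy is in place, the frontend code can drop absolute URLs entirely. A minimal sketch — the `apiPath` helper is my own illustration, not from the app above:

```javascript
// Hypothetical helper: with the Nginx /api proxy, the frontend only ever
// needs same-origin relative paths — Nginx routes them to Express.
const API_PREFIX = '/api';

// Normalize a path and prefix it, so callers can pass '/products' or 'products'.
function apiPath(path) {
  return `${API_PREFIX}${path.startsWith('/') ? path : `/${path}`}`;
}

// In a component, this replaces any hardcoded host:
// const res = await fetch(apiPath('/products')); // hits /api/products
```

With relative paths like this, `VITE_API_URL` only really matters when the API lives on a different domain than the frontend.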
🐳 The docker-compose.yml — full orchestration
# docker-compose.yml (development)
version: '3.9'
services:
mongo:
image: mongo:7
container_name: mern_mongo
restart: unless-stopped
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
MONGO_INITDB_DATABASE: ${MONGO_DB_NAME}
volumes:
- mongo_data:/data/db
ports:
- "27017:27017"
networks:
- mern_network
server:
build:
context: ./server
container_name: mern_server
restart: unless-stopped
environment:
NODE_ENV: development
PORT: 5000
MONGO_URI: mongodb://${MONGO_ROOT_USER}:${MONGO_ROOT_PASSWORD}@mongo:27017/${MONGO_DB_NAME}?authSource=admin
JWT_SECRET: ${JWT_SECRET}
ports:
- "5000:5000"
depends_on:
- mongo
networks:
- mern_network
volumes:
- ./server/src:/app/src # hot reload in dev
client:
build:
context: ./client
args:
VITE_API_URL: ${VITE_API_URL}
container_name: mern_client
restart: unless-stopped
ports:
- "80:80"
depends_on:
- server
networks:
- mern_network
volumes:
mongo_data:
networks:
mern_network:
driver: bridge
A few things worth explaining:
- `mongo:27017` — inside Docker, containers talk to each other by service name, not `localhost`. Your Express `MONGO_URI` should use `mongo` (the service name), not `localhost`.
- `depends_on` — ensures MongoDB starts before Express. Note: it doesn't wait for Mongo to be ready, just started. More on that below.
- `mongo_data` volume — your database persists across container restarts. Without this, every `docker-compose down` wipes your data.
- `mern_network` — all services share a private network. Nothing is exposed to the internet except what you explicitly map with `ports`.
🔐 The .env file — never commit this
# .env — add this to .gitignore immediately
MONGO_ROOT_USER=maulik
MONGO_ROOT_PASSWORD=supersecretpassword123
MONGO_DB_NAME=mernapp
JWT_SECRET=your-very-long-random-jwt-secret-here
VITE_API_URL=http://localhost:80
# .gitignore
.env
.env.*
!.env.example
Always commit a .env.example with placeholder values so teammates know what variables are needed — but never the actual .env.
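One low-effort way to keep that `.env.example` honest is to derive it from the real file with the values stripped. A sketch that assumes simple single-line `KEY=value` entries — the `/tmp/demo.env` file here is just for illustration:

```shell
# Sketch: generate .env.example from .env by blanking out every value.
# Assumes simple KEY=value lines (no multi-line values).
# Using a throwaway file so the example is self-contained:
printf 'MONGO_ROOT_USER=maulik\nJWT_SECRET=supersecret\n' > /tmp/demo.env

# Replace everything after the first '=' with nothing:
sed 's/=.*/=/' /tmp/demo.env > /tmp/demo.env.example
cat /tmp/demo.env.example
# MONGO_ROOT_USER=
# JWT_SECRET=
```

In a real repo you'd run `sed 's/=.*/=/' .env > .env.example` from the project root whenever you add a new variable.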
💥 Mistake #3 — MongoDB wasn't ready when Express started
Even with depends_on: mongo, Express would start and immediately try to connect to MongoDB — which was still initializing. Result: crash.
The fix is a retry loop in your Express server:
// server/src/db.js
const mongoose = require('mongoose');
const connectDB = async (retries = 5) => {
while (retries) {
try {
await mongoose.connect(process.env.MONGO_URI);
console.log('✅ MongoDB connected');
return;
} catch (err) {
retries--;
console.log(`MongoDB not ready — retrying... (${retries} attempts left)`);
if (retries === 0) {
console.error('❌ MongoDB connection failed after all retries');
process.exit(1);
}
// Wait 5 seconds before retrying
await new Promise(res => setTimeout(res, 5000));
}
}
};
module.exports = connectDB;
// server/src/index.js
const express = require('express');
const connectDB = require('./db');
const app = express();
connectDB(); // handles its own retries
app.use(express.json());
// ... your routes
app.listen(process.env.PORT || 5000, () => {
console.log(`🚀 Server running on port ${process.env.PORT || 5000}`);
});
🚀 The production docker-compose override
For production, you don't want source volume mounts, you want smaller images, and you want proper restart policies:
# docker-compose.prod.yml
version: '3.9'
services:
server:
build:
context: ./server
target: runner # use the production stage
environment:
NODE_ENV: production
volumes: [] # no source mounts in prod
client:
build:
context: ./client
target: runner
args:
VITE_API_URL: https://yourdomain.com
Deploy with:
# Development
docker-compose up --build
# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build -d
📊 Before vs after — the numbers
| Metric | Before Docker | After Docker |
|---|---|---|
| Setup time on new machine | ~45 minutes | `docker-compose up` — 3 min |
| "Works on my machine" issues | Every deployment | Zero |
| Image size (server) | — | 187MB (was 1.2GB before multi-stage) |
| MongoDB data loss on restart | Yes | No (volume) |
| CORS issues | Constant | Gone (Nginx proxy) |
| Env var leaks | Possible | Contained |
🛠️ The .dockerignore files — don't skip these
# server/.dockerignore
node_modules
npm-debug.log
.env
.env.*
.git
.gitignore
README.md
# client/.dockerignore
node_modules
npm-debug.log
dist
.env
.env.*
.git
Without a `.dockerignore`, Docker ships your local `node_modules` to the build daemon as part of the build context — even though the image never needs it. This makes builds painfully slow.
🧪 Quick commands to know
# Start everything
docker-compose up --build
# Start in background
docker-compose up -d --build
# View logs
docker-compose logs -f server
docker-compose logs -f client
# Stop everything (keeps volumes)
docker-compose down
# Stop + wipe database volume (careful!)
docker-compose down -v
# Rebuild one service only
docker-compose up --build server
# Get a shell inside a running container
docker exec -it mern_server sh
docker exec -it mern_mongo mongosh
💡 3 things I'd tell myself before starting
1. Containers talk by service name, not localhost.
http://mongo:27017 not http://localhost:27017. This will confuse you exactly once — now it won't confuse you at all.
2. Vite bakes env vars at build time.
Pass VITE_* vars as Docker build args, not runtime env vars. Runtime env vars are for your Node server, not your compiled React bundle.
3. Always add a MongoDB retry loop.
depends_on is not a health check. MongoDB takes a few seconds to be ready — your Express server needs to handle that gracefully.
🎯 What's next
With this setup you're ready for:
- Adding SSL with Let's Encrypt + Nginx (next blog)
- CI/CD pipeline with GitHub Actions that builds and pushes your Docker image automatically
- Kubernetes if you eventually need to scale beyond a single VPS
The foundation is solid. Everything else builds on top of this.
If you made it this far — you're ahead of most MERN devs who are still debugging Node version mismatches on their VPS at 2am. 🙌
Drop your questions in the comments — I check them all.
