If you’re working with Node.js microservices and still running everything manually on your machine - this post is for you.
In this guide, we’ll Dockerize multiple backend microservices and run them together using Docker Compose, the same way it’s done in real-world projects.
No frontend. No fluff. Just clean backend architecture.
Why Docker for Microservices?
Microservices mean:
- Multiple Node.js apps
- Different ports
- Shared infrastructure (DB, cache, queues)
Without Docker:
- “Works on my machine” issues
- Manual setup every time
- Painful onboarding
With Docker:
- Same environment everywhere
- One command to start everything
- Easy scaling & CI/CD
What We’ll Build
Backend-only setup with:
- auth-service
- user-service
- order-service
- MongoDB
- Docker + Docker Compose
Each service:
- Runs in its own container
- Has its own Dockerfile
- Communicates over Docker network
Folder Structure
backend/
│
├── auth-service/
│ ├── Dockerfile
│ ├── package.json
│ ├── server.js
│ └── .dockerignore
│
├── user-service/
│ ├── Dockerfile
│ ├── package.json
│ ├── server.js
│ └── .dockerignore
│
├── order-service/
│ ├── Dockerfile
│ ├── package.json
│ ├── server.js
│ └── .dockerignore
│
├── docker-compose.yml
└── .env
Common Dockerfile (Node.js Service)
Each service uses the same Dockerfile pattern.
FROM node:18-alpine

WORKDIR /app

# Copy the dependency manifests first so the install layer is cached between builds
COPY package*.json ./
RUN npm install --omit=dev

COPY . .

EXPOSE 5000
CMD ["npm", "start"]
.dockerignore
node_modules
.env
npm-debug.log
This keeps images small and clean.
Sample Service Code
Example: auth-service/server.js
const express = require("express");
const app = express();

// Read the port from the environment so the same image works in any setup
const PORT = process.env.PORT || 5000;

app.get("/health", (req, res) => {
  res.json({ service: "auth", status: "ok" });
});

app.listen(PORT, () => {
  console.log(`Auth service running on port ${PORT}`);
});
Each microservice exposes a /health endpoint — a best practice for containers.
Docker Compose (Multiple Microservices)
This is where the magic happens.
version: "3.9"

services:
  auth-service:
    build: ./auth-service
    ports:
      - "5001:5000"
    env_file:
      - .env
    depends_on:
      - mongo

  user-service:
    build: ./user-service
    ports:
      - "5002:5000"
    env_file:
      - .env
    depends_on:
      - mongo

  order-service:
    build: ./order-service
    ports:
      - "5003:5000"
    env_file:
      - .env
    depends_on:
      - mongo

  mongo:
    image: mongo:6
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
Note: depends_on only controls start order — it does not wait for MongoDB to be ready. Each service should retry its database connection on startup.
Environment Variables
.env
MONGO_URI=mongodb://mongo:27017/microservices_db
JWT_SECRET=supersecret
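A minimal sketch of how a service might consume MONGO_URI. The file name config.js and the helper getMongoUri are hypothetical (not from the setup above); validation uses only Node's built-in URL class, and the fallback matches the .env value:

```javascript
// config.js — hypothetical helper that reads the Mongo connection string
// from the environment (MONGO_URI comes from the shared .env file).
const DEFAULT_URI = "mongodb://mongo:27017/microservices_db";

function getMongoUri(env = process.env) {
  const uri = env.MONGO_URI || DEFAULT_URI;

  // Fail fast on a malformed URI instead of at the first query.
  const parsed = new URL(uri);
  if (parsed.protocol !== "mongodb:") {
    throw new Error(`Unexpected protocol in MONGO_URI: ${parsed.protocol}`);
  }
  return uri;
}

module.exports = { getMongoUri };
```

Your MongoDB client (mongoose, the official driver, etc.) then connects with this URI.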
📌 Important Docker Rule
Inside Docker:
- ❌ localhost does NOT work
- ✅ Use service names (mongo, user-service, etc.)
Inter-Service Communication
Inside containers:
auth-service → http://user-service:5000
user-service → http://order-service:5000
Example:
axios.get("http://user-service:5000/health");
Docker Compose automatically creates a shared network for the services in docker-compose.yml, so containers can reach each other by service name.
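The axios call above assumes axios is installed; on Node 18+ the built-in fetch works without extra dependencies. A sketch of one service calling another (the wrapper name callService is mine; the hostname and route come from the compose file above):

```javascript
// callService.js — minimal service-to-service call helper (Node 18+, built-in fetch)
async function callService(baseUrl, path) {
  const res = await fetch(`${baseUrl}${path}`);
  if (!res.ok) {
    // Surface HTTP errors instead of returning a failed response body.
    throw new Error(`${baseUrl}${path} responded with ${res.status}`);
  }
  return res.json();
}

// Inside the Compose network, the hostname is the service name:
// await callService("http://user-service:5000", "/health");
module.exports = { callService };
```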
Run Everything
docker-compose up --build
Test locally:
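With the stack up, hit each service's /health endpoint on the host ports mapped in docker-compose.yml:

```shell
curl http://localhost:5001/health   # auth-service
curl http://localhost:5002/health   # user-service
curl http://localhost:5003/health   # order-service
```

Each should return its JSON health payload, e.g. {"service":"auth","status":"ok"}.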
Production-Ready Tips
✔ Give each service its own database
✔ Don’t expose ports for internal services
✔ Add health checks
✔ Use an API Gateway (NGINX / Kong)
✔ Move to Kubernetes when scaling
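The health-check tip can be sketched in Compose by probing the /health endpoint each service already exposes. This is illustrative: the intervals are arbitrary, and the wget call assumes the busybox wget that ships in node:18-alpine:

```yaml
services:
  auth-service:
    # ...build/ports/env_file as above...
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:5000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

Note that inside the healthcheck, localhost is correct — the probe runs inside the container itself.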
Key Takeaway
Each microservice runs in its own Docker container, communicates via Docker’s internal network using service names, and is orchestrated with Docker Compose.
If you understand this setup, you’re already ahead of many backend devs.
If you found this useful, drop a ❤️ or comment - happy to write a follow-up!