Your app needs PostgreSQL, Redis, a Node.js API, and a React frontend. You open 4 terminal tabs, start each service manually, configure networking between them, and pray nothing crashes.
What if one command started your entire stack — databases, services, networking — from a single config file?
That's Docker Compose.
## Quick Start

```yaml
# docker-compose.yml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./api/src:/app/src   # Hot reload in development

  web:
    build: ./web
    ports:
      - "5173:5173"
    environment:
      VITE_API_URL: http://localhost:3000

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

volumes:
  pgdata:
  redis-data:
```

```bash
docker compose up -d     # Start everything
docker compose logs -f   # Stream all logs
docker compose down      # Stop everything
```
One file. One command. Entire development environment.
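One tweak worth knowing early: the hard-coded `secret` password above is fine for a demo, but Compose can substitute values from a `.env` file placed next to the compose file. A minimal sketch of the same `db` service using substitution (the variable name here is my choice, not something Compose requires):

```yaml
# docker-compose.yml — password pulled from a .env file instead of hard-coded
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-secret}   # falls back to "secret" if unset
      POSTGRES_DB: myapp
```

Put `POSTGRES_PASSWORD=...` in `.env` and keep that file out of version control.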
## Production-Ready Configuration

```yaml
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
      target: production
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```
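For production you may not want passwords in the YAML at all. One option is Compose file-based secrets; the official `postgres` image reads `*_FILE` variants of its environment variables, so a sketch like this (the filename is an assumption) keeps the credential out of the config entirely:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password   # postgres image reads *_FILE vars
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt   # assumed local file, excluded from version control
```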
## Multiple Environments

```yaml
# docker-compose.override.yml (auto-loaded in dev)
services:
  api:
    build:
      target: development
    volumes:
      - ./api:/app
    command: npm run dev
```

```yaml
# docker-compose.prod.yml
services:
  api:
    build:
      target: production
    command: node dist/server.js
```

```bash
# Development (uses override automatically)
docker compose up

# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
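Typing both `-f` flags gets old. Compose also reads the `COMPOSE_FILE` variable, from the environment or from a `.env` file, so on a production host you could pin the file list once:

```bash
# .env on the production host — Compose picks this up automatically
# (paths are separated by ":" on Linux/macOS, ";" on Windows)
COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
```

After that, a plain `docker compose up -d` uses both files.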
## Common Patterns

### Database + Migrations

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

  migrate:
    build: ./api
    command: npx prisma migrate deploy
    depends_on:
      db:
        condition: service_healthy
    profiles:
      - tools   # Only runs with: docker compose --profile tools run migrate
```
### Background Workers

```yaml
services:
  worker:
    build: ./api
    command: node worker.js
    depends_on: [db, cache]
    deploy:
      replicas: 3   # Run 3 worker instances
```
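One caveat: multiple replicas can't all bind the same fixed host port. Workers like the ones above are fine because they publish nothing; if you scale a service that does publish a port, a host port range is one workaround, sketched here (the range is an example, not a requirement):

```yaml
services:
  api:
    build: ./api
    ports:
      - "3000-3002:3000"   # each replica grabs a free port from the host range
    deploy:
      replicas: 3
```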
### Reverse Proxy

```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on: [api, web]
```
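The compose file mounts `./nginx.conf`, which isn't shown here. A minimal sketch might look like this — the upstream hostnames match the `api` and `web` service names, which Compose makes resolvable on the shared network; the routes themselves are assumptions:

```nginx
# nginx.conf — minimal sketch; service names resolve via Compose's internal DNS
events {}

http {
  server {
    listen 80;

    location /api/ {
      proxy_pass http://api:3000/;   # trailing slash strips the /api/ prefix
    }

    location / {
      proxy_pass http://web:5173;
    }
  }
}
```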
## Docker Compose v2 Features

- **Watch mode:** `docker compose watch` — auto-rebuild on file changes
- **Profiles:** group services for different workflows
- **GPU support:** `deploy.resources.reservations.devices` for ML workloads
- **Include:** split large configs across files with `include`
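Watch mode is configured per service under `develop.watch`. A sketch for the `api` service, with paths assumed to match the layout used earlier in this article:

```yaml
services:
  api:
    build: ./api
    develop:
      watch:
        - action: sync      # copy changed files into the running container
          path: ./api/src
          target: /app/src
        - action: rebuild   # rebuild the image when dependencies change
          path: ./api/package.json
```

Then run `docker compose watch` instead of `up`.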
## When to Choose Docker Compose

**Choose Compose when:**

- Multi-service development environments
- Local testing of production-like setups
- Small to medium deployments (single server)
- CI testing with real databases

**Skip Compose when:**

- Multi-host orchestration needed → Kubernetes
- Auto-scaling required → Kubernetes or cloud services
- You only have one container → just use `docker run`
## The Bottom Line
Docker Compose turns "works on my machine" into "works on every machine." Define your entire stack in YAML, share it with your team, and everyone gets identical environments in one command.
Start here: docs.docker.com/compose
Need custom data extraction, scraping, or automation? I build tools that collect and process data at scale — 78 actors on Apify Store and 265+ open-source repos. Email me: Spinov001@gmail.com | My Apify Actors