Series: From "Just Put It on a Server" to Production DevOps
Reading time: 12 minutes
Level: Beginner to Intermediate
The Problem: Container Orchestration by Hand
In Part 3, we containerized our API and Worker services. Success!
But to run the full stack, you need to:
# Start PostgreSQL
docker run -d --name postgres -e POSTGRES_PASSWORD=pass postgres:15-alpine
# Start Redis
docker run -d --name redis redis:7-alpine
# Start Elasticsearch
docker run -d --name elasticsearch -e "discovery.type=single-node" elasticsearch:8.11.0
# Wait for them to be ready (how long? 🤷)
sleep 30
# Start API (with 10+ environment variables)
docker run -d --name api -p 3000:3000 \
-e DB_HOST=172.17.0.1 \
-e DB_PORT=5432 \
-e REDIS_HOST=172.17.0.1 \
# ... 8 more -e flags
sspp-api:latest
# Start Worker (with another 10+ environment variables)
docker run -d --name worker \
-e DB_HOST=172.17.0.1 \
# ... more -e flags
sspp-worker:latest
This is painful:
- 5 separate docker run commands
- Manual networking configuration
- Environment variables repeated everywhere
- No startup order control
- Stopping everything requires 5 separate commands
There must be a better way.
Enter Docker Compose
Docker Compose lets you define your entire application stack in a single YAML file:
services:
api:
# ...
worker:
# ...
postgres:
# ...
redis:
# ...
elasticsearch:
# ...
Then start everything with:
docker-compose up
One command. Entire stack. Magic.
Installing Docker Compose
Docker Compose should already be installed with Docker Desktop (Mac/Windows).
On Linux:
# Install
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make executable
sudo chmod +x /usr/local/bin/docker-compose
# Verify
docker-compose --version
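Note: newer Docker Engine and Docker Desktop releases ship Compose V2 as a CLI plugin, invoked as docker compose (space, not hyphen). If the standalone binary isn't on your machine, this quick check shows which flavor you have; the commands in this post work the same with either:
# Compose V2 plugin (bundled with recent Docker releases)
docker compose version
# Standalone binary installed above
docker-compose --version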
Creating docker-compose.yml
In your project root (/opt/sspp):
cd /opt/sspp
nano docker-compose.yml
version: '3.8'
services:
# PostgreSQL Database
postgres:
image: postgres:15-alpine
container_name: sspp-postgres
environment:
POSTGRES_DB: sales_signals
POSTGRES_USER: sspp_user
POSTGRES_PASSWORD: sspp_password
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./infrastructure/database/init.sql:/docker-entrypoint-initdb.d/init.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U sspp_user -d sales_signals"]
interval: 10s
timeout: 5s
retries: 5
# Redis Queue & Cache
redis:
image: redis:7-alpine
container_name: sspp-redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 5
# Elasticsearch for Search & Analytics
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
container_name: sspp-elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
# API Service
api:
build:
context: ./services/api
dockerfile: Dockerfile
image: davidbrown77/sspp-api:latest
container_name: sspp-api
ports:
- "3000:3000"
environment:
NODE_ENV: production
PORT: 3000
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: sales_signals
DB_USER: sspp_user
DB_PASSWORD: sspp_password
REDIS_HOST: redis
REDIS_PORT: 6379
ELASTICSEARCH_URL: http://elasticsearch:9200
QUEUE_NAME: sales-events
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
elasticsearch:
condition: service_healthy
restart: unless-stopped
# Worker Service
worker:
build:
context: ./services/worker
dockerfile: Dockerfile
image: davidbrown77/sspp-worker:latest
container_name: sspp-worker
environment:
NODE_ENV: production
DB_HOST: postgres
DB_PORT: 5432
DB_NAME: sales_signals
DB_USER: sspp_user
DB_PASSWORD: sspp_password
REDIS_HOST: redis
REDIS_PORT: 6379
ELASTICSEARCH_URL: http://elasticsearch:9200
QUEUE_NAME: sales-events
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
elasticsearch:
condition: service_healthy
restart: unless-stopped
volumes:
postgres_data:
redis_data:
elasticsearch_data:
Let's break down the key concepts:
Key Concepts
1. Services
Each service is a container:
services:
postgres:
image: postgres:15-alpine
# ...
api:
build: ./services/api
# ...
Two ways to define services:
- image = Use a pre-built image from Docker Hub
- build = Build from a Dockerfile
2. Networking (Automatic!)
Docker Compose creates a default network where services can reach each other by name:
environment:
DB_HOST: postgres # Not 172.17.0.1!
REDIS_HOST: redis # Not localhost!
Magic: postgres resolves to the PostgreSQL container's IP automatically.
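You can see that resolution for yourself once the stack is running. A minimal check, assuming the api container has Node available (it does, since it's our Node service):
# Resolve the postgres service name from inside the api container
docker-compose exec api node -e "require('dns').lookup('postgres', (err, ip) => console.log(ip))"
It prints the postgres container's address on the Compose network, not a hard-coded host IP.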
3. Volumes (Persistent Data)
Without volumes, data disappears when containers stop:
volumes:
- postgres_data:/var/lib/postgresql/data
This creates a named volume that persists between container restarts.
Named volumes:
volumes:
postgres_data:
redis_data:
elasticsearch_data:
Docker manages these volumes for you.
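Compose prefixes volume names with the project name (the directory name, sspp here), and you can manage them with the regular Docker CLI:
# List volumes created by Compose
docker volume ls
# See where the postgres data actually lives on the host
docker volume inspect sspp_postgres_data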
4. Health Checks
Health checks tell Docker Compose when a service is actually ready, not just started:
healthcheck:
test: ["CMD-SHELL", "pg_isready -U sspp_user -d sales_signals"]
interval: 10s
timeout: 5s
retries: 5
Why this matters: combined with depends_on (next), the API won't start until PostgreSQL is accepting connections.
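If a service seems stuck in a starting state, you can watch its health status directly:
# Current health status of the postgres container (starting, healthy, or unhealthy)
docker inspect --format='{{.State.Health.Status}}' sspp-postgres
# Output of the most recent health check runs
docker inspect --format='{{json .State.Health.Log}}' sspp-postgres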
5. Dependency Order
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
API starts only after:
- PostgreSQL health check passes
- Redis health check passes
- Elasticsearch health check passes
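A useful side effect: asking Compose for one service also starts everything it depends on, so this single command boots the whole chain in the right order:
# Starts postgres, redis and elasticsearch (waiting for their health checks), then the API
docker-compose up -d api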
6. Restart Policies
restart: unless-stopped
Options:
- no = Never restart
- always = Always restart (even after a manual stop)
- on-failure = Restart only on error
- unless-stopped = Always restart, unless manually stopped
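To confirm which policy a running container actually got:
# Inspect the restart policy applied to the API container
docker inspect --format='{{.HostConfig.RestartPolicy.Name}}' sspp-api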
7. Environment Variables
Define once, use everywhere:
environment:
NODE_ENV: production
DB_HOST: postgres
DB_PORT: 5432
Or use .env files:
env_file:
- .env.production
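Whichever method you pick, docker-compose config prints the fully resolved file - variables substituted, env_file entries merged - which is the fastest way to see what your containers will actually receive:
# Print the final, interpolated configuration without starting anything
docker-compose config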
Running Your Stack
Start Everything
docker-compose up
Output:
Creating network "sspp_default" with the default driver
Creating volume "sspp_postgres_data" with default driver
Creating volume "sspp_redis_data" with default driver
Creating volume "sspp_elasticsearch_data" with default driver
Creating sspp-postgres ... done
Creating sspp-redis ... done
Creating sspp-elasticsearch ... done
Waiting for postgres to be healthy...
Waiting for redis to be healthy...
Waiting for elasticsearch to be healthy...
Creating sspp-api ... done
Creating sspp-worker ... done
Attaching to sspp-postgres, sspp-redis, sspp-elasticsearch, sspp-api, sspp-worker
Watch the logs stream in real-time. Press Ctrl+C to stop.
Start in Background (Detached)
docker-compose up -d
Check status:
docker-compose ps
Output:
Name Command State Ports
-------------------------------------------------------------------------------------
sspp-api docker-entrypoint.sh pnpm ... Up 0.0.0.0:3000->3000/tcp
sspp-elasticsearch /bin/tini -- /usr/local/b ... Up 0.0.0.0:9200->9200/tcp
sspp-postgres docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
sspp-redis docker-entrypoint.sh redis ... Up 0.0.0.0:6379->6379/tcp
sspp-worker docker-entrypoint.sh pnpm ... Up
All services running! 🎉
Testing the Full Stack
1. Check API Health
curl http://localhost:3000/api/v1/health
Response:
{
"status": "ok",
"timestamp": "2025-12-22T14:00:00.000Z"
}
2. Send an Event
curl -X POST http://localhost:3000/api/v1/events \
-H "Content-Type: application/json" \
-d '{
"accountId": "acct_001",
"userId": "user_001",
"eventType": "email_sent",
"timestamp": "2025-12-22T14:00:00Z",
"metadata": {
"campaign": "Q4_Outreach",
"recipientDomain": "acmecorp.com"
}
}'
Response:
{
"status": "accepted",
"jobId": "1",
"message": "Event queued for processing"
}
3. Check Worker Processed It
docker-compose logs worker
Output:
sspp-worker | info: Processing job 1 {"accountId":"acct_001",...}
sspp-worker | info: Signal stored in PostgreSQL {"signalId":1}
sspp-worker | info: Signal indexed in Elasticsearch {"signalId":1}
sspp-worker | info: Job 1 completed successfully
4. Verify Data in PostgreSQL
docker-compose exec postgres psql -U sspp_user -d sales_signals -c "SELECT * FROM sales_signals;"
Output:
id | account_id | user_id | event_type | signal_type | signal_score | ...
----+------------+----------+------------+-------------+--------------+-----
1 | acct_001 | user_001 | email_sent | outreach | 0.10 | ...
Complete end-to-end flow verified! 🎉
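You can also peek at Elasticsearch directly. A rough check - the exact index name depends on the worker code, so replace the placeholder with whatever shows up in the listing:
# List indices created by the worker
curl "http://localhost:9200/_cat/indices?v"
# Inspect recently indexed documents (replace <index-name> with one from the list above)
curl "http://localhost:9200/<index-name>/_search?pretty"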
Docker Compose Commands
Start/Stop
# Start all services
docker-compose up
# Start in background
docker-compose up -d
# Stop all services
docker-compose down
# Stop and remove volumes (delete data!)
docker-compose down -v
# Restart all services
docker-compose restart
# Restart specific service
docker-compose restart api
Logs
# View all logs
docker-compose logs
# Follow logs (live tail)
docker-compose logs -f
# Specific service
docker-compose logs api
# Last 50 lines
docker-compose logs --tail 50 api
Build
# Build all images
docker-compose build
# Build specific service
docker-compose build api
# Force rebuild (no cache)
docker-compose build --no-cache
# Build and start
docker-compose up --build
Exec (Run Commands in Containers)
# Open shell in API container
docker-compose exec api sh
# Run one-off command
docker-compose exec postgres psql -U sspp_user -d sales_signals
# Run command in new container (not running)
docker-compose run --rm api pnpm test
Scale Services
# Run 3 worker instances
docker-compose up --scale worker=3
# Check it
docker-compose ps
Output:
sspp_worker_1 Up
sspp_worker_2 Up
sspp_worker_3 Up
Parallel event processing!
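One catch: our compose file pins container_name: sspp-worker, and a fixed name can only belong to one container, so scaling the worker as-is will fail with a name conflict. Remove container_name from the worker service if you want replicas - Compose then generates names like sspp_worker_1 on its own:
# After removing container_name from the worker service
docker-compose up -d --scale worker=3
# Each replica shows up with an auto-generated name
docker-compose ps worker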
Advanced: Multiple Compose Files
You can have different configurations for different environments.
Base config (docker-compose.yml):
services:
api:
build: ./services/api
environment:
NODE_ENV: ${NODE_ENV:-production}
Development overrides (docker-compose.dev.yml):
services:
api:
volumes:
- ./services/api:/app # Live code reload
environment:
NODE_ENV: development
command: pnpm run start:dev
Run with override:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
Or create an alias:
alias dc-dev='docker-compose -f docker-compose.yml -f docker-compose.dev.yml'
dc-dev up
Environment Variables
Method 1: Inline in docker-compose.yml
environment:
NODE_ENV: production
PORT: 3000
Method 2: .env File
Create .env:
NODE_ENV=production
PORT=3000
DB_PASSWORD=supersecret
Reference in docker-compose.yml:
environment:
NODE_ENV: ${NODE_ENV}
PORT: ${PORT}
DB_PASSWORD: ${DB_PASSWORD}
Method 3: env_file
services:
api:
env_file:
- .env
- .env.local
Precedence: Later files override earlier ones.
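A quick illustration of that precedence, using hypothetical values (the variable name is just an example):
# .env
PORT=3000
# .env.local
PORT=4000
# With env_file listing .env then .env.local, the container sees PORT=4000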
Real-World Tips
1. Use Named Volumes for Data
volumes:
  - postgres_data:/var/lib/postgresql/data   # Named volume (Docker-managed)
  - ./logs:/logs                             # Bind mount (host directory)
Named volumes are portable. Bind mounts are machine-specific.
2. Always Set Health Checks
Don't use depends_on without health checks:
# ❌ Bad (starts immediately, might not be ready)
depends_on:
- postgres
# ✅ Good (waits for health check)
depends_on:
postgres:
condition: service_healthy
3. Pin Image Versions
# ❌ Don't do this
image: postgres:latest
# ✅ Do this
image: postgres:15.5-alpine
4. Use Multi-Stage Builds
We already do this in our Dockerfiles (builder + production stages).
5. .dockerignore Everything Unnecessary
cat > .dockerignore <<EOF
node_modules
.git
.env*
*.log
dist
coverage
.vscode
.idea
EOF
What We Solved
✅ One-command startup - docker-compose up
✅ Automatic networking - Services find each other by name
✅ Startup ordering - Health checks ensure readiness
✅ Environment management - Centralized config
✅ Volume persistence - Data survives container restarts
✅ Easy scaling - --scale worker=5
✅ Local-prod parity - Same stack, same behavior
What We Didn't Solve
❌ Production orchestration - Compose is for dev/local, not production at scale
❌ Multi-server deployment - Compose runs on one machine
❌ Auto-scaling - Manual --scale isn't dynamic
❌ Self-healing - A restart policy revives a crashed container, but nothing replaces a failed host or moves its work elsewhere
❌ Load balancing - No built-in request distribution
❌ Zero-downtime deploys - Restart = brief downtime
Docker Compose is amazing for local development. But production needs more power.
What's Next?
We've mastered Docker Compose for local development. But how do you run this in production?
In Part 5, we'll expose the limitations of Docker Compose and simulate production failures to understand why we need orchestration.
You'll learn:
- What happens when containers die in production
- Manual scaling nightmares
- Why deployment strategies matter
- The emotional journey to accepting Kubernetes
Spoiler: You'll want orchestration. Badly.
Try It Yourself
Challenge: Get the full SSPP stack running with Docker Compose:
- Create docker-compose.yml with all 5 services
- Start it with docker-compose up -d
- Send 10 events via the API (see the loop sketch below)
- Scale workers to 3 instances
- Verify all events processed in PostgreSQL
Bonus: Create a docker-compose.dev.yml with live reload.
Discussion
How do you structure your Docker Compose files? Single file or multi-file?
Join the conversation on GitHub Discussions.
Previous: Part 3: Dependency Hell - Why Docker Exists
Next: Part 5: From One Server to Many - The Need for Orchestration
About the Author
Writing this series to demonstrate production infrastructure thinking for my Proton.ai application.
- GitHub: @daviesbrown
- LinkedIn: David Nwosu Brown