David Nwosu
Part 4: Running Multiple Services Locally with Docker Compose

Series: From "Just Put It on a Server" to Production DevOps

Reading time: 12 minutes

Level: Beginner to Intermediate


The Problem: Container Orchestration by Hand

In Part 3, we containerized our API and Worker services. Success!

But to run the full stack, you need to:

# Start PostgreSQL
docker run -d --name postgres -e POSTGRES_PASSWORD=pass postgres:15-alpine

# Start Redis  
docker run -d --name redis redis:7-alpine

# Start Elasticsearch
docker run -d --name elasticsearch -e "discovery.type=single-node" elasticsearch:8.11.0

# Wait for them to be ready (how long? 🀷)
sleep 30

# Start API (with 10+ environment variables)
docker run -d --name api -p 3000:3000 \
  -e DB_HOST=172.17.0.1 \
  -e DB_PORT=5432 \
  -e REDIS_HOST=172.17.0.1 \
  # ... 8 more -e flags
  sspp-api:latest

# Start Worker (with another 10+ environment variables)
docker run -d --name worker \
  -e DB_HOST=172.17.0.1 \
  # ... more -e flags
  sspp-worker:latest

This is painful:

  • 5 separate docker run commands
  • Manual networking configuration
  • Environment variables repeated everywhere
  • No startup order control
  • Stopping everything requires 5 separate commands

There must be a better way.


Enter Docker Compose

Docker Compose lets you define your entire application stack in a single YAML file:

services:
  api:
    # ...
  worker:
    # ...
  postgres:
    # ...
  redis:
    # ...
  elasticsearch:
    # ...

Then start everything with:

docker-compose up

One command. Entire stack. Magic.


Installing Docker Compose

Docker Compose should already be installed with Docker Desktop (Mac/Windows).

On Linux:

# Install
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose

# Make executable
sudo chmod +x /usr/local/bin/docker-compose

# Verify
docker-compose --version
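Note: recent Docker Engine installations ship Compose V2 as a CLI plugin, invoked as docker compose (no hyphen). If you have it, it's a drop-in replacement for every docker-compose command in this post. On Debian/Ubuntu with Docker's apt repository set up:

# Install Compose as a Docker CLI plugin
sudo apt-get install docker-compose-plugin

# Verify
docker compose version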

Creating docker-compose.yml

In your project root (/opt/sspp):

cd /opt/sspp
nano docker-compose.yml
version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: sspp-postgres
    environment:
      POSTGRES_DB: sales_signals
      POSTGRES_USER: sspp_user
      POSTGRES_PASSWORD: sspp_password
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./infrastructure/database/init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U sspp_user -d sales_signals"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis Queue & Cache
  redis:
    image: redis:7-alpine
    container_name: sspp-redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5

  # Elasticsearch for Search & Analytics
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: sspp-elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  # API Service
  api:
    build:
      context: ./services/api
      dockerfile: Dockerfile
    image: davidbrown77/sspp-api:latest
    container_name: sspp-api
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      PORT: 3000
      DB_HOST: postgres
      DB_PORT: 5432
      DB_NAME: sales_signals
      DB_USER: sspp_user
      DB_PASSWORD: sspp_password
      REDIS_HOST: redis
      REDIS_PORT: 6379
      ELASTICSEARCH_URL: http://elasticsearch:9200
      QUEUE_NAME: sales-events
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

  # Worker Service
  worker:
    build:
      context: ./services/worker
      dockerfile: Dockerfile
    image: davidbrown77/sspp-worker:latest
    container_name: sspp-worker
    environment:
      NODE_ENV: production
      DB_HOST: postgres
      DB_PORT: 5432
      DB_NAME: sales_signals
      DB_USER: sspp_user
      DB_PASSWORD: sspp_password
      REDIS_HOST: redis
      REDIS_PORT: 6379
      ELASTICSEARCH_URL: http://elasticsearch:9200
      QUEUE_NAME: sales-events
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
  elasticsearch_data:
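Before starting anything, it's worth validating the file. docker-compose config parses it, interpolates any variables, and prints the fully resolved configuration, or an error telling you what's wrong:

# Validate and print the resolved config
docker-compose config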

Let's break down the key concepts:


Key Concepts

1. Services

Each service is a container:

services:
  postgres:
    image: postgres:15-alpine
    # ...

  api:
    build: ./services/api
    # ...

Two ways to define services:

  • image = Use pre-built image from Docker Hub
  • build = Build from Dockerfile

2. Networking (Automatic!)

Docker Compose creates a default network where services can reach each other by name:

environment:
  DB_HOST: postgres    # Not 172.17.0.1!
  REDIS_HOST: redis    # Not localhost!

Magic: postgres resolves to the PostgreSQL container's IP automatically.
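You can watch the built-in DNS at work from inside any container. A quick sketch, assuming the api image includes BusyBox ping, as the Alpine-based Node images do:

# Resolve and ping the postgres service by name, from inside the api container
docker-compose exec api ping -c 1 postgres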

3. Volumes (Persistent Data)

Without volumes, data disappears when containers are removed:

volumes:
  - postgres_data:/var/lib/postgresql/data

This creates a named volume, so the data survives even when the container is removed and recreated.

Named volumes:

volumes:
  postgres_data:
  redis_data:
  elasticsearch_data:

Docker manages these volumes for you.
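You can manage them with the regular Docker CLI. Note that Compose prefixes volume names with the project name, which defaults to the directory name (sspp here):

# List all volumes — ours show up as sspp_postgres_data, etc.
docker volume ls

# See where the data actually lives on the host
docker volume inspect sspp_postgres_data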

4. Health Checks

Health checks tell Docker how to verify a service is actually ready, not just started:

healthcheck:
  test: ["CMD-SHELL", "pg_isready -U sspp_user -d sales_signals"]
  interval: 10s
  timeout: 5s
  retries: 5

Why this matters: combined with depends_on (next section), the API won't start until PostgreSQL is actually accepting connections.
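You can also ask Docker for a container's current health state directly (using the container_name values from our file):

# Prints starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' sspp-postgres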

5. Dependency Order

depends_on:
  postgres:
    condition: service_healthy
  redis:
    condition: service_healthy

API starts only after:

  • PostgreSQL health check passes
  • Redis health check passes
  • Elasticsearch health check passes

6. Restart Policies

restart: unless-stopped

Options:

  • no = Never restart
  • always = Always restart; if you stop it manually, it comes back when the Docker daemon restarts
  • on-failure = Restart only when the container exits with a non-zero code
  • unless-stopped = Like always, but a manually stopped container stays stopped
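A quick way to see unless-stopped do its job is to crash a container on purpose (a sketch, assuming the stack from this post is up; docker kill counts as a crash, unlike docker stop):

# Simulate a crash — the container exits with a non-zero status
docker kill sspp-worker

# Give it a few seconds, then check: the restart policy has revived it
docker-compose ps worker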

7. Environment Variables

Define once, use everywhere:

environment:
  NODE_ENV: production
  DB_HOST: postgres
  DB_PORT: 5432

Or use .env files:

env_file:
  - .env.production

Running Your Stack

Start Everything

docker-compose up

Output:

Creating network "sspp_default" with the default driver
Creating volume "sspp_postgres_data" with default driver
Creating volume "sspp_redis_data" with default driver
Creating volume "sspp_elasticsearch_data" with default driver
Creating sspp-postgres ... done
Creating sspp-redis ... done
Creating sspp-elasticsearch ... done
Waiting for postgres to be healthy...
Waiting for redis to be healthy...
Waiting for elasticsearch to be healthy...
Creating sspp-api ... done
Creating sspp-worker ... done
Attaching to sspp-postgres, sspp-redis, sspp-elasticsearch, sspp-api, sspp-worker

Watch the logs stream in real-time. Press Ctrl+C to stop.

Start in Background (Detached)

docker-compose up -d

Check status:

docker-compose ps

Output:

        Name                      Command               State           Ports
-------------------------------------------------------------------------------------
sspp-api              docker-entrypoint.sh pnpm ...   Up      0.0.0.0:3000->3000/tcp
sspp-elasticsearch    /bin/tini -- /usr/local/b ...   Up      0.0.0.0:9200->9200/tcp
sspp-postgres         docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
sspp-redis            docker-entrypoint.sh redis ...  Up      0.0.0.0:6379->6379/tcp
sspp-worker           docker-entrypoint.sh pnpm ...   Up

All services running! πŸŽ‰


Testing the Full Stack

1. Check API Health

curl http://localhost:3000/api/v1/health

Response:

{
  "status": "ok",
  "timestamp": "2025-12-22T14:00:00.000Z"
}

2. Send an Event

curl -X POST http://localhost:3000/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "accountId": "acct_001",
    "userId": "user_001",
    "eventType": "email_sent",
    "timestamp": "2025-12-22T14:00:00Z",
    "metadata": {
      "campaign": "Q4_Outreach",
      "recipientDomain": "acmecorp.com"
    }
  }'

Response:

{
  "status": "accepted",
  "jobId": "1",
  "message": "Event queued for processing"
}

3. Check That the Worker Processed It

docker-compose logs worker

Output:

sspp-worker | info: Processing job 1 {"accountId":"acct_001",...}
sspp-worker | info: Signal stored in PostgreSQL {"signalId":1}
sspp-worker | info: Signal indexed in Elasticsearch {"signalId":1}
sspp-worker | info: Job 1 completed successfully

4. Verify Data in PostgreSQL

docker-compose exec postgres psql -U sspp_user -d sales_signals -c "SELECT * FROM sales_signals;"

Output:

 id | account_id | user_id  | event_type | signal_type | signal_score | ...
----+------------+----------+------------+-------------+--------------+-----
  1 | acct_001   | user_001 | email_sent | outreach    |         0.10 | ...
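5. Check Elasticsearch Too (Bonus)

The worker logs said the signal was indexed, so it should also be searchable. This assumes the worker writes to an index named sales-signals — adjust to whatever your worker config actually uses:

curl "http://localhost:9200/sales-signals/_search?pretty"

The event should appear in the hits array.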

Complete end-to-end flow verified! 🎊


Docker Compose Commands

Start/Stop

# Start all services
docker-compose up

# Start in background
docker-compose up -d

# Stop all services
docker-compose down

# Stop and remove volumes (delete data!)
docker-compose down -v

# Restart all services
docker-compose restart

# Restart specific service
docker-compose restart api

Logs

# View all logs
docker-compose logs

# Follow logs (live tail)
docker-compose logs -f

# Specific service
docker-compose logs api

# Last 50 lines
docker-compose logs --tail 50 api

Build

# Build all images
docker-compose build

# Build specific service
docker-compose build api

# Force rebuild (no cache)
docker-compose build --no-cache

# Build and start
docker-compose up --build

Exec (Run Commands in Containers)

# Open shell in API container
docker-compose exec api sh

# Run one-off command
docker-compose exec postgres psql -U sspp_user -d sales_signals

# Run a one-off command in a fresh container (works even if the service isn't running)
docker-compose run --rm api pnpm test

Scale Services

# Run 3 worker instances
docker-compose up --scale worker=3

# Check it
docker-compose ps

Output:

sspp_worker_1   Up
sspp_worker_2   Up
sspp_worker_3   Up

Parallel event processing! One gotcha: Compose can't scale a service that sets a fixed container_name, because container names must be unique — remove container_name: sspp-worker from the worker service before scaling.


Advanced: Multiple Compose Files

You can have different configurations for different environments.

Base config (docker-compose.yml):

services:
  api:
    build: ./services/api
    environment:
      NODE_ENV: ${NODE_ENV:-production}

Development overrides (docker-compose.dev.yml):

services:
  api:
    volumes:
      - ./services/api:/app   # Live code reload
      - /app/node_modules     # Anonymous volume so the bind mount doesn't hide the image's installed deps
    environment:
      NODE_ENV: development
    command: pnpm run start:dev

Run with override:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

Or create an alias:

alias dc-dev='docker-compose -f docker-compose.yml -f docker-compose.dev.yml'
dc-dev up
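One more shortcut: Compose automatically layers a file named docker-compose.override.yml on top of docker-compose.yml. Name your dev overrides that way and a plain docker-compose up applies them with no -f flags:

mv docker-compose.dev.yml docker-compose.override.yml
docker-compose up   # dev overrides applied automatically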

Environment Variables

Method 1: Inline in docker-compose.yml

environment:
  NODE_ENV: production
  PORT: 3000

Method 2: .env File

Create .env:

NODE_ENV=production
PORT=3000
DB_PASSWORD=supersecret

Reference in docker-compose.yml:

environment:
  NODE_ENV: ${NODE_ENV}
  PORT: ${PORT}
  DB_PASSWORD: ${DB_PASSWORD}

Method 3: env_file

services:
  api:
    env_file:
      - .env
      - .env.local

Precedence: later files override earlier ones. Also note the distinction: Compose reads a .env file in the project directory automatically for ${...} interpolation in docker-compose.yml, while env_file injects variables directly into the container's environment.


Real-World Tips

1. Use Named Volumes for Data

volumes:
  - postgres_data:/var/lib/postgresql/data   # Named volume (Docker-managed)
  - ./logs:/app/logs                         # Bind mount (host directory)

Named volumes are portable. Bind mounts are machine-specific.

2. Always Set Health Checks

Don't use depends_on without health checks:

# ❌ Bad (starts immediately, might not be ready)
depends_on:
  - postgres

# βœ… Good (waits for health check)
depends_on:
  postgres:
    condition: service_healthy

3. Pin Image Versions

# ❌ Don't do this
image: postgres:latest

# βœ… Do this
image: postgres:15.5-alpine

4. Use Multi-Stage Builds

We already do this in our Dockerfiles (builder + production stages).
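For reference, the shape looks roughly like this — a minimal sketch, not our exact Dockerfiles from Part 3 (the Node version and build script names are assumptions):

# Stage 1: install dev dependencies and compile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build

# Stage 2: ship only production dependencies and compiled output
FROM node:20-alpine AS production
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --prod --frozen-lockfile
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/main.js"]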

5. .dockerignore Everything Unnecessary

cat > .dockerignore <<EOF
node_modules
.git
.env*
*.log
dist
coverage
.vscode
.idea
EOF

What We Solved

βœ… One-command startup - docker-compose up

βœ… Automatic networking - Services find each other by name

βœ… Startup ordering - Health checks ensure readiness

βœ… Environment management - Centralized config

βœ… Volume persistence - Data survives container restarts

βœ… Easy scaling - --scale worker=5

βœ… Local-prod parity - Same stack, same behavior


What We Didn't Solve

❌ Production orchestration - Compose is for dev/local, not production at scale

❌ Multi-server deployment - Compose runs on one machine

❌ Auto-scaling - Manual --scale isn't dynamic

❌ Self-healing - Restart policies revive crashed containers on one host, but if the host itself dies, nothing reschedules your containers elsewhere

❌ Load balancing - No built-in request distribution

❌ Zero-downtime deploys - Restart = brief downtime

Docker Compose is amazing for local development. But production needs more power.


What's Next?

We've mastered Docker Compose for local development. But how do you run this in production?

In Part 5, we'll expose the limitations of Docker Compose and simulate production failures to understand why we need orchestration.

You'll learn:

  • What happens when containers die in production
  • Manual scaling nightmares
  • Why deployment strategies matter
  • The emotional journey to accepting Kubernetes

Spoiler: You'll want orchestration. Badly.


Try It Yourself

Challenge: Get the full SSPP stack running with Docker Compose:

  1. Create docker-compose.yml with all 5 services
  2. Start it with docker-compose up -d
  3. Send 10 events via the API (see the hint below)
  4. Scale workers to 3 instances
  5. Verify all events were processed in PostgreSQL
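Hint for step 3 — a quick shell loop (a sketch; adjust the payload fields to your schema):

# Send 10 events with distinct account IDs
for i in $(seq -w 1 10); do
  curl -s -X POST http://localhost:3000/api/v1/events \
    -H "Content-Type: application/json" \
    -d "{\"accountId\":\"acct_0$i\",\"userId\":\"user_001\",\"eventType\":\"email_sent\",\"timestamp\":\"2025-12-22T14:00:00Z\",\"metadata\":{\"campaign\":\"Q4_Outreach\"}}"
  echo
done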

Bonus: Create a docker-compose.dev.yml with live reload.


Discussion

How do you structure your Docker Compose files? Single file or multi-file?

Join the conversation on GitHub Discussions.


Previous: Part 3: Dependency Hell - Why Docker Exists

Next: Part 5: From One Server to Many - The Need for Orchestration

About the Author

Writing this series to demonstrate production infrastructure thinking for my Proton.ai application.
