Teguh Coding

Docker Compose for Local Development: The Setup That Makes Your Team Actually Enjoy Onboarding

There is a ritual every developer knows too well.

A new engineer joins the team. They clone the repo. They follow the README — a document last updated eighteen months ago. Two hours later, they are still fighting with port conflicts, missing environment variables, and a PostgreSQL version that refuses to cooperate with their machine.

Docker Compose exists to kill that ritual. Not just for running containers, but for building a local development environment so smooth that onboarding becomes a non-event.

This article is about doing it right.


Why Most Docker Compose Setups Fall Short

Most teams adopt Docker Compose and immediately replicate their production docker run commands inside a YAML file. That works, technically. But it misses everything that makes local development actually pleasant:

  • Hot reloading when you change code
  • Shared environment variables without copying .env files everywhere
  • Services that start in the right order
  • Logs that don't require a PhD to read
  • A database with seed data already loaded

Let's build a setup that handles all of this.


The Project Structure

We'll use a Node.js API + PostgreSQL + Redis stack as the example. The same patterns apply to Python, Go, or any backend language.

my-app/
  docker-compose.yml
  docker-compose.override.yml
  .env
  .env.example
  apps/
    api/
      Dockerfile
      Dockerfile.dev
      src/
  db/
    init/
      01_schema.sql
      02_seed.sql

The key insight here: two Compose files. One for the baseline config, one for local overrides. This is a built-in Docker Compose feature that most teams never use.


The Base Compose File

# docker-compose.yml
# (the top-level `version` key is obsolete in Compose v2 and can be omitted)

services:
  api:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
      DATABASE_URL: postgresql://app:secret@db:5432/appdb
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    ports:
      - "3000:3000"

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

A few things worth noting:

The healthcheck on the db service is not optional. Without it, your API container starts, tries to connect to the database, and fails because Postgres is still initializing. depends_on with condition: service_healthy solves this cleanly.
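The same idea extends to Redis. If you want the API to wait for the cache as well, give the cache service a healthcheck too — a sketch using redis-cli ping:

```yaml
  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, the api service's depends_on entry for cache can use condition: service_healthy instead of service_started.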

The ./db/init volume mount auto-runs SQL files in alphabetical order when the database is first created. Your new engineer gets a database with real schema and seed data on their first docker compose up.
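The init files themselves can start very small — a hypothetical sketch (your real schema and seed data will differ):

```sql
-- db/init/01_schema.sql
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email TEXT NOT NULL UNIQUE,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- db/init/02_seed.sql
INSERT INTO users (email) VALUES
  ('alice@example.com'),
  ('bob@example.com');
```

The numeric prefixes matter: they guarantee the schema runs before the seed data.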


The Override File for Local Dev

This is where the magic happens.

# docker-compose.override.yml

services:
  api:
    build:
      dockerfile: Dockerfile.dev
    environment:
      NODE_ENV: development
    volumes:
      - ./apps/api/src:/app/src
    command: npm run dev
    ports:
      - "9229:9229"  # Node.js debugger port

  db:
    ports:
      - "5432:5432"  # Expose locally for DB clients like TablePlus

  cache:
    ports:
      - "6379:6379"  # Expose locally for Redis clients

Docker Compose automatically merges docker-compose.override.yml with docker-compose.yml when you run docker compose up. No flags needed.
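The automatic merge is equivalent to passing both files explicitly with -f. That is worth knowing for the day you add more environment-specific files — a docker-compose.ci.yml, say (a hypothetical name) — because those are not picked up automatically:

```shell
# What `docker compose up` does implicitly when the override file exists:
docker compose -f docker-compose.yml -f docker-compose.override.yml up -d

# Extra files must be passed explicitly; later files win on conflicts:
docker compose -f docker-compose.yml -f docker-compose.ci.yml up -d
```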

What this override does:

  • Uses a dev-specific Dockerfile (with nodemon or tsx watch installed)
  • Mounts your source code directly into the container so changes reflect instantly
  • Exposes database and cache ports locally so developers can connect with their GUI tools
  • Opens the Node.js debug port for stepping through code in VS Code
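One common gotcha with bind mounts: if you ever mount the whole app directory instead of just src/, the host's (possibly empty or platform-mismatched) node_modules shadows the one installed in the image. The usual fix — worth knowing even with the src/-only mount above — is an anonymous volume that preserves the container's node_modules:

```yaml
    volumes:
      - ./apps/api:/app
      - /app/node_modules   # anonymous volume: keeps the image's node_modules
```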

The Dev Dockerfile

# Dockerfile.dev
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install  # Full install, including devDependencies

# Source is mounted via volume, not copied
CMD ["npm", "run", "dev"]

Notice: no COPY src . here. The source code comes in through the volume mount. This means every file save on your host machine immediately reflects inside the container, triggering your dev server's hot reload.
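For completeness, the npm run dev script referenced above might look like this — a hypothetical package.json fragment assuming nodemon; swap in tsx watch or node --watch as your stack dictates:

```json
{
  "scripts": {
    "dev": "nodemon --watch src src/index.js",
    "test": "node --test"
  }
}
```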


Environment Variable Strategy

Commit a .env.example file with all required variables but no real values:

# .env.example
DATABASE_URL=
REDIS_URL=
JWT_SECRET=
API_PORT=3000

Commit a .env file with safe local defaults (not secrets):

# .env  (safe to commit for local dev defaults)
DATABASE_URL=postgresql://app:secret@db:5432/appdb
REDIS_URL=redis://cache:6379
JWT_SECRET=local-dev-secret-not-for-production
API_PORT=3000

In your Compose file, reference it:

services:
  api:
    env_file:
      - .env

For production, you never commit real secrets. But for local development, having working defaults means a new engineer types docker compose up and everything just works.
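Note the distinction: env_file injects variables into the container, while ${VAR} interpolation inside the Compose file itself reads from your shell environment and the .env file sitting next to docker-compose.yml. The two combine nicely — a sketch:

```yaml
services:
  api:
    env_file:
      - .env
    ports:
      - "${API_PORT:-3000}:3000"   # falls back to 3000 if API_PORT is unset
```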


The Makefile: Hiding the Complexity

Docker commands get verbose. A simple Makefile keeps things friendly:

.PHONY: up down logs shell db-reset test

# Note: recipe lines must be indented with a real tab, not spaces

up:
    docker compose up --build -d

down:
    docker compose down

logs:
    docker compose logs -f api

shell:
    docker compose exec api sh

db-reset:
    # A fresh volume re-runs everything in db/init automatically
    docker compose down -v
    docker compose up -d db

test:
    docker compose exec api npm test

Now onboarding looks like:

git clone https://github.com/your-org/my-app
cd my-app
make up

Three commands. Done.


Handling the Logs Problem

Running docker compose logs -f across five services is a wall of noise. Two approaches:

Option 1: Filter by service

docker compose logs -f api

Option 2: Use structured logging in your app and pipe to jq

docker compose logs -f api | jq '.'
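For the jq pipeline to work, the app has to emit one JSON object per line. A minimal sketch of such a logger in plain Node (real apps typically reach for a library like pino):

```javascript
// One JSON object per line on stdout — exactly the shape that
// `docker compose logs -f api | jq '.'` expects.
function log(level, msg, fields = {}) {
  const line = JSON.stringify({
    time: new Date().toISOString(),
    level,
    msg,
    ...fields,
  });
  console.log(line);
  return line;
}

log('info', 'server started', { port: 3000 });
log('error', 'db connection failed', { retries: 3 });
```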

If you are running many services, consider adding Dozzle to your Compose stack — a lightweight log viewer that runs as a container and gives you a browser UI for all service logs:

  dozzle:
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8888:8080"

Visit http://localhost:8888 and you get a clean, searchable log UI for every container.


One More Thing: Profiles

Docker Compose supports profiles — a way to define services that only start when explicitly requested.

  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"  # SMTP
      - "8025:8025"  # Web UI
    profiles:
      - mail

  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    profiles:
      - admin

Now developers only start these when they need them:

docker compose --profile mail up -d    # Start with MailHog
docker compose --profile admin up -d   # Start with pgAdmin
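Profiles can also be enabled through the COMPOSE_PROFILES environment variable, which fits nicely into a Makefile target or a developer's shell profile:

```shell
# Equivalent to --profile; multiple profiles are comma-separated:
COMPOSE_PROFILES=mail,admin docker compose up -d
```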

The core stack stays lean. Heavy optional services are available but out of the way.


What Good Onboarding Feels Like

When this setup is in place, your README shrinks to:

  1. Install Docker Desktop
  2. Clone the repo
  3. Run make up
  4. Visit http://localhost:3000

No version conflicts. No "works on my machine." No two-hour debugging sessions before writing a single line of code.

That's the real value of Docker Compose done right. It's not about containerization. It's about respect for your teammates' time.


Quick Reference

Command                         What it does
docker compose up -d            Start all services in the background
docker compose up --build -d    Rebuild images, then start
docker compose down             Stop and remove containers
docker compose down -v          Also remove volumes (fresh database)
docker compose logs -f api      Follow logs for one service
docker compose exec api sh      Open a shell in a running container
docker compose ps               Check container status

If your team is still doing manual setup, this is the weekend project that pays back within the first week. Build it once, and every future engineer thanks you without knowing your name.

That's a pretty good legacy to leave in a YAML file.
