You have been there. A new developer joins the team, and within the first hour they are already stuck. "It works on my machine" is being typed in Slack. Node version mismatch. PostgreSQL not running. Redis connection refused. The classic onboarding nightmare that wastes hours every single week.
Docker Compose is the cure. Not just for deployment — for local development. When set up right, it collapses that chaos into a single command: docker compose up. This article walks through building a local development environment with Docker Compose that actually feels good to work in, not like you are fighting the tool.
Why Most Docker Compose Setups Fall Short
The typical beginner Compose file does the basics: spins up a database, maybe a Redis instance, and the app. But then frustration sets in:
- Code changes require rebuilding the image
- Logs from multiple services are a mess
- Environment variables are hardcoded everywhere
- There is no separation between dev and production config
Let us fix all of that.
The Project Structure
For a Node.js backend with PostgreSQL and Redis, here is a clean starting layout:
my-project/
├── app/
│   ├── src/
│   ├── package.json
│   └── Dockerfile
├── docker-compose.yml
├── docker-compose.override.yml
├── .env
└── .env.example
The key insight: docker-compose.yml is your base config, and docker-compose.override.yml is the dev-only layer that Docker Compose merges automatically every time you run docker compose up. You never have to pass -f flags manually, and you can inspect the merged result at any time with docker compose config.
The Base Compose File
# docker-compose.yml
services:
  app:
    build:
      context: ./app
      target: base
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - backend

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - backend

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

volumes:
  postgres_data:

networks:
  backend:
    driver: bridge
A few things worth noting here. The depends_on with condition: service_healthy is a game changer. Your app will not start until the database is actually ready to accept connections — not just started. No more race conditions on first docker compose up.
The Dev Override
# docker-compose.override.yml
services:
  app:
    build:
      target: development
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"
    command: npm run dev
    environment:
      NODE_ENV: development

  db:
    ports:
      - "5432:5432"

  redis:
    ports:
      - "6379:6379"
This override does several important things:
Volume mounting for hot reload: ./app:/usr/src/app mounts your local source into the container. Changes you make in your editor instantly appear inside the container without rebuilding.
The node_modules trick: The line /usr/src/app/node_modules creates an anonymous volume that prevents your local node_modules from overwriting the container's installed packages. This avoids the native-binary nightmare when your machine is a Mac and the container is Linux. One caveat: that anonymous volume persists across rebuilds, so after adding a new package, refresh it with docker compose up --build --renew-anon-volumes (or -V for short).
Exposed ports for local tools: Exposing 5432 and 6379 means you can connect your local database GUI or Redis client directly. Useful for debugging without docker exec.
Debug port: Port 9229 is the Node.js inspector port. With this exposed, you can attach VS Code debugger directly to the running container.
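To use it, an attach configuration in .vscode/launch.json along these lines should work — a sketch, assuming the paths from the override above, and assuming your dev script starts Node with --inspect=0.0.0.0:9229 (the inspector must bind to 0.0.0.0, not the default 127.0.0.1, to be reachable from outside the container):

```json
{
  "type": "node",
  "request": "attach",
  "name": "Attach to Compose app",
  "address": "localhost",
  "port": 9229,
  "localRoot": "${workspaceFolder}/app",
  "remoteRoot": "/usr/src/app",
  "restart": true
}
```

The localRoot/remoteRoot pair maps breakpoints in your editor to files inside the container; restart: true re-attaches automatically when nodemon restarts the process.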
The Dockerfile That Supports Both Worlds
# app/Dockerfile
FROM node:20-alpine AS base
WORKDIR /usr/src/app
COPY package*.json ./
# npm 9+ replaced --only=production with --omit=dev
RUN npm ci --omit=dev
COPY . .

FROM base AS development
# Reinstall including devDependencies (nodemon etc.)
RUN npm ci
CMD ["npm", "run", "dev"]

FROM base AS production
CMD ["node", "src/index.js"]
The multi-stage Dockerfile lets Compose target different stages. In the override, target: development pulls the stage with dev dependencies. In production, you target production and get a lean image without nodemon or other dev-only tools. And because the override file is only merged automatically, a production deploy that passes -f docker-compose.yml explicitly never picks up any dev settings.
Environment Variables Done Right
Never hardcode secrets. Never commit them. Here is a minimal .env.example to commit:
# .env.example
DB_USER=appuser
DB_PASSWORD=changeme
DB_NAME=myapp
DB_HOST=db
DB_PORT=5432
REDIS_URL=redis://redis:6379
PORT=3000
NODE_ENV=development
Each developer copies this to .env and fills in their values. The .env file is gitignored. Simple, explicit, no magic.
One critical detail: inside Compose, services talk to each other by service name, not localhost. So DB_HOST=db and REDIS_URL=redis://redis:6379 use the service names defined in the Compose file. This trips up almost everyone the first time.
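In application code, that means reading the host from the environment rather than assuming localhost. A minimal sketch (the config.js module and its fallback values are illustrative, not part of the article's repo):

```javascript
// Hypothetical app/src/config.js: build connection settings from the
// environment. The fallbacks default to the Compose service names
// ("db", "redis"), not localhost, so the app works inside the network.
const dbConfig = {
  host: process.env.DB_HOST || "db",
  port: Number(process.env.DB_PORT || 5432),
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
};

const redisUrl = process.env.REDIS_URL || "redis://redis:6379";

module.exports = { dbConfig, redisUrl };
```

Connecting to the same database from a GUI on your host is the one place localhost is right — there you go through the published port, e.g. localhost:5432.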
Useful Compose Commands for Daily Work
Once your setup is in place, here is the workflow that becomes muscle memory:
# Start everything (with rebuild if Dockerfile changed)
docker compose up --build
# Start in background, follow specific service logs
docker compose up -d
docker compose logs -f app
# Restart just one service (after code change when not using hot reload)
docker compose restart app
# Run a one-off command inside a service
docker compose exec app sh
docker compose exec db psql -U appuser -d myapp
# Run database migrations
docker compose exec app npm run migrate
# Nuke everything and start fresh
docker compose down -v
docker compose up --build
The exec command is your best friend for debugging. Need to check what is in Redis? docker compose exec redis redis-cli. Need a Postgres shell? One line and you are in.
Making It Fast: Layer Caching That Actually Works
The biggest pain with Docker in development is slow rebuilds. Here is what makes caching actually hold:
# Copy package files FIRST, install, THEN copy source
COPY package*.json ./
RUN npm ci
COPY . .
If your source changes but package.json did not, Docker reuses the cached layer with all your installed packages. This makes rebuilds go from 2 minutes to 10 seconds.
With volume mounts in dev mode, you rarely need to rebuild at all. The only times you do: changing the Dockerfile, adding new npm packages, or modifying environment variables.
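Caching works best paired with a .dockerignore, so COPY . . never invalidates the cache with files the image does not need — and never copies your host's node_modules or secrets into the build context. A typical starting point for this layout (adjust to your project):

```
node_modules
npm-debug.log
.git
.gitignore
.env
*.md
```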
Adding a Seed Script
Getting a fresh database with test data should be zero-friction:
# Add to docker-compose.override.yml
services:
  db-seed:
    build:
      context: ./app
      target: development
    command: npm run db:seed
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
    networks:
      - backend
    profiles:
      - tools
The profiles: [tools] key means this service does not start by default. To seed the database:
docker compose --profile tools run --rm db-seed
Clean, on-demand, no side effects on the normal up flow.
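For completeness, the npm scripts referenced throughout (dev, migrate, db:seed) live in app/package.json. The exact commands are up to you — the script bodies and file paths below are hypothetical:

```json
{
  "scripts": {
    "dev": "nodemon --inspect=0.0.0.0:9229 src/index.js",
    "migrate": "node scripts/migrate.js",
    "db:seed": "node scripts/seed.js"
  }
}
```

Note the --inspect=0.0.0.0:9229 flag on the dev script: binding the inspector to 0.0.0.0 is what makes the debug port exposed in the override actually reachable from your host.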
The Payoff
When a new developer joins your team, the onboarding is:
- Clone the repo
- Copy .env.example to .env
- Run docker compose up
That is it. No "install PostgreSQL 15 specifically, not 16". No "make sure your Redis is version 7". No mystery errors because someone has Node 18 and the project needs Node 20.
Everybody runs the same stack. Bugs are reproducible. Onboarding takes minutes, not days. "It works on my machine" disappears from your Slack permanently.
Docker Compose for local development is not about being fancy — it is about respecting your team's time. Build it once, document it well, and stop solving the same environment problems forever.