Two projects. Both have Dockerfiles. One takes me 20 minutes to figure out how to run. The other - 30 seconds.
The difference? The second one has a docker-compose.yml.
That file told me everything: what ports to expose, what services it depends on, what environment variables it needs, what volumes to mount. The Dockerfile only told me how to build.
Compose isn't just a convenience tool. It's executable documentation.
The "I'll Just Remember It" Trap
A year ago I was debugging a service that hadn't been touched in months. The Dockerfile was there. Clean, multi-stage, optimized. Professional.
No Compose file. Just a Dockerfile and a README that said "see Dockerfile."
How do I run it?
docker build -t myservice .
docker run myservice
Container crashes. Missing environment variables. Okay, which ones?
I open config/config.go. There's a struct with 15 fields loaded via envconfig. Three have defaults. The rest? Required, apparently. The README mentions DATABASE_URL and that's it - the rest are "obvious."
I check git history. Find a Slack message from six months ago:
"run it with -e DATABASE_URL=... -e REDIS_URL=... -e SECRET_KEY=... -v $(pwd)/config:/app/config -p 8080:8080 --network=backend"
This is not engineering. This is archeology.
Compose as the Single Source of Truth
Now compare:
services:
  myservice:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://db:5432/app
      - REDIS_URL=redis://redis:6379
      - SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./config:/app/config
    depends_on:
      - db
      - redis
I open this file and I know everything:
- What ports it exposes
- What services it needs
- What config it expects
- What volumes it mounts
No Slack archeology. No guessing. The documentation is the code.
The "But I Only Have One Container" Argument
People say: "I don't need Compose, I only have one service."
I disagree. Here's why.
Reason 1: You'll forget.
docker run commands grow. First it's just -p 3000:3000. Then you add a volume. Then environment variables. Then a network. Six months later you have a 200-character command and zero memory of why each flag is there.
Reason 2: You'll hate yourself.
Coming back to a project after a few months:
docker run -d --name app -p 3000:3000 -v $(pwd):/app -v /app/node_modules -e NODE_ENV=development -e DATABASE_URL=postgres://... -e REDIS_URL=redis://... -e JWT_SECRET=... --network app-network app:dev
vs.
docker compose up
Reason 3: Compose files are diffable.
When someone changes the configuration, I see it in the PR. Clear diff. Clear history. With shell commands? Good luck tracking what changed.
The Things That Bit Me
depends_on Lies to You
I wrote this and expected it to work:
services:
  db:
    image: postgres:15
  app:
    build: .
    depends_on:
      - db
App crashed on startup. "Connection refused."
depends_on doesn't wait for the service to be ready. It waits for the container to start. Postgres takes a few seconds to initialize. My app tried to connect immediately. Boom.
The fix:
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 5s
      retries: 10
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
Now Compose actually waits until Postgres responds to connections.
I've seen production deployments break because of this. People blame Docker. It's not Docker's fault - depends_on just doesn't do what you think it does. (docs)
Anonymous Volumes Are a Trap (docs)
This looks innocent:
volumes:
  - /app/node_modules
It creates an anonymous volume. Works great until you need to:
- Find it (docker volume ls shows a random hash)
- Back it up (which volume is which?)
- Clean it up (orphaned volumes everywhere)
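Once a project has accumulated a few of these, the cleanup looks roughly like this. A sketch, not project-specific (these are standard docker CLI commands, but check what `ls` reports before pruning):

```shell
# List dangling volumes -- anonymous volumes that no container
# references anymore show up here as bare hashes
docker volume ls -qf dangling=true

# Remove every dangling volume (asks for confirmation first)
docker volume prune
```

`prune` deletes data permanently, so it only belongs in dev environments where the volumes are disposable.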
Named volumes:
In the service definition:

volumes:
  - node_modules:/app/node_modules

And declared at the top level of the file:

volumes:
  node_modules:
Now it's called projectname_node_modules. Searchable. Deletable. Manageable.
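With a name, routine operations become one-liners. A sketch, assuming the project directory (and therefore the Compose project name) is `projectname`:

```shell
# Back up the named volume to a tarball via a throwaway container
docker run --rm \
  -v projectname_node_modules:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/node_modules.tar.gz -C /data .

# Remove it explicitly when it's no longer needed
docker volume rm projectname_node_modules
```

Try doing either of those with an anonymous hash.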
The Override File Nobody Reads About
Compose automatically merges docker-compose.override.yml if it exists (docs). No flags needed. Same with .env - it loads automatically, no env_file directive required (docs).
I use this pattern:
docker-compose.yml - production-like defaults, committed:
services:
  app:
    image: myapp:1.2.3 # pin versions in prod, never use :latest
    environment:
      - NODE_ENV=production
docker-compose.override.yml - dev settings, also committed:
services:
  app:
    build: .
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development
Run docker compose up - you get development mode.
Run docker compose -f docker-compose.yml up - you get production mode.
No separate "dev" and "prod" compose files. No confusion about which one to use.
The Pattern That Changed Everything
Profiles (docs). Added in Compose 1.28, still underused.
Before profiles, I had this mess:
- docker-compose.yml
- docker-compose.dev.yml
- docker-compose.test.yml
- docker-compose.monitoring.yml
With profiles:
services:
  app:
    build: .
    # Always runs
  db:
    image: postgres:15
    profiles: [dev]
  redis:
    image: redis:7
    profiles: [dev]
  prometheus:
    image: prom/prometheus
    profiles: [monitoring]
  grafana:
    image: grafana/grafana
    profiles: [monitoring]
  test-db:
    image: postgres:15
    profiles: [test]
    environment:
      - POSTGRES_DB=test
docker compose up # just app
docker compose --profile dev up # app + db + redis
docker compose --profile monitoring up # app + prometheus + grafana
docker compose --profile dev --profile monitoring up # combine profiles
One file. Clear intent. No YAML soup.
The Commands Nobody Told Me About
docker compose up --wait (docs)
Most people run docker compose up -d and then manually check if everything started. There's a better way:
docker compose up -d --wait
This returns control only when all services with healthchecks are healthy. Perfect for CI pipelines and scripts.
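In CI this collapses the usual "start, sleep, poll, pray" dance into three lines. A sketch of such a step (the `app` service name and `npm test` command are assumptions, not from the source):

```shell
#!/usr/bin/env sh
set -e

# Blocks until every service with a healthcheck reports healthy
docker compose up -d --wait

# Run tests inside the app container (-T: no TTY, needed in CI)
docker compose exec -T app npm test

# Tear down, including volumes, so runs don't leak state
docker compose down -v
```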
docker compose config (docs)
Your compose file uses variables and overrides. What's the actual final config?
docker compose config
Shows the fully resolved YAML after all interpolation and merging. Saved me hours of "why isn't this variable working" debugging.
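When the full resolved YAML is more than you need, `config` also has narrower views (standard flags on recent Compose versions):

```shell
# Everything: interpolated, merged, validated
docker compose config

# Just the service names -- handy for scripting
docker compose config --services

# Just the volume names
docker compose config --volumes
```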
docker compose watch (docs)
Volumes for hot reload are messy. They sync node_modules back and forth, cause permission issues, and slow things down.
Compose 2.22+ has a better way:
services:
  app:
    build: .
    develop:
      watch:
        - action: rebuild
          path: ./package.json
        - action: sync
          path: ./src
          target: /app/src
docker compose watch
File changes trigger rebuilds or syncs automatically. No bidirectional volume chaos.
Small Things That Matter
Environment variable defaults (docs)
services:
  app:
    ports:
      - "${PORT:-3000}:3000"
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}
If the variable isn't set, use the default. No more "variable not found" crashes.
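Compose's interpolation follows shell-style syntax, so you can test the expansion behavior directly in your shell before trusting it in the file:

```shell
# Same ${VAR:-default} rule Compose applies:
# PORT is unset, LOG_LEVEL is explicitly set.
unset PORT
LOG_LEVEL=debug
echo "port=${PORT:-3000}"           # fallback kicks in
echo "log_level=${LOG_LEVEL:-info}" # explicit value wins
```

The first line prints `port=3000`, the second `log_level=debug`.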
init: true (docs)
services:
  app:
    init: true
One line. Proper signal handling. No zombie processes. Your containers will actually stop when you ask them to.
tmpfs for test databases (docs)
services:
  test-db:
    image: postgres:15
    tmpfs: /var/lib/postgresql/data
Database runs entirely in memory. Tests run faster. Nothing persists between runs - which is exactly what you want for tests.
YAML anchors for DRY (docs)
x-logging: &default-logging
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "3"

services:
  app:
    <<: *default-logging
    build: .
  worker:
    <<: *default-logging
    build: ./worker
Define once, reuse everywhere. The x- prefix marks it as an extension (docs) - Compose ignores it, but YAML anchors still work.
The Compose File as Contract
Here's how I think about it now.
A Dockerfile answers: "How do I build this?"
A docker-compose.yml answers: "How does this run in context?"
Context means:
- What other services does it need?
- What configuration does it expect?
- What ports does it expose?
- What data does it persist?
Without this context, a Dockerfile is just a build recipe. With Compose, it becomes a runnable system.
The best infrastructure is the one you don't have to explain. Write it down in a way that runs.