You don't need a Dockerfile for local development. You need docker-compose.yml with three services and a volume mount.
## The Setup
```yaml
# docker-compose.yml
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  app:
    image: node:22-alpine
    working_dir: /app
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://dev:dev@db:5432/myapp
      REDIS_URL: redis://redis:6379
    command: sh -c "npm install && npm run dev"
    depends_on:
      - db
      - redis

volumes:
  pgdata:
  node_modules:
```
That's it. Run `docker compose up` and you have:
- PostgreSQL 17 with persistent data
- Redis for caching/sessions
- Your app with hot reload (Vite, nodemon, whatever your dev script uses)
## Why This Works
**No Dockerfile for dev.** You don't need one. The `node:22-alpine` image plus the volume mount gives you hot reload. Your code changes are reflected instantly because the current directory is mounted as `/app`.
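For this to work, `npm run dev` needs to point at a file-watching dev server. A minimal sketch of such a `package.json` fragment (the script and entrypoint names here are illustrative, not from the setup above):

```json
{
  "scripts": {
    "dev": "nodemon server.js"
  }
}
```

With Vite or Next.js the script would be `vite` or `next dev` instead; anything that watches the mounted files works.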
**Named volume for `node_modules`.** This is the key trick. Without `node_modules:/app/node_modules`, the bind mount from your host would shadow the container's `node_modules` (your host directory, likely empty, takes its place). The named volume keeps them separate: the container installs its own deps, and your host keeps its own.
**Persistent database.** The `pgdata` volume survives `docker compose down`. Your seed data, migrations, test records — all preserved. Only `docker compose down -v` wipes it.
## Adding Prisma Migrations
```yaml
  # add under services: in docker-compose.yml
  migrate:
    image: node:22-alpine
    working_dir: /app
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    environment:
      DATABASE_URL: postgresql://dev:dev@db:5432/myapp
    command: sh -c "npm install && npx prisma migrate dev"
    depends_on:
      db:
        condition: service_started
    profiles:
      - migrate
```
Run migrations: `docker compose --profile migrate run migrate`
The `profiles` key means this service only runs when explicitly invoked. It won't start with a plain `docker compose up`.
## Production Is Different
For production, you **do** need a Dockerfile. Multi-stage build:
```dockerfile
# Stage 1: Build
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so only production deps ship
RUN npm prune --omit=dev

# Stage 2: Run
FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
Dev image: ~900MB (full Node + all deps + source). Production image: ~150MB (only build output + production deps).
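One assumption the builder stage makes explicit: `COPY . .` will drag your host's `node_modules`, build output, and secrets into the image unless you exclude them. A minimal `.dockerignore` sketch (adjust to your project):

```
node_modules
dist
.git
.env
```

This also keeps the build context small, which speeds up every `docker build`.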
## Common Mistakes
**1. Forgetting the `node_modules` volume**
Without it:

```yaml
    volumes:
      - .:/app   # host's empty node_modules shadows the container's
```

With it:

```yaml
    volumes:
      - .:/app
      - node_modules:/app/node_modules   # container keeps its own
```
**2. Using `build: .` in development**
If you use `build: .` instead of `image: node:22-alpine`, Docker has to build (and, whenever the Dockerfile or its inputs change, rebuild) a custom image just for development. A volume mount on a pre-built base image is faster.
**3. Not setting `depends_on`**
Your app will crash on first start if it tries to connect to PostgreSQL before the database is accepting connections. `depends_on` ensures startup *order* only; to wait for actual readiness, use healthchecks with `condition: service_healthy`.
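A sketch of what that healthcheck looks like for the `db` service above (`pg_isready` ships inside the official postgres image; the interval/retry numbers are arbitrary):

```yaml
  db:
    image: postgres:17
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev -d myapp"]
      interval: 2s
      timeout: 3s
      retries: 15

  app:
    depends_on:
      db:
        condition: service_healthy
```

With `service_healthy`, Compose holds `app` back until the healthcheck passes, not just until the `db` container starts.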
## The `.env.example`
Always ship one:
```bash
# .env.example
DATABASE_URL=postgresql://dev:dev@localhost:5432/myapp
REDIS_URL=redis://localhost:6379
JWT_SECRET=dev-secret-change-in-production
PORT=3000
```
Note the hostnames: the Docker service names (`db`, `redis`) go in docker-compose.yml, while `.env` uses `localhost` for when you run the app natively, outside Docker.
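One way to make the same code work in both places is to fall back to the `localhost` URLs when the environment variables are absent. A sketch (a hypothetical `config.js`; the names are illustrative, not from the setup above):

```javascript
// config.js — read connection settings from the environment,
// defaulting to the localhost values from .env.example.
// Inside Docker, the compose file sets DATABASE_URL/REDIS_URL
// to the db/redis hostnames, so these defaults never apply there.
const config = {
  databaseUrl:
    process.env.DATABASE_URL ?? "postgresql://dev:dev@localhost:5432/myapp",
  redisUrl: process.env.REDIS_URL ?? "redis://localhost:6379",
  port: Number(process.env.PORT ?? 3000),
};

module.exports = config;
```

The app itself never needs to know whether it is running in a container; the environment decides.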
This setup runs across 12 production templates. Clone, `docker compose up`, start coding. No 45-minute setup guides.