Mahmoud Mokaddem

Originally published at mahmoud-mokaddem.com

docker-compose for Next.js + NestJS local dev

Most docker-compose tutorials for Next.js stop at "here's a service block, here's how to mount your code." Then you start the stack on macOS, type a character in your editor, and hot reload doesn't fire. You add a Postgres service, your NestJS API tries to connect before the database is ready, and the container crashes on first boot. You stop the stack and find your node_modules mysteriously empty on the host. None of these are mysteries — they're all known issues with known fixes. Here's the docker-compose.dev.yml I actually use.

This compose file is part of a production-grade Next.js + NestJS starter I'm building. Free for email subscribers — subscribe at mahmoud-mokaddem.com. Follow-up to Dockerizing Next.js for production.


The compose file, up front

If you're in a hurry, copy this and skip to Common gotchas. The rest of the post explains every line.

services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.dev
    ports: ['3000:3000']
    volumes:
      - ./web:/app           # bind-mount source
      - /app/node_modules    # anonymous vol shadows host node_modules
      - /app/.next           # anonymous vol preserves build cache
    environment:
      WATCHPACK_POLLING: 'true'
      NEXT_PUBLIC_API_URL: http://localhost:4000
    depends_on:
      api:
        condition: service_started

  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    ports: ['4000:4000']
    volumes:
      - ./api:/app
      - /app/node_modules
    environment:
      DATABASE_URL: postgres://dev:dev@db:5432/app
      NODE_ENV: development
    depends_on:
      db:
        condition: service_healthy
    command: npm run start:dev

  db:
    image: postgres:16-alpine
    ports: ['5432:5432']
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U dev -d app']
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  db_data:

Three services: web (Next.js), api (NestJS), db (Postgres). Each decision in this file exists to work around a specific failure mode.

Why a separate .dev.yml

The production Dockerfile from the previous post builds a multi-stage image: it installs deps, compiles the app, then copies only the slim standalone output into a clean final image. No source code mounted, no dev dependencies, no hot reload. That's exactly correct for production and completely wrong for local work.

Local dev needs almost the opposite: source code mounted so changes take effect without a rebuild, dev dependencies installed, next dev and nest start --watch running, and fast incremental compilation. A Dockerfile.dev handles this — it installs deps and starts the dev server; it never runs npm run build.

The two-file approach gives you a clean mental model: docker-compose.yml for production-like local runs (useful for final QA and catching prod-only bugs), docker-compose.dev.yml for daily development. Run the dev stack with:

docker compose -f docker-compose.dev.yml up

Worth aliasing that flag away — I'll cover it in the workflow section.

Service: web (Next.js)

The dev Dockerfile for Next.js is intentionally minimal. It installs dependencies and starts the dev server; nothing more.

# web/Dockerfile.dev
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
EXPOSE 3000
CMD ["npm", "run", "dev"]

No build stage, no standalone output, no multi-stage. The source arrives via the bind mount at runtime, not at build time.

The volume pattern that keeps hot reload working

This is the most important thing to get right in the whole file. The volume block for web looks like this:

volumes:
  - ./web:/app           # bind-mount source
  - /app/node_modules    # anonymous vol shadows host node_modules
  - /app/.next           # anonymous vol preserves build cache

The first line is a bind mount: your ./web directory on the host is mounted into /app inside the container. Every file you edit on the host is immediately visible inside the container — that's how hot reload works.

The problem: the bind mount overwrites everything at /app, including node_modules and .next. Your host's node_modules was either compiled on macOS (wrong architecture for Linux), or it doesn't exist at all if you didn't run npm install locally. Either way, the container's freshly-installed node_modules gets stomped.

The fix is two anonymous volumes. An anonymous volume at /app/node_modules tells Docker to manage that path separately — it won't be overwritten by the bind mount above it. The node_modules installed during docker compose build is copied into that Docker-managed volume when the container is first created, and it stays intact from then on. Same logic for /app/.next: it preserves the Next.js build cache across restarts.

Without these two lines, the first docker compose up ends in a "Cannot find module" error, every time.

WATCHPACK_POLLING

WATCHPACK_POLLING: 'true' is required on macOS and Windows because Docker Desktop's filesystem bridge doesn't reliably deliver native file-system events to the container. Next.js uses Watchpack for its file watcher; polling mode makes it check for changes on a timer instead of waiting for an event that may never arrive.

Linux users running Docker Engine natively can usually leave this off — events propagate without polling. The trade-off is CPU: polling mode runs a constant low-level scan instead of sleeping between changes. Enable it when you need it; remove it when you're on Linux.
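If you share the compose file across a team on mixed OSes, one option is to keep polling out of the main file and merge it in from a small override. A sketch, using a hypothetical docker-compose.polling.yml:

```yaml
# docker-compose.polling.yml (hypothetical file name) — merged on top of the dev file
services:
  web:
    environment:
      WATCHPACK_POLLING: 'true'
```

Mac and Windows users then run docker compose -f docker-compose.dev.yml -f docker-compose.polling.yml up; later -f files merge over earlier ones, so Linux users simply omit the second flag and skip the polling cost.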

NEXT_PUBLIC_API_URL

NEXT_PUBLIC_API_URL: http://localhost:4000 — this one looks wrong at first glance. The api service is reachable at http://api:4000 inside the Docker network. So why localhost?

Because NEXT_PUBLIC_* variables are baked into the client-side JavaScript bundle, and that bundle runs in your browser — not inside Docker. Your browser doesn't know what api means as a hostname; it only sees localhost. So for any variable that the browser-side code uses, you need the host-mapped port, not the docker-network hostname.

Server-side Next.js code (API routes, getServerSideProps, Server Components) runs inside the container and can use docker hostnames. So if you have a purely server-side database URL, db:5432 is fine. The rule: if it touches the browser, use the host port; if it's server-only, use the docker-network hostname.
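When server-side code also calls the API, you can carry both URLs side by side in the web service. A sketch — INTERNAL_API_URL is a made-up name here; use whatever variable your server code actually reads:

```yaml
web:
  environment:
    # Baked into the browser bundle — must resolve from the host machine
    NEXT_PUBLIC_API_URL: http://localhost:4000
    # Read only by server-side code inside the container — docker hostname is fine
    INTERNAL_API_URL: http://api:4000
```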

depends_on

depends_on: api: condition: service_started ensures the api container has at least started before Next.js tries to boot. This is a loose dependency — "started" means the container process launched, not that the API is ready to accept requests. For local dev, that's usually fine; Next.js doesn't make API calls at startup.
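If your Next.js app does call the API during boot, you can tighten this by giving api its own healthcheck and upgrading the condition. A sketch that assumes your NestJS app exposes a /health endpoint (not part of the stack above):

```yaml
api:
  healthcheck:
    # busybox wget ships with node:20-alpine; /health is an assumed endpoint
    test: ['CMD-SHELL', 'wget -qO- http://localhost:4000/health || exit 1']
    interval: 5s
    timeout: 3s
    retries: 5

web:
  depends_on:
    api:
      condition: service_healthy   # wait for a passing healthcheck, not just a started process
```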

Service: api (NestJS)

The NestJS dev Dockerfile follows the same pattern:

# api/Dockerfile.dev
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
EXPOSE 4000
CMD ["npm", "run", "start:dev"]

Where start:dev in your package.json runs nest start --watch. NestJS recompiles TypeScript on each save and restarts the process — the same hot-reload loop as next dev, just with TypeScript involved.

The volumes are identical to the web service: bind-mount source, anonymous volume for node_modules. Same reasoning applies.

DATABASE_URL and docker-network hostnames

DATABASE_URL: postgres://dev:dev@db:5432/app — the hostname here is db, the docker-compose service name. Inside the Docker network, services find each other by service name. This is server-side code only, so the docker hostname works fine.

The mistake I see most often: using localhost here. Inside the api container, localhost is the container itself, not the host machine and not the db container. The connection will be refused. Use the service name.

depends_on with service_healthy

The api service uses a stricter dependency than web:

depends_on:
  db:
    condition: service_healthy

service_healthy waits for the db service's healthcheck to pass before starting the api container. This is what prevents the connection-refused race condition.

Without it: Docker starts api shortly after db. Postgres takes 3–5 seconds to initialize on first boot (creating the user, setting up the database). Your API tries to connect during those seconds, gets refused, and crashes. If it doesn't auto-restart, you have to docker compose restart api manually every time you wipe the database volume.

With service_healthy: Docker holds the api container until Postgres says it's ready. The healthcheck (defined on the db service) handles the timing.

Note that command: npm run start:dev is set at the compose level, not in the Dockerfile. This is intentional — it lets you override the default command without rebuilding the image. If you want to run a migration before the dev server starts, you can override command here without touching the Dockerfile.
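For example, to apply migrations before the watcher starts — assuming a db:migrate script exists in the API's package.json, as used in the workflow section:

```yaml
api:
  # Overrides the Dockerfile CMD without a rebuild; sh -c chains the two steps
  command: sh -c "npm run db:migrate && npm run start:dev"
```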

Service: db (Postgres)

postgres:16-alpine — small image, fast pull, stable. Alpine is uncomplicated here: Postgres ships its own binaries, so you avoid the musl-vs-glibc issues that bite some prebuilt Node native modules.

The three environment variables (POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB) auto-create the user and database on first container start. dev/dev/app is fine for local. Don't waste time on secure local credentials — that's what environment files for production are for.

The named volume db_data persists your data across docker compose down. When you want a clean slate — after a schema change you can't migrate forward from, or at the start of a testing session — use docker compose -f docker-compose.dev.yml down -v. The -v flag removes named volumes too. Without it, your data survives restarts.

ports: ['5432:5432'] exposes Postgres to your host machine. This is optional from the stack's perspective — the api container connects over the Docker network and doesn't need the host port. The reason to keep it: direct access from GUI clients like TablePlus, Postico, or DBeaver, and the ability to run psql postgres://dev:dev@localhost:5432/app from your terminal. If you don't use those, remove the line.
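If a Postgres instance already runs on your host (Homebrew installs commonly own 5432), remap just the host side — the containers are unaffected because they talk over the Docker network:

```yaml
db:
  ports: ['5433:5432']   # host port 5433 → container port 5432
```

Host-side tools then connect with psql postgres://dev:dev@localhost:5433/app, while the api service keeps using db:5432.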

The healthcheck

The healthcheck is what makes the whole dependency chain work:

healthcheck:
  test: ['CMD-SHELL', 'pg_isready -U dev -d app']
  interval: 5s
  timeout: 5s
  retries: 5

pg_isready is a Postgres CLI utility that returns exit code 0 when the server is accepting connections on the specified user and database. Docker's healthcheck system runs this command every 5 seconds, marks the container healthy when it passes, and marks it unhealthy after 5 consecutive failures.

The api service's depends_on: db: condition: service_healthy watches this status. As soon as pg_isready returns 0, Docker releases the api container to start. Without this healthcheck, service_healthy has nothing to observe — recent Compose versions refuse to start the dependent service with an error, because Docker has no way of knowing what "healthy" means for the db service.
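One refinement worth knowing: Compose healthchecks also support a start_period grace window, useful when Postgres's first-boot initialization runs longer than interval × retries:

```yaml
healthcheck:
  test: ['CMD-SHELL', 'pg_isready -U dev -d app']
  interval: 5s
  timeout: 5s
  retries: 5
  start_period: 10s   # failures inside the first 10s don't count toward retries
```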

Optional services

Add only what your app actually uses. Two common additions:

redis:
  image: redis:7-alpine
  ports: ['6379:6379']

mail:
  image: maildev/maildev
  ports: ['1080:1080', '1025:1025']  # web UI : smtp

Redis is useful for session storage, cache, or BullMQ queues. The redis:7-alpine image is tiny (~30 MB) and needs no configuration for local dev.

Maildev (or Mailhog — either works) catches outgoing emails without delivering them. Your app sends to mail:1025 over the Docker network using SMTP; you read the messages at localhost:1080 in your browser. Invaluable when working on email-sending flows like password reset or welcome sequences — no risk of accidentally spamming real inboxes.

Keep optional services optional. Every container you add to the stack consumes RAM and adds a few seconds to docker compose up. If Redis isn't needed for what you're working on today, comment it out.
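Compose profiles give you "commented out by default" without editing the file — services tagged with a profile are skipped unless you opt in:

```yaml
redis:
  image: redis:7-alpine
  ports: ['6379:6379']
  profiles: [cache]    # skipped unless --profile cache is passed
```

docker compose -f docker-compose.dev.yml --profile cache up -d starts the stack with Redis; without the flag, Redis stays down.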

Common gotchas

The five bugs that account for most local-dev compose failures with this stack:

1. Hot reload doesn't fire on Mac or Windows. Docker Desktop's filesystem bridge doesn't reliably propagate native file-system events into containers. Fix: set WATCHPACK_POLLING: 'true' in the web service environment. For NestJS with nodemon or a Chokidar-based watcher, add CHOKIDAR_USEPOLLING: 'true' to the api service environment. Polling costs CPU; only enable it where you need it. Linux users running Docker Engine natively usually don't need it.

2. node_modules empty or missing on the host after docker compose up. The bind mount ./api:/app overwrites the container's /app with your host directory, including its (empty or macOS-compiled) node_modules. Fix: add the anonymous volume /app/node_modules below the bind mount. The anonymous volume takes precedence for that specific path; the container's installed modules survive. Same fix for /app/.next.

3. "Cannot find module" for native dependencies like bcrypt or sharp. Native packages are compiled for the platform where npm install runs. If you installed them on macOS, the binaries are macOS binaries and won't load in the Linux container. Fix: the anonymous volume pattern above means npm ci runs inside the container at build time, producing Linux binaries. If you still see the error, run docker compose -f docker-compose.dev.yml exec api npm rebuild bcrypt once after first up.

4. API container crashes with "connection refused" on first start. The API started before Postgres finished initializing. Fix: add the healthcheck block to the db service (exactly as shown above) and set depends_on: db: condition: service_healthy on the api service. The API container won't start until pg_isready returns 0. Without this, you'll restart the api container by hand every time you wipe the database volume.

5. Port already in use. Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use means something else on your machine owns that port. Fix: lsof -i :3000 to find the offending process. Kill it, or change the host-side port mapping in the compose file ('3001:3000' runs the container on 3000 but exposes it on 3001 on your host). Ports 4000 and 5432 are equally likely culprits.

Each one of these is a 5-minute fix once you know the name. Combined, they're 90% of the local-dev compose failures I've debugged on real teams.

The everyday workflow

How I actually run this stack day to day:

# Start everything in the background
docker compose -f docker-compose.dev.yml up -d

# Tail logs for the service you're working on
docker compose -f docker-compose.dev.yml logs -f api

# Run a one-off command inside a container
docker compose -f docker-compose.dev.yml exec api npm run db:migrate

# After changing package.json — rebuild just that service
docker compose -f docker-compose.dev.yml build api
docker compose -f docker-compose.dev.yml up -d api

# Wipe the database for a clean start
docker compose -f docker-compose.dev.yml down -v

# Connect to Postgres from your host terminal
psql postgres://dev:dev@localhost:5432/app

The -f docker-compose.dev.yml flag is the main ergonomic cost. Worth adding an alias to your shell rc file:

# In ~/.zshrc or ~/.bashrc
alias dc='docker compose -f docker-compose.dev.yml'

Then the commands above become dc up -d, dc logs -f api, dc exec api npm run db:migrate. Saves a significant amount of typing over a few months of daily use.

One more: if you're working on a part of the app that changes the database schema, the workflow is dc down -v (wipe), dc up -d (fresh start), dc exec api npm run db:migrate (apply migrations), then develop. Faster than trying to hand-migrate a dirty local database.

The full setup is in the starter

This compose file is part of a production-grade Next.js + NestJS starter I'm building. The starter ships everything you'd need to go from git clone to a running local stack in under five minutes, then deploy to production without rearchitecting:

  • The production Dockerfile (from the previous post) and a docker-compose.yml for prod-like local runs
  • docker-compose.dev.yml (this post, pre-wired)
  • Dockerfile.dev for both web and api
  • A working migration flow with Prisma — db:migrate and db:seed commands that run inside the container
  • Healthchecks pre-wired on every service that needs one
  • GitHub Actions pipeline: lint → test → build image → push to registry → deploy
  • JWT auth with refresh tokens — the hardest part of most apps to get right from scratch

It'll be free, and email subscribers get it the day it ships. Subscribe at mahmoud-mokaddem.com.

What's next

If your local dev compose has a gotcha I didn't cover — a platform-specific failure, a dependency manager edge case, a Docker Desktop version that changed something — find me on X or LinkedIn. I'll add it to the post.
