Juan Torchia

Posted on • Originally published at juanchi.dev

Docker for Node.js Developers: From Zero to Production Without Losing Your Mind

The first time Docker took down my production server

It was 2021. I had a Node.js app running on a DigitalOcean VPS, working perfectly on my machine (yes, that phrase), and I decided to "modernize" the deployment by throwing Docker at it. The result: three hours of downtime, one furious client, and me at 3 AM reading logs I didn't understand.

Today, with all that pain converted into hard-earned experience, I can tell you that Docker with Node.js is one of the best decisions you can make for your stack — as long as you do it right. And "right" means understanding what's actually happening, not copying a Dockerfile from Stack Overflow and praying.

Let's start from zero. I mean it — actual zero.

Why Docker and Node.js work so well together

Node.js has a historical problem: the environment. The Node version on your machine, on your staging server, on your production server — if you don't control those, you're setting yourself up for bugs that only appear in production and make you question your own sanity.

Docker solves this with containers. A container is basically an isolated process that carries its own filesystem, its own dependencies, its own Node version. You define all of that in a Dockerfile, and that file travels with your code. If it works in your container, it works everywhere.

That's the promise. Now let's talk about how not to ruin it.

Your first Dockerfile for Node.js

Starting with the basics. Let's say you have a simple Express app:

FROM node:20-alpine

WORKDIR /app

COPY package*.json ./

RUN npm ci --omit=dev

COPY . .

EXPOSE 3000

CMD ["node", "src/index.js"]

Every line in this Dockerfile does something specific for a specific reason. I'll break it down because when you understand the why, you stop copying blindly:

FROM node:20-alpine: I use Alpine Linux, which weighs around 50MB versus the 300MB+ of the Debian/Ubuntu image. For production, less surface area means fewer potential vulnerabilities. For development, Alpine can sometimes break native dependencies (looking at you, bcrypt). In those cases, use node:20-slim.

WORKDIR /app: Sets a clean working directory. Without this, Docker dumps your files in the container's root and chaos ensues.

COPY package*.json ./ before COPY . .: This is critical for Docker's layer caching. Each instruction produces a layer, and Docker reuses a cached layer as long as its inputs haven't changed. If you copy the package.json files first and run npm ci, Docker reuses that layer whenever your package.json files are untouched. Meaning: on every rebuild, if you only changed source code, Docker doesn't reinstall your dependencies. This saves you real minutes.

npm ci instead of npm install: ci uses exactly what's in package-lock.json. Reproducible, deterministic — exactly what you want in production.

The .dockerignore file nobody tells you about

Before you build anything, create a .dockerignore. This is what most people forget and what burned me hardest early on:

node_modules
.git
.gitignore
*.log
.env
.env.local
.env.*.local
dist
build
.next
Dockerfile
docker-compose*.yml
README.md
.DS_Store
coverage

Without a .dockerignore, you're copying node_modules (which can weigh gigabytes) into the build context, and potentially baking your secret environment variables right into the image. Yes, exactly as bad as it sounds. A .env file inside a public Docker image is a security nightmare — and I've seen it happen in real repos.

Docker Compose: your inseparable companion

No app lives alone. Yours needs a database, maybe Redis, maybe a queue service. Docker Compose lets you orchestrate all of that locally with a single file:

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - .:/app
      - /app/node_modules

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

Notice the depends_on with condition: service_healthy. This was another one of my classic mistakes: starting the app before Postgres finished initializing. Without the healthcheck, your app starts, tries to connect to a database that's still booting, and explodes. With the healthcheck, Docker waits until Postgres is actually ready.
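The healthcheck covers startup ordering, but a database container can also restart mid-run, so it's worth retrying the connection in the app itself as defense in depth. A minimal sketch — the `connectWithRetry` helper and its parameters are my own illustration, not from any library:

```javascript
// Defense in depth: retry the initial connection instead of trusting
// that the database is reachable just because its container started.
async function connectWithRetry(connect, { retries = 5, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect(); // e.g. () => pgPool.connect()
    } catch (err) {
      if (attempt === retries) throw err; // give up after the last attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Call it once at boot with whatever client you use; if the database never comes up, you still crash — but with a clear error after a bounded number of attempts, not a race.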

The double volume on app:

volumes:
  - .:/app
  - /app/node_modules

This mounts your local code inside the container (hot reload in development) while preserving the container's own node_modules. Without that second line, your local node_modules would overwrite the container's version — and if you're on a Mac or Windows running Alpine, the compiled binaries are incompatible. This subtle thing cost me two hours one afternoon.

Multi-stage builds: the grown-up move

Once you start working with TypeScript (and you will be working with TypeScript), you need to compile before running. A naive Dockerfile would install all your devDependencies, compile, and leave all that weight in the final image. Multi-stage builds solve that:

# Stage 1: Builder
FROM node:20-alpine AS builder

WORKDIR /app

COPY package*.json tsconfig.json ./
RUN npm ci

COPY src ./src
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS production

WORKDIR /app

RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodeuser -u 1001

COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

COPY --from=builder /app/dist ./dist

USER nodeuser

EXPOSE 3000

CMD ["node", "dist/index.js"]

This does two important things:

  1. The final image only has the compiled code and production dependencies. No TypeScript, no ts-node, no devDependencies whatsoever. Smaller images, more secure, faster to deploy.
  2. USER nodeuser: Don't run your app as root inside the container. It's a basic security principle that a lot of people ignore until something goes wrong.
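For reference, the builder stage's npm run build assumes a build script in your package.json along these lines — a hypothetical example, adapt the script names and paths to your setup:

```json
{
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "start": "node dist/index.js"
  }
}
```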

Environment variables: do it right or don't do it at all

Never hardcode secrets in your Dockerfile or in the docker-compose.yml that you commit. The right way:

For development, use a local .env file (which lives in your .dockerignore and .gitignore) and reference it in Compose:

services:
  app:
    env_file:
      - .env

For production, use your platform's secrets system: Railway, Render, Fly.io, or the environment variables in your CI/CD pipeline. Docker Swarm and Kubernetes have their own secrets management. The point is that the secret never lives in your code or in the image.
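Whichever platform injects the secrets, it pays to fail fast at boot if one is missing. A small sketch — the `requireEnv` helper is my own illustration, not a standard API:

```javascript
// Fail fast at startup if a required variable is missing, instead of
// crashing later with a cryptic "undefined" inside a connection string.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((name) => [name, env[name]]));
}

// Typical usage at the top of src/index.js:
// const config = requireEnv(['DATABASE_URL', 'REDIS_URL']);
```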

The workflow I actually use today

After all the stumbles, here's my current flow:

# Development with hot reload
docker compose up

# Force rebuild when dependencies change
docker compose up --build

# Run in the background
docker compose up -d

# Watch logs in real time
docker compose logs -f app

# Get inside the container to debug
docker compose exec app sh

# Wipe everything and start fresh
docker compose down -v

docker compose exec app sh is your best friend for debugging. You get inside the running container, where you can run commands, verify that your environment variables are what you expect, and check whether files are where they should be.

Production: what nobody actually tells you

For real production deployments, a few things I learned the hard way:

Health checks in the Dockerfile:

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

Your orchestrator (whether Compose, Swarm, or Kubernetes) needs to know if your app is actually alive. Without a health check, it could be serving 500 errors and the orchestrator keeps thinking everything is fine.

NODE_ENV=production: Always set it. Express, among other frameworks, has specific optimizations for this mode.

Signal handling: Node.js inside Docker needs to handle SIGTERM to do a graceful shutdown. If you don't implement it, Docker kills the process after the timeout and you can lose in-flight requests. That's a topic that deserves its own post entirely.
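As a teaser for that future post, the core of a graceful shutdown is only a few lines. A sketch — the helper name and timeout value are my own choices, not a standard API:

```javascript
// Graceful shutdown: on SIGTERM, stop accepting new connections,
// let in-flight requests finish, then exit. `docker stop` sends SIGTERM
// and kills the process after a grace period (10s by default).
function setupGracefulShutdown(server, { timeoutMs = 8000 } = {}) {
  const shutdown = () => {
    server.close(() => process.exit(0)); // drain in-flight requests
    // Hard deadline in case connections never drain; unref() so the
    // timer itself doesn't keep the process alive.
    setTimeout(() => process.exit(1), timeoutMs).unref();
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
  return shutdown;
}
```

One related gotcha: run your app as `CMD ["node", ...]` (exec form), not via a shell or npm script, or PID 1 is the wrapper and your process may never receive the SIGTERM at all.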

The pain is worth it

Docker with Node.js has a real learning curve. It will break things on you. You'll end up with images that weigh 2GB when they should weigh 200MB. You'll have containers that won't start because of permission issues at 2 AM.

But when you have it dialed in, the feeling of docker compose up and having your entire stack running in 30 seconds, on any machine, with exactly the same versions of everything — that's hard to beat.

The day a teammate cloned my repo and had the entire project running in 5 minutes without installing anything other than Docker, I understood why the initial pain is worth it.

A well-crafted production Dockerfile is one of the most valuable assets in your project. Treat it like code, evolve it, review it in pull requests. It's not just infrastructure — it's the recipe for how your app lives in the world.
