Parag Agrawal

Posted on • Originally published at turbodeploy.dev
Docker for Web Developers - The Only Guide You Actually Need (2026)

If you've ever said "it works on my machine," Docker is the fix.

If you've ever spent hours debugging why your staging server behaves differently from your laptop, Docker is the fix.

If you've ever deployed to a PaaS like Vercel or Railway and wondered "what's actually happening under the hood," Docker is what's happening.

Yet the vast majority of Docker tutorials are written by and for DevOps engineers. They dive into container orchestration, overlay networks, and volume drivers before you've even containerized a "Hello World." That's backwards.

This guide is different. It's written for web developers: the people building Next.js apps, Express APIs, Django backends, and Flask services. We'll cover exactly what you need to know, skip what you don't, and build up to production-ready skills in a single post.

By the end, you'll be able to:

  • ✅ Containerize any web application
  • ✅ Write efficient, cache-optimized Dockerfiles
  • ✅ Use Docker Compose for local development with databases
  • ✅ Apply production best practices (security, size, speed)
  • ✅ Push images to a registry and deploy anywhere

Let's go.

Docker Core Concepts

Part 1: The Mental Model (2 Minutes)

Before touching any commands, let's build the right mental model.

What Docker Actually Does

Docker packages your application, its dependencies, its runtime, and its configuration into a single, portable unit called a container. That container runs identically everywhere: your laptop, your coworker's laptop, CI/CD, staging, production.

Think of it like this:

| Without Docker | With Docker |
| --- | --- |
| "Install Node 20, then npm install, then set these env vars, then..." | "Run docker run my-app" |
| "It works on my machine" | "It works on every machine" |
| Different OS, different dependencies per environment | Same environment everywhere |
| "We need to match the production Node version" | Node version is locked in the Dockerfile |

The Four Core Concepts

  1. Dockerfile: A recipe (text file) that describes how to build your container image. Like a package.json for your entire environment.

  2. Image: The built result of a Dockerfile. A read-only snapshot containing your code, dependencies, runtime, and OS. Like a .zip of your entire application stack.

  3. Container: A running instance of an image. You can run multiple containers from the same image. Like processes spawned from the same binary.

  4. Registry: A place to store and share images. Docker Hub is the public one. Amazon ECR is AWS's private one. Like npm for container images.

The flow: You write a Dockerfile → build it into an Image → run the image as a Container → push the image to a Registry for deployment.
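That lifecycle maps directly onto the CLI. A sketch using the image name we build later in this guide (`yourusername` is a placeholder for your registry account):

```shell
docker build -t my-web-app .                    # Dockerfile -> Image
docker run -p 3000:3000 my-web-app              # Image -> Container
docker tag my-web-app yourusername/my-web-app   # Name the image for a registry
docker push yourusername/my-web-app             # Image -> Registry
```

Each of these commands gets its own section below.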


Part 2: Your First Dockerfile (5 Minutes)

Let's containerize a real Node.js web application in under 5 minutes.

The Application

Here's a minimal Express API. Create a project folder with these files:

package.json

{
  "name": "my-web-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js",
    "dev": "node --watch server.js"
  },
  "dependencies": {
    "express": "^4.21.0"
  }
}

server.js

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({
    message: 'Hello from Docker! 🐳',
    environment: process.env.NODE_ENV || 'development',
    timestamp: new Date().toISOString()
  });
});

app.get('/health', (req, res) => {
  res.status(200).json({ status: 'healthy' });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

The Dockerfile

Create a file called Dockerfile (no extension) in your project root:

# 1. Start from the official Node.js 20 Alpine image
FROM node:20-alpine

# 2. Set the working directory inside the container
WORKDIR /app

# 3. Copy dependency files first (for cache optimization)
COPY package*.json ./

# 4. Install dependencies
RUN npm ci --omit=dev

# 5. Copy everything else
COPY . .

# 6. Tell Docker which port your app uses
EXPOSE 3000

# 7. Define the command to run your app
CMD ["node", "server.js"]

That's it. Seven instructions. Let's break down what each one does:

| Line | What It Does | Why |
| --- | --- | --- |
| `FROM node:20-alpine` | Uses Node.js 20 on Alpine Linux as the base | Alpine is ~180MB vs ~1.1GB for the default image |
| `WORKDIR /app` | Creates /app and sets it as the working directory | Keeps things organized; avoids polluting the root |
| `COPY package*.json ./` | Copies package.json and package-lock.json | Copied separately for layer caching (explained below) |
| `RUN npm ci --omit=dev` | Installs exact versions from the lockfile, production deps only | npm ci is faster and more reliable than npm install (`--omit=dev` replaces the deprecated `--only=production`) |
| `COPY . .` | Copies the rest of your application code | Done after npm ci so code changes don't re-trigger the install |
| `EXPOSE 3000` | Documents the port (doesn't actually open it) | Informational; required by some platforms for auto-detection |
| `CMD ["node", "server.js"]` | Defines the default command when the container starts | Use exec form (JSON array) for proper signal handling |
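That last point about signal handling deserves a concrete example. Below is a minimal sketch (a hypothetical `registerGracefulShutdown` helper, not part of the server.js above) of the kind of SIGTERM handler that only works reliably with the exec form:

```javascript
// With CMD ["node", "server.js"], node runs as PID 1 and receives
// SIGTERM directly from `docker stop`, so this handler fires and the
// container drains cleanly. With the shell form (CMD node server.js),
// a shell gets the signal instead and node never sees it.
function registerGracefulShutdown(server, proc = process) {
  proc.on('SIGTERM', () => {
    console.log('SIGTERM received, closing server...');
    server.close(() => proc.exit(0)); // stop accepting connections, then exit
  });
}

// In server.js you would call: registerGracefulShutdown(app.listen(PORT));
```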

Build It

# Build the image and tag it as "my-web-app"
docker build -t my-web-app .

# Output:
# [+] Building 12.3s (10/10) FINISHED
# => [1/5] FROM node:20-alpine
# => [2/5] WORKDIR /app
# => [3/5] COPY package*.json ./
# => [4/5] RUN npm ci --omit=dev
# => [5/5] COPY . .
# => naming to docker.io/library/my-web-app

Run It

# Run the container
docker run -p 3000:3000 my-web-app

# -p 3000:3000 = map port 3000 on your machine to port 3000 in the container

Open http://localhost:3000 - your app is running inside a container. 🎉

Essential Run Variants

# Run in the background (detached mode)
docker run -d -p 3000:3000 --name my-app my-web-app

# Run with environment variables
docker run -d -p 3000:3000 -e NODE_ENV=production -e API_KEY=secret my-web-app

# Run with a bind mount (live code changes, for development)
# The anonymous /app/node_modules volume keeps the host dir from shadowing the container's installed deps
docker run -p 3000:3000 -v $(pwd):/app -v /app/node_modules my-web-app npm run dev

# See running containers
docker ps

# View logs
docker logs my-app

# Stop and remove
docker stop my-app && docker rm my-app

Part 3: The .dockerignore File (Don't Skip This)

Just like .gitignore prevents files from entering your repo, .dockerignore prevents files from entering your image. Without it, COPY . . copies everything including node_modules, .git, test files, and local secrets.

.dockerignore

node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
Dockerfile
docker-compose.yml
.dockerignore
README.md
.vscode
.idea
coverage
tests
__tests__
*.test.js
*.spec.js

Why this matters:

| With .dockerignore | Without .dockerignore |
| --- | --- |
| Image: ~180 MB | Image: ~400+ MB |
| Build time: ~12 sec | Build time: ~30+ sec |
| No secrets in image ✅ | .env leaked into image ❌ |
| Faster CI/CD deploys | Slower deploys |

⚠️ Critical: Never include .env files in your Docker image. Use environment variables passed at runtime (docker run -e) or Docker secrets instead.
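For local runs, a common pattern is Docker's built-in `--env-file` flag, which injects variables at container start without ever writing them into an image layer. A sketch using the image from Part 2:

```shell
# Variables come from .env at runtime; they are never baked into a layer
docker run -d -p 3000:3000 --env-file .env my-web-app
```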


Part 4: Understanding Layer Caching (The Key to Fast Builds)

Docker builds images in layers. Each instruction in your Dockerfile creates a layer. Docker caches these layers and reuses them when nothing has changed.

Docker Layer Caching

Why the Order Matters

Look at our Dockerfile again:

COPY package*.json ./     # Layer 3: Changes rarely
RUN npm ci                # Layer 4: Changes rarely (cached if package.json unchanged)
COPY . .                  # Layer 5: Changes on every code edit

If we had done it the "obvious" way:

# ❌ BAD: Every code change re-runs npm install
COPY . .
RUN npm ci

Every time you change a single line of code, Docker would re-install all dependencies from scratch. By copying package.json first, Docker caches the npm ci layer and only re-runs it when your dependencies change.

The golden rule: Order Dockerfile instructions from least-frequently-changed to most-frequently-changed.

Build Time Comparison

| Scenario | With Cache Optimization | Without |
| --- | --- | --- |
| First build | 45 seconds | 45 seconds |
| Code change (no dependency change) | 3 seconds | 45 seconds ❌ |
| Dependency change | 40 seconds | 45 seconds |

Over 50 builds/day (common in active development), cache optimization saves ~35 minutes daily.


Part 5: Dockerfiles for Every Stack

Node.js / Express / Next.js

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Python / Flask / Django

FROM python:3.12-slim

WORKDIR /app

# Install dependencies first for caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

# Flask
CMD ["python", "-m", "flask", "run", "--host=0.0.0.0"]

# Django (use this instead):
# CMD ["gunicorn", "myproject.wsgi:application", "--bind", "0.0.0.0:8000"]

Go

# Build stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o main .

# Run stage
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]

Note: The Go example uses a multi-stage build, which we'll cover in detail in the next section. This is the standard pattern for compiled languages.


Part 6: Multi-Stage Builds (Smaller, Faster, Safer)

Multi-stage builds let you use one Dockerfile with multiple FROM instructions. The first stage builds your app; the final stage contains only the production output.

Why? Your build tools (TypeScript compiler, webpack, dev dependencies) don't need to be in your production image. Multi-stage builds strip them out automatically.

Example: Next.js Multi-Stage Build

# Stage 1: Install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: Production image (only compiled output)
FROM node:20-alpine AS runner
WORKDIR /app

# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy only what's needed
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static

USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]

Size Impact

| Approach | Image Size | Contents |
| --- | --- | --- |
| Single stage | ~800 MB | Source + node_modules + devDeps + build output |
| Multi-stage | ~180 MB | Alpine + production node_modules + build output |
| Multi-stage + standalone | ~120 MB | Alpine + minimal Node.js + compiled output only |

The multi-stage version is up to 6x smaller, which means faster pulls, faster deploys, faster auto-scaling, and cheaper ECR storage.


Part 7: Choosing a Base Image

Docker Base Image Sizes Compared

Your choice of base image is the single biggest factor in final image size.

Node.js Base Images

| Image | Size | Use Case | Trade-offs |
| --- | --- | --- | --- |
| node:20 | ~1,100 MB | Never use for production | Full Debian; massive; includes compilers |
| node:20-slim | ~250 MB | Good default for production | Debian-slim; most native modules work |
| node:20-bookworm-slim | ~220 MB | Explicit Debian version | Reproducible; predictable |
| node:20-alpine | ~180 MB | Best for most web apps | Small; uses musl (rare compat issues with native modules) |
| gcr.io/distroless/nodejs20 | ~130 MB | Ultra-minimal production | No shell, no package manager; hard to debug |

Python Base Images

| Image | Size | Use Case |
| --- | --- | --- |
| python:3.12 | ~1,000 MB | Development only |
| python:3.12-slim | ~150 MB | Best default for production |
| python:3.12-alpine | ~60 MB | Smallest, but pip compilations can be slow |

Our Recommendation

Use -alpine for most web applications. It's the best balance of size, security, and compatibility. If you hit issues with native modules (like bcrypt or sharp), switch to -slim.
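If you do hit a musl build failure with a native module on alpine, one option (a sketch, assuming a multi-stage setup) is to install the compiler toolchain only in the dependency stage so it never reaches the final image:

```dockerfile
FROM node:20-alpine AS deps
# Toolchain needed only to compile native modules (e.g. bcrypt)
RUN apk add --no-cache python3 make g++
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
# Later stages COPY --from=deps /app/node_modules and stay toolchain-free
```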


Part 8: Docker Compose (Local Development with Databases)

Docker Compose lets you define and run multi-container applications. Instead of manually running your app, database, and cache as separate containers, you define them in a single file.

Example: Node.js + PostgreSQL + Redis

docker-compose.yml (or compose.yml; both names work)

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    volumes:
      - .:/app           # Live code reload
      - /app/node_modules # Don't override container's node_modules
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    command: npm run dev

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

Run Everything

# Start all services
docker compose up

# Start in background
docker compose up -d

# View logs
docker compose logs -f app

# Stop everything
docker compose down

# Stop and remove data (fresh database)
docker compose down -v

What This Gives You

  • One command to spin up your entire development stack
  • Consistent database version across all team members
  • No local PostgreSQL/Redis installation needed
  • Data persistence via Docker volumes (survives docker compose down)
  • Automatic service discovery: your app reaches Postgres at db:5432 and Redis at cache:6379
  • Health checks: the app waits for the database to be ready before starting
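Service discovery is easy to see from the app's side. The DATABASE_URL in the compose file points at the `db` service name, which Docker's internal DNS resolves to the Postgres container; parsing it with Node's built-in `URL` shows the hostname is the service name, not localhost:

```javascript
// DATABASE_URL exactly as defined in the compose file above
const dbUrl = new URL('postgresql://postgres:postgres@db:5432/myapp');

console.log(dbUrl.hostname); // "db"   <- the compose service name
console.log(dbUrl.port);     // "5432"
console.log(dbUrl.pathname); // "/myapp"
```

Any Postgres client that accepts a connection string (pg, Prisma, SQLAlchemy) resolves the host the same way inside the compose network.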

Python Equivalent

services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
    volumes:
      - .:/app
    depends_on:
      db:
        condition: service_healthy
    command: python manage.py runserver 0.0.0.0:8000

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:

Part 9: Docker Compose Watch (Hot Reload in 2026)

Docker Compose Watch (docker compose watch) is the modern way to develop with Docker. Instead of brittle bind mounts, Compose Watch syncs file changes into the container automatically and can trigger rebuilds when needed.

compose.yml with Watch enabled:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    develop:
      watch:
        # Sync code changes instantly (no rebuild)
        - action: sync
          path: ./src
          target: /app/src

        # Rebuild when dependencies change
        - action: rebuild
          path: ./package.json

        # Sync and restart when config changes
        - action: sync+restart
          path: ./config
          target: /app/config
# Start with watch mode
docker compose watch

Why Watch is better than bind mounts:

| Feature | Bind Mounts (volumes) | Compose Watch |
| --- | --- | --- |
| macOS performance | Slow (file system translation) | Fast (direct sync) |
| Selective sync | No (mounts entire directory) | Yes (specify paths) |
| Auto rebuild on deps change | No | Yes (action: rebuild) |
| Works with Docker Build | No (bypasses build) | Yes (respects Dockerfile) |

Part 10: Production Best Practices Checklist

Before you deploy your Docker image to production (whether on ECS, Railway, or anywhere else), apply these practices:

✅ Security

# 1. Run as non-root user
RUN addgroup --system --gid 1001 appuser && \
    adduser --system --uid 1001 appuser
USER appuser

# 2. Don't store secrets in the image
# ❌ BAD
ENV API_KEY=sk-secret-123
# ✅ GOOD - pass at runtime
# docker run -e API_KEY=sk-secret-123 my-app

# 3. Use specific image tags (not :latest)
# ❌ BAD
FROM node:latest
# ✅ GOOD
FROM node:20.11.1-alpine3.19

# 4. Add a health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

✅ Size Optimization

# 1. Use multi-stage builds (covered above)

# 2. Combine RUN commands to reduce layers
# ❌ BAD: 3 layers
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
# ✅ GOOD: 1 layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# 3. Use --no-cache-dir for pip
RUN pip install --no-cache-dir -r requirements.txt

# 4. Use npm ci instead of npm install
RUN npm ci --omit=dev

✅ Performance

# 1. Use .dockerignore (covered above)

# 2. Order layers from stable to volatile (covered above)

# 3. Pin versions for reproducibility
FROM node:20.11.1-alpine3.19
# NOT: FROM node:20-alpine (could change underneath you)

✅ The Complete Production Dockerfile (Node.js)

Here's a production-ready Dockerfile that combines everything we've covered:

# syntax=docker/dockerfile:1

# ----- Stage 1: Dependencies -----
FROM node:20.11.1-alpine3.19 AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force

# ----- Stage 2: Build -----
FROM node:20.11.1-alpine3.19 AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# ----- Stage 3: Production -----
FROM node:20.11.1-alpine3.19 AS runner
WORKDIR /app

# Non-root user
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 appuser

# Copy production node_modules + built files
COPY --from=deps --chown=appuser:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:nodejs /app/dist ./dist
COPY --from=builder --chown=appuser:nodejs /app/package.json ./

USER appuser
EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "dist/server.js"]

Part 11: Essential Docker Commands Cheat Sheet

# === BUILD ===
docker build -t my-app .                    # Build an image
docker build -t my-app:v1.2.3 .             # Build with a specific tag
docker build --no-cache -t my-app .         # Build without cache
docker build --platform linux/amd64 -t my-app .  # Build for x86 (on Apple Silicon)

# === RUN ===
docker run -p 3000:3000 my-app              # Run (foreground)
docker run -d -p 3000:3000 --name app my-app  # Run (background)
docker run -e NODE_ENV=prod my-app          # Run with env var
docker run --rm my-app                      # Auto-remove when stopped
docker exec -it app sh                      # Shell into running container

# === INSPECT ===
docker ps                                   # List running containers
docker ps -a                                # List all containers
docker images                               # List images
docker logs app                             # View logs
docker logs -f app                          # Follow logs
docker stats                                # Resource usage (CPU/memory)

# === CLEANUP ===
docker stop app                             # Stop a container
docker rm app                               # Remove a container
docker rmi my-app                           # Remove an image
docker system prune                         # Remove all unused data
docker system prune -a                      # Remove everything unused (⚠️ aggressive)

# === COMPOSE ===
docker compose up                           # Start all services
docker compose up -d                        # Start in background
docker compose down                         # Stop and remove
docker compose down -v                      # Stop + remove volumes (fresh DB)
docker compose logs -f app                  # Follow app logs
docker compose exec app sh                  # Shell into a service
docker compose build                        # Rebuild images
docker compose watch                        # Start with file watching

Part 12: Pushing to a Registry (Deploy Anywhere)

Once your image is built, you need to push it to a registry so deployment platforms can pull it.

Push to Docker Hub (Public)

# Login
docker login

# Tag
docker tag my-app yourusername/my-app:latest

# Push
docker push yourusername/my-app:latest

Push to Amazon ECR (Private - What TurboDeploy Uses)

# Authenticate with ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  <your-account-id>.dkr.ecr.us-east-1.amazonaws.com

# Create repository (first time only)
aws ecr create-repository --repository-name my-app

# Tag
docker tag my-app:latest \
  <your-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Push
docker push \
  <your-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

Once your image is in ECR, you can deploy it to ECS Express Mode in about 5 minutes.


Where Docker Fits in the TurboDeploy World

Container deployment platforms like Railway, Render, ECS, and TurboDeploy all run Docker containers under the hood, and even a serverless PaaS like Vercel builds on container tooling. Knowing Docker means:

  1. You understand what your PaaS is doing: no more black box
  2. You can switch platforms without rewriting: your Dockerfile works everywhere
  3. You can optimize costs: smaller images = faster deploys = lower bills
  4. You're ready for AWS: ECS Fargate runs Docker containers natively

With TurboDeploy, you write a Dockerfile, push to Git, and we handle the rest: building the image, pushing to ECR, deploying to ECS, provisioning the ALB, and setting up monitoring. All in your AWS account, at AWS pricing.

→ Already comfortable with Docker? Check out How to Deploy a Docker Container on AWS ECS Fargate, our step-by-step deployment guide.


TL;DR

| Concept | What You Need to Know |
| --- | --- |
| Dockerfile | Recipe for building your image. Order matters for caching. |
| Images | Use -alpine or -slim. Never :latest in production. |
| .dockerignore | Always create one. Never ship node_modules or .env. |
| Layer caching | Copy dependencies before code. Saves 80%+ build time. |
| Multi-stage builds | Use for compiled/built apps. Reduces image size by 4–6x. |
| Docker Compose | Use for local dev with databases. One command, full stack. |
| Compose Watch | Replace bind mounts for development. Faster on macOS. |
| Security | Non-root user, specific tags, no secrets in image, health checks. |
| Registries | Docker Hub (public), ECR (private). Push before deploy. |

Want to skip the Docker learning curve entirely? TurboDeploy detects your framework, generates an optimized Dockerfile, and deploys to your AWS account with no Docker knowledge required. But if you want to understand the engine under the hood, this guide has you covered.

Join the waitlist →
