
DevForge Templates

Multi-Stage Docker Builds for Fullstack React + Node Apps

I used to ship fullstack apps as 1.2GB Docker images. Node modules, build tools, source maps, dev dependencies -- all crammed into one layer. It worked, but pulling that image on a $5 VPS with 1GB RAM was painful.

Multi-stage builds cut that to ~180MB. Here's the exact setup I use for Vite + Fastify apps with Prisma, including the Traefik reverse proxy config for automatic SSL.

The Problem with Single-Stage Builds

A naive Dockerfile looks like this:

FROM node:22-alpine
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/server.js"]

This image includes everything: TypeScript compiler, Vite, all dev dependencies, source files, node_modules with 400+ packages you only need at build time. The result is 1GB+ and slow to deploy.

The Multi-Stage Approach

The idea is simple: use one stage to build, another to run. The build stage has all the tools. The production stage copies only the compiled output.

Here's the complete Dockerfile for a Vite frontend + Fastify backend:

# ---- Stage 1: Install all dependencies ----
FROM node:22-alpine AS deps
WORKDIR /app

# Enable pnpm via corepack
RUN corepack enable && corepack prepare pnpm@latest --activate

COPY package.json pnpm-lock.yaml ./
COPY prisma ./prisma/

RUN pnpm install --frozen-lockfile
RUN pnpm prisma generate

# ---- Stage 2: Build frontend and backend ----
FROM node:22-alpine AS builder
WORKDIR /app

RUN corepack enable && corepack prepare pnpm@latest --activate

COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Build Vite frontend (outputs to client/dist/)
RUN pnpm run build:client

# Build Fastify backend with esbuild (outputs to dist/)
RUN pnpm run build:server

# ---- Stage 3: Production image ----
FROM node:22-alpine AS production
WORKDIR /app

RUN corepack enable && corepack prepare pnpm@latest --activate

# Only copy production dependencies
COPY package.json pnpm-lock.yaml ./
COPY prisma ./prisma/

# Note: `pnpm prisma generate` here requires the `prisma` CLI to be a
# production dependency. If it lives in devDependencies, copy the generated
# client (node_modules/.prisma) from the builder stage instead.
RUN pnpm install --frozen-lockfile --prod
RUN pnpm prisma generate

# Copy built artifacts
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/client/dist ./client/dist

# Non-root user
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/sh -D appuser
USER appuser

EXPOSE 3000

CMD ["node", "dist/server.js"]

Three stages, each with a clear purpose:

  1. deps -- installs all dependencies and generates the Prisma client
  2. builder -- compiles TypeScript and bundles the frontend
  3. production -- copies only what's needed to run

The final image doesn't contain TypeScript, Vite, esbuild, or any dev dependencies. Just the compiled JavaScript, production node_modules, and the Prisma client.
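A side benefit of named stages: each one can be built and inspected in isolation, which helps when a single step misbehaves. A quick sketch (image tags here are arbitrary, and this assumes a local Docker daemon):

```shell
# Build only up to the builder stage and poke around inside it
docker build --target builder -t myapp:builder .
docker run --rm -it myapp:builder sh -c "ls -lh dist client/dist"

# Once the full build exists, compare sizes across tags
docker image ls myapp
```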

Serving the Frontend from Fastify

Since both frontend and backend are in one container, Fastify serves the Vite build output as static files:

import fastifyStatic from "@fastify/static";
import { join } from "path";

fastify.register(fastifyStatic, {
  // __dirname works because esbuild emits CommonJS for node by default;
  // for an ESM build, derive it from import.meta.url instead
  root: join(__dirname, "../client/dist"),
  prefix: "/",
  wildcard: false,
});

// SPA fallback -- serve index.html for all non-API routes
fastify.setNotFoundHandler(async (request, reply) => {
  if (request.url.startsWith("/api/")) {
    return reply.status(404).send({ error: "Not found" });
  }
  return reply.sendFile("index.html");
});

API routes go under /api/*; everything else falls through to the React SPA. One process, one port, no nginx sidecar needed.
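Once the container is running, the split is easy to sanity-check from the command line (the domain and paths below are placeholders):

```shell
# An unknown API path should return the JSON 404 from the handler above
curl -i https://app.example.com/api/does-not-exist

# Any other unknown path should return index.html, letting the client router take over
curl -i https://app.example.com/some/spa/route
```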

Docker Compose with Traefik

For production, I use Traefik as a reverse proxy. It handles SSL certificates from Let's Encrypt automatically -- no certbot cron jobs, no manual renewal.

Here's the docker-compose.yml:

services:
  traefik:
    image: traefik:v3
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/certs/acme.json"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik-certs:/certs

  app:
    build:
      context: .
      dockerfile: Dockerfile
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
      - "traefik.http.routers.app.tls.certresolver=letsencrypt"
      - "traefik.http.services.app.loadbalancer.server.port=3000"
    environment:
      DATABASE_URL: "postgresql://postgres:secret@db:5432/myapp"
      NODE_ENV: production
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

volumes:
  pgdata:
  traefik-certs:

The key Traefik labels on the app service:

  • Host() rule routes traffic for your domain
  • certresolver=letsencrypt triggers automatic certificate provisioning
  • The HTTP-to-HTTPS redirect is configured once at the web entrypoint, so it applies to every router

One important detail: your DNS must point directly to the server (A record, not proxied through Cloudflare). Traefik needs to respond to the ACME HTTP challenge on port 80.

The .dockerignore File

This is easy to overlook but matters for build speed and image size:

node_modules
dist
client/dist
.git
*.md
.env*
.vscode

Without this, Docker copies your local node_modules (which might be 500MB+) into the build context before the COPY . . step. The build stage installs its own clean dependencies, so your local ones are just wasted transfer time.
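To see how much the ignore file saves, you can approximate the context yourself: the daemon receives a tarball of the directory minus the ignored paths. A rough sketch (tar's exclude syntax only approximates .dockerignore -- it handles plain entries like node_modules or .git, not negations):

```shell
#!/bin/sh
# Approximate the build-context size in bytes for a project directory,
# excluding paths listed in its .dockerignore. Not a full .dockerignore
# implementation; tar patterns diverge for negations and complex globs.
context_size() {
  tar -cf - --exclude-from="$1/.dockerignore" -C "$1" . | wc -c
}
```

Run it before and after adding node_modules to the ignore file; the difference is often hundreds of megabytes.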

Prisma in Docker: Common Pitfalls

Two things that will bite you with Prisma in multi-stage builds:

1. Generate in the right stage. prisma generate creates a platform-specific query engine binary. Generating inside the image (as the Dockerfile above does in both the deps and production stages) means the engine targets Alpine. If you instead generate locally on macOS and copy node_modules into the container, the engine won't run.

2. Set the binary target explicitly in your schema as a safety net:

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl-openssl-3.0.x"]
}

The linux-musl-openssl-3.0.x target covers Alpine's musl libc; native keeps local development working.

Size Comparison

Here's what the multi-stage build achieves on a real project with Prisma, 15 API routes, and a React dashboard:

Stage                                  Size
Single-stage (everything)              1.24 GB
Multi-stage (production)               178 MB
  of which: node_modules (prod only)   112 MB
  of which: Prisma engine              38 MB
  of which: built app code             28 MB

That's a 7x reduction. Deploys go from 45 seconds to about 8 seconds on a typical VPS.
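The headline factor follows directly from the numbers above:

```shell
# Reduction factor: 1.24 GB (1240 MB) vs 178 MB -- prints 7.0x
awk 'BEGIN { printf "%.1fx\n", 1240 / 178 }'
```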

Deploy Script

I deploy with a simple rsync + rebuild:

#!/bin/bash
set -euo pipefail

SERVER="user@your-vps-ip"
APP_DIR="/opt/apps/myapp"

echo "Syncing files..."
rsync -az --delete \
  --exclude node_modules \
  --exclude .git \
  --exclude .env \
  ./ "$SERVER:$APP_DIR/"

echo "Building and starting..."
ssh "$SERVER" "cd $APP_DIR && docker compose up -d --build"

echo "Done. Checking health..."
sleep 5
curl -sf "https://app.example.com/api/health" && echo " OK" || echo " FAILED"

Rsync transfers only changed files. Docker layer caching means unchanged stages aren't rebuilt. A typical deploy after a small code change takes under 30 seconds.
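The fixed sleep 5 is the script's weak spot: on a slow VPS the container may not be up yet when curl fires. A polling helper is sturdier; here's a sketch (the health URL is a placeholder):

```shell
#!/bin/sh
# Retry a command up to N times with a fixed delay between attempts;
# returns 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# In the deploy script, replace `sleep 5` and the single curl with e.g.:
# retry 10 3 curl -sf "https://app.example.com/api/health" && echo " OK"
```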

Wrapping Up

The multi-stage pattern works for any Node.js fullstack app, not just this specific stack. The principles are always the same:

  1. Separate install, build, and run into distinct stages
  2. Copy only artifacts into the final image (no source, no dev deps)
  3. Run as non-root in production
  4. Use .dockerignore to keep the build context small

Combined with Traefik for automatic SSL and Docker Compose for orchestration, you get a production setup that takes about 20 minutes to configure and costs $5/month on any VPS provider.
