suyog bhise

Posted on • Originally published at suyogbhise.online

Deploying Node.js with Docker + AWS EC2: A Complete Guide

Before Docker, deploying Node.js at IVTREE meant SSH-ing into the server, pulling from git, running npm install, and hoping the Node version matched production.

It never fully matched. There was always something — a package that behaved differently, a native module that needed rebuilding, an environment variable that got missed. Docker eliminates all of that. What runs locally runs in production, every time.

This is the exact setup I use for production Node.js deployments.

What We're Building

By the end of this guide you'll have:

  • A multi-stage Docker build that produces a small, secure image
  • docker-compose for local development with MongoDB included
  • GitHub Actions CI/CD that deploys to EC2 on every push to main
  • Nginx as a reverse proxy with SSL via Let's Encrypt
  • Zero-downtime deployments

The Dockerfile

Most Node.js Dockerfiles I see online are single-stage and install devDependencies in production. Here's the correct multi-stage approach:

# Stage 1: Build
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files first — Docker caches this layer
# Only re-runs npm ci when package*.json changes
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: Run
FROM node:20-alpine AS runner

WORKDIR /app

# Security: run as non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Copy only production dependencies from builder stage
COPY --from=builder /app/node_modules ./node_modules

# Copy application code
COPY . .

# Switch to non-root user before starting
USER appuser

EXPOSE 3000

CMD ["node", "server.js"]

Why multi-stage? The builder stage is where dependency installation and any build tooling happen; the runner stage gets only what's needed to run. (If your app has a build step, such as TypeScript or a bundler, run it in the builder and copy only the output.) A typical Express API image drops from ~800MB (single-stage) to ~150MB (multi-stage).

Why Alpine? node:20-alpine is roughly 50MB versus around 400MB for node:20 (compressed sizes). For a Node.js API, you almost never need the full Debian image.

Why non-root? If your container is ever compromised, a root process makes container-escape vulnerabilities far more dangerous; in the worst case the attacker ends up with root on the host. Running as an unprivileged user limits the blast radius.

.dockerignore

Always add this — without it, Docker copies node_modules from your local machine into the build context, which takes forever and may include platform-specific binaries:

node_modules
.git
.gitignore
*.md
.env
.env.*
dist
coverage
.nyc_output
logs
*.log
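If you want to enforce this, a tiny pre-build check can fail fast when .dockerignore is missing the entries that matter most. A minimal sketch; the `check_dockerignore` helper and its entry list are my own, not a Docker feature:

```shell
# check_dockerignore FILE: verify the critical ignore entries exist
check_dockerignore() {
  file="$1"
  [ -f "$file" ] || { echo "missing $file"; return 1; }
  # node_modules and .env are the two entries you never want copied
  for entry in node_modules .env; do
    grep -qx "$entry" "$file" || { echo "missing entry: $entry"; return 1; }
  done
  echo "ok"
}

# Demo against a throwaway file
tmp=$(mktemp)
printf 'node_modules\n.env\n.git\n' > "$tmp"
check_dockerignore "$tmp"   # prints "ok"
rm -f "$tmp"
```

Wire it into your deploy script before `docker build` and a bloated context never gets sent to the daemon.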

docker-compose for Local Development

# docker-compose.yml (Compose v2 ignores the top-level `version:` key, so it's omitted)

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - MONGO_URI=mongodb://mongo:27017/appdb
      - JWT_SECRET=local_dev_secret
    volumes:
      # Mount source for live edits (pair this with a watcher such as
      # nodemon or `node --watch` as the dev command, or edits won't reload)
      - .:/app
      # Anonymous volume so the container's node_modules wins over the host's
      - /app/node_modules
    depends_on:
      mongo:
        condition: service_healthy
    restart: unless-stopped

  mongo:
    image: mongo:7
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  mongo_data:

The healthcheck on MongoDB ensures your API container only starts once MongoDB is actually ready to accept connections, not just running. Without it, the API often races MongoDB on startup and crashes with connection-refused errors.
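Outside Compose (for example in a deploy script) you can get the same wait-until-ready behavior with a small retry loop. A sketch; `wait_for` is my own helper, and the curl probe in the comment assumes your API exposes a /health route:

```shell
# wait_for RETRIES DELAY CMD...: retry CMD until it succeeds,
# up to RETRIES attempts with DELAY seconds between them.
wait_for() {
  retries="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "gave up after $retries attempts" >&2
  return 1
}

# Real usage would look like:
#   wait_for 30 2 curl -fsS http://localhost:3000/health
wait_for 3 0 true && echo "ready"   # prints "ready"
```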

Run with:

docker compose up        # foreground
docker compose up -d     # background
docker compose down      # stop and remove containers
docker compose down -v   # also remove volumes (wipes database)

AWS EC2 Setup

Instance Selection

  • t3.small (2GB RAM) — minimum for a real Node.js API
  • t3.medium (4GB RAM) — recommended for APIs with meaningful traffic
  • t2.micro — the free tier option, too slow for production under any real load

Security Group Rules

Inbound:
  Port 22   (SSH)    — Your IP only
  Port 80   (HTTP)   — 0.0.0.0/0
  Port 443  (HTTPS)  — 0.0.0.0/0

Outbound:
  All traffic — 0.0.0.0/0

Never expose port 3000 publicly. Traffic goes through Nginx on 80/443, which proxies to your Node container on 3000 internally.

EC2 Server Setup

SSH in and run:

# Update system
sudo apt update && sudo apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker ubuntu
newgrp docker

# Install Docker Compose
sudo apt install docker-compose-plugin -y

# Install Nginx
sudo apt install nginx -y

# Install Certbot for SSL
sudo apt install certbot python3-certbot-nginx -y

# Create app directory
mkdir -p /home/ubuntu/app

Nginx Configuration

# /etc/nginx/sites-available/api
server {
    listen 80;
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

# Get SSL certificate (replace with your domain)
sudo certbot --nginx -d api.yourdomain.com

Certbot automatically rewrites your Nginx config to add HTTPS and sets up auto-renewal (a systemd timer on modern Ubuntu, or a cron job on older systems).

GitHub Actions CI/CD

Store these in GitHub repository secrets:

  • EC2_HOST — your EC2 public IP
  • EC2_SSH_KEY — your EC2 private key (the .pem file contents)

# .github/workflows/deploy.yml
name: Deploy to EC2

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Deploy to EC2
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ubuntu
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd /home/ubuntu/app

            # Pull latest code
            git pull origin main

            # Rebuild the image and restart only the api service
            docker compose up -d --build --no-deps api

            # Clean up old images
            docker image prune -f

The --no-deps flag tells Compose not to restart the services api depends on, so the database container is left alone. This matters: you don't want to restart MongoDB during a code deployment.

Zero-Downtime Deployments

The approach above has a brief gap when the old container stops and the new one starts. For true zero-downtime, use a blue-green approach with Nginx upstream switching:

# docker-compose.yml — two app instances
services:
  api-blue:
    build: .
    ports:
      - "3001:3000"

  api-green:
    build: .
    ports:
      - "3002:3000"
# deploy.sh on EC2
#!/bin/bash

# Determine current active color
ACTIVE=$(cat /tmp/active_color 2>/dev/null || echo "blue")
NEXT=$([ "$ACTIVE" = "blue" ] && echo "green" || echo "blue")

# Start new version
docker compose up -d --build api-$NEXT

# Give the new container time to start
# (better: poll a /health endpoint in a retry loop instead of sleeping)
sleep 10

# Switch Nginx to new version
PORT=$([ "$NEXT" = "blue" ] && echo "3001" || echo "3002")
sudo sed -i "s/proxy_pass http:\/\/localhost:[0-9]*/proxy_pass http:\/\/localhost:$PORT/" /etc/nginx/sites-available/api
sudo nginx -s reload

# Stop old version
docker compose stop api-$ACTIVE

# Record active color
echo "$NEXT" > /tmp/active_color

Overkill for most projects, but worth knowing for high-traffic APIs.
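Before pointing that sed at the live Nginx config, it's worth rehearsing the substitution on a scratch copy. A sketch of the same port-swap logic against a throwaway file (the file contents and ports are stand-ins for your real config):

```shell
# Rehearse the blue/green port swap on a throwaway config file
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://localhost:3001;
    }
}
EOF

# The same substitution deploy.sh runs against the real config
NEXT_PORT=3002
sed -i "s/proxy_pass http:\/\/localhost:[0-9]*/proxy_pass http:\/\/localhost:$NEXT_PORT/" "$conf"

grep "proxy_pass" "$conf"   # the line now points at port 3002
rm -f "$conf"
```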

Environment Variables

Never put secrets in your Docker image. Use a .env file on the server:

# /home/ubuntu/app/.env (on EC2, not in git)
NODE_ENV=production
MONGO_URI=mongodb://mongo:27017/appdb
JWT_SECRET=your_actual_secret_here
STRIPE_SECRET_KEY=sk_live_...

Reference it in docker-compose:

services:
  api:
    env_file:
      - .env

The .env file on EC2 is separate from anything in your repository. Add .env to .gitignore and never commit secrets.
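One failure mode this setup still allows is a variable quietly missing from the server's .env. A small guard in the deploy script can refuse to restart the container in that case. A sketch; `require_env` and the variable names are illustrative, not part of any tool:

```shell
# require_env FILE VAR...: fail if any VAR lacks a non-empty
# assignment in FILE (the server-side .env)
require_env() {
  file="$1"; shift
  missing=0
  for var in "$@"; do
    if ! grep -q "^${var}=..*" "$file"; then
      echo "missing or empty: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Demo against a throwaway .env
tmp=$(mktemp)
printf 'NODE_ENV=production\nJWT_SECRET=abc123\n' > "$tmp"
require_env "$tmp" NODE_ENV JWT_SECRET && echo "env ok"   # prints "env ok"
rm -f "$tmp"
```

Run it right before `docker compose up` so a misconfigured server fails the deployment instead of crash-looping the container.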

Monitoring

A few simple commands catch most production issues:

# View live logs
docker compose logs -f api

# Container resource usage
docker stats

# Check if containers are running
docker compose ps

For production, set up log forwarding to CloudWatch or Datadog. But for early-stage products, docker compose logs piped to a file is often enough.
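When the logs do end up in a file, plain shell covers basic triage. A sketch over a fabricated sample log (the timestamped format is an assumption; adjust the pattern to whatever your logger emits):

```shell
# Quick triage over a captured log, e.g. from:
#   docker compose logs api > app.log
log=$(mktemp)
cat > "$log" <<'EOF'
2025-01-10T10:00:01Z info  server listening on 3000
2025-01-10T10:00:05Z error MongoNetworkError: connection refused
2025-01-10T10:00:07Z info  request GET /health 200
2025-01-10T10:00:09Z error uncaught TypeError in /orders
EOF

grep -c ' error ' "$log"            # → 2
grep ' error ' "$log" | tail -n 1   # most recent error line
rm -f "$log"
```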

EC2 Checklist

Before going live:

  • [ ] Elastic IP assigned (so the public IP doesn't change on stop/start)
  • [ ] Domain pointing to Elastic IP
  • [ ] SSL certificate installed (Certbot)
  • [ ] Security group only exposes 80, 443, 22
  • [ ] .env file on server with production secrets
  • [ ] docker-compose restart: unless-stopped set on all services
  • [ ] GitHub Actions secrets set
  • [ ] First deployment tested manually

Containers don't solve bad architecture — they just make deployment consistent. Make sure your app is well-structured before you containerize it.

The most common mistake I see is treating Docker as a magic solution for messy code. It's not. A hard-coded port, an uncaught async error that crashes the process, a missing environment variable — all of these problems exist whether you're using Docker or not.

Containerization is about consistency and reproducibility. Get the code right first.


I'm Suyog Bhise, a Full Stack Developer at IVTREE where I manage Docker-based deployments to AWS EC2. suyogbhise.online
