DEV Community

Ramer Lacida

How to Deploy Zero‑Downtime Docker Services with Blue‑Green Releases

Introduction

As a DevOps lead, you’ve probably seen the panic that follows a service outage. The good news is that with Docker, Nginx, and a disciplined blue‑green workflow you can ship updates without ever taking users down. This tutorial walks you through a practical, end‑to‑end setup that you can drop into any CI/CD pipeline.


Prerequisites

  • Docker Engine ≥ 20.10 on your build and production hosts
  • Docker Compose for local testing
  • Nginx acting as a reverse proxy in front of your containers
  • A Git repository that triggers your CI system (GitHub Actions, GitLab CI, etc.)
  • Basic familiarity with Bash and YAML

If any of these are missing, spin them up first; the steps below assume they are already available.
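A quick sanity check before you start: the sketch below (plain Bash, nothing beyond `command -v`) confirms the CLIs this tutorial relies on are on your PATH. Adjust the list if your proxy or CI runner differs.

```shell
#!/usr/bin/env bash
# Minimal sketch: report whether each required tool is installed.
check_prereqs() {
  local cmd
  for cmd in docker nginx git; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: found"
    else
      echo "$cmd: MISSING"
    fi
  done
}

check_prereqs
```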


1. Structure Your Repository for Blue‑Green

Create a folder layout that separates the two environments:

my-app/
├─ docker-compose.yml          # common services (db, redis, …)
├─ nginx/
│   └─ nginx.conf            # proxy config with upstream groups
├─ blue/
│   └─ Dockerfile            # image for the "blue" version
├─ green/
│   └─ Dockerfile            # image for the "green" version
└─ .github/workflows/
    └─ ci.yml                # CI pipeline definition

Both blue and green Dockerfiles can be identical; the only difference is the tag you push to your registry (e.g., my-app:blue-v123). Keeping them in separate folders makes it easier to reference the correct build context in CI.
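One way to exercise both colors locally is to extend the shared compose file with a service per color. This is a sketch assuming the layout above; the service names, image tags, and the Postgres db are illustrative, not required:

```yaml
# docker-compose.override.yml (sketch) - run both colors side by side locally
services:
  app_blue:
    build: ./blue
    image: myorg/my-app:blue-local
  app_green:
    build: ./green
    image: myorg/my-app:green-local
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```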


2. Nginx Configuration for Dynamic Upstreams

The magic lives in the Nginx upstream block. Nginx does not allow variables in an upstream server directive, so instead of variable tricks, point the upstream at a fixed Unix-socket path that is really a symlink; swapping the symlink swaps the live backend without touching the Nginx configuration.

http {
    upstream app_upstream {
        # /var/run/my-app.sock is a symlink that the promotion script
        # re-points at the live (blue or green) container's socket
        server unix:/var/run/my-app.sock;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_upstream;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

When you want to promote green to live, you atomically re-point /var/run/my-app.sock at the green container's socket. Because Nginx resolves the socket path each time it opens an upstream connection, the swap takes effect immediately for new requests; an nginx -s reload is only required when the configuration file itself changes, and even then it cycles workers gracefully without dropping in-flight connections.


3. CI/CD Pipeline – The Heartbeat

Below is a minimal GitHub Actions workflow that builds both images, pushes them to Docker Hub, and then triggers a promotion script on the production host.

name: CI
on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DH_USERNAME }}
          password: ${{ secrets.DH_PASSWORD }}

      - name: Build & push blue image
        run: |
          docker build -t myorg/my-app:blue-${{ github.sha }} ./blue
          docker push myorg/my-app:blue-${{ github.sha }}

      - name: Build & push green image
        run: |
          docker build -t myorg/my-app:green-${{ github.sha }} ./green
          docker push myorg/my-app:green-${{ github.sha }}

      - name: Deploy green (zero-downtime)
        env:
          SSH_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no user@prod-host \
            "./deploy_green.sh ${{ github.sha }}"

The deploy_green.sh script is where the swap happens. Keep it idempotent and safe.


4. The Promotion Script (deploy_green.sh)

#!/usr/bin/env bash
set -euo pipefail

SHA=$1
APP_NAME="my-app"
GREEN_TAG="green-${SHA}"

# Pull the new image
docker pull "myorg/${APP_NAME}:${GREEN_TAG}"

# Stop any stale green container
docker rm -f "${APP_NAME}_green" || true

# Start the new green container bound to a Unix socket.
# Caveat: Docker creates a directory at the host path if nothing exists
# there yet; many setups mount a shared directory instead and let the
# app create its socket inside it.
docker run -d \
  --name "${APP_NAME}_green" \
  -v "/var/run/${APP_NAME}_green.sock:/tmp/app.sock" \
  "myorg/${APP_NAME}:${GREEN_TAG}"

# Verify the health endpoint over the container's socket
# (give the app a moment to start listening; the port is not published)
sleep 5
if curl -sSf --unix-socket "/var/run/${APP_NAME}_green.sock" http://localhost/health; then
  echo "✅ Green container healthy"
else
  echo "❌ Health check failed, aborting"
  exit 1
fi

# Switch the Nginx upstream by re-pointing the symlink.
# Linking to a temp name and renaming keeps the swap atomic.
ln -sfn "/var/run/${APP_NAME}_green.sock" "/var/run/${APP_NAME}.sock.tmp"
mv -Tf "/var/run/${APP_NAME}.sock.tmp" "/var/run/${APP_NAME}.sock"

# Reload Nginx (graceful: old workers drain their connections)
nginx -s reload

# Clean up the old blue container only once you trust the release;
# keeping it running is what makes instant rollback possible.
# docker rm -f "${APP_NAME}_blue" || true

echo "🚀 Deployment of ${GREEN_TAG} complete"

Key points:

  • Health check before swap prevents bad releases from reaching traffic.
  • Symlink swap is atomic on Unix, guaranteeing an instant switch.
  • nginx -s reload is graceful: Nginx starts fresh workers with the new configuration while the old workers finish serving their in-flight connections, so no requests are dropped.
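The promotion script above performs a single health probe, but in practice the app may need a few seconds to start listening. A small retry helper hardens that gate; this is a sketch, and the attempt count and delay are arbitrary defaults:

```shell
#!/usr/bin/env bash
# Sketch: retry a command up to N times with a fixed delay between attempts.
retry() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Hypothetical usage inside deploy_green.sh:
# retry 10 2 curl -sSf --unix-socket /var/run/my-app_green.sock http://localhost/health
```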

5. Observability & Logging

Zero‑downtime deployments are only as good as the visibility you have. Wire the following into your stack:

  • Prometheus scrapes /metrics from both containers. Tag metrics with environment=blue|green.
  • Grafana dashboards show request latency per version, letting you spot regressions instantly.
  • ELK stack (or Loki) collects Docker logs. Use a consistent JSON log format so you can filter by container_name.
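To make the per-color scraping concrete, here is a minimal prometheus.yml fragment; the job name, target hostnames, and port 8080 are assumptions about your setup, not requirements:

```yaml
scrape_configs:
  - job_name: my-app
    scrape_interval: 15s
    static_configs:
      - targets: ["app_blue:8080"]
        labels:
          environment: blue
      - targets: ["app_green:8080"]
        labels:
          environment: green
```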

A quick docker logs tail for troubleshooting:

docker logs -f my-app_green --tail 100

6. Rollback Strategy

If the new green version shows a spike in error rate, roll back in seconds:

# Re‑point the socket back to blue
ln -sfn /var/run/${APP_NAME}_blue.sock /var/run/${APP_NAME}.sock
nginx -s reload

This only works if the blue container is still running, which is why you should defer docker rm -f on the old color until the new release has proven itself. Only after a successful soak period should you clean up the old version.
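Promotion and rollback are the same operation with a different argument, so they can be factored into one helper. A sketch, assuming the socket layout used throughout this post; the --dry-run flag is a convenience I am adding here, not part of the original script:

```shell
#!/usr/bin/env bash
# Sketch: point the shared socket symlink at the requested color.
APP_NAME="my-app"

switch_to() {
  local color=$1 mode=${2:-}
  local link="/var/run/${APP_NAME}.sock"
  local target="/var/run/${APP_NAME}_${color}.sock"
  if [ "$mode" = "--dry-run" ]; then
    echo "would link ${link} -> ${target}"
    return 0
  fi
  ln -sfn "$target" "$link"   # re-point (use a temp link + mv -T for strict atomicity)
  nginx -s reload             # graceful worker cycle
}

# Rollback is then just: switch_to blue
```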


7. Bonus: Blue‑Green with Kubernetes (Optional)

If your team migrates to K8s, the same principle applies: run two Deployments (blue and green) and switch traffic by updating the Service's label selector, which is an atomic change just like the symlink swap. (Kubernetes' built-in RollingUpdate strategy is a different pattern; it replaces Pods gradually rather than cutting all traffic over at once.) For small-to-medium SaaS products, the Docker-Nginx approach remains the lighter-weight option.
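For reference, a minimal sketch of the Service-selector flavor of blue-green on Kubernetes; the names, labels, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue    # change to "green" (e.g. via kubectl patch) to promote
  ports:
    - port: 80
      targetPort: 8080
```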


Conclusion

Zero‑downtime deployments are no longer a lofty ideal; they’re a practical pattern you can implement today with Docker, Nginx, and a disciplined CI pipeline. By separating blue and green builds, using atomic socket swaps, and coupling health checks with observability, you protect users while moving fast.

If you'd like a hand shipping this, reach out to the team at https://ramerlabs.com.
