Introduction
Zero‑downtime deployments are a non‑negotiable expectation for modern services. As a DevOps lead, you’re probably juggling Docker containers, Nginx reverse proxies, and a CI/CD pipeline that must stay up while code rolls forward. This checklist gives you concrete steps to achieve blue‑green releases without a single request slipping through the cracks.
Prerequisites
Before you dive in, make sure you have:
- A Docker‑compatible host (Docker Engine ≥ 20.10 or Docker Desktop).
- Nginx installed as a front‑end reverse proxy (official Docker image works well).
- A CI/CD system that can build and push images (GitHub Actions, GitLab CI, CircleCI, etc.).
- Basic health‑check endpoints (`/healthz`) on your application.
If any of these are missing, pause the checklist and get them in place first.
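A quick preflight script can confirm the tooling before you start. This is a minimal sketch — the `require` helper and the tool list are illustrative, so extend them to match your stack:

```shell
#!/bin/sh
# Preflight sketch: report whether each assumed tool is on PATH.
# Echo-only so it never aborts; make it exit non-zero if you want a hard gate.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

require docker
require curl
require git
```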
1️⃣ Prepare Immutable Docker Images
a. Use a multi‑stage Dockerfile
```dockerfile
# ---- Build stage ----
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# ---- Runtime stage ----
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
- Why? Each build produces a fresh, immutable image tagged with the Git SHA (`myapp:${GIT_SHA}`).
- Tip: Keep the image size small to speed up pulls during a rollout.
b. Tag and push atomically
```shell
docker build -t myregistry.com/myapp:${GIT_SHA} .
docker push myregistry.com/myapp:${GIT_SHA}
```

Never reuse the `latest` tag for production; it defeats the purpose of deterministic rollouts.
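To enforce the no-`latest` rule in CI, a tiny guard can reject mutable tags before pushing. A sketch with an illustrative policy (only short hex SHAs pass — adjust to your tagging scheme):

```shell
#!/bin/sh
# Sketch: reject mutable tags so only commit-SHA tags reach the registry.
# is_immutable_tag TAG returns 0 for a 7-40 char hex SHA, 1 otherwise.
is_immutable_tag() {
  case "$1" in
    latest|stable|prod) return 1 ;;   # well-known mutable tags
  esac
  echo "$1" | grep -qE '^[0-9a-f]{7,40}$'
}
```

Call it before `docker push` and fail the job when it returns non-zero.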
2️⃣ Configure Nginx for Blue‑Green Routing
Create an upstream block that can point to two separate Docker services – `myapp_blue` and `myapp_green`.
```nginx
events {}

http {
    upstream myapp {
        # Initially point to the "blue" version
        server myapp_blue:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

Note the empty `events {}` block: because this file replaces the container's entire `nginx.conf`, Nginx refuses to start without it.
When the green version is ready, you simply swap the upstream entry and reload Nginx:

```shell
# Inside the Docker compose network
docker exec nginx nginx -s reload
```

Because Nginx reloads gracefully, existing connections finish on the old upstream while new connections flow to the fresh version.
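When automating the swap, script the substitution against the host-side copy of the config, since the container sees the bind mount read-only. A minimal sketch with a hypothetical `swap_upstream` helper and the upstream naming used above:

```shell
#!/bin/sh
# Sketch: rewrite the active upstream colour in a host-side nginx.conf.
# swap_upstream CONF FROM TO replaces "myapp_FROM" with "myapp_TO".
swap_upstream() {
  conf=$1
  from=$2
  to=$3
  sed -i "s/myapp_${from}/myapp_${to}/g" "$conf"
}
```

After `swap_upstream ./nginx.conf blue green`, validate before reloading: `docker exec nginx nginx -t && docker exec nginx nginx -s reload`.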
3️⃣ Implement the Blue‑Green Deployment Workflow
a. Define two Docker Compose services
```yaml
version: "3.8"

services:
  myapp_blue:
    image: myregistry.com/myapp:${CURRENT_SHA}
    restart: always
    networks:
      - appnet

  myapp_green:
    image: myregistry.com/myapp:${NEW_SHA}
    restart: always
    networks:
      - appnet

  nginx:
    image: nginx:stable-alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - myapp_blue
      - myapp_green
    networks:
      - appnet

networks:
  appnet:
    driver: bridge
```
b. CI/CD pipeline snippet (GitHub Actions)
```yaml
name: Deploy Blue‑Green

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to registry
        uses: docker/login-action@v2
        with:
          registry: myregistry.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}

      - name: Build & push image
        run: |
          GIT_SHA=$(git rev-parse --short HEAD)
          docker build -t myregistry.com/myapp:${GIT_SHA} .
          docker push myregistry.com/myapp:${GIT_SHA}
          # Export the SHA so later steps can read it from the env context
          echo "GIT_SHA=${GIT_SHA}" >> "$GITHUB_ENV"

      - name: Deploy green stack
        env:
          NEW_SHA: ${{ env.GIT_SHA }}
        run: |
          docker compose pull myapp_green
          docker compose up -d myapp_green
          # Wait for health checks (see next section)
          sleep 30
          # Switch the Nginx upstream: edit the host-mounted config
          # (the container mounts it read-only), then reload gracefully
          sed -i "s/myapp_blue/myapp_green/" nginx.conf
          docker exec nginx nginx -s reload
```
The pipeline builds a new image, brings up the green service, validates health, then flips Nginx. Note that the deploy steps assume the runner is the Docker host (i.e. a self‑hosted runner); on a hosted runner you would execute them over SSH on the target server.
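One refinement: the pipeline above hardcodes green as the new colour. A small sketch of deriving the idle colour from the live Nginx config instead, so alternating releases need no manual edits (the config path is an assumption):

```shell
#!/bin/sh
# Sketch: read the live colour out of nginx.conf so the pipeline can
# deploy to the idle one. Assumes the upstream layout shown earlier.
current_color() {
  grep -oE 'myapp_(blue|green)' "$1" | head -n 1 | sed 's/myapp_//'
}

next_color() {
  if [ "$(current_color "$1")" = "blue" ]; then
    echo green
  else
    echo blue
  fi
}
```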
4️⃣ Health Checks & Rolling Updates
a. Application‑level health endpoint
```javascript
// Express example
app.get('/healthz', (req, res) => {
  const dbOk = db.isConnected();
  res.status(dbOk ? 200 : 503).json({ status: dbOk ? 'ok' : 'unhealthy' });
});
```
Expose this endpoint on port 3000 and configure a Docker health check. Note that `node:20-alpine` does not ship with curl, so use the built-in BusyBox `wget` (or install curl in the image):

```dockerfile
HEALTHCHECK --interval=10s --timeout=2s \
  CMD wget -q -O /dev/null http://localhost:3000/healthz || exit 1
```
b. CI step to verify health before traffic switch
Because the green service does not publish port 3000 to the host, run the probe from inside the container:

```shell
if ! docker compose exec -T myapp_green \
    wget -q -O /dev/null http://localhost:3000/healthz; then
  echo "⚠️ Green version failed health check – aborting"
  exit 1
fi
```
If the check fails, the pipeline should skip the Nginx reload and alert the team.
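A fixed `sleep 30` in the pipeline is fragile; polling until the probe passes (or a budget runs out) is safer. A minimal sketch — the probe is whatever command you pass in, such as the `docker compose exec … wget` check above:

```shell
#!/bin/sh
# Sketch: poll a probe command until it succeeds or the attempt budget
# runs out. wait_healthy ATTEMPTS CMD [ARGS...]
wait_healthy() {
  attempts=$1
  shift
  n=0
  while [ "$n" -lt "$attempts" ]; do
    if "$@"; then
      return 0            # probe passed; safe to switch traffic
    fi
    n=$((n + 1))
    sleep 1
  done
  return 1                # never became healthy; abort the rollout
}
```

Usage in the deploy step: `wait_healthy 30 docker compose exec -T myapp_green wget -q -O /dev/null http://localhost:3000/healthz || exit 1`.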
5️⃣ Logging & Observability
- Logs: centralise logs with the ELK stack or Loki‑Grafana. Forward Docker container stdout/stderr to the collector.
- Metrics: expose a Prometheus `/metrics` endpoint and scrape both blue and green services. Compare latency before and after the switch.
- Tracing: use OpenTelemetry to follow a request across Nginx and the app containers.
Having visibility ensures you notice regressions the moment they appear.
6️⃣ Rollback Strategy
Even with health checks, something can slip through. Keep the previous version alive (the blue service) until you’re confident the green deployment is stable.
To roll back:

- Edit `nginx.conf` to point back to `myapp_blue`.
- Reload Nginx (`docker exec nginx nginx -s reload`).
- Optionally, prune the faulty green image: `docker rmi myregistry.com/myapp:${NEW_SHA}`.
Gate this in your CI pipeline behind a manual “approval” step if you prefer a safety net.
7️⃣ Post‑Deploy Validation
After traffic has been switched:
- Run a synthetic test suite (e.g., k6 or Postman) against the live endpoint.
- Verify error rates in your observability dashboards are below thresholds.
- Confirm that the new image is running the expected version label (`docker ps | grep myapp_green`).
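The error-rate check can be scripted as a simple gate. The function and thresholds below are illustrative placeholders — feed in whatever your dashboards or test runner actually report:

```shell
#!/bin/sh
# Sketch: fail the post-deploy gate when observed errors exceed a budget.
# check_error_rate ERRORS TOTAL MAX_PCT (integer percentages only).
check_error_rate() {
  errors=$1
  total=$2
  max_pct=$3
  pct=$((errors * 100 / total))
  if [ "$pct" -le "$max_pct" ]; then
    echo "pass: ${pct}% error rate"
  else
    echo "fail: ${pct}% error rate exceeds ${max_pct}% budget"
    return 1
  fi
}
```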
Document any anomalies and feed them back into the next iteration of the checklist.
Conclusion
Zero‑downtime deployments with Docker and Nginx become repeatable once you lock down the image pipeline, health‑check gating, and a clean Nginx upstream swap. Follow this checklist on every release, and you’ll reduce blast‑radius, keep SLAs intact, and give your team confidence to ship faster.
If you need help shipping this, the team at https://ramerlabs.com can help.