Introduction
Ensuring minimal downtime and maintaining continuous service availability are crucial aspects of delivering software applications to users. Users expect a seamless experience at all times, even when failures occur behind the scenes. Servers or applications may fail due to high traffic, malfunctioning code, or unexpected system errors. When these situations arise, it is essential to keep services running with as little interruption as possible.
In this article, we will explore how to maintain application availability using the Blue/Green Deployment strategy. As the name suggests, this method relies on two environments: Blue and Green. These environments may be two separate servers or two instances of the same application running on a single server.
To demonstrate this concept, we will run two versions of our application, blue and green, inside two separate Docker containers. The Blue server will act as the primary server, while the Green server will function as the backup. The following sections provide a walkthrough of the setup.
Nginx Server
Nginx is a high-performance, open source web server commonly used for serving static content, reverse proxying, load balancing, and caching. In this setup, Nginx will function as a reverse proxy and a load balancer, receiving all user requests and forwarding them to either the Blue or Green application server.
In the configuration file (nginx.conf.template), the upstream servers are defined using Nginx’s upstream block. There are two upstream configurations:
- blue_primary: Blue is active, Green is backup
- green_primary: Green is active, Blue is backup
The ACTIVE_POOL environment variable controls which upstream group is selected.
user nginx;
worker_processes auto;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" status=$status body_bytes_sent=$body_bytes_sent '
                    'pool=$upstream_http_x_app_pool release=$upstream_http_x_release_id '
                    'upstream_status=$upstream_status upstream_addr=$upstream_addr '
                    'request_time=$request_time upstream_response_time=$upstream_response_time';

    error_log /var/log/nginx/error.log warn;
    access_log /var/log/nginx/access.log main;

    upstream blue_primary {
        server app_blue:4000 max_fails=1 fail_timeout=5s;
        server app_green:4000 backup;
    }

    upstream green_primary {
        server app_green:4000 max_fails=1 fail_timeout=5s;
        server app_blue:4000 backup;
    }

    map "${ACTIVE_POOL}" $active_backend {
        default blue_primary;
        blue    blue_primary;
        green   green_primary;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://$active_backend;
            proxy_pass_header X-App-Pool;
            proxy_pass_header X-Release-Id;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 1s;
            proxy_send_timeout 1s;
            proxy_read_timeout 1s;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_next_upstream_tries 2;
        }
    }
}
For simulation purposes, the proxy timeouts are set to very low values (one second) to trigger fast failover during testing.
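In a real deployment you would typically use less aggressive values so that slow but healthy responses are not treated as failures; an illustrative example (not taken from the demo configuration):

# More forgiving proxy timeouts for production traffic (illustrative values)
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;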
Upstream Servers
In Nginx terminology, upstream servers are the backend servers that Nginx forwards client requests to. Nginx does not execute application logic; rather, it simply accepts incoming traffic and routes it to one or more upstream servers defined in its configuration.
An upstream group may contain:
- One or multiple application servers
- A load-balancing strategy
- Failover rules
- Backup servers that only activate when the primary becomes unavailable
In a Blue/Green deployment, the upstream servers represent the two versions of the application. Nginx sends traffic to the active server (Blue or Green) based on the ACTIVE_POOL variable, while the other server remains on standby. This enables seamless switching and minimises downtime during failures or deployments.
The upstream servers in our setup are two identical Docker containers created from the same Docker image. This ensures consistency and prevents users from experiencing different behaviours during failover.
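Because each container also publishes its own host port in the Compose file below (8081 for Blue, 8082 for Green), you can query the two servers directly and confirm they respond the same way. A minimal sketch, assuming the demo application exposes the /version endpoint used later in this walkthrough:

# Hit each container directly, bypassing Nginx, and compare the release headers
curl -si http://localhost:8081/version | grep -i x-release-id   # Blue
curl -si http://localhost:8082/version | grep -i x-release-id   # Green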
Below is the Docker Compose configuration managing Nginx, Blue, and Green containers:
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    volumes:
      - ./nginx.conf.template:/etc/nginx/nginx.conf.template:ro
      - ./logs:/var/log/nginx
    ports:
      - "8080:80"
    environment:
      - ACTIVE_POOL=${ACTIVE_POOL}
    command: >
      /bin/sh -c "envsubst '$$ACTIVE_POOL' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
    depends_on:
      - app_blue
      - app_green

  app_blue:
    image: ${BLUE_IMAGE}
    container_name: app_blue
    ports:
      - "8081:${PORT}"
    environment:
      - PORT=${PORT}
      - APP_POOL=blue
      - RELEASE_ID=${RELEASE_ID_BLUE}

  app_green:
    image: ${GREEN_IMAGE}
    container_name: app_green
    ports:
      - "8082:${PORT}"
    environment:
      - PORT=${PORT}
      - APP_POOL=green
      - RELEASE_ID=${RELEASE_ID_GREEN}
For this demonstration, the app_pool and release_id values, returned by the application as the X-App-Pool and X-Release-Id response headers, help differentiate between the two servers.
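Since ACTIVE_POOL is the only setting that decides which upstream group receives traffic, switching pools is a small operation. A minimal sketch, assuming the variables are kept in a .env file next to docker-compose.yml (as in the next section):

# Promote Green to the active pool (assumes ACTIVE_POOL is defined in .env)
sed -i 's/^ACTIVE_POOL=.*/ACTIVE_POOL=green/' .env
# Recreate only the Nginx container so envsubst regenerates nginx.conf with green_primary
docker compose up -d --force-recreate nginx

No image rebuild is required; only the Nginx container is recreated.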
Step-by-Step Guide
1. Set up Docker Compose
Create your Docker services as shown in the Docker Compose file above: Blue (active) and Green (backup), both running identical applications.
2. Define environment variables
These variables select the container images, the active pool, the application port, and unique release identifiers for each server; Docker Compose reads them automatically from a .env file placed next to docker-compose.yml:
BLUE_IMAGE=yimikaade/wonderful:devops-stage-two
GREEN_IMAGE=yimikaade/wonderful:devops-stage-two
ACTIVE_POOL=blue
RELEASE_ID_BLUE=blue:v1.0.0
RELEASE_ID_GREEN=green:v1.0.0
PORT=4000
3. Configure Nginx
Use the provided template to dynamically generate the active configuration using envsubst.
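To see what envsubst produces before starting the stack, you can render the template locally (a sketch; envsubst ships with the GNU gettext tools, and the Nginx container runs the same substitution at startup):

# Render the template with Blue active and inspect the generated map block
ACTIVE_POOL=blue envsubst '$ACTIVE_POOL' < nginx.conf.template | grep -A 4 'map '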
4. Start the services
Ensure Docker and Docker Compose are running, then start the system:
docker compose up -d --build
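Once the command completes, a quick status check confirms that the nginx, app_blue, and app_green containers are all running:

docker compose ps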
5. Test the Nginx endpoint
Visit http://<IP>:8080. If you are running on your local machine, the IP is localhost or 127.0.0.1.
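The same check from the command line, assuming the stack is running locally:

curl -i http://localhost:8080/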
6. Check the version headers
Access http://<IP>:8080/version to check the version and the server headers. A sample check from the command line is shown below.
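A minimal sketch with curl; the header values shown are illustrative and depend on the ACTIVE_POOL and RELEASE_ID values you set earlier:

# Request the version endpoint through the proxy and filter the identifying headers
curl -si http://localhost:8080/version | grep -iE 'x-app-pool|x-release-id'
# Illustrative output:
#   x-app-pool: blue
#   x-release-id: blue:v1.0.0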
7. Introduce failure on the Blue server
Trigger chaos mode with POST http://<IP>:8081/chaos/start?mode=timeout or POST http://<IP>:8081/chaos/start?mode=error so that Nginx can no longer get a healthy response from the Blue server.
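The same request with curl, using the Blue container's host port published in the Compose file:

curl -X POST "http://localhost:8081/chaos/start?mode=error"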
Example response:
{
"message": "Simulation mode 'error' activated"
}
8. Verify automatic failover
Check the version endpoint again; you should now see the Green server is responding to requests.
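You can confirm the switch both through the proxy's response headers and through the pool= field that the Nginx log format writes to ./logs/access.log. A minimal sketch, assuming the application sets the X-App-Pool header as the configuration expects:

# Both the header and the pool= log field should now report green
curl -si http://localhost:8080/version | grep -i x-app-pool
tail -n 5 logs/access.log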
9. Stop the chaos simulation
POST http://<IP>:8081/chaos/stop
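As in step 7, the call can be made with curl:

curl -X POST http://localhost:8081/chaos/stop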
The Blue server becomes the active server again and resumes serving requests.
10. Shut down your environment
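When you are done, stop and remove the containers:

docker compose down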
Conclusion
This walkthrough demonstrates how to implement a Blue/Green deployment strategy with automatic failover using Nginx as a reverse proxy and load balancer. For failover to work correctly, ensure that the backup server is marked as backup in the Nginx configuration.
In the next article, I will explain how to monitor which server is active at any given time.
Before concluding, I want to briefly introduce backend.im, a developer-friendly platform for deploying applications using the Claude Code CLI directly from your desktop. It integrates seamlessly with Claude CLI, allowing you to provision infrastructure and deploy code with just a few commands. I will cover this in more detail soon.



