Running Multiple Spring Boot Containers with NGINX Reverse Proxy & Docker Compose

🎯 Target Audience

  • Beginners in DevOps, cloud computing, or software development
  • Developers transitioning to containerized environments
  • If you haven't already, please go through the previous article to learn how the Docker image is built.

1. Introduction

In this post, we’ll walk through the process of containerizing a Spring Boot application with Docker and using NGINX as a reverse proxy to route traffic to it. This is a common and powerful pattern that enables:

  • Easier deployment across environments,
  • Load balancing multiple instances,
  • and clean separation between your application and network layer.

Whether you're deploying on a local machine, a cloud VM, or inside a larger microservices architecture, this setup provides a solid foundation for scalable, maintainable Spring Boot applications.

By the end of this guide, you'll have:

  • A Dockerized Spring Boot app
  • An NGINX container acting as a reverse proxy
  • A working setup where all traffic to localhost:8080 is cleanly routed to your app via NGINX

2. Build and Containerize the Spring Boot Application

You can quickly generate a Spring Boot project using Spring Initializr. Add the Spring Web dependency (plus Spring Boot Actuator, since the Dockerfile below health-checks /actuator/health) and generate the project with a sample REST controller.
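
Below is a minimal sketch of what such a controller might look like, matching the /customers output shown later in this article. The package and class names are illustrative, not the exact code from the sample project.

package com.example.dockerdemo;

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    // Simple immutable DTO; Jackson serializes Java records out of the box
    record Customer(long id, String name, String email, String phone) {}

    // Returns the hardcoded customer list used throughout this article
    @GetMapping("/customers")
    public List<Customer> customers() {
        return List.of(
            new Customer(1, "John Doe", "john@example.com", "123-456-7890"),
            new Customer(2, "Jane Smith", "jane@example.com", "098-765-4321")
        );
    }
}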

Prerequisites
Before you start, please ensure:

  • Docker (Engine or Docker Desktop) is installed and running
  • A compatible OS: Linux, macOS, or Windows

A sample Dockerfile from Build Better Containers:
# -------- Stage 1: Build with Maven --------
# Use Eclipse Temurin JDK 17 with Alpine Linux
FROM eclipse-temurin:17-jdk-alpine AS builder
# Set working directory
WORKDIR /app
# Copy pom.xml and the Maven wrapper
COPY ./pom.xml ./pom.xml
COPY ./mvnw ./mvnw
COPY ./.mvn ./.mvn
# Make Maven wrapper executable and download dependencies
RUN chmod +x ./mvnw && ./mvnw dependency:go-offline
# Copy source files and build
COPY src ./src/
# Build the application
RUN ./mvnw clean package -DskipTests && mv target/docker-demo-0.0.1.jar docker-demo.jar && rm -rf target
# -------- Stage 2: Runtime --------
FROM eclipse-temurin:17-jre-alpine AS runtime
# Set the working directory
WORKDIR /app
# Define build arguments for user and group
ARG USER_ID=1001
ARG GROUP_ID=1001
ARG USERNAME=springuser
ARG GROUPNAME=springuser
# Create group and user using ARGs
RUN addgroup -g ${GROUP_ID} ${GROUPNAME} \
    && adduser -u ${USER_ID} -G ${GROUPNAME} -s /bin/sh -D ${USERNAME}
# Copy built JAR from builder stage
COPY --from=builder --chown=${USERNAME}:${GROUPNAME} /app/docker-demo.jar docker-demo.jar
# Switch to non-root user
USER ${USERNAME}
# Expose application port
EXPOSE 8080
# Health check using wget (already available in Alpine; no additional package needed)
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit 1
ENTRYPOINT ["java","-jar","-Dserver.port=8080","/app/docker-demo.jar"]

Build a Docker image

docker build -t docker-demo:latest .

Verify newly created image

 docker images docker-demo


Launch a new container on port 8080 from the docker-demo:latest image

 docker run -d -p 8080:8080 docker-demo:latest

Verify Docker process

Our demo application is currently running inside a Docker container, and its services are exposed via port 8080 on our local machine.
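To confirm it's up, list the running containers (standard Docker CLI; your container ID and name will differ):

docker ps --filter ancestor=docker-demo:latest
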
The demo app includes an endpoint, /customers, which retrieves customer details. To test this endpoint and verify its functionality, you can use the curl command as follows:

curl -s -H "Accept: application/json" http://localhost:8080/customers

Output:

[
   {
      "id":1,
      "name":"John Doe",
      "email":"john@example.com",
      "phone":"123-456-7890"
   },
   {
      "id":2,
      "name":"Jane Smith",
      "email":"jane@example.com",
      "phone":"098-765-4321"
   }
]

Now that we've successfully started a container exposing port 8080, let's explore how to run additional containers of the same application. Since two containers cannot bind to the same host port simultaneously, we need to assign distinct host ports for each instance.
For example:

  • First container: -p 8080:8080
  • Second container: -p 8081:8080
  • Third container: -p 8082:8080
# Second instance
docker run -d -p 8081:8080 docker-demo:latest

# Third instance
docker run -d -p 8082:8080 docker-demo:latest

Validate all instances:

 docker ps


Docker automatically assigns a randomly generated name to each container if you don't specify one with the --name flag.
Each container runs independently, and you can access them via:

curl -s http://localhost:8080/customers
curl -s http://localhost:8081/customers
curl -s http://localhost:8082/customers

When you're running multiple instances of the same microservice in Docker containers, like your Java demo app, load balancing becomes crucial. It's the mechanism that ensures incoming requests are distributed efficiently across all the available instances, preventing any single instance from becoming a bottleneck and improving the overall reliability and performance of your application.

3. Load Balance Multiple Docker Containers (Microservices)

Here's how to implement load balancing for your Dockerized microservices:

Option 1: Docker + Reverse Proxy (e.g. Nginx)

NGINX is a very popular and powerful web server that excels as a reverse proxy and load balancer, especially for Dockerized microservices.
Let’s walk through a practical example using NGINX as the load balancer and multiple instances of a service.

A lightweight and popular solution for both dev and prod environments.

  • ✅ Works well with Docker networks
  • ✅ Supports round-robin, least connections, and more
  • 🛠 Requires manual config (nginx.conf)
  • ❌ No automatic service discovery

Use case: Simple setups or learning environments.

Prerequisites

  • 3 containers running the same app (docker-demo)
  • All running on a shared Docker network
  • Load balancer container (NGINX) distributes traffic

Step 1: Create a Docker network so that containers can reach each other by name

docker network create demo-net

List all Docker networks:

docker network ls

Step 2: Run multiple app instances on the same network. (If the unnamed containers from the previous section are still running, stop them first so host ports 8081 and 8082 are free.)

docker run -d \
  --name docker-demo-1 \
  --network demo-net \
  -p 8081:8080 \
  docker-demo:latest

docker run -d \
  --name docker-demo-2 \
  --network demo-net \
  -p 8082:8080 \
  docker-demo:latest

docker run -d \
  --name docker-demo-3 \
  --network demo-net \
  -p 8083:8080 \
  docker-demo:latest


Validate all instances:

docker ps --filter "name=docker-demo"

Confirm the containers are attached to the demo-net network:

docker network inspect demo-net

Or check the Networks column directly:

docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Networks}}"

Step 3: Create Your Nginx Configuration
Create a file nginx.conf:

worker_processes 1;
events { worker_connections 1024; }
http {
    upstream demo_app {
        server docker-demo-1:8080;
        server docker-demo-2:8080;
        server docker-demo-3:8080;
    }

    log_format with_upstream '$remote_addr - [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"upstream=$upstream_addr" '
                             '"u_status=$upstream_status" '
                             '"u_time=$upstream_response_time"';

    access_log /var/log/nginx/access.log with_upstream;

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://demo_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            add_header X-Upstream-Server $upstream_addr;
        }
    }
}
  • worker_processes 1 - Single nginx worker process (suitable for light loads)
  • worker_connections 1024 - Each worker handles up to 1024 concurrent connections
  • Defines three backend servers using Docker container names
  • nginx uses container names as hostnames (docker-demo-1)
  • Connects to internal ports (8080)
  • External port mappings (-p) are irrelevant for container-to-container communication
  • Responds to requests for localhost
  • $upstream_addr – the address (name:port) of the upstream that handled the request
  • $upstream_status – the HTTP status returned by that upstream
  • $upstream_response_time – the time it took the upstream to respond

Run the NGINX load balancer on the same network, publishing host port 8080:

docker run -d \
  --name nginx-lb \
  --network demo-net \
  -p 8080:80 \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx

Alternatively, create a Dockerfile containing the NGINX reverse proxy setup, then build the image and run the container (commands follow the Dockerfile):

FROM nginx:latest

COPY nginx.conf /etc/nginx/nginx.conf
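Assuming this Dockerfile lives in its own directory next to nginx.conf (the layout and the tag nginx-lb-image are assumptions for illustration):

docker build -t nginx-lb-image .
docker run -d --name nginx-lb --network demo-net -p 8080:80 nginx-lb-image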

Validate process:

docker ps --filter "name=nginx-lb"

To test the setup, try accessing the REST endpoint at http://localhost:8080/customers in your browser or using curl. You should receive a successful response from one of the backend containers.

To inspect NGINX logs, tail the access logs and look for the upstream= field to identify which backend container served the request.

docker logs -f nginx-lb


Since we have added an X-Upstream-Server header in the NGINX config, we can see it in the REST response as well.
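
For example (the header value is illustrative; the address rotates across requests):

curl -si http://localhost:8080/customers | grep -i x-upstream-server
# X-Upstream-Server: 172.18.0.2:8080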

What happens when one of the application servers fails?

When one of the backend application servers (containers) behind an NGINX reverse proxy fails, the behavior depends on how NGINX is configured and what kind of failure occurs.

What Happens by Default (Without Health Checks)

Scenario:

  • docker-demo-1, docker-demo-2, and docker-demo-3 behind NGINX (using upstream)
  • NGINX load balances across them

If, for example, docker-demo-2 crashes or becomes unreachable, here's what happens:

NGINX Doesn’t Automatically Know It's Down
NGINX does not perform active health checks by default, so it keeps sending traffic to the failed server until a request actually errors out or times out. Silent failover is the default behavior in NGINX when using an upstream block with multiple servers.

Here’s what NGINX does by default (you can simulate this yourself, as shown after this list):

  • ✅ Routes a request to the failed server (docker-demo-2) as part of its round-robin rotation.
  • ❌ The attempt fails (e.g. connection refused, no route to host), and NGINX logs the error.
  • 🔁 NGINX then automatically retries the next server in the list.
  • 🎯 If any server responds, the client gets a valid response.
  • 🧾 The failure is logged, but not surfaced to the client.
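
A quick way to observe this (container names match the docker run commands above):

# Stop one backend to simulate a crash
docker stop docker-demo-2

# Requests still succeed; NGINX quietly retries another upstream
curl -s http://localhost:8080/customers

# The connection error and retry show up in the NGINX logs
docker logs --tail 5 nginx-lb

# Bring the backend back when done
docker start docker-demo-2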

How to Handle It Better

To handle NGINX failover more effectively, especially when backend servers (containers or services) may fail, you can improve your setup using a combination of configuration changes and monitoring strategies.

1. Use max_fails and fail_timeout in the upstream block
This limits how many times NGINX will try a failing server before temporarily marking it as unavailable.

upstream demo_app {
    server docker-demo-1:8080 max_fails=3 fail_timeout=30s;
    server docker-demo-2:8080 max_fails=3 fail_timeout=30s;
    server docker-demo-3:8080 max_fails=3 fail_timeout=30s;
}
  • max_fails=3: the number of failed attempts allowed within the fail_timeout window
  • fail_timeout=30s: once that limit is hit, the server is marked unavailable and taken out of rotation for 30 seconds

2. Configure proxy_next_upstream for Fine-Grained Failover Control
Controls what NGINX considers a failure worth retrying:

location / {
    proxy_pass http://demo_app;
    proxy_next_upstream error timeout http_502 http_503 http_504;
    proxy_next_upstream_tries 2;
}
  • Limits retries to specific failure types
  • proxy_next_upstream_tries 2 limits how many times NGINX will retry across upstream servers
  • Avoids retrying on all types of errors (e.g., HTTP 404s)

Active Health Checks
NGINX Open Source doesn’t support active health checks out of the box. You’d need:

  • NGINX Plus (commercial), or
  • external tools such as Consul, Traefik, or HAProxy.

Best Practice

Run application containers with a restart policy so Docker brings them back automatically:

docker run -d --restart unless-stopped docker-demo:latest

This tells Docker:

  • ✅ Restart the container automatically if it crashes or the Docker daemon restarts.
  • 🔁 Restart the container on system reboot.
  • 🛑 Do not restart the container if you manually stop it (e.g. docker stop docker-demo-1).
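
For containers that are already running, the same policy can be applied in place with docker update:

docker update --restart unless-stopped docker-demo-1 docker-demo-2 docker-demo-3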

Clean up:

docker stop docker-demo-1 docker-demo-2 docker-demo-3 nginx-lb
docker rm docker-demo-1 docker-demo-2 docker-demo-3 nginx-lb
docker network rm demo-net

Conclusion

  • Any container on a user-defined network can find any other container on the same network by using its service name as a hostname.
  • When a client sends a request to the NGINX server, NGINX checks its configuration to see where to forward the request. In our case, three backend containers (docker-demo-1, docker-demo-2, docker-demo-3) all listen on container port 8080, and NGINX forwards each request to one of them based on its load-balancing algorithm. It uses round-robin load balancing by default.
  • SSL Termination: NGINX can handle all the HTTPS complexity, decrypting incoming requests and forwarding them as plain HTTP to your backend services. A minimal sketch follows.
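
Here is a hedged sketch of what SSL termination could look like in the same nginx.conf; the certificate paths are placeholders you would mount into the container:

server {
    listen 443 ssl;
    server_name localhost;

    # Illustrative paths; mount real cert/key files into the container
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        # Backends still speak plain HTTP
        proxy_pass http://demo_app;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}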

Option 2: Docker Compose + NGINX

When using NGINX without Docker Compose, you have to manually run each container, create networks, and link them together using docker run commands. It gives you full control but can quickly become repetitive, error-prone, and harder to scale or maintain.

In contrast, using Docker Compose simplifies everything. You define your app, NGINX, and their relationships in a single docker-compose.yml file. Networking, container names, startup order, and volume mounting are handled automatically. It's much easier to manage, scale, and share with others.

In short:

  • Use Docker Compose for multi-container setups and repeatable environments.
  • Use the manual approach only for quick experiments or very simple use cases.

Combines container orchestration and NGINX in a docker-compose.yml file.

  • ✅ Easier management of multiple containers
  • ✅ Simplified service-to-service communication
  • ✅ Can auto-restart services on failure
  • ❌ Still requires static configuration

Use case: Local development, PoCs, small apps.

Goal

  • Run 3 instances of a Spring Boot app
  • Use NGINX as a reverse proxy/load balancer
  • Use Docker Compose to orchestrate everything

Docker Compose (docker-compose.yml)
Make sure your docker-compose.yml file is in the same folder as the nginx.conf file.

version: "3.8"

services:
  docker-demo-1:
    image: docker-demo:latest 
    container_name: docker-demo-1
    networks:
      - app-network

  docker-demo-2:
    image: docker-demo:latest 
    container_name: docker-demo-2
    networks:
      - app-network

  docker-demo-3:
    image: docker-demo:latest 
    container_name: docker-demo-3
    networks:
      - app-network

  nginx:
    image: nginx:latest
    container_name: nginx-lb
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - docker-demo-1
      - docker-demo-2
      - docker-demo-3
    networks:
      - app-network

networks:
  app-network:


This command is used to start your Docker Compose application while ensuring that all images are rebuilt before running the containers.

  • docker-compose up:
    Starts the services defined in your docker-compose.yml file. If the required images don’t exist locally, Docker Compose will pull them from a registry or build them if build instructions are provided.

  • --build:
    Forces Docker Compose to build (or rebuild) any images that have a build: section in your compose file before starting the containers. This is useful when you’ve changed your Dockerfiles or application code and want the containers to run the latest version. (Our sample file references prebuilt images only, so the flag has no effect here until you add build: instructions.)

docker-compose up --build

Results


The REST endpoint is working fine.

Traffic is balanced across all container instances.

Use Docker network aliases and service names, not container names

In Docker Compose, services are discoverable by their service name via internal DNS, so container_name is unnecessary. Avoid hardcoding container names in the configuration file to prevent naming conflicts and improve portability.
docker-compose.yml

version: "3.8"
services:
  demo-app:
    image: docker-demo:latest
    networks:
      - app-network
  nginx:
    image: nginx:latest
    container_name: nginx-lb
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - demo-app
    networks:
      - app-network
networks:
  app-network:

Then, inside nginx.conf, refer to demo-app as the hostname; Docker DNS will resolve it to the running containers for that service (when scaled). Only the upstream block changes; the rest of the file stays the same:

http {
    upstream demo_app {
        server demo-app:8080;
    }
    # ... rest of the configuration unchanged ...
}

Before starting your Docker Compose application, stop and remove any previously running containers to avoid port and name conflicts.
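
For example:

# Remove the manually started containers from Option 1, if still running
docker stop docker-demo-1 docker-demo-2 docker-demo-3 nginx-lb
docker rm docker-demo-1 docker-demo-2 docker-demo-3 nginx-lb

# Or, if a previous Compose stack is running in this folder
docker-compose down

Then start the stack: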

docker-compose up --build


The application is now up and running, but currently only a single instance is active. To enable failover and improve resilience, we need to run multiple instances. This can be achieved using the --scale option in Docker Compose.

docker-compose up --scale demo-app=3 -d
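
You can list the scaled replicas with docker-compose ps (exact replica names such as demo-app-1, demo-app-2 depend on your Compose version and project name):

docker-compose ps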


Let's verify that NGINX is distributing traffic across all scaled instances.

You may find that NGINX routes all traffic to just one instance: NGINX resolves the demo-app hostname once, when it loads its configuration, so it caches whatever IPs existed at startup even after you scale the service. If you scale up after NGINX is already running, it won't pick up the new IPs. To fix that, restart NGINX so it re-resolves the name:

docker-compose restart nginx

Now NGINX is distributing traffic to all instances.
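
To confirm, hit the endpoint a few times and watch the X-Upstream-Server header rotate (assuming you kept the add_header line from the earlier config; the addresses shown are illustrative):

for i in 1 2 3 4 5 6; do
  curl -si http://localhost:8080/customers | grep -i x-upstream-server
done
# X-Upstream-Server: 172.19.0.2:8080
# X-Upstream-Server: 172.19.0.3:8080
# X-Upstream-Server: 172.19.0.4:8080
# ...

If restarting NGINX by hand feels brittle, a common workaround (a sketch, not part of the original setup) is to let NGINX re-resolve the service name at request time via Docker's embedded DNS server at 127.0.0.11. When proxy_pass uses a variable, NGINX defers DNS resolution to request time and honors the resolver's TTL:

http {
    server {
        listen 80;
        resolver 127.0.0.11 valid=10s;

        location / {
            # Using a variable forces per-request resolution via the resolver
            set $backend http://demo-app:8080;
            proxy_pass $backend;
        }
    }
}

Note that this bypasses the upstream block (and its max_fails settings), so it trades a static peer list for fresh DNS answers.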

Summary: Docker Compose + NGINX for Load Balancing

Pros:

  • Docker Compose with NGINX offers a clean, reproducible way to route traffic across containerized microservices. It enables basic load balancing, modular service orchestration, and secure reverse proxying, making it ideal for local development and small-scale deployments.

Cons / Limitations:

  • Static service definitions; no dynamic scaling or auto-discovery
  • Single-node scope; not suitable for multi-host clusters
  • Manual config updates and limited fault tolerance
  • Requires extra setup for SSL, observability, and advanced routing
  • Not ideal for simulating distributed failures or service mesh patterns

If you're running all Docker containers on the same host, then Disaster Recovery (DR) is not really possible in the traditional sense, because there's a single point of failure: the host itself.

  • If the host machine (e.g., a physical server or VM) crashes, dies, or is compromised, all your containers and their data are lost.
  • Docker containers are ephemeral by default. Unless you're persisting data (e.g., with volumes), containers can't recover their state after a crash.
  • There is no redundancy and no failover.

If high availability or DR is critical:

  • Use an advanced orchestration tool.
  • Deploy containers across multiple nodes (hosts).
  • Use replication, health checks, and failover strategies.
  • Replicate data using stateful sets or external databases with their own DR.
Overall, Docker Compose with NGINX is a great stepping stone for learning containerized load balancing before moving to more advanced orchestration solutions like Docker Swarm, Kubernetes (K8s), or OpenShift.

Summary Table

  • docker run + NGINX: full control and works on plain Docker networks, but manual, repetitive, and error-prone. Best for quick experiments and learning.
  • Docker Compose + NGINX: single-file setup, service discovery by name, and auto-restarts, but static configuration on a single host with no auto-discovery. Best for local development, PoCs, and small apps.
