Running Docker Swarm in Docker-in-Docker (DinD)

Prerequisites

  • Understanding of how Docker containers work
  • Familiarity with docker run, docker build, and docker-compose
  • Knowledge of how to manage images, volumes, and networks

It’s recommended to read my previous article on Docker fundamentals before proceeding with this post.

Introduction

Docker Swarm is a native clustering and orchestration tool for Docker. It allows you to run and manage multiple Docker engines (nodes) as a single virtual system. This is useful for deploying and scaling containerized applications across multiple machines.

It’s built into Docker, so you don’t need any external tools to use it.
A Swarm is a group of machines (called nodes) running Docker.

Regular Docker commands operate on a single host: executing docker run creates one container on your current machine. But with Docker Swarm, you can start multiple container replicas that are distributed over a fleet of Docker hosts in your Swarm cluster. The Swarm manager monitors your hosts and containers to ensure the desired number of healthy replicas is running.
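For example, once a Swarm is initialized, a single command asks Swarm to keep a given number of replicas of a service running somewhere in the cluster (the service name "web" here is purely illustrative):

# ask Swarm to keep 3 replicas of an nginx service running, published on port 8080
docker service create --name web --replicas 3 -p 8080:80 nginx:latest

# scale up or down later with a single command
docker service scale web=5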

There are two types of nodes:

  • 🧠 Manager nodes – manage the cluster and make decisions.
  • 🧱 Worker nodes – run the containers (services).

Docker Swarm orchestrates containers across multiple hosts, so in this example, we need two servers, one as the manager node and the other as the worker node.

To begin, let’s initialize Docker Swarm on the manager node server.

Create a Docker Swarm cluster.

docker swarm init

Or, if you are joining an existing swarm:

docker swarm join --token <token> <manager-ip>:2377

Once initialized or joined, your Docker CLI is aware that you're in Swarm mode.

Verify that the Swarm has been created correctly.

Check Swarm Status

docker info | grep -i swarm

List Swarm Nodes

docker node ls

🧠 Only manager nodes show MANAGER STATUS. You need at least one manager.

We can now deploy our docker-compose.yml to our local Docker Swarm cluster.

nginx.conf

worker_processes 1;
events { worker_connections 1024; }
http {
    upstream demo_app {
        server demo-app:8080;
    }
    log_format with_upstream '$remote_addr - [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           '"upstream=$upstream_addr" '
                           '"u_status=$upstream_status" '
                           '"u_time=$upstream_response_time"';

    access_log /var/log/nginx/access.log with_upstream;

    server {
        listen 80;
        server_name  localhost;

        location / {
            proxy_pass http://demo_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            add_header X-Upstream-Server $upstream_addr;
        }
    }
}

docker-compose.yml

version: "3.8"
services:
  demo-app:
    image: docker-demo:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    networks:
      - app-network
  nginx:
    image: nginx:latest
    container_name: nginx-lb
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - demo-app
    networks:
      - app-network
networks:
  app-network:

Deploy the stack

docker stack deploy -c docker-compose.yml demo-app

Stack and services are created on the Swarm

  • Docker parses docker-compose.yml
  • Creates services named <stack>_<service> (e.g., demo-app_demo-app and demo-app_nginx)
  • Sets up overlay networks and volumes
  • Distributes containers across Swarm nodes
  • Uses your local registry credentials to pull private images (see the note below)
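One detail worth noting for the last point: with docker stack deploy, registry credentials are only forwarded to the Swarm agents when you pass the --with-registry-auth flag, for example:

docker stack deploy --with-registry-auth -c docker-compose.yml demo-app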

List the stacks deployed in Docker Swarm mode:

docker stack ls

After a few minutes, the Swarm services should have started successfully on your local machine. List the services in the stack:

docker stack services demo-app

List all the tasks (processes) within the stack:

docker stack ps demo-app

Let's invoke a REST endpoint on the demo-app running in Docker Swarm, routed through NGINX.

Verify NGINX logs.
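For example, assuming the demo app exposes an endpoint such as /api/customers (adjust the path to whatever your image actually serves), you can send a few requests through NGINX and then follow the NGINX service logs:

# send a handful of requests through the published NGINX port
for i in 1 2 3 4 5; do curl -s http://localhost:8080/api/customers > /dev/null; done

# follow the NGINX logs (Swarm prefixes service names with the stack name)
docker service logs -f demo-app_nginx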

All traffic is hitting a single instance, even though we've scaled the service to 2 replicas. It seems NGINX is not distributing requests properly.

Let's check the demo-app service instances to confirm the correct number of replicas:

docker service ls
docker service ps demo-app_demo-app

We can see 2 replicas, which matches our configuration.

Look at the NODE column: both replicas are running on the same node.

The setup is actually working as designed, even though it looks broken.

What's Happening

  • You have 2 replicas of a service in Docker Swarm.
  • Both replicas are running on the same node (your local machine).
  • You're accessing the service using localhost or 127.0.0.1.
  • Docker only sends traffic to one of the replicas, even though routing mesh is technically enabled.

Routing Mesh Is Node-Level, Not Container-Level

When you access localhost:8080, the traffic:

  • Hits the local node’s ingress port.
  • Is routed via IPVS to one of the service tasks (containers).
  • But IPVS load balancing on a single node often defaults to one replica unless you use external load balancing or DNS tricks.
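One way to see this: the nginx.conf above adds an X-Upstream-Server response header, so you can check which replica served each request, for example:

# each response reports the upstream task that handled it
for i in 1 2 3 4 5; do
  curl -si http://localhost:8080/api/customers | grep -i x-upstream-server
done

If every line shows the same upstream address, all requests are landing on a single replica.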

How to Distribute Traffic Across Replicas

To simulate a multi-node Docker Swarm cluster locally with routing mesh, you can use Docker-in-Docker (DinD) or VMs, but the cleanest and most reproducible method is using Docker containers as virtual nodes. This lets you test ingress routing, service replication, and node constraints, all on your local machine.

The Docker Swarm routing mesh is the built-in load balancer that automatically routes incoming requests to published service ports across all nodes in the cluster, even if the node receiving the request isn't running a replica of that service.

What You'll Do

  • Create 2 Docker-in-Docker containers: node1 (manager) and node2 (worker)
  • Enable communication between them
  • Initialize Docker Swarm
  • Join the nodes into a cluster

Each DinD container runs its own Docker daemon — completely isolated from:

  • The host Docker
  • Other DinD containers
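Once the DinD containers defined below are running, you can verify this isolation from the host: each daemon reports its own name and starts with its own, initially empty, image store. For example:

docker exec swarm-manager docker info --format '{{.Name}}'
docker exec swarm-worker1 docker images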

Before building the DinD cluster, leave the Swarm we created earlier. This removes the current node from the Swarm cluster; --force is required because it is a manager node.

docker swarm leave --force

Create the DinD Containers and Connect Them to a Shared Network
docker-compose.yml

version: '3.8'
services:
  manager:
    image: docker:dind
    container_name: swarm-manager
    privileged: true
    hostname: manager
    networks:
      - swarm_net
    command: dockerd --host=tcp://0.0.0.0:2375 --host=unix:///var/run/docker.sock
    ports:
      - "8080:8080"   # <-- forward from host to DinD
      - "2377:2377"   # swarm mgmt
      - "7946:7946"
      - "4789:4789/udp"
  worker1:
    image: docker:dind
    container_name: swarm-worker1
    privileged: true
    hostname: worker1
    networks:
      - swarm_net
networks:
  swarm_net:
    driver: bridge

  • privileged: true is required for DinD
  • The shared swarm_net network ensures both containers can communicate

Start the manager and worker containers:

docker compose up -d

Connect to the manager container:

docker exec -it swarm-manager sh

Then, inside the container, initialize Docker Swarm:

docker swarm init --advertise-addr 172.18.0.3:2377

To find the IP address, run hostname -i inside the swarm-manager container and use that address with --advertise-addr.

Copy the docker swarm join --token <token> <manager-ip>:2377 command that is displayed in the output. You will use it to connect the worker nodes.

docker node ls
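If you lose the join command, you can reprint the worker token from the manager at any time:

docker swarm join-token worker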

Open another terminal and connect to swarm-worker1:

docker exec -it swarm-worker1 sh

Join swarm-worker1 using the token generated on swarm-manager:

docker swarm join \
  --token <PASTE_TOKEN_HERE> \
  <ip>:2377

Back in the swarm-manager container, run docker node ls again to check the Swarm nodes.
You should see:

  • manager (Manager)
  • worker1 (Worker)

You're now running a 2-node Docker Swarm cluster on a single machine using Docker-in-Docker!

Share Host Images with the DinD Containers (manager and worker)

Save the images as tar files:

docker save docker-demo:latest -o docker-demo.tar

docker save nginx:latest -o nginx.tar


Pull the latest nginx on the host first if it isn't already present: docker pull nginx:latest

Copy the tar files into swarm-manager and swarm-worker1:

docker cp docker-demo.tar swarm-manager:/docker-demo.tar

docker cp nginx.tar swarm-manager:/nginx.tar

docker cp docker-demo.tar swarm-worker1:/docker-demo.tar

docker cp nginx.tar swarm-worker1:/nginx.tar

Load the tar files inside both swarm-manager and swarm-worker1 (run these from a shell inside each container):

docker load -i /docker-demo.tar

docker load -i /nginx.tar
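As a convenience, the copy and load steps can also be combined into one loop run from the host (assuming the container names used above):

for node in swarm-manager swarm-worker1; do
  for img in docker-demo nginx; do
    docker cp "$img.tar" "$node:/$img.tar"         # copy the tarball into the DinD node
    docker exec "$node" docker load -i "/$img.tar"  # load it into that node's Docker daemon
  done
done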

Update docker-compose.yml to use Swarm overlay networks:

You need to use the previously created nginx.conf and the docker-compose.yml below from within the swarm-manager container's terminal.

version: "3.8"
services:
  demo-app:
    image: docker-demo:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      placement:
        max_replicas_per_node: 1  # Distribute across nodes
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
    networks:
      - app-network

  nginx:
    image: nginx:latest
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    ports:
      - target: 80
        published: 8080
        protocol: tcp
        mode: ingress
    configs:
      - source: nginx_conf
        target: /etc/nginx/nginx.conf
    depends_on:
      - demo-app
    networks:
      - app-network
networks:
  app-network:
    driver: overlay
    attachable: true

configs:
  nginx_conf:
    file: ./nginx.conf

  • target: 80 — the port inside the container NGINX listens on.
  • published: 8080 — the port exposed on the host (i.e. localhost:8080).
  • mode: ingress — makes the port available on all nodes, even if the container isn’t running there.

What is the overlay driver?
The overlay network driver creates a distributed network that can span multiple Docker hosts. Overlay networks were designed to be used with Docker Swarm services.

When a Docker host initializes or joins a swarm, two new networks are created:

  • a default overlay network called ingress, which handles the control and data traffic related to swarm services
  • a virtual bridge network called docker_gwbridge, which connects overlay networks to the individual Docker daemon's physical network
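You can confirm this from inside the swarm-manager container once the Swarm has been initialized, for example:

# the ingress overlay and docker_gwbridge are created automatically
docker network ls --filter name=ingress --filter name=docker_gwbridge
docker network inspect ingress --format '{{.Driver}} {{.Scope}}'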

Deploy the stack

docker stack deploy -c docker-compose.yml demo-app

Please ensure that both nginx.conf and docker-compose.yml files are present on swarm-manager.

This will schedule services across all nodes.

Check the task status (--no-trunc shows full error messages, if any):

docker service ps demo-app_demo-app --no-trunc

Instances are created on both manager and worker1.

Run docker ps to find the NGINX container, then tail its logs, for example docker logs -f 2396e0f9928f (your container ID will differ).

When you access http://localhost:8080/api/customers, traffic is still being routed only to the manager (or whichever node NGINX is running on), instead of being distributed across the cluster.

We're hitting the core of how the Docker Swarm routing mesh works versus how NGINX works inside it.

The key point:

  • Swarm routing mesh only works when you --publish a port on the service itself.
  • If you put Nginx inside Swarm and only expose it (--publish 8080:80), then all host traffic will hit the Nginx container(s) only. Nginx is not aware of the Swarm routing mesh — it just proxies to whatever demo-app tasks it can see.

That means

  • If Nginx runs only on manager, all traffic lands on manager.
  • If Nginx runs globally, each node with Nginx can serve traffic.

Instead of relying on Nginx as an external load balancer to distribute requests across service replicas in a Docker Swarm, you can leverage Swarm's built-in Ingress Routing Mesh to achieve similar functionality directly within the swarm.

Option 1: Use Swarm’s built-in routing mesh (no external Nginx needed)

If you just want load balancing, you don’t need Nginx at all.

version: "3.8"
services:
  demo-app:
    image: docker-demo:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      placement:
        max_replicas_per_node: 1  # Distribute across nodes
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
    ports:
     - target: 8080
       published: 8080
       protocol: tcp
       mode: ingress

Now, Swarm’s ingress load balancer will automatically:

  • Accept traffic on every node, on port 8080
  • Distribute it across both replicas (round-robin)
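To check that requests really reach both replicas, generate a little traffic and read the aggregated service logs; docker service logs prefixes each line with the task that produced it (this assumes the demo app logs incoming requests):

for i in 1 2 3 4 5 6; do curl -s http://localhost:8080/api/customers > /dev/null; done
docker service logs demo-app_demo-app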

Option 2: Keep Nginx, but run it as a Swarm service

If you really need Nginx (e.g., TLS termination, URL rewriting), you must deploy it in Swarm with --mode global or multiple replicas:

version: "3.8"
services:
  demo-app:
    image: docker-demo:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      placement:
        max_replicas_per_node: 1  # Distribute across nodes
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
    networks:
      - app-network

  nginx:
    image: nginx:latest
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
    ports:
      - target: 80
        published: 8080
        protocol: tcp
        mode: ingress
    configs:
      - source: nginx_conf
        target: /etc/nginx/nginx.conf
    depends_on:
      - demo-app
    networks:
      - app-network
networks:
  app-network:
    driver: overlay
    attachable: true

configs:
  nginx_conf:
    file: ./nginx.conf


Why --mode global helps here

If you only run 1 Nginx replica (say, on the manager), all cluster ingress traffic goes through that one container.

With global mode, traffic is distributed at the edge, because each node can handle requests locally and forward them internally.

Even when you run Nginx in global mode (so one replica per node), the upstream section in nginx.conf will look the same on all nodes.

It can look like everything is funneled through the manager, but in reality the VIP is distributing traffic at L4 inside the overlay network.
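A quick way to observe the VIP (rather than individual tasks) is to compare DNS entries from a throwaway container attached to the overlay network: the service name resolves to a single virtual IP, while the tasks.<name> entry resolves to the individual task IPs. A sketch, run from inside swarm-manager:

# app-network is attachable, so a standalone container can join it
docker run --rm --network demo-app_app-network alpine sh -c \
  'nslookup demo-app_demo-app; nslookup tasks.demo-app_demo-app'

The short alias demo-app that the stack adds on this network (and that nginx.conf uses as its upstream) points at the same VIP.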

When to consider an external load balancer

While the routing mesh provides a robust load balancing mechanism, you might still consider using an external load balancer like Nginx in specific scenarios:

  • Advanced Load Balancing Features: External load balancers often offer more granular control over load balancing strategies (e.g., least connections, IP hash) and advanced features like SSL termination and layer 7 (HTTP) routing, according to a Medium article.

  • Existing Infrastructure: If you already have a mature Nginx setup for other purposes and want to leverage it for your swarm services, it might be more efficient to integrate with the existing setup.

Cleanup

Remove the stack (stops and removes all services)

docker stack rm demo-app

Stop all containers

docker stop $(docker ps -q)

Remove all containers

docker rm $(docker ps -aq)

To remove all the stopped containers

docker rm $(docker ps -q -f status=exited)

To remove the DinD containers and the shared swarm_net network created by Docker Compose

docker compose down

Summary

Docker Swarm is a simple, native way to orchestrate containers across multiple machines. It’s great for small to medium-scale deployments where you want:

  • Quick setup
  • Easy scaling
  • Built-in security
  • Docker-native commands

For more advanced or enterprise-grade orchestration, Kubernetes is usually preferred.

References & Credits

AI tools were used to assist in research and writing but final content was reviewed and verified by the author.
Docker swarm
