Deepak Sabhrawal
Docker swarm cluster using docker-machine

The docker swarm tool is used for managing a cluster of docker hosts, i.e. orchestrating containers across hosts. In this post we will see how to create a docker swarm cluster locally using VirtualBox and docker-machine. docker-machine provisions lightweight, Docker-ready virtual machines (based on boot2docker) that boot much faster than full-fledged VMs on VirtualBox.

Why docker swarm?

  1. High availability
  2. Container scaling
  3. Load Balancing

We can have many nodes in a cluster, but at least one manager node is required to manage the worker nodes. From version 1.12 onwards, swarm mode comes natively with Docker Engine and no separate installation is required.

The manager node is responsible for cluster operations such as high availability, scaling, and load balancing, and it can also act as a worker node to run workloads if required.

We will be using docker-machine here; refer to the official docker-machine documentation for an installation guide.

Create docker hosted nodes and attach to current shell

Create one manager node using docker-machine
$ docker-machine create --driver virtualbox manager
# Get the IP address of the manager node
$ docker-machine ip manager
192.168.99.105
# Attach the current shell to the manager node
$ eval $(docker-machine env manager)
# Verify which machine is currently active
$ docker-machine active
manager
# The current shell is now attached to the manager node

Initialize the docker swarm cluster on the manager node

$ docker swarm init --advertise-addr 192.168.99.105
Swarm initialized: current node (jrivcbnx4jh6opbrm1qed84ue) is now a manager.

To add a worker to this swarm, run the following command:

   docker swarm join --token SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Check Manager Node

$docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jrivcbnx4jh6opbrm1qed84ue *   manager             Ready               Active              Leader              19.03.5

Till now we created one manager node using docker-machine and initialized the docker swarm cluster on it.
Now, we will create the worker nodes. You can open another shell to run these commands, or run the command below to detach the current shell from the manager node's environment; the same mechanism lets you switch between docker-machine environments.

# Unset the environment to get back to the local shell
$ eval $(docker-machine env -u)

Now, let's create a worker node and switch working environment.

$docker-machine create --driver virtualbox worker1
Running pre-create checks...
Creating machine...
(worker1) Copying /home/deepak/.docker/machine/cache/boot2docker.iso to /home/deepak/.docker/machine/machines/worker1/boot2docker.iso...
(worker1) Creating VirtualBox VM...
(worker1) Creating SSH key...
(worker1) Starting the VM...
(worker1) Check network to re-create if needed...
(worker1) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env worker1

# Run this command to configure your shell to attach to worker1 
$eval $(docker-machine env worker1)
$docker-machine active
worker1

Register this worker node with the swarm cluster using the join command we got in the output of docker swarm init on the manager node.

$docker swarm join --token SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377
#output: This node joined a swarm as a worker.

Now, switch the docker-machine environment and check the registration on the manager node.

#switch to the manager node & check
$eval $(docker-machine env manager)
$docker-machine active
manager
$docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jrivcbnx4jh6opbrm1qed84ue *   manager             Ready               Active              Leader              19.03.5
ymq8yt76ogpywk2eb6rzt9au1     worker1             Ready               Active                                  19.03.5

Repeat the same steps to create a worker2 node and add it to the swarm cluster.
The docker node ls command on the manager node should then return:

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jrivcbnx4jh6opbrm1qed84ue *   manager             Ready               Active              Leader              19.03.5
ymq8yt76ogpywk2eb6rzt9au1     worker1             Ready               Active                                  19.03.5
ovr9a15sv0gw2lc68k756qth2     worker2             Ready               Active                                  19.03.5

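The steps above for worker1 can be repeated verbatim for worker2; here is a condensed sketch, reusing the join token printed by docker swarm init earlier:

```shell
# Create the worker2 VM (same driver as before)
docker-machine create --driver virtualbox worker2

# Point the current shell at worker2
eval $(docker-machine env worker2)

# Join the swarm as a worker, using the token from `docker swarm init`
docker swarm join --token SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377

# Switch back to the manager and verify the registration
eval $(docker-machine env manager)
docker node ls
```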

Now, we have one manager node and two worker nodes active in our swarm cluster. We can add as many manager or worker nodes as we like. Run the commands below on the manager node to get the respective join tokens, then use the matching token to register a new node in the cluster.

$docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-0kpbj4agn8ptyzcmegg6c4hnk 192.168.99.105:2377

$docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-2wb0sivbsxsx0t3va8wv0qywa303q1mcc2syplrwj0q1301kfs-8qfc9yhwst06y8c1gusftrjkl 192.168.99.105:2377


Now, we will create a service on the cluster. The -p 8001:80 flag publishes port 8001 cluster-wide and maps it to port 80 inside the nginx containers. To create the service, run the command below:

$ docker service create --name webservice -p 8001:80 nginx:latest
#Check the service
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
05ggmfg0n3o4        webservice          replicated          1/1                 nginx:latest        *:8001->80/tcp

This service can run anywhere in the cluster, yet it is reachable on port 8001 on every node; 8001 is the cluster-wide published port. Let's check where the service is running. If it is running on the manager node, we can set the manager's availability to drain so that the service runs on worker nodes only.

$docker service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
9tt1y854e55p        webservice.1        nginx:latest        manager             Running             Running about a minute ago                       
# It is running on the manager node; set the manager's availability to drain
$docker node update --availability drain manager
manager
$docker service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
ns6xellg2hgc        webservice.1        nginx:latest        worker2             Running             Preparing 7 seconds ago                       
9tt1y854e55p         \_ webservice.1    nginx:latest        manager             Shutdown            Shutdown 4 seconds ago  
# The service automatically moved to the worker2 node and the load was removed from the manager

Irrespective of where the service is running, if we hit any node in our cluster on port 8001 we should be able to see the nginx welcome page.
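This can be checked from the host with a quick curl against each node's IP; a small sketch, assuming the IPs are whatever docker-machine reports:

```shell
# Thanks to the swarm routing mesh, any node IP answers on the published port,
# even nodes that are not running a task of the service.
curl -s http://$(docker-machine ip manager):8001 | head -n 4
curl -s http://$(docker-machine ip worker1):8001 | head -n 4
# Each request should return the beginning of the nginx welcome page
```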

Scaling & Load Balancing
Now, we will see how docker swarm handles scaling and load balancing. To scale the service, run the command below; the replicas will be spread across all available active nodes in the cluster. Our manager node is in the drain state, hence no task will be scheduled there.

$docker service scale webservice=10
webservice scaled to 10
overall progress: 10 out of 10 tasks 
1/10: running   [==================================================>] 
2/10: running   [==================================================>] 
3/10: running   [==================================================>] 
4/10: running   [==================================================>] 
5/10: running   [==================================================>] 
6/10: running   [==================================================>] 
7/10: running   [==================================================>] 
8/10: running   [==================================================>] 
9/10: running   [==================================================>] 
10/10: running   [==================================================>] 
verify: Service converged 

#Verify where these are running
$docker service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
ns6xellg2hgc        webservice.1        nginx:latest        worker2             Running             Running 16 minutes ago                           
9tt1y854e55p         \_ webservice.1    nginx:latest        manager             Shutdown            Shutdown 16 minutes ago                          
nza4dxb4gq5t        webservice.2        nginx:latest        worker2             Running             Running about a minute ago                       
tddgybed5mon        webservice.3        nginx:latest        worker2             Running             Running about a minute ago                       
lqgjvagrmscc        webservice.4        nginx:latest        worker1             Running             Running 57 seconds ago                           
0vt8ou31sxds        webservice.5        nginx:latest        worker1             Running             Running 57 seconds ago                           
xrmvbrbir68e        webservice.6        nginx:latest        worker1             Running             Running 57 seconds ago                           
k0f1agcqz11u        webservice.7        nginx:latest        worker1             Running             Running 57 seconds ago                           
y8oa9b9pug0u        webservice.8        nginx:latest        worker2             Running             Running about a minute ago                       
266sik5ude24        webservice.9        nginx:latest        worker1             Running             Running 57 seconds ago                           
mb4jpa4fcigk        webservice.10       nginx:latest        worker2             Running             Running about a minute ago  

As you can see, the replicas are automatically balanced among the active nodes; this is the beauty of docker swarm's built-in scheduler and load balancer.
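To see the distribution at a glance rather than scanning the full task list, the running tasks can be counted per node; a small sketch using the filter and format flags of docker service ps:

```shell
# Count the running webservice tasks on each node
docker service ps webservice \
  --filter "desired-state=running" \
  --format "{{.Node}}" | sort | uniq -c
```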

High Availability

$docker node update --availability drain worker1
worker1
$docker service ps webservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
ns6xellg2hgc        webservice.1        nginx:latest        worker2             Running             Running 20 minutes ago                        
9tt1y854e55p         \_ webservice.1    nginx:latest        manager             Shutdown            Shutdown 20 minutes ago                       
nza4dxb4gq5t        webservice.2        nginx:latest        worker2             Running             Running 5 minutes ago                         
tddgybed5mon        webservice.3        nginx:latest        worker2             Running             Running 5 minutes ago                         
fl1649e0h1vj        webservice.4        nginx:latest        worker2             Running             Running 5 seconds ago                         
lqgjvagrmscc         \_ webservice.4    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
z8bjrweqq676        webservice.5        nginx:latest        worker2             Running             Running 5 seconds ago                         
0vt8ou31sxds         \_ webservice.5    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
r4q56ukiyz93        webservice.6        nginx:latest        worker2             Running             Running 5 seconds ago                         
xrmvbrbir68e         \_ webservice.6    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
00vbb5b7dk7s        webservice.7        nginx:latest        worker2             Running             Running 5 seconds ago                         
k0f1agcqz11u         \_ webservice.7    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
y8oa9b9pug0u        webservice.8        nginx:latest        worker2             Running             Running 5 minutes ago                         
0gaahurh7d81        webservice.9        nginx:latest        worker2             Running             Running 5 seconds ago                         
266sik5ude24         \_ webservice.9    nginx:latest        worker1             Shutdown            Shutdown 7 seconds ago                        
mb4jpa4fcigk        webservice.10       nginx:latest        worker2             Running             Running 5 minutes ago 

Notice how high availability is handled when we set one of our worker nodes to the drain state: all the load from the worker1 node automatically shifted to the worker2 node to maintain HA.
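To bring worker1 back into rotation, set its availability back to active. Note that swarm does not rebalance already-running tasks on its own; an optional forced service update redistributes them:

```shell
# Make worker1 schedulable again
docker node update --availability active worker1

# Existing tasks stay where they are; force a rolling update to rebalance
docker service update --force webservice
```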

Voilà! That brings us to the end of our walkthrough: creating docker hosts using docker-machine and managing nodes and services using docker swarm.

Happy dockering! & Keep learning!
