Mohamed El Eraky

Docker Swarm Series: #2nd Create a highly available environment




Inception

Hello everyone! This article is part of the Swarm series. The knowledge in this series builds in sequence, so check out my profile for the earlier parts.

In the last article we covered a high-level overview of orchestration tools and Swarm, which ports should be opened between nodes, and how to set up the environment using Play-with-docker labs.

In this article we will pick up from where we stopped: we will create a highly available Swarm environment using Play-with-docker labs and deploy a simple web application using Docker CLI commands.


Lab Overview

The MAIN point of having an orchestration tool is to have a highly available environment and highly available applications, besides the other features it provides (e.g. load balancing, monitoring, etc.). Because of that, today's article will focus on how to create a highly available environment and deploy a simple web app using the Docker CLI.

Enough talking, let's get started!


Highly available environment overview

  • We need our application to be highly available to avoid a single point of failure if the application goes down, and the same idea applies at the environment level. If we create a highly available application and load balance the traffic between its replicas, but all the replicas live on the same node (server), we have a highly available application from the traffic aspect, yet not at the environment level: if that node goes down, the application goes down too. To avoid this we should have a highly available environment where the application lives on more than one node, and this is the idea behind orchestration tools: they deliver the simplicity to achieve this.

  • To create a highly available environment on Swarm we should have at least three manager nodes, but typically no more than seven. Manager nodes contain the information necessary to manage the cluster, so if too many of them go down (more precisely, if they lose their majority) the cluster can no longer be managed properly. How to determine the number of manager nodes (the arithmetic behind these numbers is sketched at the end of this section):

    • Three manager nodes tolerate one node failure.
    • Five manager nodes tolerate two node failures.
    • Seven manager nodes tolerate three node failures.

Regarding worker nodes, use at least two for redundancy and fault tolerance, and add more as needed to handle the workload.

Note that, by default, Swarm manager nodes also host application workloads, just like worker nodes.

Therefore, in this lab we will create three manager nodes and two worker nodes.
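For the curious, the tolerance numbers above follow from the Raft majority quorum that Swarm manager nodes use. A quick sketch of the arithmetic:

# Raft majority quorum among N managers:
#   quorum          = floor(N/2) + 1
#   fault tolerance = N - quorum
# N = 3  ->  quorum 2  ->  tolerates 1 manager failure
# N = 5  ->  quorum 3  ->  tolerates 2 manager failures
# N = 7  ->  quorum 4  ->  tolerates 3 manager failures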


Create a highly available environment

  • Open Play-with-docker labs.
  • Press ADD NEW INSTANCE, and initiate Docker swarm mode using:
docker swarm init --advertise-addr eth0

# to get the interface name use
ip a s
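As a side note, --advertise-addr also accepts an IP address instead of an interface name; the address below is just a placeholder, use your node's own IP:

docker swarm init --advertise-addr 192.168.0.13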

(screenshot)

  • Copy and paste (Ctrl+Shift+V) the highlighted command from the screenshot and run it on the same node; it prints the token and join command that the other managers will use to join the cluster.
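In case the screenshot is hard to read, the highlighted command should be Swarm's standard join-token helper, run on this first manager node:

docker swarm join-token manager

It prints the full docker swarm join command (including the token) that the other manager nodes will run.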

(screenshot)

  • Press ADD NEW INSTANCE to create another node, and join it to the cluster as a manager.
  • Copy and paste the highlighted command from the last screenshot to join the cluster as a manager node.

(screenshot)

  • Repeat the last step to join another manager node to the same cluster.

(screenshot)

  • Now let's create the worker nodes. Go back to the first manager node you created and copy the command that joins worker nodes to the Swarm cluster.
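If that output has already scrolled away, you can reprint the worker join command on any manager node with the standard helper:

docker swarm join-token worker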

  • Press ADD NEW INSTANCE and paste the command to join the cluster as a worker node:

(screenshots)

  • Repeat the last step to join another worker node to the cluster.


  • Let's check out our environment. Go to any one of the manager nodes and type the command below:

docker node ls

(screenshot)

As listed, there are three manager nodes besides the two worker nodes. The asterisk marks the node that handled the

docker node ls

command you've just written (node two in this case).

Also, in the MANAGER STATUS column there is a Leader and the rest are Reachable. The Leader is the manager node currently leading the cluster (initially, the one that initiated the swarm), and Reachable means the node is available and can communicate with the other manager nodes in the Swarm cluster.
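As an optional extra, you can also query node details directly from the CLI; a couple of handy commands:

# human-readable summary of the node you are currently on
docker node inspect self --pretty

# list only the manager nodes
docker node ls --filter "role=manager"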


Deploy a webapp application overview

Now we have our cluster running and ready to deploy containers. To deploy a container we need to create a service. A service is an abstraction that represents multiple containers of the same image deployed across the cluster; you can think of it as a concept similar to a Pod in K8s.

You run a service using the docker service command instead of docker run or docker start as in normal Docker mode; keep these commands in mind to differentiate between normal Docker mode and Docker Swarm mode.
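To make the difference concrete, here is a rough side-by-side; the service name web and the nginx image are just examples:

# normal Docker mode: starts a single container on the current host only
docker run -d --name web -p 80:80 nginx:1.12

# Docker Swarm mode: declares a service that the cluster schedules and keeps running
docker service create --detach=true --name web --publish 80:80 nginx:1.12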

Deploy a webapp application

  • Copy and paste the following command to run your first app:
docker service create --detach=true --name nginx1 --publish 80:80 --mount source=/etc/hostname,target=/usr/share/nginx/html/index.html,type=bind,ro nginx:1.12


This command is declarative, and Swarm will try to maintain the declared state, which means Swarm compares the desired state of the application with the actual state. Since this is the first run of this application, the desired state is what we declared in the command, and the actual state is what Swarm will create for us.
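Because the state is declarative, you can later change the desired state and Swarm will reconcile the cluster toward it. For example (3 replicas is an arbitrary number, and the rest of this lab assumes a single replica):

docker service scale nginx1=3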

The --mount flag is useful to have Nginx print out the hostname of the node it's running on; we will try this out in the next article.

  • After running the application, list it and check its status using:
docker service ls

(screenshot)
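If you want more detail on the service definition itself (image, published ports, replica count), there is also:

docker service inspect nginx1 --pretty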


  • To take a deeper look at the running tasks, use:
docker service ps nginx1

This will print out more info about this application, including the node that hosts it.

Worth mentioning here: you can land on that node and simply run the normal Docker command: docker container ls

(screenshot)

  • Let's test the app service. Go to another node (one that does not host the service) and run:
curl localhost:80

(screenshot)
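The reason this works from a node that isn't running the container is Swarm's ingress routing mesh: a published port is opened on every node in the cluster, and incoming traffic is forwarded to a node that runs a task for that service. Thanks to the bind mount from earlier, the response is the hostname of the node that actually hosts the container:

# run this on any node in the cluster; the published port answers everywhere
curl -s localhost:80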


That's it! I hope this article inspired you, and I'd appreciate your feedback. Thank you.
