Asif Khan

AWS Project: Creating a Cluster of Virtual Machines Using Docker Swarm

Introduction

As modern applications demand high scalability and reliability, container orchestration tools like Docker Swarm simplify managing clusters of machines and services. This project focuses on building a Docker Swarm cluster using Amazon EC2 instances. By the end of this tutorial, you'll understand how to set up a manager node and worker nodes in a Docker Swarm, deploy an Nginx service to test the cluster, and scale services across multiple nodes.

Why Use Docker Swarm on AWS?

Docker Swarm enables seamless container orchestration, allowing easy deployment of services across clusters of virtual machines. Using AWS EC2 as the infrastructure provider allows for the elastic scaling of resources as demand grows. This project showcases how to combine these two powerful technologies to create a production-grade container orchestration setup.

What You Will Learn

  • How to configure EC2 instances to form a Docker Swarm cluster.
  • How to deploy services (e.g., Nginx) across multiple EC2 instances using Docker Swarm.
  • How to scale the services and manage the cluster.

Tech Stack

In this project, the following technologies and services were used:

  • Amazon EC2: Virtual machines on AWS for running the Docker Swarm cluster.
  • Docker Swarm: Native Docker tool for managing clusters and orchestrating containers.
  • Nginx: A web server used as the test service to verify the Docker Swarm setup.

Prerequisites

Before starting this project, ensure you have the following:

  1. Basic AWS Knowledge: Understanding how to launch and manage EC2 instances.
  2. Basic Knowledge of Docker: Understanding how to use Docker and its basic operations.

Problem Statement or Use Case

Running services like web applications or APIs in production often requires multiple virtual machines to handle traffic, maintain high availability, and ensure scalability. Manually managing these machines and containers can be labor-intensive and prone to error. Docker Swarm offers an easy-to-use orchestration tool that allows seamless container management, deployment, and scaling across clusters.

Key Challenge

The primary goal is to set up a Docker Swarm cluster consisting of multiple EC2 instances, deploy a service (Nginx) across the cluster, and ensure that the service scales effectively with multiple worker nodes.

Solution

In this project, you will set up a manager node on AWS EC2 to control the Swarm, create worker nodes that join the manager, and then deploy and scale an Nginx service to test the functionality of the cluster. This setup can be used to manage various production-grade services or web applications in a real-world scenario.

Architecture Diagram

Here’s a high-level overview of the architecture:

Architecture Diagram of a Cluster of Virtual Machines Built Using Docker Swarm

Component Breakdown

  1. EC2 Manager Node: Acts as the central controller for the Docker Swarm cluster, coordinating worker nodes and managing service deployments.
  2. EC2 Worker Nodes: These instances run services like Nginx, based on instructions from the manager node.
  3. Nginx Service: A lightweight web server deployed on the worker nodes to test the functionality of Docker Swarm.
  4. User Data Scripts: Automates the installation of Docker and configuration of the Swarm during EC2 instance creation.

Step-by-Step Implementation

1. Create an EC2 Instance for the Manager Node

  • Launch an EC2 instance (Amazon Linux 2 or Ubuntu) to act as the Swarm manager.
  • Use a user-data script to install Docker and initialize the Swarm:
```shell
#!/bin/bash
# Install and start Docker (Amazon Linux 2)
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo chkconfig docker on
# Optional: docker-compose for multi-container definitions (requires pip3)
sudo pip3 install docker-compose
```
  • Initialize the Swarm:
```shell
docker swarm init --advertise-addr <manager-instance-ip>
```

The --advertise-addr option ensures other nodes can reach the manager using its IP address.
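Before adding workers, you can confirm the manager is active (a quick sanity check; the `--format` template assumes a reasonably recent Docker release):

```shell
# Prints "active" once "docker swarm init" has succeeded on this node
docker info --format '{{.Swarm.LocalNodeState}}'
```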

2. Create Worker Nodes and Join Them to the Swarm

  • Launch 2-3 more EC2 instances for the worker nodes, reusing the Docker-installation user-data script from the manager node.
  • Retrieve the Swarm join token from the manager node:
```shell
docker swarm join-token worker
```
  • Run the join command printed by that output on each worker:
```shell
docker swarm join --token <swarm-join-token> <manager-instance-ip>:2377
```
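Putting the install and join steps together, a complete worker user-data script might look like this (a sketch: the token and manager IP are placeholders to fill in with the values printed by the manager):

```shell
#!/bin/bash
# Install and start Docker, as on the manager node
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo chkconfig docker on

# Join the existing Swarm as a worker (fill in your own token and IP)
docker swarm join --token <swarm-join-token> <manager-instance-ip>:2377
```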

3. Verify the Swarm Cluster

  • On the manager node, verify that the worker nodes have successfully joined the cluster:
```shell
docker node ls
```

You should see the worker nodes listed as part of the Swarm cluster.
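For a one-manager, two-worker cluster, the output looks roughly like this (IDs and hostnames are illustrative):

```
ID                HOSTNAME    STATUS    AVAILABILITY    MANAGER STATUS
abc123def456 *    manager1    Ready     Active          Leader
ghi789jkl012      worker1     Ready     Active
mno345pqr678      worker2     Ready     Active
```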

4. Deploy the Nginx Service

Deploy an Nginx service across the cluster:

```shell
docker service create --name web_app --replicas 1 --publish 80:80 nginxdemos/hello
```
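This is also where the scaling objective from earlier comes in: once the service is running, you can grow it across the workers with docker service scale (three replicas here is just an example):

```shell
# Scale web_app out to 3 replicas; Swarm schedules them across the nodes
docker service scale web_app=3

# Show which node each replica is running on
docker service ps web_app
```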

5. Verify the Service

  • Access the public IP address of any EC2 instance in the cluster. Swarm's ingress routing mesh load-balances the published port, so the service responds from any node, even one that isn't running a replica.
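From your own machine, a quick check could look like this (the IP is a placeholder for any instance's public address):

```shell
# Expect the nginxdemos/hello landing page HTML in response
curl http://<instance-public-ip>/
```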

6. Delete the Resources

  • Remove the deployed service and terminate the Swarm cluster:
```shell
docker service rm web_app
# --force is required on the manager; workers can run a plain "docker swarm leave"
docker swarm leave --force
```
  • Terminate all EC2 instances from the AWS console or using the AWS CLI.
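If you prefer the CLI for cleanup, termination can be scripted (the instance IDs below are placeholders for your own):

```shell
aws ec2 terminate-instances --instance-ids <manager-instance-id> <worker-instance-id-1> <worker-instance-id-2>
```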

Challenges Faced and Solutions

  • Challenge 1: Docker Swarm Token Expiration

    Swarm join tokens do not expire on their own, but they can be rotated or misplaced. Print the current worker token at any time on the manager with the docker swarm join-token worker command.

  • Challenge 2: Network Configuration

    Ensure proper security group settings for the EC2 instances. Allow inbound traffic on ports 22 (SSH), 80 (HTTP), and 2377/TCP (Swarm cluster management). Swarm overlay networking additionally uses port 7946 (TCP and UDP) for node discovery and 4789/UDP for overlay traffic.

  • Challenge 3: Instance Failure

    If an EC2 worker node fails, Docker Swarm automatically re-deploys services on the remaining active nodes.
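The security-group rules from Challenge 2 can also be applied with the AWS CLI (a sketch; the security group ID and CIDR ranges are placeholders to adapt to your VPC):

```shell
# Placeholder security group ID; replace with your own
SG_ID=<security-group-id>

# SSH and HTTP from anywhere (tighten the CIDRs for production)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0

# Swarm cluster management, restricted to traffic inside the VPC
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 2377 --cidr 10.0.0.0/16
```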


Conclusion

This project demonstrates the power of Docker Swarm for orchestrating containers across multiple EC2 instances on AWS. By setting up a manager node and multiple worker nodes, you’ve created a scalable and reliable environment for deploying services like Nginx. This setup can be expanded to host various microservices and production workloads, making it an excellent solution for modern cloud-based architectures.

Feel free to explore the code for this project in my GitHub repository.

Appendix

Asif Khan — Aspiring Cloud Architect | Weekly Cloud Learning Chronicler

LinkedIn/Twitter/GitHub
