Configuring Kubernetes Multinode Cluster over AWS using Ansible

Hello Readers👋,

In this blog, I am going to explain how to configure a Kubernetes multi-node cluster over the AWS Cloud using Ansible.

What is Kubernetes?

Kubernetes is a container orchestration system, which can be used to manage large numbers of containers on top of physical infrastructure. It aims to provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts.

To know more about Kubernetes, you can visit my other blog.
Link: https://dev.to/piyushbagani15/kubernetes-in-action-53nf

The Configuration Tool — Ansible :

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy and maintain.

To know more about Ansible, you can visit my other blog.
Link: https://dev.to/piyushbagani15/ansible-in-action-how-aws-is-solving-challenges-using-ansible-oll

What is AWS?

Amazon Web Services (AWS) is the leading player of this cloud era; it offers scalable, reliable, and inexpensive cloud computing services.
To know more about AWS, you can visit my other blog.
Link: https://dev.to/piyushbagani15/how-byju-s-make-students-fall-in-love-with-learning-with-the-help-of-aws-447k

This was the introduction to the technologies I have used to implement this particular task.

Now let us discuss some more terminology.

What is a Kubernetes Cluster?

Mainly, it is a set of nodes that run containerized applications.
These nodes are a Kubernetes master node and some number of worker nodes (slaves).
In a single-node cluster, all the programs and resources that monitor the pods run on one node; if that node goes down, the entire service is lost, so we face a single point of failure. That is why a multi-node cluster is recommended.

Let's start with the main practical.

Overview:

I have launched an EC2 instance on the AWS Cloud to serve as the controller node. Everything that follows is done from this node.

Set up the Ansible configuration file as follows:

(Screenshot: ansible.cfg)
The configuration file can either live globally on the controller node at /etc/ansible/ansible.cfg or be created locally in the workspace where we run our playbooks/roles.
To suppress the warnings Ansible prints for certain commands, set command_warnings = False.

The remote_user is the user Ansible logs in as on the managed nodes; since we have launched EC2 instances, the remote username is ec2-user.

(Note: just for information, we have implemented this infrastructure on the AWS Cloud. You can do the same on your local system by launching multiple VMs.)

Also, we need to disable SSH host key checking, since SSH otherwise prompts for yes/no on the first connection to each host. We write host_key_checking = False to disable it.

Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. Because this feature allows you to 'become' another user, different from the user that logged into the machine (the remote user), we call it become. The become keyword leverages existing privilege escalation tools like sudo, su, pfexec, doas, pbrun, dzdo, ksu, machinectl, and others.
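Putting those settings together, a minimal ansible.cfg along the lines described above might look like this (the inventory path and key name are placeholders for your own values):

```ini
# /etc/ansible/ansible.cfg (or ./ansible.cfg in the workspace)
[defaults]
# placeholder paths -- point these at your own inventory file and key
inventory         = /root/ip.txt
remote_user       = ec2-user
private_key_file  = /root/keyname.pem
host_key_checking = False
command_warnings  = False

[privilege_escalation]
become          = True
become_method   = sudo
become_user     = root
become_ask_pass = False
```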

To log in to the newly launched OS, we need to provide its respective key; here the .pem format will work. We need to make that key read-only, with the command:

chmod 400 keyname.pem

Note: create a key pair on the AWS Cloud and download it. Then transfer that key (for example, using WinSCP from Windows) to the instance where Ansible is configured.

Create an Ansible Playbook for launching two Instances on the top of AWS Cloud for configuring the Kubernetes Cluster.

The YAML code for instance.yml is as follows:
(Screenshots: instance.yml playbook)
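The full playbook is in the screenshots and the linked repository; below is a minimal sketch of its shape, where the AMI ID, instance type, region, and key name are placeholders, not the values I actually used:

```yaml
# instance.yml -- a sketch; the real playbook also updates the
# Ansible inventory with the new instances' IPs.
- hosts: localhost
  vars_files:
    - secure.yml
  tasks:
    - name: Launch one Master and one Slave instance
      ec2:
        key_name: keyname                   # placeholder key pair
        instance_type: t2.micro
        image: ami-0xxxxxxxxxxxxxxxx        # placeholder AMI ID
        region: ap-south-1                  # placeholder region
        wait: yes
        count: 1
        aws_access_key: "{{ access_key }}"  # from the vaulted secure.yml
        aws_secret_key: "{{ secret_key }}"
        instance_tags:
          Name: "{{ item }}"
      loop:
        - Master
        - Slave
```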
Here you can see I have added the vars_files entry secure.yml, which consists of access_key and secret_key.
These two values are sensitive information, hence I have locked them away with ansible-vault.
(Screenshot: creating secure.yml with ansible-vault)
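The vaulted file is created with ansible-vault, which prompts for a password and then opens an editor. For example (the variable names match the vars_files reference above; the values are placeholders):

```
ansible-vault create secure.yml
```

```yaml
# contents of secure.yml, entered in the editor (placeholder values)
access_key: AKIAXXXXXXXXXXXXXXXX
secret_key: xxxxxxxxxxxxxxxxxxxxxxxx
```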
These keys can be generated while creating the IAM user in your AWS account.
Link: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html
Now, if we try to read the vault file using the cat command, we cannot read it.
(Screenshot: encrypted contents of secure.yml)
Our secure.yml is locked and can be opened only by giving the right password, set by us.
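cat only shows the vault header followed by ciphertext; to actually read the file, ansible-vault itself has to decrypt it (the ciphertext below is truncated for illustration):

```
$ cat secure.yml
$ANSIBLE_VAULT;1.1;AES256
6338616...   (unreadable ciphertext)

$ ansible-vault view secure.yml
Vault password:
```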

Now we can run the playbook using the following command.

ansible-playbook instance.yml --ask-vault-pass

This is because instance.yml uses secure.yml as one of its vars_files.
(Screenshot: playbook run output)
From the above output, it is confirmed that our playbook ran successfully. To verify, we can check the AWS EC2 management console: two instances are launched, with the tag names Master and Slave respectively.
(Screenshot: EC2 console showing the Master and Slave instances)
Also, The Ansible Inventory is successfully updated.

Now, we can create an Ansible role with the following command:

ansible-galaxy init <role_name>

I have created two roles, Master and Slave, and moved into them respectively.
(Screenshot: the Master and Slave role directories)
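For reference, ansible-galaxy init scaffolds each role with the standard layout:

```
Master/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml
```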

In the Master role, open main.yml inside the tasks folder and add all the task code to it.
(Screenshots: Master role tasks/main.yml)

Also, create a daemon.json and a k8s.conf file inside the files/ folder of the Master role; the same applies to the Slave role.
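The typical contents of these two files in this kind of setup are shown below; treat them as an assumption on my part, since the exact files are in the screenshots and the repo. daemon.json switches Docker to the systemd cgroup driver that kubelet expects:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

and k8s.conf makes bridged traffic visible to iptables:

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```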

Here we first install Docker, kubeadm, and iptables, the prerequisite software, on the master node; we then configure the container service, initialize the Kubernetes master, and set up Flannel for tunneling and the connection between Slave and Master.
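As a condensed sketch of that flow (the package names, pod CIDR, and Flannel URL are assumptions; the full tasks/main.yml is in the screenshots and the repo):

```yaml
# Master role, tasks/main.yml -- condensed sketch
- name: Install the prerequisite software
  package:
    name:
      - docker
      - kubeadm
    state: present

- name: Copy daemon.json and k8s.conf from the role's files/ folder
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
  loop:
    - { src: daemon.json, dest: /etc/docker/daemon.json }
    - { src: k8s.conf,    dest: /etc/sysctl.d/k8s.conf }

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Apply the sysctl settings
  command: sysctl --system

- name: Initialize the Kubernetes master
  command: kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=all

- name: Deploy Flannel for pod networking
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the slaves
  command: kubeadm token create --print-join-command
  register: join_command
```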

Steps to create an Ansible Playbook inside Slave Role for configuring Slave Node of Kubernetes Cluster.

Go inside the Slave role, then inside the tasks folder/directory, and edit the main.yml as we did in Master.

(Screenshots: Slave role tasks/main.yml)

Create the daemon.json and k8s.conf file inside the files directory.
(Screenshot: daemon.json and k8s.conf in the files directory)

Just like the Kubernetes master, the slave's prerequisites are the same three pieces of software: Docker, kubeadm, and iptables. We also have to update the iptables settings, for which we use the /etc/sysctl.d/k8s.conf file on the slave.
Registering (joining) a slave node to the master node can only be done via the token the master provides after its whole setup and initialization, so we need to pass that token to the slave nodes. For this purpose, we have used tokens here; a sketch of the slave tasks follows.
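A matching sketch for the Slave role (again an assumption; the join command itself comes from the token registered on the master above):

```yaml
# Slave role, tasks/main.yml -- condensed sketch
- name: Install the prerequisite software
  package:
    name:
      - docker
      - kubeadm
    state: present

- name: Copy daemon.json and k8s.conf from the role's files/ folder
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
  loop:
    - { src: daemon.json, dest: /etc/docker/daemon.json }
    - { src: k8s.conf,    dest: /etc/sysctl.d/k8s.conf }

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Apply the updated iptables/sysctl settings
  command: sysctl --system

- name: Join the cluster with the token generated on the master
  command: "{{ hostvars[groups['Master'][0]].join_command.stdout }}"
```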

Main playbook

Create the main.yml for running both Master and Slave Roles for configuring Kubernetes Cluster.
(Screenshot: main.yml)
We have given "localhost" as the host of the playbook, since the whole task runs dynamically (the target instances do not exist until the playbook creates them).
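One way such a main.yml can be laid out is sketched below; this is an assumption on my part, and the exact file, including how localhost and the dynamically added hosts are wired together, is in the linked repo:

```yaml
# main.yml -- apply each role to its host group; the group names
# assume the inventory was updated with Master/Slave entries.
- hosts: Master
  roles:
    - Master

- hosts: Slave
  roles:
    - Slave
```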
Now, run the Ansible Playbook main.yml by using the command given below.
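In all likelihood this is the familiar invocation (an assumption on my part; add --ask-vault-pass if this run also reads the vaulted secure.yml):

```
ansible-playbook main.yml
```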
(Screenshots: output of running main.yml)
The playbook executed successfully without any errors, and the multi-node cluster is configured.

Verify whether the multi-node cluster is configured.

Log in to the Master instance and run the following commands.
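A quick check looks like this (the node names and ages will differ on your cluster; this is what to expect, not captured output):

```
kubectl get nodes                 # both nodes should report STATUS "Ready"
kubectl get pods -n kube-system   # flannel and the core pods should be Running
```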
(Screenshot: kubectl get nodes output on the master)

Hence, the Kubernetes cluster is configured successfully.

Following is the GitHub repository for your reference.

Link: https://github.com/PiyushBagani15/Kubernetes_MultiNode_Cluster_using-Ansible

Connect with me on LinkedIn

Link: https://www.linkedin.com/in/piyush-bagani/

Note: make sure to install boto inside the controller node, as it is a prerequisite for the ec2 module; pip install boto will take care of it.