Piyush Chaudhari for AWS Community Builders

Deploy an AWS EKS Cluster using Terraform (IaC)

In this blog, I’ll explain what AWS EKS (Elastic Kubernetes Service) is and how to deploy an EKS cluster on AWS using Terraform.

What is AWS EKS?

Amazon Elastic Kubernetes Service (EKS) is a managed service that lets you run Kubernetes on AWS without needing to install, operate, or maintain your own Kubernetes control plane or nodes. EKS scales, manages, and deploys containerized applications. It typically runs in the AWS public cloud, but can also be deployed on premises.

The Kubernetes management infrastructure of Amazon EKS runs across multiple Availability Zones (AZs). EKS helps you run highly available and secure clusters and automates key tasks such as patching, node provisioning, and updates.

How Does AWS EKS Work?

AWS EKS Clusters are composed of the following components:

  • Control Plane: Composed of 3 master nodes, each running in a different AZ to ensure high availability.
  • Worker Nodes: Run on Amazon EC2 instances in a VPC that is not managed by AWS; you control and configure the VPC allocated for the worker nodes. You can use SSH to give your existing automation access to the worker nodes or to provision them.


There are 2 main deployment options: you can deploy one cluster per environment/application, or you can define IAM security policies and Kubernetes namespaces to run multiple applications/environments on a single cluster.

To restrict traffic between the control plane and your cluster, EKS also supports Amazon VPC network policies. Only authorized clusters and accounts, defined by Kubernetes role-based access control (RBAC), can view or communicate with control plane components.

You can read more about AWS EKS here.

What is Terraform?

Terraform is a free and open-source infrastructure as code (IaC) tool that helps automate the deployment, configuration, and management of remote infrastructure. Terraform can manage both existing service providers and custom in-house solutions.


You can read more about Terraform here.


Now, I’m going to create an EKS Cluster with the help of Terraform (IaC).

Prerequisites

  • An AWS Account
  • Basic Knowledge of AWS Cloud, Terraform & Kubernetes

Now, let’s start creating the Terraform configuration files for our AWS EKS based Kubernetes cluster.

Step-1: Start with Creating Terraform Files

Here, I will be using Visual Studio Code on my local machine. I have already installed Terraform and authenticated with an IAM user that has sufficient privileges to interact with my AWS account programmatically.

Create a vars.tf file and add the following content:



variable "access_key" {
  default = "<YOUR-AWS-ACCESS-KEY>"
}
variable "secret_key" {
    default = "<YOUR-AWS-SECRET-KEY>"
}



Replace the placeholders with the access key and secret key of your IAM user, and make sure that user has sufficient privileges.
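Hardcoding credentials in a .tf file makes it easy to leak them through version control. A safer, optional variant is to declare the variables without defaults and supply the values at runtime, for example via the TF_VAR_access_key / TF_VAR_secret_key environment variables or -var flags. A minimal sketch (the sensitive flag assumes Terraform 0.14 or later):

variable "access_key" {
  description = "AWS access key ID"
  type        = string
  sensitive   = true # keeps the value out of plan/apply output
}

variable "secret_key" {
  description = "AWS secret access key"
  type        = string
  sensitive   = true
}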


Create a main.tf file and add the following content:



provider "aws" {
    region = "eu-central-1"
    version = ">= 3.40.0"    
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
}
data "aws_availability_zones" "azs" {
    state = "available"
}



I am using the "eu-central-1" region here, but you can use any region as per your requirements.
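Note that declaring the provider version inside the provider block, as above, is deprecated on newer Terraform releases. A minimal sketch of the equivalent required_providers block (assuming Terraform 0.13 or later) would be:

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.40.0"
    }
  }
}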


Create a vpc.tf file and add the following content:



variable "region" {
    default = "eu-central-1"
}
data "aws_availability_zones" "available" {}
locals {
    cluster_name = "Piyush-EKS-Cluster"
}
module vpc {
    source = "terraform-aws-modules/vpc/aws"

    name = "Piyush-EKS-VPC"
    cidr = "10.0.0.0/16"

    azs = data.aws_availability_zones.available.names
    private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    public_subnets =  ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

    enable_nat_gateway = true
    single_nat_gateway = true

  enable_dns_hostnames= true
tags = {
    "Name" = "Piyush-EKS-VPC"
}
public_subnet_tags = {
    "Name" = "EKS-Public-Subnet"
}
private_subnet_tags = {
    "Name" = "EKS-Private-Subnet"
}
}



Let’s understand this file.
I am using the AWS VPC (Virtual Private Cloud) module for the VPC creation.
Once you run the above code, it will create an AWS VPC named Piyush-EKS-VPC with 10.0.0.0/16 as its CIDR range in the eu-central-1 region.
This VPC has 3 private subnets [10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24] and 3 public subnets [10.0.4.0/24, 10.0.5.0/24, 10.0.6.0/24].
I have also enabled the NAT Gateway and DNS hostnames in the VPC.
The aws_availability_zones data source provides the list of Availability Zones for the eu-central-1 region.
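One thing worth noting: EKS discovers subnets for load balancers through well-known Kubernetes tags. If you plan to expose Services via ELB/NLB, the commonly recommended extra subnet tags look roughly like this (a sketch, added alongside the Name tags inside the VPC module block above):

  public_subnet_tags = {
    "Name"                                        = "EKS-Public-Subnet"
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "Name"                                        = "EKS-Private-Subnet"
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }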


Create an sg.tf file for the AWS Security Groups and add the following content:



resource "aws_security_group" "worker_group_one" {
    name_prefix = "worker_group_one"
    vpc_id = module.vpc.vpc_id
ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
cidr_blocks = [
            "10.0.0.0/8"
        ]
    }
}
resource "aws_security_group" "worker_group_two" {
    name_prefix = "worker_group_two"
    vpc_id = module.vpc.vpc_id

    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
cidr_blocks = [
            "10.0.0.0/8"
        ]
    }
}
resource "aws_security_group" "all_worker_management" {
    name_prefix = "all_worker_management"
    vpc_id = module.vpc.vpc_id
ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
cidr_blocks = [
            "10.0.0.0/8"
        ]
    }
}



Now here,
I am creating 2 security groups for the 2 worker node groups, plus an all_worker_management group that can be attached to every worker node.
Port 22 is open for SSH connections, but access is restricted to the 10.0.0.0/8 CIDR range only.


Create an eks.tf file for the EKS cluster and add the following content:



module "eks"{
    source = "terraform-aws-modules/eks/aws"
    version = "17.18.0"
    cluster_name = local.cluster_name
    cluster_version = "1.23"
    subnets = module.vpc.private_subnets
tags = {
        Name = "Piyush-EKS-Cluster"
    }
vpc_id = module.vpc.vpc_id
    workers_group_defaults = {
        root_volume_type = "gp3"
    }
worker_groups = [
        {
            name = "Worker-Group-1"
            instance_type = "t2.medium"
            asg_desired_capacity = 2
            additional_security_group_ids = [aws_security_group.worker_group_one.id]
        },
        {
            name = "Worker-Group-2"
            instance_type = "t2.medium"
            asg_desired_capacity = 1
            additional_security_group_ids = [aws_security_group.worker_group_two.id]
        },
    ]
}

data "aws_eks_cluster" "cluster" {
    name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
    name = module.eks.cluster_id
}



Here,
For the EKS cluster creation, I have used the Terraform AWS EKS module.
This will create 2 worker groups (Worker-Group-1, attached to worker_group_one, and Worker-Group-2, attached to worker_group_two) with desired capacities of 2 and 1 instances respectively, all of type t2.medium.
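Note that the all_worker_management security group defined earlier is not referenced anywhere yet. The 17.x releases of the EKS module expose an input for attaching a security group to every worker node; assuming that input name, a one-line sketch to add inside the module "eks" block would be:

  worker_additional_security_group_ids = [aws_security_group.all_worker_management.id]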


Create a kubernetes.tf file for the Kubernetes provider and add the following content:



provider "kubernetes" {

    host = data.aws_eks_cluster.cluster.endpoint
    token = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64encode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}



The above code configures the Kubernetes provider against the freshly created EKS cluster: the cluster endpoint is used as the host, the token from aws_eks_cluster_auth handles authentication, and cluster_ca_certificate provides the cluster’s CA certificate (which must be base64-decoded).
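One caveat: the token returned by aws_eks_cluster_auth is short-lived. As an alternative, and assuming version 2+ of the kubernetes provider plus the AWS CLI installed locally, you can let the provider fetch a fresh token on demand. A minimal sketch that would replace the provider block above:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a short-lived token from the AWS CLI at plan/apply time
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  }
}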


Create an output.tf file for the outputs:



output "cluster_id" {
    value = module.eks.cluster_id
}
output "cluster_endpoint" {
    value = module.eks.cluster_endpoint
}



Now, we are done with writing all the Terraform files.
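At this point the working directory should look roughly like this:

.
├── eks.tf
├── kubernetes.tf
├── main.tf
├── output.tf
├── sg.tf
├── vars.tf
└── vpc.tf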

Step-2: Initialize Directory with Terraform

Running terraform init inside the working directory downloads all the necessary providers and the required modules. Run the following command in the VS Code terminal:



terraform init




Step-3: Create Terraform Plan

Run the terraform plan command in the working directory to see the execution plan.


Now, let’s check the plan first and make sure that everything we’ve written is what the plan suggests. We can redirect the output to a text file as well. :-)
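For example, you can save the plan to a file and later apply exactly that saved plan, or simply redirect the human-readable output for review (both are standard Terraform CLI usage):

terraform plan -out=eks.tfplan
terraform apply eks.tfplan

terraform plan > plan.txt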

Step-4: Create EKS Cluster using Terraform Command

Run the terraform apply command and it will create the entire Kubernetes cluster on AWS, i.e., the AWS EKS cluster.


After running this command, Terraform has created the below resources in my AWS account:

  • IAM Role
  • VPC
  • NAT Gateway
  • Security Group
  • Route Table
  • Public & Private Subnets
  • EKS Cluster

Step-5: Check EKS Cluster on AWS

Now, I’m going to log in to my AWS account to verify all the resources:

  • VPC
  • Subnets
  • NAT Gateway
  • Route Tables
  • Security Groups
  • AWS EKS Cluster
  • EC2 Instances
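You can also verify the cluster from the command line, assuming the AWS CLI and kubectl are installed locally, by pulling the kubeconfig and listing the worker nodes:

aws eks update-kubeconfig --region eu-central-1 --name Piyush-EKS-Cluster
kubectl get nodes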

Bingo!!!! Our EKS cluster is up & running now.
We have successfully provisioned an AWS EKS Cluster using Terraform (IaC).
Now you can play around with it, make some changes, and modify it as needed.

OK folks, that’s it for this post. Have a nice day, guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow me on "LinkedIn" & my other blogs -
cpiyush151 - Wordpress
cpiyush151 - Hashnode
