Welcome back to yet another learning experience with me.
In this tutorial, you are going to learn how to use Terraform to set up an EKS cluster, a VPC, and subnets.
By the end of the tutorial, you will have achieved the following objectives:
How to set up Node and EKS cluster roles.
How to use Terraform to create an EKS cluster with the appropriate roles.
How to configure your cluster's VPC and subnets.
How to set up a node group.
You will need the following prerequisites to follow this guide and create your EKS cluster:
An active AWS account
An Ubuntu machine
Terraform installed on the machine
The AWS CLI installed and configured
Without any further delay, let us begin the tutorial.
1. CREATING THE AWS PROVIDER BLOCK
This part of our tutorial involves the creation of the provider block.
The provider block tells Terraform which cloud provider or API to interact with.
To get started, create a new file and name it "provider.tf".
In this file you will define AWS as the provider and us-east-1 as the region.
Feel free to change the region to your preferred one.
provider "aws" {
region = "us-east-1"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
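Since the region is likely to differ from reader to reader, one optional approach (a small sketch, not part of the original files; the variable name "region" is just an example) is to pull the region out into a variable so it only has to change in one place:

variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

provider "aws" {
  # Use the variable instead of a hard-coded region.
  region = var.region
}

If you go this route, the variable and provider blocks above replace the hard-coded provider block.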
2. CREATING THE VPC
This part involves the creation of the VPC for our EKS cluster.
The VPC will carry a Name tag of "main" and a cidr_block of "10.0.0.0/16".
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "main"
}
}
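If you plan to use the cluster's private endpoint or rely on DNS-based service discovery, it helps for the VPC to have DNS support and DNS hostnames enabled. An optional variant of the same resource with those standard aws_vpc arguments turned on might look like this (replace the block above if you want these settings):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  # Allow instances in the VPC to resolve and receive DNS hostnames.
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "main"
  }
}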
3. CREATING AN INTERNET GATEWAY
Up next we will create the internet gateway. It enables resources in your public subnets, for example an EC2 instance, to connect to the internet if they have a public IPv4 or IPv6 address.
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
tags = {
Name = "igw"
}
}
The internet gateway attaches itself to the VPC using the VPC ID. You also need a Name tag for this resource; I have chosen 'igw' for this project's internet gateway. Feel free to rename it if you like.
4. CREATING THE SUBNETS
Next we will create four subnets in the VPC, two private and two public, spread across two availability zones.
resource "aws_subnet" "private-us-east-1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.0.0/19"
availability_zone = "us-east-1a"
tags = {
"Name" = "private-us-east-1a"
"kubernetes.io/role/internal-elb" = "1"
"kubernetes.io/cluster/demo" = "owned"
}
}
resource "aws_subnet" "private-us-east-1b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.32.0/19"
availability_zone = "us-east-1b"
tags = {
"Name" = "private-us-east-1b"
"kubernetes.io/role/internal-elb" = "1"
"kubernetes.io/cluster/demo" = "owned"
}
}
resource "aws_subnet" "public-us-east-1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.64.0/19"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
"Name" = "public-us-east-1a"
"kubernetes.io/role/elb" = "1"
"kubernetes.io/cluster/demo" = "owned"
}
}
resource "aws_subnet" "public-us-east-1b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.96.0/19"
availability_zone = "us-east-1b"
map_public_ip_on_launch = true
tags = {
"Name" = "public-us-east-1b"
"kubernetes.io/role/elb" = "1"
"kubernetes.io/cluster/demo" = "owned"
}
}
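The kubernetes.io/role/elb and kubernetes.io/role/internal-elb tags let Kubernetes discover which subnets to place public and internal load balancers in, and kubernetes.io/cluster/demo = "owned" marks the subnets as belonging to the cluster we will later name "demo". If you want to double-check the subnet layout after an apply, an optional sketch like the following exposes the subnet IDs as outputs:

output "private_subnet_ids" {
  # IDs of the two private subnets created above.
  value = [
    aws_subnet.private-us-east-1a.id,
    aws_subnet.private-us-east-1b.id,
  ]
}

output "public_subnet_ids" {
  # IDs of the two public subnets created above.
  value = [
    aws_subnet.public-us-east-1a.id,
    aws_subnet.public-us-east-1b.id,
  ]
}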
5. NAT GATEWAY AND ELASTIC IP
The next step is to create a NAT gateway and an Elastic IP.
The Elastic IP will be attached to the NAT gateway, which sits in one of the public subnets.
resource "aws_eip" "nat" {
vpc = true
tags = {
Name = "nat"
}
}
resource "aws_nat_gateway" "nat" {
allocation_id = aws_eip.nat.id
subnet_id = aws_subnet.public-us-east-1a.id
tags = {
Name = "nat"
}
depends_on = [aws_internet_gateway.igw]
}
The NAT gateway requires the internet gateway to be provisioned first, which is what the depends_on argument enforces. Our Name tag for the NAT gateway will be "nat" (feel free to change it if you like).
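If you would like to see the public address that your private nodes will use for outbound traffic, an optional output like this one (not part of the original files) surfaces the Elastic IP:

output "nat_gateway_public_ip" {
  # Public IP of the Elastic IP attached to the NAT gateway.
  value = aws_eip.nat.public_ip
}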
6. ROUTE TABLES
A route table is a set of rules that determines where network traffic from your subnets or gateway is directed.
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id
route = [
{
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat.id
carrier_gateway_id = ""
destination_prefix_list_id = ""
egress_only_gateway_id = ""
gateway_id = ""
instance_id = ""
ipv6_cidr_block = ""
local_gateway_id = ""
network_interface_id = ""
transit_gateway_id = ""
vpc_endpoint_id = ""
vpc_peering_connection_id = ""
},
]
tags = {
Name = "private"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route = [
{
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
nat_gateway_id = ""
carrier_gateway_id = ""
destination_prefix_list_id = ""
egress_only_gateway_id = ""
instance_id = ""
ipv6_cidr_block = ""
local_gateway_id = ""
network_interface_id = ""
transit_gateway_id = ""
vpc_endpoint_id = ""
vpc_peering_connection_id = ""
},
]
tags = {
Name = "public"
}
}
resource "aws_route_table_association" "private-us-east-1a" {
subnet_id = aws_subnet.private-us-east-1a.id
route_table_id = aws_route_table.private.id
}
resource "aws_route_table_association" "private-us-east-1b" {
subnet_id = aws_subnet.private-us-east-1b.id
route_table_id = aws_route_table.private.id
}
resource "aws_route_table_association" "public-us-east-1a" {
subnet_id = aws_subnet.public-us-east-1a.id
route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "public-us-east-1b" {
subnet_id = aws_subnet.public-us-east-1b.id
route_table_id = aws_route_table.public.id
}
We create one route table for the private subnets and one for the public subnets.
The empty strings are only there because the route argument is written as a list of objects; in that form every attribute of a route object must be given a value. You can change them if you want.
The aws_route_table_association resources then attach each subnet to its matching route table, and both route tables belong to the VPC created above. Feel free to change the Name tags if you like.
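If the empty-string placeholders bother you, a common alternative (sketched below, assuming you keep the rest of the configuration unchanged) is to declare the route tables without inline routes and add the default routes as standalone aws_route resources:

resource "aws_route" "private_default" {
  # Send all outbound traffic from the private subnets through the NAT gateway.
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.nat.id
}

resource "aws_route" "public_default" {
  # Send all outbound traffic from the public subnets through the internet gateway.
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

If you use this form, remove the route = [ ... ] lists from the two route table resources above.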
7. CREATING THE EKS CLUSTER WITH A ROLE
Amazon EKS uses a service-linked role called AWSServiceRoleForAmazonEKS for cluster management in your account.
The attached policies allow the role to manage the following resources: network interfaces, security groups, logs, and VPCs.
In this section, we will create the EKS cluster together with its IAM role.
We first define a role that the EKS service can assume and attach the AmazonEKSClusterPolicy to it; the cluster itself is then created with both the public and private subnets in its vpc_config.
resource "aws_iam_role" "demo" {
name = "eks-cluster-demo"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.demo.name
}
resource "aws_eks_cluster" "demo" {
name = "demo"
role_arn = aws_iam_role.demo.arn
vpc_config {
subnet_ids = [
aws_subnet.private-us-east-1a.id,
aws_subnet.private-us-east-1b.id,
aws_subnet.public-us-east-1a.id,
aws_subnet.public-us-east-1b.id
]
}
depends_on = [aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy]
}
8. EKS NODE GROUP & OPENID
We will now create a node group for our EKS cluster.
The node group needs an IAM role with three managed policies attached: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.
resource "aws_iam_role" "nodes" {
name = "eks-node-group-nodes"
assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.nodes.name
}
resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.nodes.name
}
resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.nodes.name
}
resource "aws_eks_node_group" "private-nodes" {
cluster_name = aws_eks_cluster.demo.name
node_group_name = "private-nodes"
node_role_arn = aws_iam_role.nodes.arn
subnet_ids = [
aws_subnet.private-us-east-1a.id,
aws_subnet.private-us-east-1b.id
]
capacity_type = "ON_DEMAND"
instance_types = ["t3.small"]
scaling_config {
desired_size = 2
max_size = 5
min_size = 0
}
update_config {
max_unavailable = 1
}
labels = {
role = "general"
}
# taint {
# key = "team"
# value = "devops"
# effect = "NO_SCHEDULE"
# }
# launch_template {
# name = aws_launch_template.eks-with-disks.name
# version = aws_launch_template.eks-with-disks.latest_version
# }
depends_on = [
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly,
]
}
/* resource "aws_eks_node_group" "public-nodes" {
cluster_name = aws_eks_cluster.demo.name
node_group_name = "public-nodes"
node_role_arn = aws_iam_role.nodes.arn
subnet_ids = [
aws_subnet.public-us-east-1a.id,
aws_subnet.public-us-east-1b.id
]
capacity_type = "ON_DEMAND"
instance_types = ["t3.small"]
scaling_config {
desired_size = 2
max_size = 5
min_size = 0
}
update_config {
max_unavailable = 1
}
labels = {
role = "general"
}
# taint {
# key = "team"
# value = "devops"
# effect = "NO_SCHEDULE"
# }
# launch_template {
# name = aws_launch_template.eks-with-disks.name
# version = aws_launch_template.eks-with-disks.latest_version
# }
depends_on = [
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly,
]
}
*/
# resource "aws_launch_template" "eks-with-disks" {
# name = "eks-with-disks"
# key_name = "local-provisioner"
# block_device_mappings {
# device_name = "/dev/xvdb"
# ebs {
# volume_size = 50
# volume_type = "gp2"
# }
# }
# }
We are only setting up private nodes for this project, which means the nodes will live in the private subnets.
If you want public nodes instead, you can uncomment and adapt the public node group above.
We are also configuring scaling with a desired size of 2, a maximum of 5, and a minimum of 0.
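The commented-out taint and launch_template blocks (and the commented-out public node group) are left in as optional extras: the taint would keep pods without a matching toleration off these nodes, and the launch template example would give each node an additional 50 GB gp2 data volume. They stay disabled in this walkthrough.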
OPENID
IAM OIDC identity providers are entities in IAM that represent an external identity provider (IdP) service, such as Google or Salesforce, that implements the OpenID Connect (OIDC) standard.
You use an IAM OIDC identity provider when you want to establish trust between an OIDC-compatible IdP and your AWS account; here, the IdP is the OIDC issuer that EKS exposes for our cluster.
data "tls_certificate" "eks" {
url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
resource "aws_iam_openid_connect_provider" "eks" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
OPENID TEST
This test lets us verify the OIDC setup: we create an IAM role that can only be assumed by a specific Kubernetes service account through the OIDC provider, and attach a small S3 read-only policy to it.
data "aws_iam_policy_document" "test_oidc_assume_role_policy" {
statement {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
values = ["system:serviceaccount:default:aws-test"]
}
principals {
identifiers = [aws_iam_openid_connect_provider.eks.arn]
type = "Federated"
}
}
}
resource "aws_iam_role" "test_oidc" {
assume_role_policy = data.aws_iam_policy_document.test_oidc_assume_role_policy.json
name = "test-oidc"
}
resource "aws_iam_policy" "test-policy" {
name = "test-policy"
policy = jsonencode({
Statement = [{
Action = [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
]
Effect = "Allow"
Resource = "arn:aws:s3:::*"
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "test_attach" {
role = aws_iam_role.test_oidc.name
policy_arn = aws_iam_policy.test-policy.arn
}
output "test_policy_arn" {
value = aws_iam_role.test_oidc.arn
}
9. AUTOSCALER
In this section we will create the IAM role that the Kubernetes Cluster Autoscaler will use to scale our node group.
data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
statement {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
values = ["system:serviceaccount:kube-system:cluster-autoscaler"]
}
principals {
identifiers = [aws_iam_openid_connect_provider.eks.arn]
type = "Federated"
}
}
}
resource "aws_iam_role" "eks_cluster_autoscaler" {
assume_role_policy = data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy.json
name = "eks-cluster-autoscaler"
}
resource "aws_iam_policy" "eks_cluster_autoscaler" {
name = "eks-cluster-autoscaler"
policy = jsonencode({
Statement = [{
Action = [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeLaunchTemplateVersions"
]
Effect = "Allow"
Resource = "*"
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
role = aws_iam_role.eks_cluster_autoscaler.name
policy_arn = aws_iam_policy.eks_cluster_autoscaler.arn
}
output "eks_cluster_autoscaler_arn" {
value = aws_iam_role.eks_cluster_autoscaler.arn
}
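This role is intended to be assumed by the cluster-autoscaler service account in the kube-system namespace, which is exactly what the StringEquals condition above enforces. When you later install the Cluster Autoscaler, you would typically annotate that service account with the eks_cluster_autoscaler_arn value printed by the output, in the same way as the aws-test service account sketched earlier.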
After you have created all of these files, execute the following commands to build your EKS cluster:
terraform init
This command initializes Terraform and downloads the required provider plugins, which in this case means the AWS provider.
terraform apply -auto-approve
This command applies everything that has been written to your AWS account.
When it completes successfully, check your AWS console: you should see the EKS cluster up and running, along with the node group we created.
Keep in mind that if you changed the names of these resources while setting up the files, they will appear under those new names.
And there you have it, an EKS cluster in roughly 15 minutes! Use the following command to connect to your cluster:
$ aws eks --region example_region update-kubeconfig --name cluster_name
In the command above, replace example_region with the region where your cluster is running, and cluster_name with the name of the cluster you created.
You've finished building an EKS cluster. Congratulations!