
Creating an EKS Cluster and Node Group with Terraform

DESCRIPTION

In this post I'm going to explain how to deploy an EKS cluster and an EC2 node group using Terraform.
The architecture consists of a VPC with 2 public subnets and 2 private subnets in different Availability Zones. Each public subnet contains a NAT gateway that allows the private subnets to access the Internet.
The EKS nodes will be created in the private subnets. The nodes are EC2 t3.micro instances managed by EKS.

About EKS: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html

ARCHITECTURE

(Architecture diagram: VPC with two public and two private subnets in different Availability Zones, NAT gateways in the public subnets, and EKS worker nodes in the private subnets)

CODE

GITHUB repository: https://github.com/erozedguy/Terraform-EKS-Cluster-with-Node-Group

DEVELOPMENT

STEP 01 - Provision the Networking

I'm using my own Terraform module to create the VPC, subnets, internet gateway, NAT gateways, etc.

AWS_VPC module: https://github.com/erozedguy/AWS-VPC-terraform-module

Module's usage

module "aws_vpc" {
  source          = "github.com/erozedguy/AWS-VPC-terraform-module.git"
  networking      = var.networking
  security_groups = var.security_groups
}
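The module call above assumes that the AWS provider is already configured in the root module. In case it isn't, here is a minimal sketch; the region has to match the Availability Zones used in the networking variable below:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# us-east-1 matches the AZs declared in var.networking
provider "aws" {
  region = "us-east-1"
}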

Variables for the module

variable "networking" {
  type = object({
   cidr_block       = string
   vpc_name         = string
   azs              = list(string)
   public_subnets   = list(string)
   private_subnets  = list(string)
   nat_gateways     = bool
  })
  default = {
   cidr_block       = "10.0.0.0/16"
   vpc_name         = "terraform-vpc"
   azs              = ["us-east-1a", "us-east-1b"]
   public_subnets   = ["10.0.1.0/24", "10.0.2.0/24"]
   private_subnets  = ["10.0.3.0/24", "10.0.4.0/24"]
   nat_gateways     = true
  }
}

variable "security_groups" {
  type = list(object({
    name        = string
    description = string
    ingress = object({
      description      = string
      protocol         = string
      from_port        = number
      to_port          = number
      cidr_blocks      = list(string)
      ipv6_cidr_blocks = list(string)
    })
  }))

  default = [{
    name        = "ssh"
    description = "Port 22"
    ingress = {
      description      = "Allow SSH access"
      protocol         = "tcp"
      from_port        = 22
      to_port          = 22
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = null
    }
  }]
}

STEP 02 - IAM Roles

EKS Cluster Role

Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service. That's why I create this role.

References: https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html

resource "aws_iam_role" "EKSClusterRole" {
  name = "EKSClusterRole"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      },
    ]
  })
}
NODE GROUP ROLE

The Amazon EKS node kubelet daemon makes calls to AWS APIs on your behalf. Nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch nodes and register them into a cluster, you must create an IAM role for those nodes to use when they are launched.

References: https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html

resource "aws_iam_role" "NodeGroupRole" {
  name = "EKSNodeGroupRole"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}
ATTACH MANAGED IAM POLICIES TO IAM ROLES

This policy provides Kubernetes the permissions it requires to manage resources on your behalf. Kubernetes requires ec2:CreateTags permissions to place identifying information on EC2 resources, including but not limited to instances, security groups, and Elastic Network Interfaces.

resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.EKSClusterRole.name
}

This policy allows Amazon EKS worker nodes to connect to Amazon EKS Clusters.

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.NodeGroupRole.name
}

Provides read-only access to Amazon EC2 Container Registry repositories.

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.NodeGroupRole.name
}

This policy provides the Amazon VPC CNI plugin (amazon-vpc-cni-k8s) the permissions it requires to modify the IP address configuration on your EKS worker nodes. This permission set allows the CNI to list, describe, and modify Elastic Network Interfaces on your behalf.

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.NodeGroupRole.name
}
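As a side note, the three attachments for the node group role could also be written as a single resource with for_each. This is just an equivalent sketch, not how the code above is structured; if you use it, the depends_on list of the node group resource has to reference aws_iam_role_policy_attachment.node_group_policies instead of the three individual resources.

# Equivalent alternative: attach all node group policies with one resource
resource "aws_iam_role_policy_attachment" "node_group_policies" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  ])

  policy_arn = each.value
  role       = aws_iam_role.NodeGroupRole.name
}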

STEP 03 - Create the EKS Cluster

You must set the ARN of the IAM cluster role, the Kubernetes version, and the VPC configuration.

resource "aws_eks_cluster" "eks-cluster" {
  name     = "eks-cluster"
  role_arn = aws_iam_role.EKSClusterRole.arn
  version  = "1.21"

  vpc_config {
    subnet_ids          = flatten([ module.aws_vpc.public_subnets_id, module.aws_vpc.private_subnets_id ])
    security_group_ids  = flatten(module.aws_vpc.security_groups_id)
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSClusterPolicy
  ]
}

NODE GROUP

To create and run Pods we need infrastructure. We can choose between EC2 instances (worker nodes) or Fargate.
For this implementation I'm going to use EC2 t3.micro instances as worker nodes to run the Pods.
For an isolated and more secure infrastructure, the best practice is to create your worker nodes in private subnets.
We must provide the required information about the AMI, instance type, capacity type, and disk size for the worker nodes, as well as a scaling configuration that specifies the desired, maximum, and minimum number of nodes.

resource "aws_eks_node_group" "node-ec2" {
  cluster_name    = aws_eks_cluster.eks-cluster.name
  node_group_name = "t3_micro-node_group"
  node_role_arn   = aws_iam_role.NodeGroupRole.arn
  subnet_ids      = flatten( module.aws_vpc.private_subnets_id )

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  ami_type       = "AL2_x86_64"
  instance_types = ["t3.micro"]
  capacity_type  = "ON_DEMAND"
  disk_size      = 20

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy
  ]
}
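Optionally, a few outputs make the cluster easier to consume after terraform apply. This is only a suggestion based on the aws_eks_cluster resource defined above, not something required by the deployment:

output "cluster_name" {
  value = aws_eks_cluster.eks-cluster.name
}

# API server endpoint, useful for kubectl or other Terraform providers
output "cluster_endpoint" {
  value = aws_eks_cluster.eks-cluster.endpoint
}

# Base64-encoded certificate authority data of the cluster
output "cluster_ca_certificate" {
  value = aws_eks_cluster.eks-cluster.certificate_authority[0].data
}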

USAGE

terraform init
terraform validate
terraform plan
terraform apply -auto-approve

STEP 04 - Check Cluster & Node Group Creation

Check that the node group was created using the AWS Console.

Create or update the kubeconfig for Amazon EKS.
For this purpose, use this command:

aws eks update-kubeconfig --region <region-code> --name <cluster-name>

Replace <region-code> with your region, for example us-east-1, and <cluster-name> with your cluster's name.
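For example, with the region and cluster name used in this post:

aws eks update-kubeconfig --region us-east-1 --name eks-cluster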


Check the nodes using kubectl.
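For example, once the kubeconfig points to the cluster:

kubectl get nodes

The two worker nodes created by the node group should appear with a Ready status after a few minutes.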

Documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
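Finally, when you no longer need the cluster, everything created in this post can be removed with:

terraform destroy -auto-approve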
