Mariam Adedeji
Managing AWS EKS with Terraform

In this tutorial, we'll learn how to use Terraform to manage an AWS Elastic Kubernetes Service (EKS) cluster.

What is Terraform?

Terraform is an Infrastructure-as-Code (IaC) tool that enables you to define and provision infrastructure resources using a declarative configuration language.
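
For example, here's the general shape of a declarative resource block (the names here are purely illustrative; the real resources for this tutorial are defined from Step 3 onward):

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"  # You declare the desired state; Terraform works out the API calls
}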

Why Use Terraform for AWS EKS?

  1. Infrastructure-as-Code: Automate and version control your infrastructure.
  2. Scalability: Easily scale your EKS cluster with Terraform configurations.
  3. Efficiency: Provision multiple AWS services with a single tool.

Prerequisites

  1. Terraform: Installed on your local machine. You can set it up by following this guide.
  2. AWS Account: You’ll need an AWS account to access AWS EKS and a few other services. If you don’t have one, sign up here.
  3. AWS CLI: Installed and configured with your AWS credentials.
  4. kubectl: Installed to manage your EKS cluster from your machine.
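
You can quickly verify the tooling from your terminal; these version checks assume standard installations:

terraform -version
aws --version
kubectl version --client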

Table of Contents

  1. Create Your Terraform Project Directory
  2. Define the AWS Provider
  3. Define the VPC and Networking Resources
  4. Define the EKS Cluster
  5. Create EKS Worker Nodes
  6. Apply the Terraform Configuration
  7. Configure kubectl to Access the EKS Cluster
  8. Deploy an Application Using Terraform
  9. Clean Up

Now, let’s get to it!


Step 1 — Create Your Terraform Project Directory

We’ll need to create a project directory where all our Terraform configuration files will live.

mkdir terraform-eks
cd terraform-eks

Step 2 — Define the AWS Provider (main.tf)

Create a main.tf file and add the following configuration:

provider "aws" {
  region = "eu-west-2"
}

Here, we configure AWS as the provider and set the region to eu-west-2, but you can choose a region closer to you.
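
If you’d prefer not to hard-code the region, a common pattern is to read it from a variable. This is an optional sketch; if you adopt it, it replaces the provider block above:

variable "aws_region" {
  description = "AWS region to deploy into"
  default     = "eu-west-2"
}

provider "aws" {
  region = var.aws_region
}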

Step 3 — Define the VPC and Networking Resources (network.tf)

AWS EKS requires a Virtual Private Cloud (VPC) and subnets to run.

Create a network.tf file and add the following configuration to it. This will create a VPC, subnets, an internet gateway, and a route table for your EKS cluster.

data "aws_availability_zones" "available" {}

resource "aws_vpc" "eks_vpc" {
  cidr_block = "10.0.0.0/16"  # IP range for the VPC
}

resource "aws_subnet" "eks_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.eks_vpc.id
  cidr_block              = cidrsubnet(aws_vpc.eks_vpc.cidr_block, 8, count.index)
  availability_zone       = element(data.aws_availability_zones.available.names, count.index)
  map_public_ip_on_launch = true  # Enable auto-assign public IP
}

resource "aws_internet_gateway" "eks_igw" {
  vpc_id = aws_vpc.eks_vpc.id
}

resource "aws_route_table" "eks_route_table" {
  vpc_id = aws_vpc.eks_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.eks_igw.id
  }
}

resource "aws_route_table_association" "eks_route_table_assoc" {
  count         = 2  # Associates the route table with each subnet
  subnet_id     = element(aws_subnet.eks_subnet.*.id, count.index)
  route_table_id = aws_route_table.eks_route_table.id
}
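
Optionally, you can surface the IDs of what Terraform creates with outputs. This sketch assumes you put them in an outputs.tf file (a name I'm choosing here; any .tf file works):

output "vpc_id" {
  value = aws_vpc.eks_vpc.id
}

output "subnet_ids" {
  value = aws_subnet.eks_subnet[*].id
}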

Step 4 — Define the EKS Cluster (eks.tf)

Now, let's create the EKS cluster itself, along with the required IAM role to manage the cluster. Create an eks.tf file and add the following configuration to it.

resource "aws_iam_role" "eks_role" {
  name = "eks-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = ["eks.amazonaws.com", "ec2.amazonaws.com"]
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "eks_node_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "eks_ec2_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_eks_cluster" "eks_cluster" {
  name     = "eks-cluster"
  role_arn = aws_iam_role.eks_role.arn

  vpc_config {
    subnet_ids = aws_subnet.eks_subnet[*].id
  }
}
  • The EKS cluster needs an IAM role that grants it permission to manage AWS services, so we create eks_role and attach the necessary managed policies to it. For simplicity, this tutorial uses one role for both the cluster and the worker nodes, which is why the trust policy allows both eks.amazonaws.com and ec2.amazonaws.com to assume it.
  • We then create the cluster itself with aws_eks_cluster, placing it in the VPC subnets created earlier.
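
By default, EKS selects a Kubernetes version for you. If you want to pin one explicitly, aws_eks_cluster accepts an optional version argument; the version number below is illustrative, so check which versions EKS currently supports:

resource "aws_eks_cluster" "eks_cluster" {
  name     = "eks-cluster"
  role_arn = aws_iam_role.eks_role.arn
  version  = "1.29"  # Illustrative; use a version EKS currently supports

  vpc_config {
    subnet_ids = aws_subnet.eks_subnet[*].id
  }
}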

Step 5 — Create EKS Worker Nodes (eks-workers.tf)

Create an eks-workers.tf file and add the following configuration. This will create a node group that can scale between 1 and 3 nodes.

resource "aws_eks_node_group" "node_group" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "eks-node-group"
  node_role_arn   = aws_iam_role.eks_role.arn
  subnet_ids      = aws_subnet.eks_subnet[*].id

  scaling_config {
    desired_size = 2  # Initial number of nodes
    max_size     = 3  # Maximum number of nodes
    min_size     = 1  # Minimum number of nodes
  }

  instance_types = ["t3.medium"]  # Type of EC2 instances for worker nodes
}
  • We define an EKS node group, which is a set of EC2 instances (worker nodes) that run the Kubernetes workloads.
  • The node group can scale between 1 and 3 nodes, depending on your workload's demand.
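
One ordering detail worth knowing: Terraform only infers dependencies from references, so it may try to create the node group before the worker-node policy attachments exist. A defensive tweak is to add an explicit depends_on to the node group; the sketch below shows only the addition, with the rest of the block unchanged:

resource "aws_eks_node_group" "node_group" {
  # ... same arguments as above ...

  depends_on = [
    aws_iam_role_policy_attachment.eks_node_policy,
    aws_iam_role_policy_attachment.eks_cni_policy,
    aws_iam_role_policy_attachment.eks_ec2_policy,
  ]
}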

Step 6 — Apply the Terraform Configuration

Initialize Terraform to download the necessary plugins for AWS:

terraform init

Before applying, it’s best practice to review the changes Terraform plans to make to our infrastructure. You can do this with the following command:

terraform plan

Once the plan looks good, you can apply the configuration to create the EKS cluster and its resources with the following command:

terraform apply

Confirm with yes when prompted.

Terraform has now created the EKS cluster, VPC, subnets, worker nodes, and IAM role based on the configurations we’ve written (be patient; provisioning an EKS cluster typically takes several minutes).
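
To see everything Terraform is now tracking, you can list its state:

terraform state list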

[Image: create eks with terraform]

If you check your AWS console, your EKS cluster should be up and running!

[Image: aws console showing eks]

Step 7 — Configure kubectl to Access the EKS Cluster

To manage the EKS cluster, we need to configure kubectl. This can be done using the AWS CLI.

Run the following command to configure your local kubectl to communicate with the EKS cluster.

aws eks --region eu-west-2 update-kubeconfig --name eks-cluster

Then run the following command to confirm that you have access to the EKS cluster by listing Kubernetes services.

kubectl get svc
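
You can also confirm that the worker nodes have joined the cluster:

kubectl get nodes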

Step 8 — Deploy an Application Using Terraform (deploy-nginx.tf)

Now, let’s deploy a simple Nginx application to the EKS cluster using the Kubernetes provider in Terraform.

First, create a deploy-nginx.tf file and define the Kubernetes provider.

provider "kubernetes" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks_auth.token
}

data "aws_eks_cluster_auth" "eks_auth" {
  name = aws_eks_cluster.eks_cluster.name
}
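
One caveat: the token from aws_eks_cluster_auth is short-lived and can expire during long runs. The Kubernetes provider also supports exec-based authentication, which fetches a fresh token on demand through the AWS CLI. Here's a sketch if you'd rather use that instead of the token argument:

provider "kubernetes" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.eks_cluster.name]
  }
}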

Then add the following configuration to deploy-nginx.tf to define the Nginx pod and expose it via a Kubernetes service:

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx"
    labels = {
      app = "nginx"
    }
  }

  spec {
    container {
      name  = "nginx"
      image = "nginx:latest"

      resources {
        limits = {
          cpu    = "0.5"
          memory = "512Mi"
        }
        requests = {
          cpu    = "0.25"
          memory = "256Mi"
        }
      }
    }
  }
}

resource "kubernetes_service" "nginx_service" {
  metadata {
    name = "nginx-service"
  }

  spec {
    selector = {
      app = "nginx"
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}
  • The kubernetes_pod resource defines an Nginx pod in the EKS cluster.
  • The kubernetes_service resource exposes the Nginx pod publicly as a LoadBalancer service.
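
Keep in mind that a bare pod is not rescheduled if its node fails. For anything beyond a quick demo, you'd typically use a kubernetes_deployment instead of kubernetes_pod; here's a minimal sketch of the equivalent deployment:

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx"
    labels = {
      app = "nginx"
    }
  }

  spec {
    replicas = 2  # Kubernetes keeps two replicas running and replaces failed pods

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
        }
      }
    }
  }
}

The service above selects on the app = "nginx" label, so it would route traffic to these pods without any changes.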

With deploy-nginx.tf properly configured, you can now deploy Nginx to your EKS cluster with the following commands:

terraform plan

terraform apply

Once the deployment is complete, you can check that the Nginx pod and service are running in your cluster with:

kubectl get pods

kubectl get svc
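
Once the LoadBalancer finishes provisioning (this can take a minute or two), the EXTERNAL-IP column of the service shows the public hostname you can open in a browser:

kubectl get svc nginx-service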

[Image: nginx pod and services]

[Image: nginx app]

Step 9 — Clean Up (Optional)

If you no longer need these resources, you can destroy them to avoid unnecessary charges. To do that, run the following command:

terraform destroy
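
If you only want to remove the Nginx demo while keeping the cluster, Terraform also supports targeted destroys; use -target sparingly, as it's intended for exceptional cases:

terraform destroy -target=kubernetes_service.nginx_service -target=kubernetes_pod.nginx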

In summary, we now have a working EKS cluster on AWS managed through Terraform. We set up the necessary VPC, created an EKS cluster with worker nodes, configured kubectl, and deployed an Nginx application.

If you’ve found this article helpful, please leave a like or a comment. If you have any questions, please let me know in the comment section.
