Introduction
In today's cloud-native landscape, Kubernetes has become the de facto standard for container orchestration, and Amazon Elastic Kubernetes Service (EKS) provides a managed Kubernetes service that simplifies cluster management. When combined with Terraform, HashiCorp's Infrastructure as Code (IaC) tool, you can achieve reproducible, version-controlled, and automated Kubernetes infrastructure deployment.
This guide walks you through deploying an AWS EKS cluster using Terraform, covering everything from initial setup to operational basics, with pointers for hardening the result for production.
Prerequisites
Before beginning, ensure you have the following:
- AWS Account with appropriate IAM permissions
- AWS CLI installed and configured
- Terraform (v1.0+) installed
- kubectl for Kubernetes cluster interaction
- Basic understanding of AWS services, Kubernetes, and Terraform
Architecture Overview
Our deployment will create:
- A VPC with public subnets across two Availability Zones
- EKS control plane managed by AWS
- Managed node groups for worker nodes
- Necessary IAM roles and security groups
- Network components (Internet Gateway, Route Table, Route Table Associations)
Note that for simplicity this walkthrough places worker nodes in public subnets; a production deployment would typically add private subnets with a NAT Gateway for the nodes.
Efficient EKS Cluster Provisioning with Terraform's Modular Design
This implementation uses a community Terraform module with pre-built EKS configuration, so you write a small amount of HCL instead of defining every control-plane and node-group resource by hand.
Step 1: Prepare Your Environment - Install the Tools
Begin by installing Terraform locally. On macOS with Homebrew:
brew install terraform
Next, install the AWS CLI:
brew install awscli
Then install kubectl:
brew install kubernetes-cli
For installation on other operating systems, see the official documentation:
Terraform: Installation guide
AWS CLI: Installation instructions
kubectl: Installation documentation
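Once everything is installed, you can confirm the tools are on your PATH by checking their versions:
terraform version
aws --version
kubectl version --client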
Step 2: Configure AWS CLI Access
Run
aws configure
to set up authentication with your AWS account. You'll need to enter:
- AWS Access Key ID
- AWS Secret Access Key
- Default region name (this guide uses us-east-1)
- Default output format (you can leave this blank or enter json)
Terraform will use these credentials to create and manage your AWS resources.
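As a quick sanity check, you can confirm the credentials work by asking AWS who you are:
aws sts get-caller-identity
This should print your account ID and the ARN of the configured identity.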
Step 3: Prepare the Code Environment
We'll use the terraform-aws-modules/eks/aws module for this implementation. Save the following configuration in a Terraform file (for example, main.tf):
# AWS provider configuration; the region matches the default set in Step 2
provider "aws" {
  region = "us-east-1"
}

# Look up the Availability Zones available in the region
data "aws_availability_zones" "available" {}

# VPC for the cluster; EKS requires DNS support and DNS hostnames
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "main-vpc-eks"
  }
}

# Two public subnets, one per Availability Zone
resource "aws_subnet" "public_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet-${count.index}"
  }
}

# Internet Gateway so the public subnets can reach the internet
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main-igw"
  }
}

# Route table sending non-local traffic through the Internet Gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "main-route-table"
  }
}

# Associate each public subnet with the public route table
resource "aws_route_table_association" "a" {
  count          = 2
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public.id
}

# EKS cluster and managed node group via the community module
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name    = "sage-nodes"
  cluster_version = "1.31"

  # Optional: expose the cluster API endpoint publicly
  cluster_endpoint_public_access = true

  # Optional: adds the current caller identity as an administrator via cluster access entry
  enable_cluster_creator_admin_permissions = true

  eks_managed_node_groups = {
    sage-nodes = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }

  vpc_id     = aws_vpc.main.id
  subnet_ids = aws_subnet.public_subnet[*].id

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
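Optionally, you can surface a few of the module's outputs (the terraform-aws-modules/eks module exposes values such as cluster_name and cluster_endpoint) to make later steps easier to script:
output "cluster_name" {
  description = "Name of the EKS cluster, used by update-kubeconfig"
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "Endpoint URL of the EKS control plane"
  value       = module.eks.cluster_endpoint
}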
Step 4: Apply Terraform Configuration
Begin by initializing Terraform to download dependencies, then apply the configuration to create your EKS cluster infrastructure.
terraform init
Post-initialization, generate a plan to review the changes Terraform will make.
terraform plan
If the plan looks correct, apply it:
terraform apply
Terraform prints the plan again and prompts for confirmation; type yes to proceed. Provisioning the EKS control plane and node group typically takes 10 to 15 minutes.
Step 5: Configure kubectl Access
Update your kubeconfig so kubectl can talk to the new cluster, using the cluster name defined in the Terraform configuration:
aws eks --region us-east-1 update-kubeconfig --name sage-nodes
Then verify that kubectl is pointing at the new cluster:
kubectl config current-context
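By default, update-kubeconfig names the context after the cluster's ARN, so you should see something like arn:aws:eks:us-east-1:&lt;account-id&gt;:cluster/sage-nodes, where &lt;account-id&gt; is a placeholder for your AWS account ID.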
Step 6: Manage the Cluster
Use the following command to view all nodes in the cluster:
kubectl get nodes
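The output should list the two worker nodes from the managed node group in Ready status. The names and versions below are illustrative; yours will differ:
NAME                        STATUS   ROLES    AGE   VERSION
ip-10-0-0-xx.ec2.internal   Ready    <none>   2m    v1.31.x
ip-10-0-1-xx.ec2.internal   Ready    <none>   2m    v1.31.x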
To validate the cluster, deploy an NGINX instance.
kubectl run nginx --image=nginx --port=80
To see its status:
kubectl get pods
Once the pod reports a Running status, establish a tunnel from your local environment to it.
kubectl port-forward nginx 3000:80
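With the tunnel open, visiting http://localhost:3000 in your browser should return the NGINX welcome page.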
Step 7: Resource Cleanup
To destroy the resources created in this session, execute the following command:
terraform destroy
Terraform will list everything it plans to delete and prompt for confirmation. Be sure to complete this step; an idle EKS control plane and its EC2 nodes continue to accrue AWS charges.
Conclusion
By completing this guide, you've deployed a working AWS EKS cluster using Terraform's infrastructure-as-code approach. This foundation enables consistent, version-controlled Kubernetes infrastructure management, and can be hardened for production with private subnets, a NAT Gateway, and tighter endpoint access.
Chidubem Chinwuba is a Cloud/DevOps Engineer with a deep passion for technology and its transformative potential across industries. He is excited to continue his professional growth and to contribute to projects that shape the future of technology.