Kelechi Edeh

Step-by-Step Guide: Creating an Amazon EKS Cluster Using Terraform

Manually provisioning cloud infrastructure can be repetitive and error-prone. Tools like Terraform allow us to define our infrastructure as code, making deployments repeatable, auditable, and scalable.

In this article, I’ll walk you through how I created a production-ready Amazon EKS cluster using Terraform, AWS, and two powerful open-source modules:

  • terraform-aws-modules/vpc/aws for the networking layer
  • terraform-aws-modules/eks/aws for the cluster and its managed node group

What is Terraform?

Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to define, provision, and manage cloud infrastructure using declarative configuration files. Rather than clicking through web consoles, Terraform empowers you to codify your infrastructure and manage it just like your application code with versioning, collaboration, and automation.

Why Use Terraform for AWS EKS?

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that simplifies running Kubernetes on AWS without the operational overhead of managing the control plane.

Provisioning EKS manually can be complex due to the number of components involved (VPCs, subnets, IAM roles, node groups, etc.). Terraform removes this complexity by:

  • Enabling repeatable and auditable deployments.
  • Simplifying dependency management between AWS resources.
  • Integrating with CI/CD pipelines for automated infrastructure changes.

Prerequisites

Before creating an EKS cluster, ensure you have:

  • An AWS account and credentials configured locally
  • Terraform installed (terraform -v)
  • AWS CLI installed and configured (aws configure)
  • Basic knowledge of Terraform syntax

Project Structure

To keep things clean and efficient, I used only three main files to deploy both the networking (VPC) and the EKS cluster, leveraging the official Terraform modules for best practices.

Here’s what my final project structure looks like:
├── eks.tf
├── provider.tf
├── terraform.tfstate
├── terraform.tfvars
└── vpc.tf

By keeping networking and compute separate, I can manage, extend, or even reuse each part of the infrastructure more easily.
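
In my setup, provider.tf is a natural home for version pins (the backend configuration shown later in this post can live there as well). Here's a minimal sketch of what it might contain; the exact version constraints are my own illustrative choices, not requirements from the modules:

# provider.tf: pin the Terraform and AWS provider versions (constraints are illustrative)
terraform {
  required_version = ">= 1.10"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}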

Networking with terraform-aws-modules/vpc/aws

In vpc.tf, I used the terraform-aws-modules/vpc/aws module to create a complete VPC setup with:

  • Public and private subnets across multiple AZs
  • A NAT Gateway
  • Required tags for EKS subnet auto-discovery
  • DNS and VPN gateway support (optional for hybrid setups)

Here’s the vpc.tf configuration that puts this together:

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

# Input variables, populated from terraform.tfvars
variable "vpc_cidr_blocks" {}
variable "public_subnet_cidr_blocks" {}
variable "private_subnet_cidr_blocks" {}

# Look up the Availability Zones available in the region
data "aws_availability_zones" "azs" {}

module "my-eks-cluster-vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name            = "my-vpc"
  cidr            = var.vpc_cidr_blocks
  private_subnets = var.private_subnet_cidr_blocks
  public_subnets  = var.public_subnet_cidr_blocks

  azs = data.aws_availability_zones.azs.names

  enable_nat_gateway   = true
  enable_vpn_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/elb"               = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/internal-elb"      = 1
  }
}


Why tagging matters:
EKS needs to know which subnets it can use for placing worker nodes and load balancers. The kubernetes.io/cluster/<name> tag marks a subnet as usable by the cluster, while kubernetes.io/role/elb and kubernetes.io/role/internal-elb tell AWS which subnets are eligible for public and internal load balancers, respectively.

Deploying EKS with terraform-aws-modules/eks/aws

In eks.tf, I used the terraform-aws-modules/eks/aws module to spin up the Kubernetes control plane and a managed node group.

Here’s what it includes:

  • EKS control plane pinned to a specific Kubernetes version (1.30 here)
  • IAM roles and security groups (auto-generated)
  • Managed node group with autoscaling
  • Subnet and VPC IDs wired in from the outputs of the VPC module

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.17.2"

  cluster_name = "my-eks-cluster"
  cluster_version = "1.30"

  subnet_ids = module.my-eks-cluster-vpc.private_subnets
  vpc_id = module.my-eks-cluster-vpc.vpc_id

  tags = {
    env = "dev"
  }

  #node group configuration
   eks_managed_node_groups = {
    dev = {
      min_size     = 1
      max_size     = 3
      desired_size = 2

      instance_types = ["t2.small"]
    }
  }


}
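If you want to grab details like the cluster name and API endpoint after terraform apply (for example, to feed into aws eks update-kubeconfig), the module exposes them as outputs. Here's a minimal sketch of an optional outputs.tf; it isn't part of my three-file setup, and the output names shown are the ones documented for v19.x of the EKS module:

# outputs.tf (optional): surface a few values from the EKS module
output "cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "Endpoint URL of the EKS control plane"
  value       = module.eks.cluster_endpoint
}

You can then read them with terraform output cluster_name once the apply completes.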

Using terraform.tfvars for Inputs

To separate logic from data, I defined my VPC CIDR and subnet ranges in terraform.tfvars like this:

vpc_cidr_blocks = "10.0.0.0/16"
private_subnet_cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnet_cidr_blocks = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

This makes the code reusable: just update the .tfvars file (or supply a different one) to spin up a new environment (e.g., dev, staging, production).
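
For instance, a hypothetical staging.tfvars could carve out a separate, non-overlapping address range (the CIDRs below are purely illustrative):

vpc_cidr_blocks            = "10.1.0.0/16"
private_subnet_cidr_blocks = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
public_subnet_cidr_blocks  = ["10.1.4.0/24", "10.1.5.0/24", "10.1.6.0/24"]

You would then point Terraform at it with terraform apply -var-file=staging.tfvars.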

S3 Native State Locking

One nice bonus I added: S3 native state locking. In older Terraform setups, you needed a separate DynamoDB table for state locking. With recent Terraform releases (1.10 and later), you can instead set use_lockfile = true on the s3 backend to enable native locking directly in the bucket.

resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-bucker12345"

  lifecycle {
    prevent_destroy = false
  }
}

terraform {  
  backend "s3" {  
    bucket       = "terraform-state-bucker12345"  
    key          = "dev/terraform-state-file"  
    region       = "us-east-1"  
    encrypt      = true  
    //use_lockfile = true  #S3 native locking
  }  
}


This keeps my state safe from concurrent edits without needing a separate DynamoDB table.
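
As an extra safety net (not something I set up in this walkthrough), you can also enable versioning on the state bucket so earlier state files remain recoverable; a minimal sketch:

# Keep previous versions of the state file so they can be restored if needed
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}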

Wrapping up

In this article, I demonstrated how to provision a production-ready Amazon EKS cluster using just a few Terraform files and the official AWS modules for VPC and EKS.

By leveraging:

  • The terraform-aws-modules/vpc/aws module for a highly available and EKS-compatible VPC,
  • The terraform-aws-modules/eks/aws module to simplify Kubernetes cluster provisioning,
  • And S3 native locking to manage Terraform state securely without DynamoDB,

I was able to deploy a scalable and maintainable Kubernetes environment using clean, modular infrastructure-as-code. This setup is not only efficient and reusable but also adheres to AWS best practices.

code repo
