POTHURAJU JAYAKRISHNA YADAV

Terraform Modular EKS + Istio — Part 3

EKS Cluster Module (What Actually Creates Kubernetes)

After setting up VPC and IAM, the next step is creating the actual Kubernetes control plane using Amazon EKS.

This module is responsible for:

  • Creating the EKS cluster
  • Configuring networking
  • Enabling authentication
  • Setting up OIDC (required for IRSA)

📂 Module Files

modules/eks-cluster/
├── main.tf
├── variables.tf
└── outputs.tf

📄 variables.tf

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "cluster_version" {
  description = "Kubernetes version"
  type        = string
  default     = "1.29"
}

variable "cluster_role_arn" {
  description = "ARN of the EKS cluster IAM role"
  type        = string
}

variable "private_subnet_ids" {
  description = "List of private subnet IDs"
  type        = list(string)
}

variable "public_subnet_ids" {
  description = "List of public subnet IDs"
  type        = list(string)
}

🧠 What this module expects

This module doesn’t create everything itself. It depends on:

  • VPC module → for subnets
  • IAM module → for cluster role

Inputs tell it:

  • what to name the cluster
  • which version to run
  • where to place it
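
At the root module, these inputs are typically wired from the VPC and IAM module outputs. A minimal sketch (the module names and output names here are assumptions based on the layout of this series):

```hcl
module "eks_cluster" {
  source = "./modules/eks-cluster"

  cluster_name       = "demo-eks"
  cluster_version    = "1.29"
  cluster_role_arn   = module.iam.cluster_role_arn   # assumed IAM module output
  private_subnet_ids = module.vpc.private_subnet_ids # assumed VPC module outputs
  public_subnet_ids  = module.vpc.public_subnet_ids
}
```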

📄 main.tf

This is where the actual EKS cluster is created.


1. EKS Cluster Resource

resource "aws_eks_cluster" "cluster" {
  name     = var.cluster_name
  version  = var.cluster_version
  role_arn = var.cluster_role_arn

  # vpc_config and access_config are added in the next sections
}

What this does

This creates the EKS control plane.

Important point:

👉 You are NOT creating master nodes
👉 AWS manages them for you


2. Networking Configuration

vpc_config {
  subnet_ids = concat(var.private_subnet_ids, var.public_subnet_ids)
}

Why both private and public subnets?

  • Private subnets → worker nodes
  • Public subnets → load balancers

If you only pass private:

  • ALB/NLB creation can fail later

If you only pass public:

  • worker nodes become reachable from the internet, which is not secure

👉 Passing both is the standard production setup.
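
If you also want to control how the API server itself is reached, the same block accepts endpoint settings. A hedged sketch of an extended vpc_config (optional, not part of the minimal module above):

```hcl
vpc_config {
  subnet_ids              = concat(var.private_subnet_ids, var.public_subnet_ids)
  endpoint_private_access = true # nodes reach the API server over the VPC
  endpoint_public_access  = true # operators reach it from outside; restrict with public_access_cidrs if needed
}
```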


3. Access Configuration (Very Important)

access_config {
  authentication_mode                         = "API_AND_CONFIG_MAP"
  bootstrap_cluster_creator_admin_permissions = true
}

What this solves

Newer EKS releases added API access entries alongside the legacy aws-auth ConfigMap, which changed how authentication is configured.

This block ensures:

  • API-based authentication (access entries) is enabled
  • IAM access entries and the aws-auth ConfigMap both keep working

This line is critical

bootstrap_cluster_creator_admin_permissions = true

👉 Gives admin access to the creator

Without this:

  • cluster gets created
  • but you cannot access it
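
With API_AND_CONFIG_MAP enabled, additional principals can be granted access through access entries instead of editing aws-auth. A sketch (the IAM role ARN is hypothetical, the admin policy choice is an assumption, and these resources require a recent AWS provider version):

```hcl
resource "aws_eks_access_entry" "admin" {
  cluster_name  = aws_eks_cluster.cluster.name
  principal_arn = "arn:aws:iam::111111111111:role/platform-admin" # hypothetical role
}

resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = aws_eks_cluster.cluster.name
  principal_arn = aws_eks_access_entry.admin.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}
```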

4. Dependency Handling

Even though the role ARN is passed in as a variable, you may want the IAM-before-cluster ordering to be explicit. Note that depends_on cannot reference a variable such as var.cluster_role_arn; it only accepts resource and module references.

In practice:

  • the expression role_arn = var.cluster_role_arn already gives Terraform an implicit dependency through the root module wiring
  • if you want it explicit, declare the dependency on the module block in the root module

This ensures:

👉 IAM role exists before cluster creation
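
An explicit ordering belongs on the module block in the root module, where depends_on may reference whole modules (the module name here is an assumption):

```hcl
module "eks_cluster" {
  source           = "./modules/eks-cluster"
  cluster_role_arn = module.iam.cluster_role_arn
  # ... other inputs ...

  # valid: depends_on may reference a module, unlike a variable inside the child module
  depends_on = [module.iam]
}
```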


🔥 OIDC Setup (Core Concept)

This is the most important part of this module.


5. Fetch TLS Certificate

data "tls_certificate" "cluster" {
  url = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

What is happening?

  • EKS creates an OIDC endpoint
  • This block fetches its certificate

Used for:
👉 Trust verification in IAM


6. Create OIDC Provider

resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.cluster.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

What this actually does

This creates a bridge:

Kubernetes → IAM

Why this is required

Without OIDC:

❌ Pods cannot assume IAM roles
❌ IRSA will not work


Thumbprint

thumbprint_list = [...]

👉 Ensures secure trust between AWS and OIDC provider
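
Once the provider exists, an IAM role can trust one specific Kubernetes service account through it. A minimal IRSA trust-policy sketch (the namespace and service-account name are hypothetical):

```hcl
data "aws_iam_policy_document" "irsa_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.cluster.arn]
    }

    # only this service account may assume the role
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_eks_cluster.cluster.identity[0].oidc[0].issuer, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:app-sa"] # hypothetical namespace:serviceaccount
    }
  }
}
```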


📄 outputs.tf

output "cluster_id" {
  value = aws_eks_cluster.cluster.id
}

output "cluster_arn" {
  value = aws_eks_cluster.cluster.arn
}

output "cluster_endpoint" {
  value = aws_eks_cluster.cluster.endpoint
}

output "cluster_security_group_id" {
  value = aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id
}

output "cluster_certificate_authority_data" {
  value = aws_eks_cluster.cluster.certificate_authority[0].data
}

output "oidc_provider_arn" {
  value = aws_iam_openid_connect_provider.cluster.arn
}

output "oidc_provider" {
  value = replace(aws_eks_cluster.cluster.identity[0].oidc[0].issuer, "https://", "")
}

🧠 Why these outputs matter

These values are used by other modules:

  • Kubernetes provider → uses endpoint + certificate
  • IAM module → uses OIDC provider
  • Node module → uses cluster name

Example:

host = module.eks_cluster.cluster_endpoint
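
In the root module, the Kubernetes provider can consume these outputs directly. A sketch (module and variable names are assumptions):

```hcl
provider "kubernetes" {
  host                   = module.eks_cluster.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)

  # fetch a short-lived token via the AWS CLI instead of storing credentials
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```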

🔥 What You Actually Built

AWS Managed Control Plane
        │
        │
OIDC Provider (for IAM integration)

⚠️ Real Issues People Face

  • No OIDC → IRSA breaks
  • Wrong subnets → cluster unstable
  • Missing access_config → login issues
  • Wrong IAM role → cluster creation fails

🧠 Key Takeaways

  • EKS cluster = control plane only
  • AWS manages master nodes
  • OIDC is required for IAM integration
  • Outputs connect modules together

🚀 Next Step

Next module:

👉 Node Groups (actual compute layer)
👉 How EC2 instances join the cluster
👉 Scaling and updates


This module is where Kubernetes actually starts — but without nodes, it’s still empty.
