DEV Community

POTHURAJU JAYAKRISHNA YADAV
Terraform Modular EKS + Istio — Part 2

IAM Module (IRSA, OIDC, and Why This Controls Everything)

In the previous part, we built the VPC.

Now we move to something that causes the most confusion in EKS setups:

👉 IAM

This is not just “permissions”.

This module controls:

  • how EKS works
  • how nodes behave
  • how pods access AWS services

If this is wrong:

  • ALB won’t work
  • CSI drivers fail
  • Pods can’t access AWS
  • Debugging becomes painful

📂 Module Files

modules/iam/
├── main.tf
├── variables.tf
└── outputs.tf

📄 variables.tf

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "oidc_provider_arn" {
  description = "ARN of the OIDC provider"
  type        = string
}

variable "oidc_provider" {
  description = "OIDC provider URL"
  type        = string
}

🧠 What these inputs mean

  • cluster_name
    → used to name roles

  • oidc_provider_arn
    → comes from EKS module

  • oidc_provider
    → used for IRSA condition matching

👉 Important:

This module depends on the EKS module, because the OIDC provider is created inside it.
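
To make the dependency concrete, here is a sketch of how a root module might wire these inputs together. The `module.eks` output names here are assumptions — use whatever your EKS module actually exports:

```hcl
# Root module — hypothetical wiring; the module.eks output names are assumed
module "iam" {
  source = "./modules/iam"

  cluster_name = var.cluster_name

  # Both OIDC values come from the EKS module, which is why
  # this module implicitly depends on it:
  oidc_provider_arn = module.eks.oidc_provider_arn
  oidc_provider     = module.eks.oidc_provider_url
}
```

Note the tension this creates: the cluster and node roles must exist before the cluster does, while the IRSA role needs the cluster's OIDC provider. Some setups split the IRSA roles into a separate module applied after the cluster to avoid a cycle.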


📄 main.tf (Core IAM Logic)


1. EKS Cluster Role

resource "aws_iam_role" "eks_cluster" {
  name = "${var.cluster_name}-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      }
    ]
  })
}

🧠 What this actually does

This role is used by:

👉 EKS Control Plane (managed by AWS)


Key line

Service = "eks.amazonaws.com"

👉 Means:

“EKS service is allowed to assume this role”


Attach Policy

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

Why this policy?

This policy allows the EKS control plane to:

  • manage compute and network resources (ENIs, security groups) for the cluster
  • communicate with other AWS services on your behalf
  • create the resources the control plane needs

2. Node Group Role

resource "aws_iam_role" "eks_nodes" {
  name = "${var.cluster_name}-node-group-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

🧠 What this role is for

👉 Used by EC2 instances (worker nodes)


Key line

Service = "ec2.amazonaws.com"

👉 Means:

EC2 instances can assume this role


3. Node Policies

Now we attach multiple policies.


a. Worker Node Policy

resource "aws_iam_role_policy_attachment" "eks_worker_node_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

👉 Allows nodes to:

  • join cluster
  • communicate with control plane

b. CNI Policy

resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

👉 This is very important

Allows:

  • Pod networking
  • ENI management

👉 Without this:
Pods won’t get IPs


c. ECR Access

resource "aws_iam_role_policy_attachment" "eks_container_registry_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

👉 Allows nodes to:

  • pull container images from Amazon ECR

d. SSM Access

resource "aws_iam_role_policy_attachment" "eks_ssm_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.eks_nodes.name
}

👉 Allows:

  • shell access to nodes via SSM Session Manager
  • no SSH keys or open SSH ports required

👉 This is production best practice


4. EBS CSI Driver Role (🔥 Most Important Part)

resource "aws_iam_role" "ebs_csi_driver" {
  name = "${var.cluster_name}-ebs-csi-driver-role"

This is NOT for nodes.

This is for:

👉 Kubernetes Pod (EBS CSI controller)


🔥 This is IRSA (Core Concept)

Action = "sts:AssumeRoleWithWebIdentity"

👉 This is different from EC2 roles

This allows:

👉 Pods → assume an IAM role using their service-account token


🔥 OIDC Trust

Principal = {
  Federated = var.oidc_provider_arn
}

👉 This links:

  • EKS cluster
  • IAM

🔥 Condition (VERY IMPORTANT)

"${var.oidc_provider}:sub" = "system:serviceaccount:kube-system:ebs-csi-controller-sa"

👉 This means:

ONLY this service account can assume the role:

kube-system / ebs-csi-controller-sa
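
Assembled, the fragments above form a trust policy roughly like this. This is a sketch, not the exact code from the module — and the `aud` condition is an addition many IRSA guides recommend, so that only tokens issued for STS are accepted:

```hcl
resource "aws_iam_role" "ebs_csi_driver" {
  name = "${var.cluster_name}-ebs-csi-driver-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Principal = {
          Federated = var.oidc_provider_arn
        }
        Condition = {
          StringEquals = {
            # Only this exact service account may assume the role
            "${var.oidc_provider}:sub" = "system:serviceaccount:kube-system:ebs-csi-controller-sa"
            # Only tokens intended for STS are accepted (recommended hardening)
            "${var.oidc_provider}:aud" = "sts.amazonaws.com"
          }
        }
      }
    ]
  })
}
```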

Why this matters

👉 This is fine-grained security

Instead of:

❌ giving full access to nodes

You do:

✅ giving access only to one specific pod's service account


Attach Policy

resource "aws_iam_role_policy_attachment" "ebs_csi_policy" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
  role       = aws_iam_role.ebs_csi_driver.name
}

What this enables

  • create and delete EBS volumes
  • attach and detach volumes to nodes
  • manage snapshots

📄 outputs.tf

output "eks_cluster_role_arn" {
  value = aws_iam_role.eks_cluster.arn
}

output "eks_nodes_role_arn" {
  value = aws_iam_role.eks_nodes.arn
}

output "ebs_csi_driver_role_arn" {
  value = aws_iam_role.ebs_csi_driver.arn
}

🧠 Why outputs matter

These are used in:

  • EKS module
  • Node module
  • CSI module

Example:

cluster_role_arn = module.iam.eks_cluster_role_arn

👉 This creates dependency automatically.
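
For example, the EBS CSI role ARN can be handed to the managed add-on, which annotates the driver's service account with it for you (a sketch — version pinning and other add-on arguments omitted):

```hcl
# Hypothetical usage: wire the IRSA role into the managed EBS CSI add-on
resource "aws_eks_addon" "ebs_csi" {
  cluster_name             = var.cluster_name
  addon_name               = "aws-ebs-csi-driver"
  service_account_role_arn = module.iam.ebs_csi_driver_role_arn
}
```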


🔥 Real Architecture (What You Built)

EKS Control Plane → uses cluster role

EC2 Nodes → use node role

Pods (EBS CSI controller) → use IRSA role (via OIDC)

⚠️ Real Mistakes People Make

  • Giving full IAM to nodes (bad security)
  • Not using IRSA
  • Wrong OIDC condition → role not assumed
  • Forgetting CNI policy → pods fail

🧠 Key Takeaways

  • IAM is not optional — it defines system behavior
  • Nodes and pods should have separate roles
  • IRSA is the correct way to give AWS access to pods
  • OIDC is what connects Kubernetes to IAM

🚀 Next

In Part 3:

👉 EKS Cluster Module
👉 How control plane is created
👉 What OIDC actually does internally


If you understand this module, you understand how AWS + Kubernetes actually connect behind the scenes.
