Different Ways to Manage AWS IAM to Connect to an EKS Cluster

Managing access to Amazon EKS clusters can be confusing. Should you use IAM roles? What about service accounts? How do the new EKS Access Entries differ from the old aws-auth ConfigMap? If you've ever been overwhelmed by these questions, you're not alone.

Think of EKS access management as the security checkpoint at an airport. IAM is your passport (authentication), and EKS access policies are your boarding pass (authorization). You need both to get on the plane, and different passengers need different levels of access.

In this comprehensive guide, I'll walk you through everything you need to know about IAM for EKS, complete with four hands-on scenarios and working Terraform code you can deploy today.

What You'll Learn

  • ✅ How IAM authentication and authorization works with EKS
  • ✅ The OLD WAY (aws-auth ConfigMap) vs NEW WAY (Access Entries)
  • ✅ Setting up developer access with read-only permissions
  • ✅ Configuring CI/CD pipelines (GitHub Actions, GitLab, Jenkins)
  • ✅ Implementing IRSA (IAM Roles for Service Accounts)
  • ✅ Production-ready best practices

GitHub Repository: eks-iam-hands-on

All code examples in this post are fully tested and ready to deploy! 🚀


Table of Contents

  1. Understanding IAM with EKS
  2. The Old Way vs The New Way
  3. Scenario 1: Basic EKS Cluster Setup
  4. Scenario 2: Developer Read-Only Access
  5. Scenario 3: CI/CD Pipeline Access
  6. Scenario 4: IRSA for Pods
  7. Best Practices
  8. Common Pitfalls
  9. Conclusion

Understanding IAM with EKS

Before diving into the code, let's understand how IAM works with EKS.

Two Layers of Access Control

EKS uses two separate layers of access control:

1. AWS IAM (Authentication)

  • "Who are you?"
  • Verifies your AWS identity
  • Happens at the AWS API level

2. Kubernetes RBAC (Authorization)

  • "What can you do?"
  • Determines your permissions inside the cluster
  • Happens at the Kubernetes API level
┌──────────────────────────────────────────────────┐
│  User/Role/Service Account                       │
│  (Your AWS Identity)                             │
└────────────────┬─────────────────────────────────┘
                 │
                 ▼
┌──────────────────────────────────────────────────┐
│  AWS IAM Layer                                   │
│  ✓ Authenticated: You are who you say you are   │
└────────────────┬─────────────────────────────────┘
                 │
                 ▼
┌──────────────────────────────────────────────────┐
│  EKS Access Entry / aws-auth ConfigMap           │
│  ✓ Mapped: Your IAM identity → K8s user/group   │
└────────────────┬─────────────────────────────────┘
                 │
                 ▼
┌──────────────────────────────────────────────────┐
│  Kubernetes RBAC                                 │
│  ✓ Authorized: You can perform specific actions │
└──────────────────────────────────────────────────┘

OIDC: The Bridge Between AWS and Kubernetes

For pods to access AWS services, EKS uses an OIDC (OpenID Connect) provider:

Pod → Kubernetes Service Account → OIDC Token → IAM Role → AWS Service

This eliminates the need for long-lived AWS credentials in your pods!
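
In Terraform, that issuer URL is exposed as an attribute on the cluster resource (we'll use it in Scenario 4 to build the OIDC provider). A quick way to surface it, assuming the aws_eks_cluster.main resource created in Scenario 1:

output "oidc_issuer_url" {
  description = "OIDC issuer URL of the EKS cluster"
  value       = aws_eks_cluster.main.identity[0].oidc[0].issuer
}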


The Old Way vs The New Way

In late 2023, AWS introduced EKS Access Entries as a modern alternative to the aws-auth ConfigMap. Let's compare:

❌ OLD WAY: aws-auth ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/DeveloperRole
      username: developer
      groups:
        - system:masters  # Too permissive!
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/alice
      username: alice
      groups:
        - developers

Problems:

  • 😱 Manual YAML editing (error-prone)
  • 😱 Requires cluster access to modify
  • 😱 No AWS-managed policies; fine-grained access means writing your own RBAC
  • 😱 Easy to over-grant (system:masters becomes the path of least resistance)
  • 😱 Poor audit trail

✅ NEW WAY: EKS Access Entries

# Create access entry
resource "aws_eks_access_entry" "developer" {
  cluster_name  = aws_eks_cluster.main.name
  principal_arn = aws_iam_role.developer.arn
  type          = "STANDARD"
}

# Associate managed access policy
resource "aws_eks_access_policy_association" "developer" {
  cluster_name  = aws_eks_cluster.main.name
  principal_arn = aws_iam_role.developer.arn

  # AWS-managed policy for read-only access
  policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

  access_scope {
    type       = "namespace"  # Limit to specific namespaces!
    namespaces = ["dev", "staging"]
  }
}

Benefits:

  • ✅ Infrastructure as Code friendly
  • ✅ AWS-managed policies
  • ✅ API-driven (no ConfigMap editing)
  • ✅ Fine-grained permissions
  • ✅ Namespace-scoped access
  • ✅ Better audit via CloudTrail
  • ✅ No cluster access needed

AWS Managed Access Policies

AWS provides several managed access policies; the four you'll use most often are:

Policy                        Kubernetes Equivalent   Use Case
AmazonEKSClusterAdminPolicy   cluster-admin           Full cluster access
AmazonEKSAdminPolicy          admin                   Admin access without cluster-scoped resources
AmazonEKSEditPolicy           edit                    Create/update resources
AmazonEKSViewPolicy           view                    Read-only access

You can list every available policy with aws eks list-access-policies.

Scenario 1: Basic EKS Cluster Setup

Let's start by creating an EKS cluster using the new Access Entry API.

Architecture

┌─────────────────────────────────────────────────────┐
│                  AWS Account                         │
│                                                      │
│  ┌────────────────┐         ┌────────────────────┐ │
│  │  Admin Role    │────────▶│   EKS Cluster      │ │
│  │  (Your IAM)    │  Access │   + OIDC Provider  │ │
│  └────────────────┘  Entry  │   + Node Group     │ │
│                              └────────────────────┘ │
└─────────────────────────────────────────────────────┘

Step 1: Create the EKS Cluster

Create main.tf:

# VPC for EKS
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  # Required tags for EKS
  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}

# IAM Role for EKS Cluster
resource "aws_iam_role" "eks_cluster_role" {
  name = "demo-eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks.amazonaws.com"
      }
    }]
  })
}

# Attach required policies
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}

# EKS Cluster with Access Entry API
resource "aws_eks_cluster" "main" {
  name     = "demo-eks-cluster"
  role_arn = aws_iam_role.eks_cluster_role.arn
  version  = "1.28"

  vpc_config {
    subnet_ids = concat(
      module.vpc.private_subnets,
      module.vpc.public_subnets
    )
  }

  # 🎯 KEY: Enable the new Access Entry API
  access_config {
    authentication_mode = "API_AND_CONFIG_MAP"  # Supports both
    bootstrap_cluster_creator_admin_permissions = true
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_cluster_policy
  ]
}

# Node Group
resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "demo-node-group"
  node_role_arn   = aws_iam_role.eks_node_role.arn
  subnet_ids      = module.vpc.private_subnets

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }

  instance_types = ["t3.medium"]
}
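
One note on the snippet above: the node group references aws_iam_role.eks_node_role, which isn't shown in this excerpt. A minimal sketch of that role with the managed policies EKS worker nodes need:

# IAM Role for worker nodes (referenced by the node group above)
resource "aws_iam_role" "eks_node_role" {
  name = "demo-eks-node-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
  })
}

# Standard worker-node policies
resource "aws_iam_role_policy_attachment" "node_worker" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_role.name
}

resource "aws_iam_role_policy_attachment" "node_cni" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_role.name
}

resource "aws_iam_role_policy_attachment" "node_ecr" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_role.name
}

In practice, also add a depends_on from the node group to these attachments so nodes aren't created before their permissions exist.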

Step 2: Create Admin Access

# Current AWS account (used to build the trusted principal ARN below)
data "aws_caller_identity" "current" {}

# IAM Role for Admin Access
resource "aws_iam_role" "cluster_admin" {
  name = "demo-eks-admin-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
      }
    }]
  })
}

# 🎯 NEW: Create Access Entry
resource "aws_eks_access_entry" "admin" {
  cluster_name  = aws_eks_cluster.main.name
  principal_arn = aws_iam_role.cluster_admin.arn
  type          = "STANDARD"
}

# 🎯 NEW: Associate Admin Policy
resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = aws_eks_cluster.main.name
  principal_arn = aws_iam_role.cluster_admin.arn

  # Grants full cluster admin access
  policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"  # Full cluster access
  }
}

Step 3: Deploy and Test

# Initialize and apply
terraform init
terraform apply

# Configure kubectl
aws eks update-kubeconfig --region us-east-1 --name demo-eks-cluster

# Test access
kubectl get nodes
kubectl get pods -A

Why This Works:

  1. The cluster creator gets automatic admin access
  2. Additional admins are granted via Access Entry
  3. No manual ConfigMap editing required!

Full code: scenario-1-basic-eks/


Scenario 2: Developer Read-Only Access

Now let's grant a developer read-only access to specific namespaces.

Use Case

  • Junior developers need to view pods and logs
  • No permission to create/delete resources
  • Limited to dev and staging namespaces only

Architecture

┌──────────────────────────────────────────────────┐
│  Developer                                       │
│  (IAM User/Role)                                 │
└────────────┬─────────────────────────────────────┘
             │ sts:AssumeRole
             ▼
┌──────────────────────────────────────────────────┐
│  Developer IAM Role                              │
│  + EKS Describe permissions                      │
└────────────┬─────────────────────────────────────┘
             │ Access Entry
             ▼
┌──────────────────────────────────────────────────┐
│  EKS Cluster                                     │
│  Policy: AmazonEKSViewPolicy                     │
│  Scope: namespaces [dev, staging]                │
└──────────────────────────────────────────────────┘

Implementation

# IAM Role for Developers
resource "aws_iam_role" "developer" {
  name = "demo-eks-developer-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        AWS = var.developer_principal_arns
      }
      # 🔒 Security: Require External ID
      Condition = {
        StringEquals = {
          "sts:ExternalId" = var.external_id
        }
      }
    }]
  })
}

# IAM Policy for EKS API Access
resource "aws_iam_policy" "eks_describe" {
  name = "demo-eks-developer-describe-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "eks:DescribeCluster",
          "eks:ListClusters"
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "developer_eks_describe" {
  role       = aws_iam_role.developer.name
  policy_arn = aws_iam_policy.eks_describe.arn
}

# 🎯 Access Entry for Developer
resource "aws_eks_access_entry" "developer" {
  cluster_name  = var.cluster_name
  principal_arn = aws_iam_role.developer.arn
  type          = "STANDARD"
}

# 🎯 Grant Read-Only Access (Namespace-Scoped)
resource "aws_eks_access_policy_association" "developer" {
  cluster_name  = var.cluster_name
  principal_arn = aws_iam_role.developer.arn

  # View-only policy
  policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

  access_scope {
    type       = "namespace"
    namespaces = ["dev", "staging"]  # Limited scope!
  }
}
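
The snippet assumes a few input variables whose names match the references above; a minimal sketch of their definitions:

variable "cluster_name" {
  description = "Name of the EKS cluster from Scenario 1"
  type        = string
}

variable "developer_principal_arns" {
  description = "IAM users/roles allowed to assume the developer role"
  type        = list(string)
}

variable "external_id" {
  description = "External ID required when assuming the developer role"
  type        = string
}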

Testing Read-Only Access

# Assume developer role
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/demo-eks-developer-role \
  --role-session-name dev-session \
  --external-id my-external-id

# Export credentials
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_SESSION_TOKEN="..."

# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name demo-eks-cluster

# These should WORK ✅
kubectl get pods -n dev
kubectl get pods -n staging
kubectl get deployments -n staging
kubectl logs <pod-name> -n dev

# These should FAIL ❌
kubectl create deployment test --image=nginx -n dev
# Error: deployments.apps is forbidden

kubectl delete pod <pod-name> -n dev
# Error: pods is forbidden

kubectl get pods -n production
# Error: pods is forbidden (wrong namespace)

kubectl get nodes
# Error: nodes is forbidden (cluster-scoped, outside the namespace-scoped view policy)

What Makes This Secure?

  1. External ID: Prevents confused deputy problem
  2. Namespace-scoped: Can't access production
  3. Read-only: Can't modify anything
  4. No long-lived credentials: Must assume role each time

Full code: scenario-2-developer-access/


Scenario 3: CI/CD Pipeline Access

Modern CI/CD systems need to deploy to EKS. Let's configure GitHub Actions, GitLab CI, and Jenkins.

Why OIDC for CI/CD?

Old way: Store long-lived AWS access keys as secrets
New way: Use OIDC tokens (no credentials stored!)

GitHub Actions with OIDC

Architecture

┌────────────────────────────────────────────────┐
│  GitHub Actions Workflow                       │
│  (Repository: org/repo)                        │
└──────────────┬─────────────────────────────────┘
               │ OIDC Token
               ▼
┌────────────────────────────────────────────────┐
│  GitHub OIDC Provider (AWS IAM)                │
│  token.actions.githubusercontent.com           │
└──────────────┬─────────────────────────────────┘
               │ Trust Relationship
               ▼
┌────────────────────────────────────────────────┐
│  GitHub Actions IAM Role                       │
│  + EKS Admin Access Entry                      │
└──────────────┬─────────────────────────────────┘
               │
               ▼
┌────────────────────────────────────────────────┐
│  EKS Cluster - Deploy Application              │
└────────────────────────────────────────────────┘

Step 1: Create OIDC Provider

# GitHub OIDC Provider
resource "aws_iam_openid_connect_provider" "github_actions" {
  url = "https://token.actions.githubusercontent.com"

  client_id_list = ["sts.amazonaws.com"]

  # GitHub's SSL certificate thumbprints
  thumbprint_list = [
    "6938fd4d98bab03faadb97b34396831e3780aea1",
    "1c58a3a8518e8759bf075b76b750d4f2df264fcd"
  ]
}

Step 2: Create IAM Role

# IAM Role for GitHub Actions
resource "aws_iam_role" "github_actions" {
  name = "demo-eks-github-actions-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.github_actions.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        # 🔒 Restrict to specific repositories
        StringLike = {
          "token.actions.githubusercontent.com:sub" = [
            "repo:your-org/your-repo:*"
          ]
        }
      }
    }]
  })
}

# Grant EKS access
resource "aws_eks_access_entry" "github_actions" {
  cluster_name  = var.cluster_name
  principal_arn = aws_iam_role.github_actions.arn
  type          = "STANDARD"
}

resource "aws_eks_access_policy_association" "github_actions" {
  cluster_name  = var.cluster_name
  principal_arn = aws_iam_role.github_actions.arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}

Step 3: GitHub Workflow

.github/workflows/deploy.yml:

name: Deploy to EKS

on:
  push:
    branches: [main]

# 🎯 KEY: Request OIDC token
permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # 🎯 Configure AWS credentials via OIDC (no keys!)
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1

      # Now you have AWS access!
      - name: Update kubeconfig
        run: |
          aws eks update-kubeconfig \
            --region us-east-1 \
            --name demo-eks-cluster

      - name: Deploy
        run: |
          kubectl apply -f k8s/
          kubectl rollout status deployment/my-app

GitLab CI with OIDC

.gitlab-ci.yml:

deploy:
  stage: deploy
  image: amazon/aws-cli:latest
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script:
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn ${ROLE_ARN}
      --role-session-name "gitlab-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --web-identity-token ${GITLAB_OIDC_TOKEN}
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - aws eks update-kubeconfig --region us-east-1 --name demo-eks-cluster
  script:
    - kubectl apply -f k8s/
  only:
    - main
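
On the AWS side, the ${ROLE_ARN} above needs a GitLab OIDC provider and an IAM role, mirroring the GitHub setup. A minimal sketch, assuming the audience stays https://gitlab.com as configured in the job (tighten the sub condition to your own project path):

# GitLab OIDC provider (audience matches the aud claim in .gitlab-ci.yml)
data "tls_certificate" "gitlab" {
  url = "https://gitlab.com"
}

resource "aws_iam_openid_connect_provider" "gitlab" {
  url             = "https://gitlab.com"
  client_id_list  = ["https://gitlab.com"]
  thumbprint_list = [data.tls_certificate.gitlab.certificates[0].sha1_fingerprint]
}

resource "aws_iam_role" "gitlab_ci" {
  name = "demo-eks-gitlab-ci-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = aws_iam_openid_connect_provider.gitlab.arn }
      Condition = {
        StringEquals = {
          "gitlab.com:aud" = "https://gitlab.com"
        }
        # Restrict to your project/branch via the sub claim, for example:
        # "gitlab.com:sub" = "project_path:your-group/your-repo:ref_type:branch:ref:main"
      }
    }]
  })
}

# Plus an aws_eks_access_entry / aws_eks_access_policy_association for
# aws_iam_role.gitlab_ci, exactly as in the GitHub Actions example above.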

Jenkins (Traditional Approach)

For Jenkins, use role assumption with External ID:

pipeline {
    agent any

    environment {
        ROLE_ARN = 'arn:aws:iam::123456789012:role/jenkins-eks-role'
        EXTERNAL_ID = 'jenkins-external-id'
    }

    stages {
        stage('Deploy to EKS') {
            steps {
                script {
                    // Assume role
                    def creds = sh(
                        script: """
                            aws sts assume-role \
                                --role-arn ${ROLE_ARN} \
                                --role-session-name jenkins-${BUILD_NUMBER} \
                                --external-id ${EXTERNAL_ID} \
                                --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
                                --output text
                        """,
                        returnStdout: true
                    ).trim().split()

                    env.AWS_ACCESS_KEY_ID = creds[0]
                    env.AWS_SECRET_ACCESS_KEY = creds[1]
                    env.AWS_SESSION_TOKEN = creds[2]
                }

                sh '''
                    aws eks update-kubeconfig --name demo-eks-cluster
                    kubectl apply -f k8s/
                '''
            }
        }
    }
}
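
The jenkins-eks-role referenced above isn't defined in this post; a minimal sketch, assuming the Jenkins controller runs under some AWS identity (EC2 instance profile, ECS task role, etc.) passed in as var.jenkins_principal_arn:

resource "aws_iam_role" "jenkins" {
  name = "jenkins-eks-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = var.jenkins_principal_arn }
      Condition = {
        StringEquals = {
          "sts:ExternalId" = "jenkins-external-id"
        }
      }
    }]
  })
}

resource "aws_eks_access_entry" "jenkins" {
  cluster_name  = var.cluster_name
  principal_arn = aws_iam_role.jenkins.arn
  type          = "STANDARD"
}

resource "aws_eks_access_policy_association" "jenkins" {
  cluster_name  = var.cluster_name
  principal_arn = aws_iam_role.jenkins.arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"

  access_scope {
    type = "cluster"
  }
}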

Full code: scenario-3-cicd-access/


Scenario 4: IRSA for Pods

The crown jewel of EKS IAM: IAM Roles for Service Accounts (IRSA).

The Problem

How do pods access AWS services (S3, DynamoDB, etc.) securely?

Bad: Put credentials in environment variables
Bad: Use EC2 instance profile (all pods share permissions)
Good: IRSA (each pod gets its own IAM role)

How IRSA Works

┌───────────────────────────────────────────────────┐
│  Pod                                              │
│  ServiceAccount: s3-access-sa                     │
│  ↓                                                │
│  Environment Variables (injected by EKS):         │
│  - AWS_ROLE_ARN=arn:aws:iam::123456789012:role/s3│
│  - AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets...│
└────────────────┬──────────────────────────────────┘
                 │
                 ▼
┌──────────────────────────────────────────────────┐
│  EKS OIDC Provider                               │
│  oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E  │
└────────────────┬──────────────────────────────────┘
                 │ Validates token
                 ▼
┌──────────────────────────────────────────────────┐
│  IAM Role (s3-access-role)                       │
│  Trust Policy: Only this ServiceAccount          │
└────────────────┬──────────────────────────────────┘
                 │
                 ▼
┌──────────────────────────────────────────────────┐
│  S3 Bucket                                       │
│  ✅ Pod can access!                              │
└──────────────────────────────────────────────────┘

Step 1: Create OIDC Provider

# Look up the cluster created in Scenario 1 (the name is assumed to come in via a variable)
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

# Get OIDC provider details from EKS
data "tls_certificate" "cluster" {
  url = data.aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

# Create OIDC provider
resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.cluster.certificates[0].sha1_fingerprint]
  url             = data.aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

locals {
  # Issuer URL without the https:// prefix, used in the IAM condition keys below
  oidc_provider_url = replace(data.aws_eks_cluster.cluster.identity[0].oidc[0].issuer, "https://", "")
}

Step 2: Create S3 Bucket and IAM Role

# S3 bucket for app
resource "aws_s3_bucket" "app_data" {
  bucket = "demo-eks-app-data"
}

# IAM Role for pods
resource "aws_iam_role" "s3_access" {
  name = "demo-eks-s3-access-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.cluster.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          # 🔒 Only allow specific ServiceAccount
          "${local.oidc_provider_url}:sub" = "system:serviceaccount:app:s3-access-sa"
          "${local.oidc_provider_url}:aud" = "sts.amazonaws.com"
        }
      }
    }]
  })
}

# S3 access policy
resource "aws_iam_policy" "s3_access" {
  name = "demo-eks-s3-access-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ]
      Resource = [
        aws_s3_bucket.app_data.arn,
        "${aws_s3_bucket.app_data.arn}/*"
      ]
    }]
  })
}

resource "aws_iam_role_policy_attachment" "s3_access" {
  role       = aws_iam_role.s3_access.name
  policy_arn = aws_iam_policy.s3_access.arn
}

Step 3: Create Kubernetes ServiceAccount

resource "kubernetes_service_account" "s3_access" {
  metadata {
    name      = "s3-access-sa"
    namespace = "app"

    # 🎯 This annotation links SA to IAM role
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.s3_access.arn
    }
  }
}
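
The app namespace is assumed to exist; if you manage it with Terraform as well, a minimal sketch:

resource "kubernetes_namespace" "app" {
  metadata {
    name = "app"
  }
}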

Step 4: Deploy Pod Using ServiceAccount

apiVersion: v1
kind: Pod
metadata:
  name: s3-test-pod
  namespace: app
spec:
  serviceAccountName: s3-access-sa  # 🎯 Use the ServiceAccount
  containers:
  - name: app
    image: amazon/aws-cli:latest
    command: ["sleep", "3600"]

Step 5: Test IRSA

# Deploy the pod
kubectl apply -f pod.yaml

# Check environment variables (injected by EKS)
kubectl exec -it s3-test-pod -n app -- env | grep AWS

# Output:
# AWS_ROLE_ARN=arn:aws:iam::123456789012:role/demo-eks-s3-access-role
# AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

# Test S3 access
kubectl exec -it s3-test-pod -n app -- \
  aws s3 ls s3://demo-eks-app-data/

# Upload a file
kubectl exec -it s3-test-pod -n app -- \
  aws s3 cp /etc/hostname s3://demo-eks-app-data/test.txt

# ✅ It works!

# Try to access a different bucket (should fail)
kubectl exec -it s3-test-pod -n app -- \
  aws s3 ls s3://some-other-bucket/
# ❌ Error: Access Denied (as expected!)

Real Application Example

Here's a Flask app that uses IRSA:

import boto3
from flask import Flask, jsonify

app = Flask(__name__)

# boto3 automatically uses IRSA credentials!
s3_client = boto3.client('s3')
BUCKET = 'demo-eks-app-data'

@app.route('/files')
def list_files():
    """List files in S3 bucket"""
    response = s3_client.list_objects_v2(Bucket=BUCKET)
    files = [obj['Key'] for obj in response.get('Contents', [])]
    return jsonify({'files': files})

@app.route('/upload/<filename>')
def upload_file(filename):
    """Upload a file to S3"""
    s3_client.put_object(
        Bucket=BUCKET,
        Key=filename,
        Body=b'Hello from IRSA!'
    )
    return jsonify({'message': f'Uploaded {filename}'})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

Key points:

  • No AWS credentials in code!
  • boto3 automatically uses IRSA
  • Each pod can have different permissions

Full code: scenario-4-irsa/


Best Practices

1. Use the New Access Entry API

Do:

resource "aws_eks_access_entry" "developer" {
  cluster_name  = aws_eks_cluster.main.name
  principal_arn = aws_iam_role.developer.arn
}

Don't:

# Manual aws-auth ConfigMap editing

2. Principle of Least Privilege

Do:

# Specific resources only
Resource = [
  aws_s3_bucket.app.arn,
  "${aws_s3_bucket.app.arn}/*"
]

Don't:

# Wildcard access
Resource = "*"

3. Use Namespace-Scoped Access

Do:

access_scope {
  type       = "namespace"
  namespaces = ["dev", "staging"]
}

Don't:

# Give everyone cluster-wide access
access_scope {
  type = "cluster"
}

4. Enable CloudTrail Logging

resource "aws_cloudtrail" "main" {
  name           = "eks-audit-trail"
  s3_bucket_name = aws_s3_bucket.cloudtrail.id

  event_selector {
    read_write_type           = "All"
    include_management_events = true
  }
}

5. Use OIDC for CI/CD

Do: GitHub Actions/GitLab with OIDC
Don't: Store long-lived access keys
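
For example, the GitHub Actions trust policy from Scenario 3 pins the federated token to one repository instead of storing any keys:

Condition = {
  StringEquals = {
    "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
  }
  StringLike = {
    "token.actions.githubusercontent.com:sub" = "repo:your-org/your-repo:*"
  }
}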

6. Implement Session Limits

resource "aws_iam_role" "developer" {
  # ... assume_role_policy and other arguments as in Scenario 2 ...

  # Limit session duration
  max_session_duration = 3600  # 1 hour
}

7. Require MFA for Production

Condition = {
  Bool = {
    "aws:MultiFactorAuthPresent" = "true"
  }
}

8. Use External IDs

Condition = {
  StringEquals = {
    "sts:ExternalId" = "unique-external-id"
  }
}

Common Pitfalls

1. Forgetting to Bootstrap Cluster Creator

Problem:

# Cluster creator has no access!
access_config {
  bootstrap_cluster_creator_admin_permissions = false
}

Solution:

access_config {
  bootstrap_cluster_creator_admin_permissions = true
}

2. Wrong OIDC Provider Thumbprint

Problem:

Error: InvalidIdentityToken

Solution:

# Let Terraform compute the thumbprint from the issuer's certificate chain
# (same tls_certificate pattern as in Scenario 4)
data "tls_certificate" "cluster" {
  url = data.aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

# thumbprint_list = [data.tls_certificate.cluster.certificates[0].sha1_fingerprint]

If you fetch the thumbprint manually with openssl, fingerprint the root CA certificate at the end of the chain that -showcerts prints, not the server certificate.

3. ServiceAccount Annotation Typo

Problem:

annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/wrong-role

Solution:

# Verify with Terraform output
terraform output s3_role_arn
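
For that command to print anything, the Scenario 4 code needs a corresponding output (a hypothetical addition, named to match the command above):

output "s3_role_arn" {
  description = "IAM role ARN to annotate on the s3-access-sa ServiceAccount"
  value       = aws_iam_role.s3_access.arn
}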

4. Overly Permissive Policies

Problem:

Action = "*"
Resource = "*"

Solution:

Action = [
  "s3:GetObject",
  "s3:PutObject"
]
Resource = "${aws_s3_bucket.app.arn}/*"

5. Not Testing Denied Actions

Always test that restricted actions fail:

# Should succeed
kubectl get pods -n dev

# Should fail
kubectl delete pod test -n dev

Troubleshooting Guide

Issue: "You must be logged in to the server (Unauthorized)"

Check:

# Verify AWS credentials
aws sts get-caller-identity

# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name demo-eks-cluster

# Check access entry exists
aws eks describe-access-entry \
  --cluster-name demo-eks-cluster \
  --principal-arn <your-role-arn>

Issue: IRSA Not Working

Check:

# Verify OIDC provider exists
aws iam list-open-id-connect-providers

# Check ServiceAccount annotation
kubectl get sa s3-access-sa -n app -o yaml

# Verify pod has environment variables
kubectl exec -it <pod> -n app -- env | grep AWS

# Check token file exists
kubectl exec -it <pod> -n app -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token

Issue: Access Denied

Check:

# Get caller identity from pod
kubectl exec -it <pod> -- aws sts get-caller-identity

# Check IAM role trust policy
aws iam get-role --role-name <role-name>

# Check attached policies
aws iam list-attached-role-policies --role-name <role-name>

Conclusion

We've covered a lot! Here's what you learned:

  • ✅ Understanding: How IAM works with EKS (authentication vs authorization)
  • ✅ New API: EKS Access Entries vs the aws-auth ConfigMap
  • ✅ Scenarios: Four practical, production-ready examples
  • ✅ IRSA: The secure way for pods to access AWS services
  • ✅ CI/CD: OIDC-based authentication for GitHub Actions, GitLab, and Jenkins
  • ✅ Best Practices: Security, least privilege, and troubleshooting

Next Steps

  1. Clone the repo: eks-iam-hands-on
  2. Start with Scenario 1: Get a cluster running
  3. Experiment: Try different access policies
  4. Deploy the sample app: See IRSA in action
  5. Adapt for production: Use these patterns in real projects


What challenges have you faced with EKS IAM?

Share your experiences in the comments! If this guide helped you, give it a ❤️ and share with your team.

Happy Kuberneting! 🚀


Found this helpful? Follow me for more AWS and Kubernetes content!

