Gerson Morales

Configure IRSA using EKS to access S3 from a POD in terraform

This post is a detailed, step-by-step guide to configuring IRSA (IAM Roles for Service Accounts) with Terraform so that a pod running in EKS can connect to the S3 service.

Requirements

  1. AWS Account.
  2. S3 bucket.
  3. Terraform.
  4. EKS cluster.
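
Optionally, verify the prerequisites from a shell before starting. These commands assume the AWS CLI is installed and configured, and use the cluster name from later in this post:

aws sts get-caller-identity
aws eks describe-cluster --name gersonplace-eks-project --query cluster.status
terraform version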

Scope

The final goal is to allow a pod to copy or put files to and from an S3 bucket called krakenmoto.

Initial steps

💻 Install Terraform

OSX

brew install hashicorp/tap/terraform

Windows

choco install terraform

Linux

sudo apt-get install terraform
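
Note that terraform is not in the default Debian/Ubuntu repositories, so the apt-get command above assumes the HashiCorp apt repository has already been added, per HashiCorp's published install instructions:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update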

Terraform module configuration

This module creates the resources necessary to get IRSA working:

  • S3 bucket.
  • IAM role with a trust relationship scoped to the EKS service account and namespace.
  • IAM policy for S3 access.
  • Data sources to look up the EKS cluster and AWS partition.

The module lives in the modules/irsa folder and contains three files: data.tf, main.tf, and variables.tf.

data.tf

data "aws_eks_cluster" "eks" {
  count = var.eks_cluster_id == null ? 0 : 1
  name  = var.eks_cluster_id
}

data "aws_partition" "current" {}

main.tf

locals {
  application = "gersonplace-irsa"
  name        = var.project_name

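  # The issuer returned by the cluster looks like
  #   https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE
  # The split/slice/join below strips the "https://" scheme, leaving
  #   oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE
  # which is the form the trust-policy condition keys require.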
  eks_oidc_issuer                       = var.eks_cluster_id == null ? "" : join("/", slice(split("/", one(data.aws_eks_cluster.eks).identity[0].oidc[0].issuer), 2, 5))
  eks_cluster_oidc_arn                  = "arn:${data.aws_partition.current.partition}:iam::${var.aws_account}:oidc-provider/${local.eks_oidc_issuer}"
  eks_namespace                         = "gersonplace"
  service_account                       = "${local.eks_namespace}-sa"
  common_tags = {
    application = local.application
  }
}

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "4.3.0"
  bucket  = var.bucket_name

  control_object_ownership = true
  object_ownership         = "BucketOwnerPreferred"

  acl = "private"

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = local.common_tags
}

resource "aws_iam_role" "irsa" {
  name        = "gersonplace-irsa-role"
  description = "${local.name} EKS IRSA role"

  assume_role_policy = <<-EOT
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
          "StringLike": {
            "${local.eks_oidc_issuer}:sub": "system:serviceaccount:${local.eks_namespace}:${local.service_account}",
            "${local.eks_oidc_issuer}:aud": "sts.amazonaws.com"
          }
        },
        "Principal": {
          "Federated": "${local.eks_cluster_oidc_arn}"
        },
        "Effect": "Allow",
        "Sid": ""
      }
    ]
  }
  EOT

  tags = local.common_tags
}

resource "aws_iam_policy" "irsa" {
  name        = "${local.name}-irsa-policy"
  description = "${local.name}-integration with EKS Pods"
  policy = jsonencode(
    {
      Version = "2012-10-17"
      Statement = [
        {
          Action = [
            "s3:*",
          ]
          Effect = "Allow"
          Resource = [
            "arn:aws:s3:::${var.bucket_name}",
            "arn:aws:s3:::${var.bucket_name}/*"
          ],
        }
      ]
    }
  )
  tags = local.common_tags
}

resource "aws_iam_role_policy_attachment" "irsa" {
  role       = aws_iam_role.irsa.name
  policy_arn = aws_iam_policy.irsa.arn
}
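
One note on the policy above: s3:* grants every S3 action on the bucket. For the copy/put use case described in Scope, it could be tightened to s3:ListBucket on the bucket ARN and s3:GetObject/s3:PutObject on the object ARN.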

variables.tf

variable "aws_account" {
  description = "AWS account ID"
  type        = string
}

variable "region" {
  description = "AWS Region"
  type        = string
}

variable "bucket_name" {
  type = string
}

variable "project_name" {
  type = string
}

variable "eks_cluster_id" {
  type = string
}

These three files create the infrastructure needed to access S3 from an EKS pod in the gersonplace namespace. The bucket name is passed as an input from the root main.tf file, which calls the module to create the whole infrastructure.


Terraform code that sets the variables and calls the module:

main.tf

module "IRSA" {
  source = "./modules/irsa"
  aws_account    = "112223334445"
  region         = "us-east-1"
  bucket_name    = "krakenmoto"
  project_name   = "gersonplace-irsa"
  eks_cluster_id = "gersonplace-eks-project"
}

Now we are ready to run Terraform. Change into the folder where main.tf is located and run the following commands:

terraform init -reconfigure -upgrade
terraform validate
terraform plan
terraform apply

At this point we should have a new S3 bucket named krakenmoto, along with an IAM role and IAM policy. The role's trust relationship should look something like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::112223334445:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/8DF14F971F8dfdDSJDSJDJSJDJA"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.us-east-1.amazonaws.com/id/8DF14F971F8dfdDSJDSJDJSJDJA:sub": "system:serviceaccount:gersonplace:gersonplace-sa",
                    "oidc.eks.us-east-1.amazonaws.com/id/8DF14F971F8dfdDSJDSJDJSJDJA:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}

This configuration enables access for the gersonplace-sa service account in the gersonplace namespace. By associating the service account with the role through the appropriate annotation, any resource using this service account will be granted permissions to access the resources specified in the role’s policy. This approach ensures that the Deployment/Pod can securely access the permitted resources in alignment with the role's policy. I'll explain this next with Kubernetes.
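
One prerequisite worth calling out: the Federated principal above references an IAM OIDC identity provider for the cluster, which this module does not create. Clusters are often provisioned with one already; if yours was not, a minimal sketch to associate it in Terraform (resource names here are illustrative):

# Fetch the cluster's OIDC issuer certificate to obtain its thumbprint
data "aws_eks_cluster" "this" {
  name = "gersonplace-eks-project"
}

data "tls_certificate" "oidc" {
  url = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
}

# Register the cluster's OIDC issuer as an IAM identity provider
resource "aws_iam_openid_connect_provider" "this" {
  url             = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}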

Kubernetes

Now that we have the required infrastructure and permissions in AWS, it is time to deploy the Kubernetes resources and test connectivity to S3 from a pod. For this example I will configure a Deployment running one pod in the gersonplace namespace, with a service account that carries the annotation for the role granting the necessary S3 permissions.

EKS Resources

  • Deployment: gersonplace-irsa
  • Service-Account: gersonplace-sa
  • Namespace: gersonplace

namespace.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: gersonplace

service-account.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gersonplace-sa
  namespace: gersonplace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::112223334445:role/gersonplace-irsa-role

✅ The annotation eks.amazonaws.com/role-arn with the ARN of the role created by Terraform has been added.

Deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gersonplace-irsa
  namespace: gersonplace
  labels:
    app: gersonplace
spec:
  selector:
    matchLabels:
      app: gersonplace
  replicas: 1
  template:
    metadata:
      labels:
        app: gersonplace
    spec:
      containers:
      - name: gersonplace-irsa
        image: amazon/aws-cli:latest
        imagePullPolicy: Always
        command: ["sleep", "infinity"]
      serviceAccountName: gersonplace-sa

✅ The service account has been added to the Deployment.

⚠ Important:
serviceAccountName must match the service account named in the trust policy.

The service account must carry the annotation eks.amazonaws.com/role-arn pointing to the role whose policy allows access to S3.

At this point you can apply the Kubernetes manifests with kubectl, log into the pod, and test access to the S3 bucket krakenmoto by listing the files inside it.
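
For reference, the manifests can be applied in order with the file names used above:

kubectl apply -f namespace.yaml
kubectl apply -f service-account.yaml
kubectl apply -f Deployment.yaml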

💻 ✅ Validation

Shell into the Deployment's pod and run aws s3 ls s3://krakenmoto. You should be able to list, put, and copy files, as the policy allows.
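
A minimal validation session, assuming the Deployment is healthy (the deploy/ shortcut makes kubectl pick one of the Deployment's pods):

# Open a shell inside the pod; the amazon/aws-cli image includes bash
kubectl exec -it -n gersonplace deploy/gersonplace-irsa -- bash

# Inside the pod: confirm the IRSA role was assumed, then exercise the bucket
aws sts get-caller-identity
aws s3 ls s3://krakenmoto
echo "hello" > /tmp/test.txt
aws s3 cp /tmp/test.txt s3://krakenmoto/test.txt

get-caller-identity should report the gersonplace-irsa-role ARN rather than the node's instance role.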

