Takashi Narikawa

Manage EKS aws-auth configmap with terraform

⚠️Caution

Recently, managing EKS authentication via the API has become preferred over editing the aws-auth ConfigMap. Set authentication_mode to "API" in the access_config block of the aws_eks_cluster resource, and manage access in Terraform by linking IAM principals to the cluster with aws_eks_access_entry and aws_eks_access_policy_association.

REF

resource "aws_eks_cluster" "example" {
  name     = "example-cluster"
  ....
  access_config {
    authentication_mode                         = "API"
    bootstrap_cluster_creator_admin_permissions = false
  }
}

resource "aws_eks_access_entry" "example" {
  cluster_name      = aws_eks_cluster.example.name
  principal_arn     = aws_iam_role.example.arn
  kubernetes_groups = ["group-1", "group-2"]
  type              = "STANDARD"
}

resource "aws_eks_access_policy_association" "example" {
  cluster_name  = aws_eks_cluster.example.name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
  principal_arn = aws_iam_user.example.arn

  access_scope {
    type       = "namespace"
    namespaces = ["example-namespace"]
  }
}
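Note for existing clusters that still rely on aws-auth: authentication_mode also accepts API_AND_CONFIG_MAP, so you can enable access entries alongside the ConfigMap first and switch to API once every principal has an access entry.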

Introduction

Hi, everyone.
I would like to leave a memorandum on how to manage the aws-auth ConfigMap that AWS auto-generates for an EKS cluster with Terraform.

The Problem

  • If we want to add an IAM user/role that can operate the EKS cluster, we need to edit the auto-generated aws-auth ConfigMap (namespace: kube-system).
  • If we follow the official manual (https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html), we would manage it with a Kubernetes manifest YAML, but we want to manage it with Terraform.
    • At first we can access our EKS cluster only with the IAM user/role that created it (via ~/.kube/config as below), and in our case that cluster-creator role is the Terraform user/role.
    • Therefore, we want to use the Terraform user/role to add users/roles to the aws-auth ConfigMap and manage the ConfigMap with Terraform.
    • Caution: according to the EKS Best Practices Guides - Security, we should create the cluster with a dedicated IAM role; a minimal sketch of assuming such a role follows the kubeconfig excerpt below.
# ~/.kube/config
- name: arn:aws:eks:ap-northeast-1:9999999999:cluster/eks-example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - ap-northeast-1
      - eks
      - get-token
      - --cluster-name
      - eks-example
      command: aws
      env: null
      provideClusterInfo: false
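As a minimal sketch of the dedicated-role best practice mentioned above (not part of the original setup; the role name eks-cluster-creator and the account ID are placeholders assumed for illustration), the AWS provider used to create the cluster could assume such a role:

provider "aws" {
  region = "ap-northeast-1"

  # Hypothetical dedicated role used only for creating/administering the EKS cluster.
  assume_role {
    role_arn = "arn:aws:iam::9999999999:role/eks-cluster-creator"
  }
}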

How to resolve the problem

1. Add a Terraform aws-auth ConfigMap resource and use the terraform import command

1.0 Prepare the Terraform Kubernetes provider

provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
} 

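The provider block above references data sources that are not shown in the post. A minimal sketch of what they could look like, assuming the cluster name is passed in as a variable (the variable name is mine, not from the original):

# Hypothetical data sources backing the kubernetes provider above
variable "eks_cluster_name" {
  type = string
}

data "aws_eks_cluster" "eks" {
  name = var.eks_cluster_name
}

data "aws_eks_cluster_auth" "eks" {
  name = var.eks_cluster_name
}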

1.1 Prepare the aws-auth ConfigMap tf resource for importing

# aws-auth.tf
resource "kubernetes_config_map" "aws-auth" {
  data = {
    "mapRoles" = ""
  }

  metadata {
    name      = ""
    namespace = ""
  }
}

1.2 Execute the terraform import command

terraform import kubernetes_config_map.aws-auth kube-system/aws-auth
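If the import succeeds, terraform state show kubernetes_config_map.aws-auth (or the next terraform plan) will show the real mapRoles content, which we copy back into the resource config in the next step.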

1.3 Run terraform plan and align the resource config with the real resource state until there is no diff

resource "kubernetes_config_map" "aws-auth" {
  data = {
    "mapRoles" = <<EOT
- rolearn: arn:aws:iam::99999999999:role/hoge-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
# Note: a role ARN that contains a path is not supported in rolearn; remove the path first.
# For example, change arn:aws:iam::<123456789012>:role/<team>/<developers>/<eks-admin> to arn:aws:iam::<123456789012>:role/<eks-admin>.
# FYI: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html#security-iam-troubleshoot-ConfigMap
EOT
  }

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
}

2. Edit the imported aws-auth ConfigMap resource and add the IAM user/role

resource "kubernetes_config_map" "aws-auth" {
  data = {
    "mapRoles" = <<EOT
- rolearn: arn:aws:iam::99999999999:role/hoge-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
# Note: a role ARN that contains a path is not supported in rolearn; remove the path first.
# For example, change arn:aws:iam::<123456789012>:role/<team>/<developers>/<eks-admin> to arn:aws:iam::<123456789012>:role/<eks-admin>.
# FYI: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html#security-iam-troubleshoot-ConfigMap
# Add new entries like the one below
- rolearn: hoge
  username: hoge
  groups: # REF: https://kubernetes.io/ja/docs/reference/access-authn-authz/rbac/
    - hoge
EOT
  }

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
}
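For reference, a filled-in version might look like the sketch below; the eks-admin role name, its username, and the system:masters group are assumptions for illustration, not values from the original cluster.

resource "kubernetes_config_map" "aws-auth" {
  data = {
    "mapRoles" = <<EOT
# Keep the auto-generated node role entry as-is.
- rolearn: arn:aws:iam::99999999999:role/hoge-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
# Hypothetical addition: grant cluster-admin to an operations role via the built-in system:masters group.
- rolearn: arn:aws:iam::99999999999:role/eks-admin
  username: eks-admin
  groups:
    - system:masters
EOT
  }

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
}

After editing, terraform plan should show only the added entry as a change, and terraform apply updates the ConfigMap in place.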


Top comments (1)

Nishant Nath:

Hi, thanks for the post.

Do you know a way to deploy k8s resources in a fully private EKS cluster (control plane in a private subnet) from Terraform / Terraform Cloud, so that I can edit the ConfigMap from Terraform itself?
