POTHURAJU JAYAKRISHNA YADAV

🚀 Karpenter Setup on AWS EKS – Practical Notes from Real Setup

Karpenter is a modern Kubernetes node autoscaler designed to overcome the limitations of the traditional Cluster Autoscaler.
Instead of managing node groups, Karpenter dynamically provisions right-sized EC2 instances based on actual pod requirements.

In this post, I'm documenting a complete Karpenter setup on AWS EKS, including IAM roles, CRDs, Helm installation, NodePool, and EC2NodeClass — written as real setup notes, not just theory.

🧠 Why Karpenter?

Compared to Cluster Autoscaler, Karpenter:

Launches nodes faster

Chooses optimal instance types

Reduces cost with smarter consolidation

Eliminates the need for static node groups

1️⃣ Pre-requisites

Before installing Karpenter, make sure you have the following (a quick verification snippet follows this list):

An existing EKS cluster

AWS CLI configured (aws configure)

kubectl configured for the cluster

Helm installed

jq and sed installed

(Optional) eksctl for OIDC setup
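
A quick sanity check of the tooling (exact versions don't matter much; any recent release is fine):

aws sts get-caller-identity        # credentials resolve
kubectl config current-context     # pointing at the right cluster
helm version --short
jq --version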

2️⃣ Set Environment Variables

These variables are reused across IAM, Helm, and manifests.

export KARPENTER_NAMESPACE=kube-system
export CLUSTER_NAME=eks-cluster
export AWS_PARTITION="aws"
export AWS_REGION=$(aws configure get region)
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
export K8S_VERSION=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.version" --output text)

export ALIAS_VERSION=$(aws ssm get-parameter \
  --name "/aws/service/eks/optimized-ami/$K8S_VERSION/amazon-linux-2023/x86_64/standard/recommended/image_id" \
  --query Parameter.Value | xargs aws ec2 describe-images \
  --query 'Images[0].Name' --image-ids | sed -r 's/^.*(v[[:digit:]]+).*$/\1/')

export OIDC_ENDPOINT=$(aws eks describe-cluster \
  --name $CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text)
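
Before moving on, confirm every variable resolved to a non-empty value:

echo "$CLUSTER_NAME $AWS_REGION $AWS_ACCOUNT_ID $K8S_VERSION $ALIAS_VERSION"
echo "$OIDC_ENDPOINT"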

3️⃣ IAM Roles for Karpenter

Karpenter requires two IAM roles:

Node role (for EC2 instances)

Controller role (for Karpenter itself)

3a. Karpenter Node IAM Role

Create the trust policy and save it as node-trust-policy.json (used by the create-role command below):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create role and attach policies:

aws iam create-role \
  --role-name KarpenterNodeRole-$CLUSTER_NAME \
  --assume-role-policy-document file://node-trust-policy.json

aws iam attach-role-policy --role-name KarpenterNodeRole-$CLUSTER_NAME \
  --policy-arn arn:$AWS_PARTITION:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy --role-name KarpenterNodeRole-$CLUSTER_NAME \
  --policy-arn arn:$AWS_PARTITION:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy --role-name KarpenterNodeRole-$CLUSTER_NAME \
  --policy-arn arn:$AWS_PARTITION:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly

aws iam attach-role-policy --role-name KarpenterNodeRole-$CLUSTER_NAME \
  --policy-arn arn:$AWS_PARTITION:iam::aws:policy/AmazonSSMManagedInstanceCore

3b. Karpenter Controller IAM Role (OIDC)

Create a trust policy using the cluster's OIDC provider. Replace <ACCOUNT_ID> with $AWS_ACCOUNT_ID and <OIDC_PROVIDER> with $OIDC_ENDPOINT minus its https:// prefix:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OIDC_PROVIDER>:aud": "sts.amazonaws.com",
          "<OIDC_PROVIDER>:sub": "system:serviceaccount:kube-system:karpenter"
        }
      }
    }
  ]
}

Attach the Karpenter controller policy (EC2, IAM, SQS, SSM permissions).
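
For reference, a minimal sketch of creating the role and attaching that policy, assuming the trust policy above is saved as controller-trust-policy.json and the controller permissions document (the one published in the Karpenter getting-started guide) as controller-policy.json; both file names are illustrative:

aws iam create-role \
  --role-name KarpenterControllerRole-$CLUSTER_NAME \
  --assume-role-policy-document file://controller-trust-policy.json

# Inline policy carrying the EC2, IAM, SQS and SSM permissions
aws iam put-role-policy \
  --role-name KarpenterControllerRole-$CLUSTER_NAME \
  --policy-name KarpenterControllerPolicy-$CLUSTER_NAME \
  --policy-document file://controller-policy.json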

4️⃣ Tag VPC Resources for Discovery

Karpenter discovers subnets and security groups using tags.

Tag nodegroup subnets:

for NODEGROUP in $(aws eks list-nodegroups --cluster-name $CLUSTER_NAME --query 'nodegroups' --output text); do
  aws ec2 create-tags \
    --tags Key=karpenter.sh/discovery,Value=$CLUSTER_NAME \
    --resources $(aws eks describe-nodegroup \
      --cluster-name $CLUSTER_NAME \
      --nodegroup-name $NODEGROUP \
      --query 'nodegroup.subnets' --output text)
done

Tag cluster security group:

SECURITY_GROUP=$(aws eks describe-cluster \
  --name $CLUSTER_NAME \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text)

aws ec2 create-tags \
  --tags Key=karpenter.sh/discovery,Value=$CLUSTER_NAME \
  --resources $SECURITY_GROUP
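
To confirm the discovery tags landed, list the tagged resources using the same filter Karpenter applies:

aws ec2 describe-subnets \
  --filters Name=tag:karpenter.sh/discovery,Values=$CLUSTER_NAME \
  --query 'Subnets[].SubnetId' --output text

aws ec2 describe-security-groups \
  --filters Name=tag:karpenter.sh/discovery,Values=$CLUSTER_NAME \
  --query 'SecurityGroups[].GroupId' --output text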

5️⃣ Namespace, OIDC & CRDs

kubectl create namespace kube-system || true

eksctl utils associate-iam-oidc-provider \
  --cluster $CLUSTER_NAME \
  --region $AWS_REGION \
  --approve

Install CRDs:

kubectl apply -f https://raw.githubusercontent.com/aws/karpenter-provider-aws/v1.6.3/pkg/apis/crds/karpenter.sh_nodepools.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/karpenter-provider-aws/v1.6.3/pkg/apis/crds/karpenter.k8s.aws_ec2nodeclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/karpenter-provider-aws/v1.6.3/pkg/apis/crds/karpenter.sh_nodeclaims.yaml
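
A quick check that all three CRDs registered:

kubectl get crd | grep karpenter
# Expected: ec2nodeclasses.karpenter.k8s.aws, nodeclaims.karpenter.sh, nodepools.karpenter.sh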

6️⃣ Install Karpenter Using Helm

export KARPENTER_VERSION=1.6.3

helm template karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version $KARPENTER_VERSION \
  --namespace kube-system \
  --set settings.clusterName=$CLUSTER_NAME \
  --set settings.interruptionQueue=$CLUSTER_NAME \
  --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::$AWS_ACCOUNT_ID:role/KarpenterControllerRole-$CLUSTER_NAME" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi > karpenter.yaml

kubectl apply -f karpenter.yaml

⚠️ Ensure interruptionQueue matches the cluster name to avoid SQS errors.

7️⃣ Create NodePool & EC2NodeClass

NodePool + EC2NodeClass Example
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: KarpenterNodeRole-eks-cluster
  amiSelectorTerms:
    - alias: al2023@v2024
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: eks-cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: eks-cluster

Save the manifest as nodepool.yml and apply it (the amiSelectorTerms alias should correspond to the $ALIAS_VERSION resolved in step 2️⃣):

kubectl apply -f nodepool.yml

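Confirm both resources were created and are ready:

kubectl get nodepool,ec2nodeclass
kubectl describe ec2nodeclass default   # status should list the discovered subnets, security groups and AMIs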

8️⃣ Validate Karpenter

kubectl get pods -n kube-system
kubectl get nodes
kubectl logs -n kube-system -l app.kubernetes.io/name=karpenter -f

Deploy a workload and verify that nodes are created automatically.
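
As a quick test, a pause-container deployment like the "inflate" example from the Karpenter getting-started guide works well (names, image tag, and replica count below are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1

Scale it beyond the existing capacity and watch Karpenter react:

kubectl apply -f inflate.yaml
kubectl scale deployment inflate --replicas 5
kubectl get nodeclaims -w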

9️⃣ Common Debug Commands

kubectl describe node <node-name>
kubectl describe pod <pod-name> -n kube-system
kubectl get events -A | grep karpenter
aws ec2 describe-security-groups \
  --filters Name=tag:karpenter.sh/discovery,Values=$CLUSTER_NAME

✅ Final Thoughts

This setup:

Removes dependency on managed node groups

Improves scaling speed

Reduces cost through consolidation
