The Amazon Web Services (AWS) EKS team introduced EKS Capabilities on Dec 1, 2025, letting developers focus on writing code by offloading the management of Kubernetes tooling such as Argo CD, ACK, and kro to AWS through a simple on/off switch.
Why does it matter to you?
Amazon EKS Capabilities is a new set of fully managed features designed to make running Kubernetes easier and faster for developers. Think of it as an on/off switch on top of your EKS cluster that removes a lot of the heavy lifting you normally deal with.
These capabilities give you Kubernetes-native tools for things like:
- deploying your apps continuously (Argo CD)
- managing AWS resources directly from Kubernetes (ACK)
- creating and organizing your Kubernetes objects (kro)
And the best part? AWS manages all of this for you.
Instead of installing, updating, and scaling these tools on your worker nodes, EKS now runs them for you. That means less time fighting with cluster operations and more time actually building and scaling the applications you care about.
I tried this out today, so let me show you how I turned on Argo CD in my cluster and deployed S3 buckets using ACK and Argo CD (GitOps) via EKS Capabilities.
Bonus
I found a bug that prevents Argo CD from reaching a healthy state. Follow along to find out what that bug is and how to fix it.
Note: This blog assumes that you have fundamental knowledge of how Argo CD, ACK, and kro work conceptually.
Architecture
Available capabilities
ACK: ACK enables the management of AWS resources using Kubernetes APIs, allowing you to create and manage S3 buckets, RDS databases, IAM roles, and other AWS resources using Kubernetes custom resources.
Argo CD: GitOps-based continuous deployment for your applications, using Git repositories as the source of truth for your workloads and system state.
kro: Create custom Kubernetes APIs that compose multiple resources into higher-level abstractions, allowing platform teams to define reusable patterns for common resource combinations: cloud building blocks.
Pricing [IMP]
The pricing is a little tricky. You pay for EKS Capabilities based on two components, both billed hourly: a base hourly rate for each enabled capability, and hourly usage charges based on the quantity of resources managed by each capability.
In short: base charge + usage charge.
For Argo CD, you pay hourly for each Argo CD Application managed. For AWS Controllers for Kubernetes (ACK), you pay hourly for each ACK resource managed. For Kubernetes Resource Orchestrator (KRO), you pay hourly for each KRO Resource Graph Definition (RGD) instance managed.
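As a rough illustration with made-up numbers (check the official EKS pricing page for real rates): if the base rate were $0.10/hour per capability and each Argo CD Application cost $0.01/hour, a cluster with the Argo CD capability enabled and 20 Applications would cost about (0.10 + 20 × 0.01) × 730 hours, roughly $219/month, for that capability alone.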
What is an EKS Capability?
It is an AWS resource whose scaling, lifecycle, and security are managed by AWS. It runs inside your EKS cluster, but not on your worker nodes.
In other words, capabilities run in EKS itself, eliminating the need to install and maintain controllers and other operational components on your worker nodes.
Resources that can be created by each capability
For ArgoCD
- Application
- ApplicationSet
- AppProject
For kro
- ResourceGraphDefinition (RGD)
- Custom resource instances
For ACK
When you enable the ACK capability, you can create and manage AWS resources using Kubernetes custom resources. ACK provides over 200 CRDs for more than 50 AWS services.
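Before moving on, here is a minimal sketch of what a kro RGD and an instance of its custom API might look like, based on the open-source kro docs (all names, fields, and the nginx image are illustrative; the managed capability may differ):
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      replicas: integer | default=2
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: web
                  image: nginx:stable
---
# An instance of the WebApp API generated by the RGD above
apiVersion: kro.run/v1alpha1
kind: WebApp
metadata:
  name: my-web-app
spec:
  name: my-web-app
  replicas: 3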
Capability IAM Role [IMP]
Each EKS capability resource has a configured capability IAM role.
The capability role is used to grant AWS service permissions for EKS capabilities to act on your behalf.
For example, to use the EKS capability for ACK to manage Amazon S3 buckets, you grant S3 administrative permissions to the capability role, enabling it to create and manage buckets.
Prerequisites
In order to follow along, make sure you have the following prerequisites met.
Common to all 3 capabilities
- An EKS cluster
- An IAM capability role with permissions for ACK, Argo CD, and kro
- Sufficient IAM permissions to create capability resources on EKS clusters
- (For CLI/eksctl) The appropriate CLI tool installed and configured
Argo CD capability specific
- AWS Identity Center configured - Required for Argo CD authentication (local users are not supported)
- kubectl configured to communicate with your cluster
Terraform Code
The Terraform code for this demo lives in this repo.
- Just run
terraform apply
Demo: Creating S3 buckets with ACK and ArgoCD using EKS Capabilities
The whole process of enabling these managed platform capabilities is simple:
Create the capability IAM role -> Enable capabilities (ACK and Argo CD for this demo) -> Register the cluster with Argo CD -> Add Argo CD applications that track the Git repo
- We create a capability IAM role with the trust policy described in the docs.
- We add permissions to the role; for example, to create S3 buckets using ACK, we attach an S3 policy so that ACK can talk to AWS services.
For kro and Argo CD the role needs no extra permissions, because enabling a capability adds access entry policies that grant it the permissions it needs to interact with the EKS cluster.
The Capability Role
For this we create a trust relationship that allows the EKS capabilities service to assume the role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
Now, based on the capability, you add permissions to the role. For the brevity of this blog, the ACK and Argo CD capabilities share the same role, and in the tf code I have added full S3 permissions so that ACK can create S3 buckets.
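If you prefer the CLI over Terraform, a minimal sketch of the same setup could look like this (the role and file names are illustrative, and AmazonS3FullAccess should be scoped down outside a demo):
# Create the capability role with the trust policy above (saved as trust.json)
aws iam create-role \
  --role-name eks-capability-role \
  --assume-role-policy-document file://trust.json \
  --profile jj

# Grant S3 permissions so the ACK capability can create and manage buckets
aws iam attach-role-policy \
  --role-name eks-capability-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
  --profile jj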
ArgoCD IAM Identity Center Integration
The Argo CD managed capability integrates with AWS IAM Identity Center for authentication and uses built-in RBAC roles for authorization.
That is why IAM Identity Center is a prerequisite.
This is how permissions work with Argo CD.
When a user accesses the Argo CD UI:
- They authenticate using AWS Identity Center (which can federate to your corporate identity provider)
- AWS Identity Center provides user and group information to Argo CD
- Argo CD maps users and groups to RBAC roles based on your configuration
- Users see only the applications and resources they have permission to access
This is exactly what I have done in my Terraform code: reading the IAM Identity Center instance and user details, passing them to the Argo CD capability, and mapping my user to the ADMIN RBAC role.
data "aws_ssoadmin_instances" "main" {}
data "aws_identitystore_user" "admin" {
identity_store_id = tolist(data.aws_ssoadmin_instances.main.identity_store_ids)[0]
alternate_identifier {
unique_attribute {
attribute_path = "UserName"
attribute_value = var.idc_username
}
}
}
resource "null_resource" "eks_capability_argocd" {
depends_on = [module.eks]
provisioner "local-exec" {
command = <<-EOT
aws eks create-capability \
--region ${local.region} \
--cluster-name ${local.cluster_name} \
--capability-name my-argocd \
--type ARGOCD \
--role-arn ${aws_iam_role.eks_capability_role.arn} \
--delete-propagation-policy RETAIN \
--configuration '{
"argoCd": {
"awsIdc": {
"idcInstanceArn": "${tolist(data.aws_ssoadmin_instances.main.arns)[0]}",
"idcRegion": "${local.region}"
},
"rbacRoleMappings": [{
"role": "ADMIN",
"identities": [{
"id": "${data.aws_identitystore_user.admin.user_id}",
"type": "SSO_USER"
}]
}]
}
}' \
--profile jj
EOT
}
}
To restrict this for your use case, see the built-in RBAC roles.
EKS cluster and installing capabilities
I am using EKS Auto Mode for this blog. As this is a fresh update, I am using AWS CLI commands (wrapped in Terraform null_resource provisioners) to enable capabilities.
Note: capabilities are not ACTIVE instantly, so we need to wait until they become ACTIVE before the next operation (see the wait_for_argocd sketch after the capability resources below).
resource "null_resource" "eks_capability_ack" {
depends_on = [module.eks]
provisioner "local-exec" {
command = <<-EOT
aws eks create-capability \
--region ${local.region} \
--cluster-name ${local.cluster_name} \
--capability-name my-ack \
--type ACK \
--role-arn ${aws_iam_role.eks_capability_role.arn} \
--delete-propagation-policy RETAIN \
--profile jj
EOT
}
}
resource "null_resource" "eks_capability_argocd" {
depends_on = [module.eks]
provisioner "local-exec" {
command = <<-EOT
aws eks create-capability \
--region ${local.region} \
--cluster-name ${local.cluster_name} \
--capability-name my-argocd \
--type ARGOCD \
--role-arn ${aws_iam_role.eks_capability_role.arn} \
--delete-propagation-policy RETAIN \
--configuration '{
"argoCd": {
"awsIdc": {
"idcInstanceArn": "${tolist(data.aws_ssoadmin_instances.main.arns)[0]}",
"idcRegion": "${local.region}"
},
"rbacRoleMappings": [{
"role": "ADMIN",
"identities": [{
"id": "${data.aws_identitystore_user.admin.user_id}",
"type": "SSO_USER"
}]
}]
}
}' \
--profile jj
EOT
}
}
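The wait_for_argocd resource referenced below polls until the capability reports ACTIVE. A minimal sketch; describe-capability and the capability.status output path are my assumptions based on the create-capability call above, so verify them against the current AWS CLI:
resource "null_resource" "wait_for_argocd" {
  depends_on = [null_resource.eks_capability_argocd]

  provisioner "local-exec" {
    command = <<-EOT
      # Poll until the Argo CD capability reports ACTIVE (assumed CLI verb and fields)
      until aws eks describe-capability \
        --region ${local.region} \
        --cluster-name ${local.cluster_name} \
        --capability-name my-argocd \
        --query "capability.status" \
        --output text \
        --profile jj | grep -q ACTIVE; do
        echo "Waiting for Argo CD capability to become ACTIVE..."
        sleep 15
      done
    EOT
  }
}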
Registering the eks cluster to ArgoCD to deploy applications
The docs suggest using the argocd CLI, but it always timed out for me, so I register the cluster with a Kubernetes Secret instead. This is possible because of the new access entry policy, which allows the Argo CD capability to read Kubernetes Secrets.
resource "null_resource" "argocd_add_cluster" {
depends_on = [null_resource.wait_for_argocd]
provisioner "local-exec" {
command = <<-EOT
aws eks update-kubeconfig \
--name ${local.cluster_name} \
--region ${local.region} \
--profile jj
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: ${local.cluster_name}-cluster
namespace: argocd
labels:
argocd.argoproj.io/secret-type: cluster
stringData:
name: ${local.cluster_name}
server: ${module.eks.cluster_arn}
project: default
EOF
EOT
}
}
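You can confirm the registration with kubectl; the Argo CD capability discovers the cluster through this labeled Secret:
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster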
Create an Argo CD Application
This demo uses a public GitHub repository, so no repository configuration is required. For private repositories, configure access using AWS Secrets Manager, CodeConnections, or Kubernetes Secrets.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eks-capability
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/jatinmehrotra/aws-reinvent-2025
    targetRevision: HEAD
    path: eks-capabilities/ack_yaml
  destination:
    name: reinvent-2025
    namespace: ack
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
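After applying the Application manifest (the file name here is illustrative), you can watch its sync status with kubectl, since Argo CD Applications are ordinary custom resources:
kubectl apply -f application.yaml
kubectl get applications -n argocd
kubectl describe application eks-capability -n argocd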
At this path there is a YAML file that defines an S3 bucket using ACK. Argo CD will sync and apply it to the cluster, and ACK will then create the bucket in our AWS account.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-test-bucket
  namespace: default
spec:
  name: jj-bucket-name-12345
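Once the Application syncs (after the bug fix below), you can verify both the Kubernetes resource and the actual bucket:
# Check the ACK custom resource and its status conditions
kubectl get bucket my-test-bucket -n default -o yaml

# Confirm the bucket exists in the AWS account
aws s3 ls --profile jj | grep jj-bucket-name-12345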
Fixing the Bug
Even after doing all this (or following the AWS docs), the sync won't work, because the access entry policy (AmazonEKSArgoCDClusterPolicy) that AWS creates for the Argo CD capability lacks permissions to list cluster-scoped resources. For this blog I simply added cluster admin permissions. The error looks like this:
horizontalpodautoscalers.autoscaling is forbidden:
User "arn:aws:sts::xxxxxxx:assumed-role/eks-capability-role/aws-go-sdk-1764587162523491382"
cannot list resource "horizontalpodautoscalers" in API group "autoscaling" at the cluster scope
So to fix it, we import the access entry created by AWS and associate the Cluster Admin access policy with it.
# Import the access entry created by the EKS capability
resource "null_resource" "import_access_entry" {
  depends_on = [null_resource.wait_for_argocd]

  provisioner "local-exec" {
    command = <<-EOT
      terraform import -input=false \
        aws_eks_access_entry.capability_role \
        "${local.cluster_name}:${aws_iam_role.eks_capability_role.arn}" || true
    EOT
  }
}
resource "aws_eks_access_entry" "capability_role" {
cluster_name = module.eks.cluster_name
principal_arn = aws_iam_role.eks_capability_role.arn
type = "STANDARD"
lifecycle {
ignore_changes = [kubernetes_groups]
}
depends_on = [null_resource.import_access_entry]
}
# Add ClusterAdmin policy to fix insufficient ArgoCD permissions
resource "aws_eks_access_policy_association" "capability_role_admin" {
cluster_name = module.eks.cluster_name
principal_arn = aws_iam_role.eks_capability_role.arn
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
access_scope {
type = "cluster"
}
depends_on = [aws_eks_access_entry.capability_role]
}
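You can confirm the association took effect (substitute your cluster name and role ARN):
aws eks list-associated-access-policies \
  --cluster-name <cluster-name> \
  --principal-arn <capability-role-arn> \
  --profile jj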
You can access the Argo CD UI from the console under Capabilities -> Argo CD.
You can log in using your IAM Identity Center credentials.
Argo CD successfully syncs the application from the Git repository, and ACK then creates the S3 bucket.
Cleanup
Before running terraform destroy, read the [delete docs](https://docs.aws.amazon.com/eks/latest/userguide/working-with-capabilities.html#_delete_a_capability).
- Delete the S3 bucket first.
- Run the cleanup script.
- Then run terraform destroy. The cluster won't be deleted if the capabilities are not deleted first.
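A minimal cleanup sketch; delete-capability is my assumption of the verb that pairs with create-capability, so verify it against the delete docs linked above:
# Delete the Bucket custom resource first so ACK removes the S3 bucket
kubectl delete bucket my-test-bucket -n default

# Delete both capabilities (assumed CLI verb; see the delete docs)
aws eks delete-capability --region <region> --cluster-name <cluster-name> \
  --capability-name my-argocd --profile jj
aws eks delete-capability --region <region> --cluster-name <cluster-name> \
  --capability-name my-ack --profile jj

# Only then tear down the rest
terraform destroy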
From a DevOps and Platform Engineering perspective
- In this blog we saw the GitOps use case for applications and infrastructure: Argo CD deploys applications and ACK provisions infrastructure, both from Git repositories. Your entire stack (applications, databases, storage, and networking) is defined as code and deployed automatically.
We just push the AWS infra YAML files to Git; Argo CD syncs the updated application, and ACK provisions a new S3 bucket with the correct configuration.
All changes are auditable, reversible, and consistent across environments.
There can be many other use cases for these capabilities, such as account and regional bootstrapping or modernization of EKS resources.
It will be very interesting to see how these compare to self-managed ACK, Argo CD, and kro. Of course there are limitations: for example, the managed Argo CD currently supports only single-namespace deployment, and the Argo CD Notifications controller isn't supported.
But overall this is a great update for platform engineering, one that offloads the setup, complexity, and management of Kubernetes resources to AWS and lets developers focus on their applications.
I share AWS updates on DevOps, Kubernetes, and GenAI daily on LinkedIn and X. Follow me there so I can make your life easier.