In November AWS announced a new feature for EKS (Elastic Kubernetes Service): Pod Identity, the easiest way yet to grant Kubernetes pods access to AWS services.
IRSA (IAM Roles for Service Accounts) solves more or less the same problem and achieves the same result, but with a little more coupling to the Kubernetes ServiceAccount resource, which, with a specific annotation, can be mapped to an IAM role to obtain the corresponding permissions.
However, IRSA suffers from some limitations and complexity, due to the limit on IAM roles per account and to multi-cluster management: it requires the creation of an OIDC provider for each cluster and a trust policy on each role that references the cluster's OIDC URL.
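For reference, an IRSA trust policy is typically coupled to the cluster like this (a sketch, assuming the terraform-aws-modules/eks module's oidc_provider and oidc_provider_arn outputs; the namespace and service account names are illustrative):

# IRSA: the trust policy references the cluster's OIDC provider,
# so it must be adapted for every cluster that uses the role
data "aws_iam_policy_document" "irsa_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn] # one OIDC provider per cluster
    }

    condition {
      test     = "StringEquals"
      variable = "${module.eks.oidc_provider}:sub" # the cluster's OIDC issuer URL
      values   = ["system:serviceaccount:my-namespace:my-service-account"]
    }
  }
}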
Pod Identity solves those issues in a very elegant way and with a simplified procedure.
With Pod Identity it is possible to assign a role to a service account in fewer steps, using the same IAM principal in every role's trust policy, and it can be managed without any annotation on the service account manifest.
Like IRSA, you can follow the principle of least privilege and achieve credential isolation, but with better scalability. Unlike IRSA, it requires an agent installed on the cluster, which is responsible for injecting credentials into the pods.
You can read more about how Pod Identity works in the official AWS documentation.
Getting started with Pod Identity
To use Pod Identity in your cluster you have to install the Pod Identity agent. It can be installed directly from the add-ons section of the EKS console or through the CLI, but note that there are some configuration steps to complete before proceeding with the installation.
With Terraform, instead, the easiest way to install the agent on your cluster is to use the EKS blueprints add-ons module (which you are probably already using).
To enable the add-on, add these lines to the eks_addons block:
eks_addons = {
  ...
  eks-pod-identity-agent = {
    most_recent = true
  }
  ...
}
You should have something similar to this:
module "eks_blueprints_addons" {
source = "aws-ia/eks-blueprints-addons/aws"
version = "~> 1.0" #ensure to update this to the latest/desired version
cluster_name = module.eks.cluster_name
cluster_endpoint = module.eks.cluster_endpoint
cluster_version = module.eks.cluster_version
oidc_provider_arn = module.eks.oidc_provider_arn
eks_addons = {
coredns = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
kube-proxy = {
most_recent = true
}
eks-pod-identity-agent = {
most_recent = true
}
}
enable_aws_load_balancer_controller = true
...
tags = {
Environment = "dev"
}
}
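If you do not use the blueprints module, the agent can also be installed with the plain aws_eks_addon resource; a minimal sketch, assuming the same module.eks outputs used above:

# Install the Pod Identity agent as a standalone EKS add-on
resource "aws_eks_addon" "pod_identity" {
  cluster_name = module.eks.cluster_name
  addon_name   = "eks-pod-identity-agent"
}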
If you have read the Pod Identity documentation, you may have noticed that to run the agent on your cluster, the "eks-auth:AssumeRoleForPodIdentity" permission must be added to the nodes' role.
If you have used Terraform with the official AWS EKS module to create the cluster, the managed policy AmazonEKSWorkerNodePolicy, which includes the eks-auth:AssumeRoleForPodIdentity permission, should already be associated with the nodes' role.
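If you manage the nodes' role yourself instead, you can attach that managed policy or grant the permission directly; a minimal sketch, where aws_iam_role.node is a hypothetical reference to your node role:

# Grant the nodes the permission required by the Pod Identity agent
resource "aws_iam_role_policy" "pod_identity" {
  name = "eks-pod-identity"
  role = aws_iam_role.node.name # hypothetical node role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["eks-auth:AssumeRoleForPodIdentity"]
      Resource = "*"
    }]
  })
}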
Associate role and service account
Creating an association is simple and, unlike IRSA, does not require any intervention on the Kubernetes service account manifest, keeping this configuration "outside" the cluster.
The association can be done directly in the console in the EKS cluster details under the "Access" tab or via CLI as usual.
For Terraform, instead, a recent version of the AWS provider supports a dedicated resource.
First, we need an IAM role with a trust policy.
In this case, the principal does not contain any reference to the cluster, which is another difference from IRSA: it is always the same service principal, pods.eks.amazonaws.com, so the role can work for all the clusters in the account without any change to the trust policy.
We then need two allowed actions, sts:AssumeRole and sts:TagSession; the latter is needed to pass session tags during the assume-role operation, which can later be used to filter access to services.
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["pods.eks.amazonaws.com"]
}
actions = [
"sts:AssumeRole",
"sts:TagSession"
]
}
}
resource "aws_iam_role" "example" {
name = "eks-pod-identity-example"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
At this point, we can create an aws_eks_pod_identity_association resource that associates the role with a specific service account.
There is no requirement that the service account already exist on the cluster, a behavior that can help a lot with automation.
resource "aws_eks_pod_identity_association" "association" {
cluster_name = aws_eks_cluster.example.name
namespace = var.namespace
service_account = var.service_account_name
role_arn = aws_iam_role.example.arn
}
That's it. When a pod using that service account in that namespace is scheduled, the Pod Identity agent adds some environment variables to the pod, which the AWS SDK uses to retrieve credentials.
Using session tags
During the AssumeRole operation, the EKS Pod Identity agent attaches a set of session tags, such as the service account name, cluster name, and namespace, that can be used in the role's policies to grant access to AWS resources in a more granular way, allowing the same role to be reused with multiple service accounts and reducing role proliferation.
In the same way, every tag associated with the IAM role can be accessed in the format ${aws:PrincipalTag/tag-name}.
An example policy that limits access to certain Systems Manager parameters based on the cluster name might look like this:
data "aws_iam_policy_document" "read_parameters_store" {
statement {
effect = "Allow"
condition {
test = "StringEquals"
variable = "ssm:ResourceTag/eks-cluster-name"
values = ["${aws:PrincipalTag/eks-cluster-name}"]
}
resources = [
"parameterarn", ...
]
actions = [
"ssm:GetParameter",
"ssm:GetParameters"
]
}
}
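To take effect, the policy document still has to be attached to the role created earlier, for example as an inline policy:

# Attach the permissions to the Pod Identity role defined above
resource "aws_iam_role_policy" "read_parameters_store" {
  name   = "read-parameters-store"
  role   = aws_iam_role.example.name
  policy = data.aws_iam_policy_document.read_parameters_store.json
}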
Under the hood
When using the AWS SDK to get temporary credentials to access AWS services, the SDK uses a credential provider chain that looks for credentials in many different places.
For Pod Identity, the SDK uses the Container Credentials Provider.
Note that the SDK follows the provider chain in a defined order and stops as soon as credentials are found.
This means that, if necessary, it is always possible to define credentials explicitly, with environment variables for example.
In that case, Pod Identity will never be used.
The same is true for any credentials provider that runs before the container one.
When a pod that uses an associated service account starts on a cluster running the Pod Identity agent, the manifest of the pod itself is updated and two environment variables are set, AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE and AWS_CONTAINER_CREDENTIALS_FULL_URI, which the SDK uses to retrieve the temporary credentials.
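For reference, the injected variables typically look like the following (illustrative values based on the defaults documented by AWS: the agent exposes a link-local endpoint and a token is mounted into the pod):

AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token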
Note that some older SDK versions do not support Pod Identity.
If you want to know more, besides the official documentation there is a very good article from Datadog Security Labs.
Attention points
Pod Identity is a smart solution that simplifies things that are not so easy to manage with IRSA.
However, there are some limitations at the moment that keep IRSA necessary, and they must be considered before starting to implement Pod Identity everywhere in your cluster.
Two are particularly important to keep in mind:
- Some commonly used EKS add-ons that require credentials cannot use Pod Identity: the Amazon VPC CNI plugin for Kubernetes, the AWS Load Balancer Controller, and the CSI storage drivers (when installed via add-ons)
- The AWS provider for the Secrets Store CSI Driver also does not currently support Pod Identity and must still be used with IRSA.
Fortunately, the two are not mutually exclusive, so it is possible to start migrating all other pods to Pod Identity and take advantage of its simplicity and new characteristics.