
Part 1: Kubernetes AWS Resource Access: kube2iam

Introduction

One of the major benefits of using containers for applications, and Kubernetes for their orchestration, is that you can get the most out of the underlying virtual machines. This, however, gives rise to a unique problem: managing access from PODs to various AWS services.

For example:

A Kubernetes node is hosting an application POD which needs access to AWS DynamoDB tables, and the Kubernetes scheduler now schedules another POD on the same node which needs access to an AWS S3 bucket. For both applications to work properly, the Kubernetes worker node must be able to access both the S3 bucket and the DynamoDB tables. Now scale this scenario to hundreds of PODs which need access to various AWS resources and are constantly being scheduled across a Kubernetes cluster. One approach would be to let the Kubernetes nodes access all AWS resources, thereby enabling access for all the PODs. However, this exposes a very large attack surface to any potential attacker.

If even a single node or POD gets compromised, the attacker can gain access to your entire AWS infrastructure. In short, all hell will break loose. There will be FIRE! 🔥

The good thing is that this can be prevented: there are tools which, when integrated carefully, give you fine-grained control over access from Kubernetes PODs to your AWS resources.

In this series we will go over the following tools for managing fine-grained access from Kubernetes PODs to AWS resources:

  1. kube2iam
  2. kiam
  3. IAM Roles for Service Accounts (IRSA)

This blog will focus on kube2iam's implementation.

Deep Dive into kube2iam Implementation

Overall Architecture

kube2iam is deployed as a DaemonSet in your cluster, so a kube2iam POD will be scheduled on every worker node of your Kubernetes cluster. Whenever any other POD tries to make an AWS API call to access a resource, that call is intercepted by the kube2iam POD running on the same node. It is then kube2iam's job to make sure the POD is handed appropriate credentials for accessing the resource.

We also specify an IAM role in the POD spec. Under the hood, the kube2iam POD retrieves temporary credentials for the caller's IAM role and returns them to the caller. In effect, all EC2 metadata API calls are proxied. Note that the kube2iam POD must run with host networking enabled so that it can make the EC2 metadata API calls itself.

Implementation

Creating and Attaching IAM roles

First, create an IAM role named my-role which has access to the required AWS resources (say, an S3 bucket).
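
If you prefer the AWS CLI over the console, role creation could look roughly like this. This is a sketch: trust-policy.json is the trust policy written in the next step, and AmazonS3ReadOnlyAccess stands in for whatever access your workload actually needs:

# create the role (its trust policy is refined in the next step)
aws iam create-role \
  --role-name my-role \
  --assume-role-policy-document file://trust-policy.json

# grant the role access to the required resources, e.g. read-only S3
aws iam attach-role-policy \
  --role-name my-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess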

Next, enable a trust relationship between this role and the role attached to the Kubernetes worker nodes. Make sure that the node role itself has very limited permissions: all API calls and access requests are ultimately made by containers running on the nodes, and those containers receive their credentials through kube2iam, so the worker node IAM role does not need access to a large number of AWS resources. To set up the trust relationship:

Go to the newly created role in the AWS console, select the Trust Relationships tab and click Edit Trust Relationship.

Add the following content to the policy:

{
  "Sid": "",
  "Effect": "Allow",
  "Principal": {
    "AWS": "<ARN_KUBERNETES_NODES_IAM_ROLE>"
  },
  "Action": "sts:AssumeRole"
}
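
For reference, this statement sits inside the Statement array of the role's complete trust policy document, which would end up looking like this (same placeholder ARN):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<ARN_KUBERNETES_NODES_IAM_ROLE>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}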

Then enable AssumeRole for the node pool IAM role by adding the following statement to the nodes' IAM policy:

{
    "Sid": "",
    "Effect": "Allow",
    "Action": [
        "sts:AssumeRole"
    ],
    "Resource": [
        "arn:aws:iam::810085094893:instance-profile/*"
    ]
}
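
Assuming you wrap this statement in a full policy document (a Version field plus a Statement array, as in the trust policy above) and save it to a file, you can attach it to the node role as an inline policy. The role name, policy name and file name below are placeholders:

aws iam put-role-policy \
  --role-name <KUBERNETES_NODES_IAM_ROLE_NAME> \
  --policy-name node-assume-role \
  --policy-document file://node-assume-role.json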

Finally, add the IAM role's name to the Deployment as an annotation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
  namespace: default
spec:
...
  minReadySeconds: 5
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: my-role
    spec:
      containers:
...
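
Once the Deployment is applied, a quick sanity check is to read the annotation back from one of its running PODs (the POD name is a placeholder, and the dots in the annotation key must be escaped in the jsonpath expression):

kubectl get pod mydeployment-xxxxx \
  -o jsonpath='{.metadata.annotations.iam\.amazonaws\.com/role}'

This should print my-role.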

Deploying kube2iam

Create the ServiceAccount, ClusterRole and ClusterRoleBinding to be used by the kube2iam PODs. The ClusterRole should have get, watch and list access to namespaces and PODs in the core API group. You can use the manifest below to create them.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube2iam
  namespace: kube-system
---
apiVersion: v1
kind: List
items:
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kube2iam
    rules:
      - apiGroups: [""]
        resources: ["namespaces","pods"]
        verbs: ["get","watch","list"]
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kube2iam
    subjects:
    - kind: ServiceAccount
      name: kube2iam
      namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: kube2iam
      apiGroup: rbac.authorization.k8s.io
---
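
Save the manifest to a file (the file name here is arbitrary), apply it, and confirm the objects were created:

kubectl apply -f kube2iam-rbac.yaml
kubectl get serviceaccount kube2iam -n kube-system
kubectl get clusterrole,clusterrolebinding kube2iam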

Deploy the kube2iam DaemonSet using the manifest below:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  labels:
    app: kube2iam
  namespace: kube-system
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: kube2iam
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      hostNetwork: true
      serviceAccountName: kube2iam
      containers:
        - image: jtblin/kube2iam:latest
          name: kube2iam
          args:
            - "--auto-discover-base-arn"
            - "--iptables=true"
            - "--host-ip=$(HOST_IP)"
            - "--host-interface=cali+"
            - "--verbose"
            - "--debug"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  # with hostNetwork: true, the POD's IP is the node's IP
                  fieldPath: status.podIP
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
          securityContext:
            privileged: true
---

It should be noted that the kube2iam container runs with the arguments --iptables=true and --host-ip=$(HOST_IP), and with privileged mode enabled. Also note that --host-interface=cali+ matches Calico's interface naming; if you use a different CNI plugin, adjust this value to its interface prefix.

...
    securityContext:
        privileged: true
...

These settings prevent containers running in other PODs from directly reaching the EC2 metadata API and gaining unwanted access to AWS resources: traffic to 169.254.169.254 is instead proxied to the kube2iam POD. Alternatively, the same rule can be applied manually by running the following command on each Kubernetes worker node:

iptables \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface docker0 \
  --jump DNAT \
  --table nat \
  --to-destination `curl 169.254.169.254/latest/meta-data/local-ipv4`:8181
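
Before moving on to testing, it is worth confirming that a kube2iam POD is actually running on every worker node; the label below matches the POD template in the DaemonSet manifest above:

kubectl get pods -n kube-system -l name=kube2iam -o wide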

Testing access from a test POD
To check whether your kube2iam deployment and IAM settings work, you can deploy a test POD which has your IAM role specified as an annotation.

If everything works fine, we should be able to check which IAM role gets attached to our POD. This can be easily verified by querying the EC2 metadata API.

Let us deploy a test-pod using the manifest below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: access-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: access-test
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: access-test
      annotations:
        iam.amazonaws.com/role: my-role
    spec:
      containers:
      - name: access-test
        image: "iotapi322/worker:v4"

In the test POD, run the following command:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/

You should get my-role as the response from this API.
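
You can take this one step further and fetch the temporary credentials themselves:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/my-role

The response follows the standard EC2 metadata credentials format; all values below are placeholders:

{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "...",
  "SecretAccessKey": "...",
  "Token": "...",
  "Expiration": "..."
}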

It is highly recommended to tail the logs of the kube2iam POD running on that node to gain a deeper understanding of how and when the API calls are being intercepted. Once the setup works as expected, you should turn off verbosity in the kube2iam deployment to avoid bombarding your logging backend.
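
For example, find the kube2iam POD scheduled on the same node as the test POD and follow its logs (the POD name suffix is a placeholder):

kubectl get pods -n kube-system -l name=kube2iam -o wide
kubectl logs -f -n kube-system kube2iam-xxxxx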

Conclusion

This concludes Part 1 of the series. You should now be able to deploy kube2iam and enable AWS access for your Kubernetes workloads. In the next part we will deploy kiam to achieve the same, and draw a contrast between the two technologies.
