<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shardul Srivastava</title>
    <description>The latest articles on DEV Community by Shardul Srivastava (@shardulsrivastava).</description>
    <link>https://dev.to/shardulsrivastava</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F497827%2F8abb9931-172a-456f-8fa3-525618ac5c2d.jpeg</url>
      <title>DEV Community: Shardul Srivastava</title>
      <link>https://dev.to/shardulsrivastava</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shardulsrivastava"/>
    <language>en</language>
    <item>
      <title>Getting Started with Contributing to CNCF: A Beginner's Guide</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Sat, 08 Feb 2025 21:46:28 +0000</pubDate>
      <link>https://dev.to/shardulsrivastava/getting-started-with-contributing-to-cncf-a-beginners-guide-4gp3</link>
      <guid>https://dev.to/shardulsrivastava/getting-started-with-contributing-to-cncf-a-beginners-guide-4gp3</guid>
      <description>&lt;p&gt;Contributing to CNCF is not difficult, but let me tell you what's difficult..&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding out where to start contributing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eik3mt13avmk1f1z1vj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eik3mt13avmk1f1z1vj.png" alt="Image description" width="477" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's start peeling the onion. The first thing you should do is go through this amazing &lt;a href="https://landscape.cncf.io/" rel="noopener noreferrer"&gt;CNCF Landscape&lt;/a&gt;, which covers not just the CNCF projects but also member projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r3wi6o07rarjbc8jd50.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r3wi6o07rarjbc8jd50.gif" alt="Image description" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find an exhaustive list of all the CNCF Graduated and Incubating projects &lt;a href="https://www.cncf.io/projects/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Did I say Graduated and Incubating... what's that?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4mfu9vddtd6unu33c9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4mfu9vddtd6unu33c9s.png" alt="Image description" width="500" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CNCF has a lot of projects under its umbrella, categorized by maturity level into three categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Graduated - CNCF Graduated projects are mature, widely adopted, and have a mind-boggling community behind them. Kubernetes is a prime example of a Graduated CNCF project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incubating - CNCF Incubating projects are on their way to graduation but are still maturing. They have proven technical merit and community growth but may need to enhance their processes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sandbox - CNCF Sandbox projects are early-stage projects that are still establishing their community and technical merit.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://contribute.cncf.io/contributors/projects/" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is a very handy link to all the CNCF projects across maturity levels.&lt;/p&gt;

&lt;p&gt;This will help you understand what the projects are, and you will find that most cloud-related open-source projects are maintained under the CNCF umbrella.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to find the right issues
&lt;/h2&gt;

&lt;p&gt;Use these GitHub search queries to find beginner-friendly issues in CNCF Graduated, Incubating, and Sandbox projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  CNCF Graduated Projects
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;is:issue is:open  label:"good first issue"  repo:kubernetes/kubernetes repo:prometheus/prometheus repo:etcd-io/etcd repo:containerd/containerd repo:linkerd/linkerd2 repo:fluent/fluent-bit repo:helm/helm repo:opencontainers/runc repo:cni-org/cni repo:envoyproxy/envoy repo:jaegertracing/jaeger repo:grpc/grpc repo:vitessio/vitess repo:tuf/tuf repo:opentelemetry/opentelemetry-collector repo:tikv/tikv repo:dragonflyoss/Dragonfly2 repo:argoproj/argo-cd repo:falcosecurity/falco repo:openpolicyagent/opa repo:thanos-io/thanos repo:fluxcd/flux2 repo:longhorn/longhorn repo:cert-manager/cert-manager repo:contour-org/contour repo:emissary-ingress/emissary repo:harvester/harvester repo:k3s-io/k3s repo:kubeedge/kubeedge repo:kubeflow/kubeflow repo:kubewarden/kubewarden-controller repo:kyverno/kyverno repo:openservicemesh/osm repo:paralus/paralus repo:serverlessworkflow/specification repo:spire/spire repo:tremor-rs/tremor-runtime repo:wasmCloud/wasmcloud repo:keptn/keptn repo:backstage/backstage repo:openfeature/spec repo:cloudevents/spec repo:knative/serving repo:crossplane/crossplane repo:openebs/openebs repo:opencost/opencost repo:open-cluster-management-io/OCM repo:operator-framework/operator-sdk repo:porter-dev/porter repo:pravega/pravega repo:submariner-io/submariner repo:telepresenceio/telepresence repo:tricksterproxy/trickster repo:virtual-kubelet/virtual-kubelet repo:volcano-sh/volcano repo:weaveworks/weave repo:openyurtio/openyurt repo:meshery/meshery repo:chaos-mesh/chaos-mesh repo:openkruise/kruise repo:openelb/openelb repo:sealer/sealer repo:devfile/api repo:antrea-io/antrea repo:litmuschaos/litmus repo:karmada-io/karmada repo:openfunction/openfunction repo:open-gateway/gateway-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CNCF Incubating Projects
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;is:issue is:open label:"good first issue"
repo:artifacthub/hub repo:backstage/backstage repo:buildpacks/pack repo:chaos-mesh/chaos-mesh repo:cloud-custodian/cloud-custodian repo:containernetworking/cni repo:projectcontour/contour repo:cortexproject/cortex repo:crossplane/crossplane repo:dragonflyoss/Dragonfly2 repo:emissary-ingress/emissary repo:flatcar/flatcar repo:grpc/grpc repo:karmada-io/karmada repo:keptn/keptn repo:keycloak/keycloak repo:knative/serving repo:kubeflow/kubeflow repo:kubescape/kubescape repo:kubevela/kubevela repo:kubevirt/kubevirt repo:kyverno/kyverno repo:litmuschaos/litmus repo:longhorn/longhorn repo:nats-io/nats-server repo:notaryproject/notary repo:opencost/opencost repo:open-feature/spec repo:openkruise/kruise repo:opentelemetry/opentelemetry-collector repo:openyurtio/openyurt repo:operator-framework/operator-sdk repo:strimzi/strimzi-kafka-operator repo:thanos-io/thanos repo:volcano-sh/volcano repo:wasmCloud/wasmcloud

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CNCF Sandbox Projects
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;is:issue is:open label:"good first issue"
repo:aeraki-mesh/aeraki repo:akri-sh/akri repo:antrea-io/antrea repo:armadaproject/armada repo:yahoo/athenz repo:runatlantis/atlantis repo:banzaicloud/bank-vaults repo:bfenetworks/bfe repo:bpfman/bpfman repo:projectcapsule/capsule repo:carina-io/carina repo:cartography-cncf/cartography repo:carvel-dev/ytt repo:cdk8s-team/cdk8s repo:chaosblade-io/chaosblade repo:cloudnative-pg/cloudnative-pg repo:clusternet/clusternet repo:clusterpedia-io/clusterpedia repo:confidential-containers/confidential-containers repo:connectrpc/connect-go repo:containerssh/containerssh repo:project-copacetic/copacetic repo:cozystack/cozystack repo:devfile/api repo:devspace-sh/devspace repo:devstream-io/devstream repo:dexidp/dex repo:distribution/distribution repo:easegress-io/easegress repo:eraser-dev/eraser repo:external-secrets/external-secrets repo:fluid-cloudnative/fluid repo:Project-HAMi/HAMi repo:headlamp-k8s/headlamp repo:hexa-org/policy-orchestrator repo:hwameistor/hwameistor repo:hyperlight-dev/hyperlight repo:inclavare-containers/inclavare-containers repo:inspektor-gadget/inspektor-gadget repo:interTwin-eu/interLink repo:k0sproject/k0s repo:k3s-io/k3s repo:k8gb-io/k8gb repo:k8sgpt-ai/k8sgpt repo:k8up-io/k8up repo:kairos-io/kairos repo:kanisterio/kanister repo:kcl-lang/kcl repo:kcp-dev/kcp repo:sustainable-computing-io/kepler repo:keylime/keylime repo:kgateway-dev/kgateway repo:kitops-ml/kitops repo:kmesh-net/kmesh repo:ko-build/ko repo:konveyor/operator repo:koordinator-sh/koordinator repo:kptdev/kpt repo:krkn-chaos/krkn repo:kuadrant/kuadrant-operator repo:kuasar-io/kuasar repo:kube-burner/kube-burner repo:kubeovn/kube-ovn repo:kube-rs/kube repo:kube-vip/kube-vip repo:kubean-io/kubean repo:kubearmor/kubearmor repo:kubeclipper/kubeclipper repo:kuberhealthy/kuberhealthy repo:kubeslice/kubeslice repo:kubestellar/kubestellar repo:kubewarden/kubewarden-controller repo:kudobuilder/kudo repo:kumahq/kuma repo:kubereboot/kured repo:KusionStack/kusion repo:lima-vm/lima 
repo:kube-logging/logging-operator repo:loxilb-io/loxilb repo:merbridge/merbridge repo:meshery/meshery repo:metallb/metallb repo:metal3-io/baremetal-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to set up a project
&lt;/h2&gt;

&lt;p&gt;Well, every project has its own guide on how to get started; reading through its CONTRIBUTING.md will help you set up the project locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to check your contributions
&lt;/h2&gt;

&lt;p&gt;CNCF has a scoring system that shows how much you have contributed to CNCF projects; you can check your score here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devstats.cluster.fun/" rel="noopener noreferrer"&gt;https://devstats.cluster.fun/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've come this far, let me share something that will help you stay up to date with what's going on.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to interact with the community behind the projects
&lt;/h2&gt;

&lt;p&gt;If you are interested in contributing to a project, always join its channel in the CNCF Slack. Here is how you can get invited to the CNCF Slack - &lt;a href="https://communityinviter.com/apps/cloud-native/cncf" rel="noopener noreferrer"&gt;https://communityinviter.com/apps/cloud-native/cncf&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Running Apache Kafka on Containers</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Sun, 30 Oct 2022 18:12:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/running-apache-kafka-on-containers-2ing</link>
      <guid>https://dev.to/aws-builders/running-apache-kafka-on-containers-2ing</guid>
<description>&lt;p&gt;Apache Kafka is one of the most famous data stores. It's the go-to tool for collecting streaming data at scale and processing it with either &lt;a href="https://kafka.apache.org/documentation/streams/"&gt;Kafka Streams&lt;/a&gt; or &lt;a href="https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html"&gt;Apache Spark&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Getting started with Kafka is challenging, as it involves familiarising yourself with a lot of new concepts such as topics, replication, and offsets, and then you have to understand what ZooKeeper is.&lt;/p&gt;

&lt;p&gt;Confluent is a company specialising in Kafka, with a cloud-based offering called &lt;strong&gt;Confluent Cloud&lt;/strong&gt;. Confluent is one of the biggest contributors to the Kafka open-source project, and they have created some great tools to help with Kafka, such as &lt;a href="https://www.confluent.io/product/ksqldb/"&gt;ksqlDB&lt;/a&gt;, which allows us to query streaming data (it's amazing, try it).&lt;/p&gt;

&lt;p&gt;Apart from that, Confluent publishes great articles on Kafka internals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka with Docker
&lt;/h2&gt;

&lt;p&gt;To get started with Kafka on Docker, we are going to use Confluent's Kafka images.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;code&gt;docker-compose.yaml&lt;/code&gt; file with one ZooKeeper and one Kafka container:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;confluentinc/cp-zookeeper:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ZOOKEEPER_CLIENT_PORT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2181&lt;/span&gt;
      &lt;span class="na"&gt;ZOOKEEPER_TICK_TIME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2000&lt;/span&gt;

  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;confluentinc/cp-kafka:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;broker&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9092:9092"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_BROKER_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ZOOKEEPER_CONNECT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;zookeeper:2181'&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_LISTENER_SECURITY_PROTOCOL_MAP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ADVERTISED_LISTENERS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://kafka:29092&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start both containers in detached mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;docker-compose starts ZooKeeper on port &lt;code&gt;2181&lt;/code&gt; and Kafka on port &lt;code&gt;9092&lt;/code&gt;, along with some configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Zookeeper&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ZOOKEEPER_CLIENT_PORT&lt;/code&gt;&lt;/strong&gt; - The port where ZooKeeper listens for client connections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ZOOKEEPER_TICK_TIME&lt;/code&gt;&lt;/strong&gt; - The length of a single tick, ZooKeeper's basic time unit.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Kafka&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;KAFKA_ZOOKEEPER_CONNECT&lt;/code&gt;&lt;/strong&gt; - Instructs Kafka how to connect to ZooKeeper.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;KAFKA_LISTENER_SECURITY_PROTOCOL_MAP&lt;/code&gt;&lt;/strong&gt; - Defines key/value pairs for the security protocol to use, per listener name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;KAFKA_ADVERTISED_LISTENERS&lt;/code&gt;&lt;/strong&gt; - A comma-separated list of listeners with their host/IP and port. Read more about kafka listeners &lt;a href="https://www.confluent.io/blog/kafka-listeners-explained/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR&lt;/code&gt;&lt;/strong&gt; - Equivalent of broker configuration &lt;code&gt;offsets.topic.replication.factor&lt;/code&gt; which is the replication factor for the offsets topic. Since we are running with just one Kafka node, we need to set this to 1.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Read more about how connecting to a Kafka broker works in a Docker container &lt;a href="https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Create Topics in Kafka
&lt;/h2&gt;

&lt;p&gt;Kafka topics are like database tables. Just as in a database we have to create a table before storing data, in Kafka we have to create a topic.&lt;/p&gt;

&lt;p&gt;Unlike a database, which has a single command for this, Kafka ships with utility scripts. One of them creates a topic; it requires the topic name as mandatory input, plus a few optional arguments.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log in to the Kafka container&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nb"&gt;exec &lt;/span&gt;kafka bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a topic by the name &lt;code&gt;kafka-test&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-topics &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; broker:9092 &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test
Created topic kafka-test.
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Try running this command again and you will get this error: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Error while executing the topic command: Topic 'kafka-test' already exists.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not very CI-friendly, right? &lt;code&gt;--if-not-exists&lt;/code&gt; creates the topic only if it doesn't already exist and returns exit code 0 either way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-topics &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; broker:9092 &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--if-not-exists&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a couple of other arguments that are essential for a good understanding of Kafka:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;--partitions&lt;/code&gt;&lt;/strong&gt; - Kafka topics are partitioned, i.e. the data of a topic is spread across multiple brokers for scalability.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;--replication-factor&lt;/code&gt;&lt;/strong&gt; - To make the data in a topic fault-tolerant and highly available, every topic can be replicated, so that there are always multiple brokers holding a copy of the data.&lt;/li&gt;
&lt;/ol&gt;
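&lt;p&gt;To build some intuition for how a key decides the partition, here is a small Python sketch. It is a simplified stand-in for Kafka's real default partitioner (which hashes the key bytes with murmur2 modulo the partition count); the &lt;code&gt;pick_partition&lt;/code&gt; name is ours, not a Kafka API:&lt;/p&gt;

```python
import hashlib

def pick_partition(key, num_partitions):
    # Simplified stand-in for Kafka's default partitioner: hash the
    # key bytes, then take the result modulo the partition count.
    # (Real Kafka uses murmur2; we use md5 here only for illustration.)
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always maps to the same partition, which is what
# preserves per-key ordering across produce calls.
print(pick_partition(b"stocks-123", 3))
print(pick_partition(b"stocks-123", 3))
```

&lt;p&gt;The takeaway: messages with the same key always land on the same partition, so ordering is guaranteed per key, not across the whole topic.&lt;/p&gt;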

&lt;h2&gt;
  
  
  What's a Kafka Message
&lt;/h2&gt;

&lt;p&gt;Once we have the topic created, we can start sending messages to it. A message consists of headers, a key, and a value. Let's talk about each of these.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Headers&lt;/code&gt;&lt;/strong&gt; - Headers are key-value pairs that let you attach metadata to a Kafka message. Read the original KIP (Kafka Improvement Proposal) proposing headers &lt;a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Key&lt;/code&gt;&lt;/strong&gt; - The key of the Kafka message; it can be null. Identifiers such as serial numbers and UUIDs are common examples of message keys. Read more about when you should use a key &lt;a href="https://forum.confluent.io/t/what-should-i-use-as-the-key-for-my-kafka-message/312/2"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;Value&lt;/code&gt;&lt;/strong&gt; - The actual data to be stored in Kafka. It could be a string, or JSON, Protobuf, or Avro data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Writing to a Kafka Topic
&lt;/h2&gt;

&lt;p&gt;Kafka provides a &lt;a href="https://docs.confluent.io/platform/current/clients/producer.html"&gt;Producer API&lt;/a&gt; to send a message to the Kafka topic. This API is available in java with &lt;a href="https://kafka.apache.org/33/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html"&gt;kafka-clients&lt;/a&gt; library and python with &lt;a href="https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html"&gt;kafka-python&lt;/a&gt; package.&lt;/p&gt;
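&lt;p&gt;For the curious, a minimal producer using the kafka-python package might look like the sketch below. The &lt;code&gt;send_stock_update&lt;/code&gt; helper is a name we made up for illustration, and it assumes the broker from the compose file is reachable on &lt;code&gt;localhost:9092&lt;/code&gt;:&lt;/p&gt;

```python
import json

def encode_value(payload):
    # Kafka stores raw bytes, so serialize the dict to JSON first.
    return json.dumps(payload).encode("utf-8")

def send_stock_update(payload):
    # Hypothetical helper; assumes the kafka-python package is installed
    # and the broker from the compose file listens on localhost:9092.
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send(
        "kafka-test",
        key=b"stocks-123",
        value=encode_value(payload),
        headers=[("stock_source", b"nyse")],  # header values are bytes
    )
    producer.flush()
```

&lt;p&gt;Note that the key, the value, and every header value are sent as bytes on the wire; serialization is the client's job.&lt;/p&gt;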

&lt;p&gt;Luckily for us, we don't have to use any of these. Kafka comes with an out-of-the-box script &lt;code&gt;kafka-console-producer&lt;/code&gt; that allows us to write data to a Kafka topic.&lt;/p&gt;

&lt;p&gt;Run the command, and as soon as it prints &lt;code&gt;&amp;gt;&lt;/code&gt; on a new line, enter the JSON message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-console-producer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test
&lt;span class="o"&gt;&amp;gt;{&lt;/span&gt;&lt;span class="s2"&gt;"tickers"&lt;/span&gt;: &lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"AMZN"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 1902&lt;span class="o"&gt;}&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"MSFT"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 107&lt;span class="o"&gt;}&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"AAPL"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 215&lt;span class="o"&gt;}]}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have successfully sent a message to the topic. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enter &lt;code&gt;Control + C&lt;/code&gt; to stop the script.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, this message was sent without a key. To send a key, we have to set the property &lt;code&gt;parse.key&lt;/code&gt; to &lt;strong&gt;true&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The default key separator is &lt;code&gt;\t&lt;/code&gt; (tab); we can change it by setting the property &lt;code&gt;key.separator&lt;/code&gt;. &lt;strong&gt;E.g.: --property "key.separator=:"&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's try to send a message with a random key &lt;code&gt;stocks-123&lt;/code&gt; this time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-console-producer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; &lt;span class="s2"&gt;"parse.key=true"&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;stocks-123 &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"tickers"&lt;/span&gt;: &lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"AMZN"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 1902&lt;span class="o"&gt;}&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"MSFT"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 107&lt;span class="o"&gt;}&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"AAPL"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 215&lt;span class="o"&gt;}]}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the release of Kafka version &lt;a href="https://archive.apache.org/dist/kafka/3.2.0/RELEASE_NOTES.html"&gt;3.2.0&lt;/a&gt;, it's possible to &lt;a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-798%3A+Add+possibility+to+write+kafka+headers+in+Kafka+Console+Producer"&gt;send headers using ConsoleProducer&lt;/a&gt; by setting the property &lt;code&gt;parse.headers&lt;/code&gt; to &lt;strong&gt;true&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Headers are metadata about the Kafka message; the source of these stock prices would be a good candidate for a header. Let's add a header with key &lt;code&gt;stock_source&lt;/code&gt; and value &lt;code&gt;nyse&lt;/code&gt; to the Kafka message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-console-producer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; &lt;span class="s2"&gt;"parse.key=true"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; &lt;span class="s2"&gt;"parse.headers=true"&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;stock_source:nyse  stocks-123  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"tickers"&lt;/span&gt;: &lt;span class="o"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"AMZN"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 1902&lt;span class="o"&gt;}&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"MSFT"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 107&lt;span class="o"&gt;}&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"AAPL"&lt;/span&gt;, &lt;span class="s2"&gt;"price"&lt;/span&gt;: 215&lt;span class="o"&gt;}]}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have successfully sent a Kafka message with a header, key, and value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwub7kn4d7nhy0ojs6e2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwub7kn4d7nhy0ojs6e2.gif" alt="baby-scream-yeah.gif" width="498" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out the supported properties for &lt;code&gt;kafka-console-producer&lt;/code&gt; &lt;a href="https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L221-L240"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Reading from a Kafka Topic
&lt;/h2&gt;

&lt;p&gt;Kafka provides a &lt;a href="https://docs.confluent.io/platform/current/clients/consumer.html"&gt;Consumer API&lt;/a&gt; to read messages from a Kafka topic. This API is available in java with &lt;a href="https://kafka.apache.org/33/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html"&gt;kafka-clients&lt;/a&gt; library and python with &lt;a href="https://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html"&gt;kafka-python&lt;/a&gt; package.&lt;/p&gt;
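&lt;p&gt;Mirroring the producer sketch earlier, a minimal kafka-python consumer might look like this. The &lt;code&gt;read_stock_updates&lt;/code&gt; helper is again our own illustrative name, assuming the same broker and topic as before:&lt;/p&gt;

```python
import json

def decode_value(raw_bytes):
    # Messages were produced as JSON-encoded bytes; decode them back.
    return json.loads(raw_bytes.decode("utf-8"))

def read_stock_updates():
    # Hypothetical helper; assumes the kafka-python package is installed
    # and the broker listens on localhost:9092.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        "kafka-test",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",  # start from the earliest offset
    )
    for message in consumer:
        # Each record exposes its key, headers, and value separately.
        print(message.key, message.headers, decode_value(message.value))
```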

&lt;p&gt;Kafka comes with an out-of-the-box script &lt;code&gt;kafka-console-consumer&lt;/code&gt; to read messages from a Kafka topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-console-consumer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this command only prints the values of the Kafka messages. To print the key and headers, we have to set the properties &lt;code&gt;print.headers&lt;/code&gt; and &lt;code&gt;print.key&lt;/code&gt; to &lt;strong&gt;true&lt;/strong&gt;. We can also print the timestamp of the message with the property &lt;code&gt;print.timestamp&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-console-consumer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; print.headers&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; print.key&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; print.timestamp&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Other information, such as the &lt;code&gt;partition&lt;/code&gt; and &lt;code&gt;offset&lt;/code&gt;, can be printed by setting &lt;code&gt;--property print.partition=true&lt;/code&gt; and &lt;code&gt;--property print.offset=true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Every time we read from a Kafka topic, Kafka keeps track of the last offset the consumer read, and the next read resumes from that point. However, we can always read from the beginning using the argument &lt;code&gt;--from-beginning&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To read a Kafka topic from the beginning:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/usr/bin/kafka-console-consumer &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; kafka:9092 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--topic&lt;/span&gt; kafka-test &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--from-beginning&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; print.headers&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; print.key&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--property&lt;/span&gt; print.timestamp&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Is this too much to remember?&lt;br&gt;
Don't worry, we have an easier way of reading from and writing to Kafka topics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6isvqojqtkmwe85pdsk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6isvqojqtkmwe85pdsk.jpeg" alt="baby-yoda" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  kcat Utility
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/edenhill/kcat"&gt;kcat&lt;/a&gt; is an awesome tool to make our lives easier, it allows us to read and write from kafka topics without tons of scripts and in a more user-friendly way.&lt;/p&gt;

&lt;p&gt;As Confluent puts it, "It is a swiss-army knife of tools for inspecting and creating data in Kafka".&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kcat&lt;/code&gt; has two modes: producer mode, enabled with the argument &lt;code&gt;-P&lt;/code&gt;, and consumer mode, enabled with the argument &lt;code&gt;-C&lt;/code&gt;. It also selects its mode automatically depending on the terminal or pipe type: if data is being piped to kcat, it selects producer (-P) mode; if data is being piped from kcat (e.g. standard terminal output), it selects consumer (-C) mode.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To read data from a Kafka topic, simply run&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kcat &lt;span class="nt"&gt;-b&lt;/span&gt; localhost:9092 &lt;span class="nt"&gt;-t&lt;/span&gt; kafka-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To write data to a Kafka topic, run&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kcat &lt;span class="nt"&gt;-P&lt;/span&gt; &lt;span class="nt"&gt;-b&lt;/span&gt; localhost:9092 &lt;span class="nt"&gt;-t&lt;/span&gt; kafka-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
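&lt;p&gt;Because of this automatic mode selection, kcat composes nicely with ordinary shell pipes. A minimal sketch, assuming a broker on &lt;code&gt;localhost:9092&lt;/code&gt; and the &lt;code&gt;kafka-test&lt;/code&gt; topic:&lt;/p&gt;

```shell
# Produce a message by piping data into kcat; producer mode is
# selected automatically because stdin is a pipe.
echo "hello kafka" | kcat -b localhost:9092 -t kafka-test

# Read just the last message and exit: -o -1 starts one message
# before the end, -c 1 stops after one message, -e exits at EOF.
kcat -b localhost:9092 -t kafka-test -o -1 -c 1 -e
```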

&lt;p&gt;Take a look at the examples &lt;a href="https://github.com/edenhill/kcat#examples"&gt;here&lt;/a&gt; to find out more about kcat's usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.confluent.io/blog/5-things-every-kafka-developer-should-know/"&gt;Here&lt;/a&gt; are some tips and tricks for using Kafka.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>docker</category>
      <category>containers</category>
      <category>confluent</category>
    </item>
    <item>
      <title>Getting Started with ACK RDS Controller</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Tue, 18 Oct 2022 20:31:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/getting-started-with-ack-rds-controller-10kc</link>
      <guid>https://dev.to/aws-builders/getting-started-with-ack-rds-controller-10kc</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/aws-controllers-k8s/rds-controller"&gt;ACK controller for RDS&lt;/a&gt; is a &lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/"&gt;Kubernetes Controller&lt;/a&gt; for provisioning RDS instances in a kubernetes native way.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ACK controller for RDS&lt;/code&gt; supports creating these database engines:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Aurora (MySQL &amp;amp; PostgreSQL)&lt;/li&gt;
&lt;li&gt;Amazon RDS for PostgreSQL&lt;/li&gt;
&lt;li&gt;Amazon RDS for MySQL&lt;/li&gt;
&lt;li&gt;Amazon RDS for MariaDB&lt;/li&gt;
&lt;li&gt;Amazon RDS for Oracle&lt;/li&gt;
&lt;li&gt;Amazon RDS for SQL Server&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Install ACK Controller for RDS
&lt;/h2&gt;

&lt;p&gt;Let's start by setting up an EKS cluster. &lt;code&gt;ACK Controller for RDS&lt;/code&gt; requires &lt;code&gt;AmazonRDSFullAccess&lt;/code&gt; to create and manage RDS instances.&lt;br&gt;
We will also create a service account &lt;code&gt;ack-rds-controller&lt;/code&gt; for &lt;code&gt;ACK Controller for RDS&lt;/code&gt; and attach the &lt;a href="https://github.com/aws-controllers-k8s/rds-controller/blob/main/helm/values.yaml#L81"&gt;AmazonRDSFullAccess&lt;/a&gt; IAM policy to this service account with the help of &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html"&gt;IRSA&lt;/a&gt;. Save the following configuration as &lt;code&gt;ack-cluster.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ack-cluster
  region: eu-west-1
  version: &lt;span class="s2"&gt;"1.22"&lt;/span&gt;
iam:
  withOIDC: &lt;span class="nb"&gt;true
  &lt;/span&gt;serviceAccounts:
    - metadata:
        &lt;span class="c"&gt;# Service account used by rds-controller https://github.com/aws-controllers-k8s/rds-controller/blob/main/helm/values.yaml#L81&lt;/span&gt;
        name: ack-rds-controller
        namespace: ack-system
      &lt;span class="c"&gt;# https://github.com/aws-controllers-k8s/rds-controller/blob/main/config/iam/recommended-policy-arn&lt;/span&gt;
      attachPolicyARNs: 
      - &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonRDSFullAccess"&lt;/span&gt;

managedNodeGroups:
  - name: managed-ng-1
    minSize: 1
    maxSize: 10
    desiredCapacity: 1
    instanceType: t3.large
    amiFamily: AmazonLinux2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create cluster &lt;span class="nt"&gt;-f&lt;/span&gt; ack-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the cluster is created, install the &lt;code&gt;ACK Controller for RDS&lt;/code&gt; using &lt;a href="https://gallery.ecr.aws/aws-controllers-k8s/rds-chart"&gt;Helm chart&lt;/a&gt;. &lt;code&gt;ACK Controller for RDS&lt;/code&gt; helm charts are stored in the public ECR repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Check the latest version of the helm chart &lt;a href="https://gallery.ecr.aws/aws-controllers-k8s/rds-chart"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Authenticate to the ECR repo using your AWS credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; aws ecr-public get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 | helm registry login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; public.ecr.aws
 Login Succeeded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you're authenticated, install &lt;code&gt;ACK Controller for RDS&lt;/code&gt; in the &lt;code&gt;ack-system&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    ack-rds &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; ack-system &lt;span class="se"&gt;\&lt;/span&gt;
    oci://public.ecr.aws/aws-controllers-k8s/rds-chart &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v0.1.1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aws.region&lt;span class="o"&gt;=&lt;/span&gt;eu-west-1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;serviceAccount.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;log.level&lt;span class="o"&gt;=&lt;/span&gt;debug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During installation, we are setting these values in the Helm chart:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;aws.region&lt;/code&gt; - AWS region for API calls.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;serviceAccount.create&lt;/code&gt; - Set to &lt;code&gt;false&lt;/code&gt;, since the service account was already created by eksctl during cluster creation above.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;log.level&lt;/code&gt; - Set to &lt;code&gt;debug&lt;/code&gt; to see detailed logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;ACK controller for RDS&lt;/code&gt; installs several CRDs that allow you to provision RDS instances and related components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;GlobalCluster&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html"&gt;Aurora Global cluster&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DBCluster&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html"&gt;Amazon Aurora DB Cluster&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DBInstance&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.html"&gt;Amazon RDS DB Instances&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DBClusterParameterGroup&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBClusterParamGroups.html"&gt;DB cluster parameter groups&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DBParameterGroup&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html"&gt;DB parameter groups&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DBProxy&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html"&gt;Amazon RDS Proxy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;DBSubnetGroup&lt;/code&gt;&lt;/strong&gt; - Custom resource to create &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html"&gt;DB Subnet Group&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
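&lt;p&gt;As a quick sanity check, you can list the CRDs the chart registered; their API group is &lt;code&gt;rds.services.k8s.aws&lt;/code&gt;, matching the resources above:&lt;/p&gt;

```shell
# List the RDS CRDs installed by the controller's Helm chart.
kubectl get crds -o name | grep rds.services.k8s.aws
```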

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Refer &lt;a href="https://aws-controllers-k8s.github.io/community/reference/rds/v1alpha1/dbcluster/"&gt;here&lt;/a&gt; for the reference documentation of these CRDs.&lt;/p&gt;

&lt;p&gt;Check the &lt;code&gt;ACK controller for RDS&lt;/code&gt; logs in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/instance&lt;span class="o"&gt;=&lt;/span&gt;ack-rds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create Aurora PostgreSQL Cluster with RDS Controller
&lt;/h3&gt;

&lt;p&gt;To create an Aurora PostgreSQL cluster with &lt;code&gt;ACK controller for RDS&lt;/code&gt;, we first create a DB Subnet Group using &lt;code&gt;DBSubnetGroup&lt;/code&gt;, then create an Aurora cluster using &lt;code&gt;DBCluster&lt;/code&gt;, and finally add two Aurora database instances to the cluster using &lt;code&gt;DBInstance&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a DB Subnet Group &lt;code&gt;test-ack-subnetgroup&lt;/code&gt; with all the subnets that are part of the VPC created by eksctl, or use your existing subnets:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rds.services.k8s.aws/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DBSubnetGroup&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack-subnetgroup"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack-subnetgroup"&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Test&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ACK&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Subnet&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Group"&lt;/span&gt;
  &lt;span class="na"&gt;subnetIDs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-011e4c822231b65fa"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-001ce16b9b7c3578f"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-0a90579d9b066ecf7"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-00e6d2e90ea132c8b"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-0fbc22e542749c98c"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subnet-080c0b1ed39b0aa91"&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;created-by"&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ack-rds"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; To get all the subnets created by eksctl, run command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl get cluster ack-cluster &lt;span class="nt"&gt;-o&lt;/span&gt; json| jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.[0].ResourcesVpcConfig.SubnetIds'&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the status of the &lt;code&gt;test-ack-subnetgroup&lt;/code&gt; DB Subnet Group:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get dbsubnetgroup test-ack-subnetgroup &lt;span class="nt"&gt;-ojson&lt;/span&gt;| jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.status.subnetGroupStatus'&lt;/span&gt;
Complete
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;If it returns &lt;code&gt;Complete&lt;/code&gt;, the &lt;code&gt;DB Subnet Group&lt;/code&gt; was created successfully.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a secret to store the password of the master user:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret generic &lt;span class="s2"&gt;"test-ack-master-password"&lt;/span&gt; &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;DATABASE_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mypassword"&lt;/span&gt;
secret/test-ack-master-password created
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an Aurora DB Cluster &lt;code&gt;test-ack-aurora-cluster&lt;/code&gt; using &lt;code&gt;DBCluster&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rds.services.k8s.aws/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DBCluster&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack-aurora-cluster"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aurora-postgresql&lt;/span&gt;
  &lt;span class="na"&gt;engineVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;14.4"&lt;/span&gt;
  &lt;span class="na"&gt;engineMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;provisioned&lt;/span&gt;
  &lt;span class="na"&gt;dbClusterIdentifier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack-aurora-cluster"&lt;/span&gt;
  &lt;span class="na"&gt;databaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack&lt;/span&gt;
  &lt;span class="na"&gt;dbSubnetGroupName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-subnetgroup&lt;/span&gt;
  &lt;span class="na"&gt;deletionProtection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;masterUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test"&lt;/span&gt;
  &lt;span class="na"&gt;masterUserPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default"&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack-master-password"&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DATABASE_PASSWORD"&lt;/span&gt;
  &lt;span class="na"&gt;storageEncrypted&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;enableCloudwatchLogsExports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
  &lt;span class="na"&gt;backupRetentionPeriod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;14&lt;/span&gt;
  &lt;span class="na"&gt;copyTagsToSnapshot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;preferredBackupWindow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;00:00-01:59&lt;/span&gt;
  &lt;span class="na"&gt;preferredMaintenanceWindow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sat:02:00-sat:04:00&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once created, check the status of the &lt;code&gt;DBCluster&lt;/code&gt; resource &lt;code&gt;test-ack-aurora-cluster&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe DBCluster test-ack-aurora-cluster
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;If the status is &lt;code&gt;available&lt;/code&gt;, the Aurora DB Cluster was created successfully.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the &lt;code&gt;DBCluster&lt;/code&gt; is successfully created, add two Aurora DB instances &lt;code&gt;test-ack-aurora-instance-1&lt;/code&gt; and &lt;code&gt;test-ack-aurora-instance-2&lt;/code&gt; to the DB cluster using &lt;code&gt;DBInstance&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rds.services.k8s.aws/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DBInstance&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-aurora-instance-1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dbInstanceClass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db.t4g.medium&lt;/span&gt;
  &lt;span class="na"&gt;dbInstanceIdentifier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-aurora-instance-1&lt;/span&gt;
  &lt;span class="na"&gt;dbClusterIdentifier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack"&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aurora-postgresql&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rds.services.k8s.aws/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DBInstance&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-aurora-instance-2&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dbInstanceClass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db.t4g.medium&lt;/span&gt;
  &lt;span class="na"&gt;dbInstanceIdentifier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-aurora-instance-2&lt;/span&gt;
  &lt;span class="na"&gt;dbClusterIdentifier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-ack"&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aurora-postgresql&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;When these instances become available, one of them will be a &lt;strong&gt;Writer Instance&lt;/strong&gt; and the other will be a &lt;strong&gt;Reader Instance&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the status of the &lt;code&gt;DBInstance&lt;/code&gt; resources:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe DBInstance test-ack-aurora-instance-1
kubectl describe DBInstance test-ack-aurora-instance-2
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos1m6jt0qf7rxd685fzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos1m6jt0qf7rxd685fzj.png" alt="rds-instance-status" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Status &lt;code&gt;available&lt;/code&gt; means the DB instances are now ready to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the &lt;code&gt;Writer Endpoint&lt;/code&gt; of the Aurora RDS cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get dbcluster test-ack-aurora-cluster &lt;span class="nt"&gt;-ojson&lt;/span&gt;| jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.status.endpoint'&lt;/span&gt;

test-ack-aurora-cluster.cluster-cncphmukmdde.eu-west-1.rds.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the &lt;code&gt;Reader Endpoint&lt;/code&gt; of the Aurora RDS cluster:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get dbcluster test-ack-aurora-cluster &lt;span class="nt"&gt;-ojson&lt;/span&gt;| jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.status.readerEndpoint'&lt;/span&gt;

test-ack-aurora-cluster.cluster-ro-cncphmukmdde.eu-west-1.rds.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the Aurora DB Cluster is provisioned, you will typically want to extract values such as the &lt;code&gt;Reader Endpoint&lt;/code&gt; and &lt;code&gt;Writer Endpoint&lt;/code&gt; and configure them in your application to connect to the cluster.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ACK controller for RDS&lt;/code&gt; provides a way to export these values directly from the &lt;code&gt;DBCluster&lt;/code&gt; or &lt;code&gt;DBInstance&lt;/code&gt; resource into a &lt;code&gt;ConfigMap&lt;/code&gt; or &lt;code&gt;Secret&lt;/code&gt; of your choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Export the Database Details in a Configmap
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;FieldExport&lt;/code&gt; CRD allows you to export any spec or status field from a &lt;code&gt;DBCluster&lt;/code&gt; or &lt;code&gt;DBInstance&lt;/code&gt; resource into a ConfigMap.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an empty &lt;code&gt;ConfigMap&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create cm ack-cluster-config
configmap/ack-cluster-config created
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;code&gt;FieldExport&lt;/code&gt; to export the value of &lt;code&gt;.status.endpoint&lt;/code&gt; from the &lt;code&gt;test-ack-aurora-cluster&lt;/code&gt; DBCluster resource into the ConfigMap &lt;code&gt;ack-cluster-config&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;services.k8s.aws/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FieldExport&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ack-cm-field-export&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rds.services.k8s.aws&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DBCluster&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-aurora-cluster&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.status.endpoint"&lt;/span&gt;
  &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;configmap&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ack-cluster-config&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DATABASE_HOST&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Inspect the values of the ConfigMap &lt;code&gt;ack-cluster-config&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get cm ack-cluster-config &lt;span class="nt"&gt;-ojson&lt;/span&gt;|jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.data'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo1dnn2lld88r6ii4krt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo1dnn2lld88r6ii4krt.png" alt="ack-field-export-cm" width="800" height="79"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Export the Database Details in a Secret
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;FieldExport&lt;/code&gt; CRD allows you to export any spec or status field from a &lt;code&gt;DBCluster&lt;/code&gt; or &lt;code&gt;DBInstance&lt;/code&gt; into a Secret.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an empty &lt;code&gt;Secret&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret generic ack-cluster-secret
secret/ack-cluster-secret created
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;code&gt;FieldExport&lt;/code&gt; to export the value of &lt;code&gt;.status.endpoint&lt;/code&gt; from the &lt;code&gt;test-ack-aurora-cluster&lt;/code&gt; DBCluster resource into the Secret &lt;code&gt;ack-cluster-secret&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;services.k8s.aws/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FieldExport&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ack-secret-field-export&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rds.services.k8s.aws&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DBCluster&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-ack-aurora-cluster&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.status.endpoint"&lt;/span&gt;
  &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ack-cluster-secret&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DATABASE_HOST&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Inspect the values of the Secret &lt;code&gt;ack-cluster-secret&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl view-secret ack-cluster-secret &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha83dbp7fsdvj5ls0zf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha83dbp7fsdvj5ls0zf7.png" alt="ack-field-export-secret" width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;code&gt;view-secret&lt;/code&gt; is a kubectl plugin; check out my &lt;a href="https://shardul.dev/most-useful-kubectl-plugins/"&gt;blog&lt;/a&gt; to learn how to use it.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 &lt;strong&gt;The FieldExport and the ACK resource must be in the same namespace.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Important - Clean up
&lt;/h2&gt;

&lt;p&gt;To clean up the clusters, first disable deletion protection by setting &lt;code&gt;.spec.deletionProtection&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt; in the &lt;code&gt;test-ack-aurora-cluster&lt;/code&gt; DBCluster manifest and re-applying it:&lt;br&gt;
&lt;/p&gt;
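For reference, the change amounts to flipping a single field in the DBCluster manifest (a minimal sketch; all other fields in your test-ack-aurora-cluster.yaml stay as they are):

```yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBCluster
metadata:
  name: test-ack-aurora-cluster
spec:
  # ...rest of the existing spec unchanged...
  deletionProtection: false   # was true; must be false before the cluster can be deleted
```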

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; test-ack-aurora-cluster.yaml

dbcluster.rds.services.k8s.aws/test-ack-aurora-cluster configured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can delete the instances first and then the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; test-ack-aurora-instance.yaml
kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; test-ack-aurora-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>rds</category>
      <category>ack</category>
      <category>controller</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Most Useful kubectl Plugins</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Sat, 15 Oct 2022 01:12:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/most-useful-kubectl-plugins-11i1</link>
      <guid>https://dev.to/aws-builders/most-useful-kubectl-plugins-11i1</guid>
      <description>&lt;p&gt;Kubernetes provides a convenient utility &lt;code&gt;kubectl&lt;/code&gt; to interact with the cluster. &lt;code&gt;kubectl&lt;/code&gt; talks to &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/"&gt;kube-apiserver&lt;/a&gt; and allows you to create, update and delete objects/resources in the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  How To Pronounce kubectl
&lt;/h2&gt;

&lt;p&gt;When you start using &lt;code&gt;kubectl&lt;/code&gt;, the first thing that comes to mind is &lt;code&gt;how the heck do you pronounce this&lt;/code&gt;. Different people pronounce it differently; as long as everybody is referring to the same command-line tool, it's all good. &lt;/p&gt;

&lt;p&gt;Here are the three different pronunciations that I have heard people using:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kube-control&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kube-cuttle&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kube-cee-tee-ell&lt;/code&gt; ←&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;P.S: I use the last one 😄&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What are kubectl Plugins
&lt;/h2&gt;

&lt;p&gt;Kubernetes provides a way to extend the functionality of &lt;code&gt;kubectl&lt;/code&gt; using plugins. Plugins allow us to add additional functionality to the &lt;code&gt;kubectl&lt;/code&gt; command line tool.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl plugins&lt;/code&gt; are executables whose names start with &lt;code&gt;kubectl-&lt;/code&gt;. These executables must be on the &lt;code&gt;PATH&lt;/code&gt; so that &lt;code&gt;kubectl&lt;/code&gt; can discover them; kubectl automatically detects and runs them for you.&lt;/p&gt;

&lt;p&gt;E.g., if we have a plugin called &lt;code&gt;hello&lt;/code&gt;, you can invoke it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl hello&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, kubectl looks for an executable named &lt;code&gt;kubectl-hello&lt;/code&gt; on the &lt;code&gt;PATH&lt;/code&gt;.&lt;/p&gt;
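To see the discovery mechanism in action without installing anything, you can create a throwaway plugin yourself (the plugin name `hello` and its output are made up for illustration):

```shell
# Create a directory for the plugin and put it on the PATH.
PLUGIN_DIR="$(mktemp -d)"
export PATH="$PLUGIN_DIR:$PATH"

# Any executable named kubectl-<name> on the PATH becomes `kubectl <name>`.
cat > "$PLUGIN_DIR/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$PLUGIN_DIR/kubectl-hello"

# `kubectl hello` dispatches to this same executable:
kubectl-hello
```

Running `kubectl plugin list` afterwards would also show the new plugin, since kubectl scans the PATH for `kubectl-*` executables.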

&lt;h2&gt;
  
  
  Install kubectl Plugins with Krew
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;kubectl plugins&lt;/code&gt; can be installed in numerous ways; the easiest is to use the official plugin manager, &lt;a href="https://github.com/kubernetes-sigs/krew/"&gt;krew&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Install &lt;code&gt;krew&lt;/code&gt; by following the instructions for your operating system &lt;a href="https://krew.sigs.k8s.io/docs/user-guide/setup/install/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For Mac, it can be installed with the &lt;code&gt;brew&lt;/code&gt; package manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;krew
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing official plugins
&lt;/h3&gt;

&lt;p&gt;krew maintains an index of officially maintained plugins called the &lt;a href="https://krew.sigs.k8s.io/plugins/"&gt;krew plugin index&lt;/a&gt;. There are about &lt;em&gt;206&lt;/em&gt; plugins in the official index.&lt;/p&gt;

&lt;p&gt;Let's take a look at some of the most useful plugins.&lt;/p&gt;

&lt;h4&gt;
  
  
  neat Plugin
&lt;/h4&gt;

&lt;p&gt;Install &lt;a href="https://github.com/itaysk/kubectl-neat"&gt;neat&lt;/a&gt; plugin with krew :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew &lt;span class="nb"&gt;install &lt;/span&gt;neat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;neat&lt;/code&gt; is my favorite plugin. While working with Kubernetes, you often want to inspect a resource's spec in the cluster; however, the output contains far more fields than you originally defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods nginx-7fd68f74d-ntpdc &lt;span class="nt"&gt;-oyaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2022-10-14T16:18:08Z"&lt;/span&gt;
  &lt;span class="na"&gt;generateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-7fd68f74d-&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;pod-template-hash&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;7fd68f74d&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-7fd68f74d-ntpdc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;ownerReferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;blockOwnerDeletion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReplicaSet&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-7fd68f74d&lt;/span&gt;
    &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4ff93f8e-a3c3-4c81-9556-e69eb47e9011&lt;/span&gt;
  &lt;span class="na"&gt;resourceVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;85985"&lt;/span&gt;
  &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;714bf0d2-2456-4efc-a527-71f29943662c&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/shardul/nginx:v1&lt;/span&gt;
    &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
    &lt;span class="na"&gt;terminationMessagePath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/termination-log&lt;/span&gt;
    &lt;span class="na"&gt;terminationMessagePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;File&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/secrets/kubernetes.io/serviceaccount&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-api-access-qskm4&lt;/span&gt;
      &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;dnsPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterFirst&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is too verbose for troubleshooting. This is where the &lt;code&gt;neat&lt;/code&gt; plugin comes to our rescue.&lt;/p&gt;

&lt;p&gt;Let's get the pod details again, this time adding &lt;code&gt;| kubectl neat&lt;/code&gt; at the end of the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods nginx-7fd68f74d-ntpdc &lt;span class="nt"&gt;-oyaml&lt;/span&gt; | kubectl neat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;pod-template-hash&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;7fd68f74d&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-7fd68f74d-ntpdc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/shardul/nginx:v1&lt;/span&gt;
    &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/secrets/kubernetes.io/serviceaccount&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-api-access-qskm4&lt;/span&gt;
      &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kubectl-neat&lt;/code&gt; omits &lt;code&gt;managedFields&lt;/code&gt;, default values, the &lt;code&gt;status&lt;/code&gt; field, and metadata fields such as &lt;code&gt;creationTimestamp&lt;/code&gt;, &lt;code&gt;resourceVersion&lt;/code&gt;, &lt;code&gt;selfLink&lt;/code&gt;, and &lt;code&gt;uid&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  view-secret Plugin
&lt;/h4&gt;

&lt;p&gt;Install &lt;a href="https://github.com/elsesiy/kubectl-view-secret"&gt;view-secret&lt;/a&gt; plugin with krew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew &lt;span class="nb"&gt;install &lt;/span&gt;view-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;view-secret&lt;/code&gt; plugin saves a lot of time when you want to view a secret in the cluster, especially one with multiple keys and values. Normally, to view a secret you would run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret my-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;key1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;c3VwZXJzZWNyZXQ=&lt;/span&gt;
  &lt;span class="na"&gt;key2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dG9wc2VjcmV0&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2022-10-14T19:58:25Z"&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;resourceVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;88915"&lt;/span&gt;
  &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4b6fbf40-f27f-4744-ab61-2d7457a41ce6&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then copy a value such as &lt;code&gt;c3VwZXJzZWNyZXQ=&lt;/code&gt; and decode it with base64:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"c3VwZXJzZWNyZXQ="&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
supersecret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the &lt;code&gt;view-secret&lt;/code&gt; plugin, you can simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl view-secret my-secret --all&lt;/span&gt;
&lt;span class="s"&gt;key1=supersecret&lt;/span&gt;
&lt;span class="s"&gt;key2=topsecret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  access-matrix Plugin
&lt;/h4&gt;

&lt;p&gt;Install &lt;a href="https://github.com/corneliusweig/rakkess"&gt;access-matrix&lt;/a&gt; plugin with krew :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew &lt;span class="nb"&gt;install &lt;/span&gt;access-matrix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;access-matrix&lt;/code&gt; plugin is very useful for visualizing your access in the cluster, or for finding out who can access a particular resource.&lt;/p&gt;

&lt;h4&gt;
  
  
  blame Plugin
&lt;/h4&gt;

&lt;p&gt;Install &lt;a href="https://github.com/knight42/kubectl-blame"&gt;blame&lt;/a&gt; plugin with krew :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew &lt;span class="nb"&gt;install &lt;/span&gt;blame
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;blame&lt;/code&gt; plugin helps you figure out who changed each field of an object in the cluster: &lt;code&gt;kubectl&lt;/code&gt;, &lt;code&gt;kube-controller-manager&lt;/code&gt;, or &lt;code&gt;helm&lt;/code&gt;. Internally, it uses the &lt;code&gt;.metadata.managedFields&lt;/code&gt; field of the object to get this information. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read more about &lt;code&gt;metadata.managedFields&lt;/code&gt; &lt;a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we edit the &lt;code&gt;nginx&lt;/code&gt; deployment manually and update the replicas to 2, the &lt;code&gt;blame&lt;/code&gt; plugin shows that the change was made with &lt;code&gt;kubectl edit&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl blame deploy nginx  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;                                                  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;kubectl-client-side-apply (Update    8 hours ago)   progressDeadlineSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
&lt;span class="na"&gt;kubectl-edit              (Update 26 minutes ago)   replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;kubectl-client-side-apply (Update    8 hours ago)   revisionHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  df-pv Plugin
&lt;/h4&gt;

&lt;p&gt;Install &lt;a href="https://github.com/yashbhutwala/kubectl-df-pv"&gt;df-pv&lt;/a&gt; plugin with krew :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl krew &lt;span class="nb"&gt;install &lt;/span&gt;df-pv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are familiar with the &lt;code&gt;df&lt;/code&gt; command on Linux and Mac, you will love the &lt;code&gt;df-pv&lt;/code&gt; plugin. It provides the same functionality as &lt;code&gt;df&lt;/code&gt;, except for &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/"&gt;Persistent Volumes&lt;/a&gt;, in a human-readable format.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;df-pv&lt;/code&gt; plugin comes in handy if you want to get an overall view of PVs in the cluster. It shows you details like &lt;code&gt;Size&lt;/code&gt;, &lt;code&gt;Used&lt;/code&gt;, &lt;code&gt;Available&lt;/code&gt;, and &lt;code&gt;%Used&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing plugins directly from repositories
&lt;/h2&gt;

&lt;p&gt;Apart from the krew index, plugins can also be installed from private repositories, either manually or via a custom plugin index.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the end, plugins are just executables. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  clean plugin
&lt;/h3&gt;

&lt;p&gt;Install &lt;a href="https://github.com/shardulsrivastava/kubectl-plugin#kubectl-clean"&gt;clean&lt;/a&gt; plugin manually :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:shardulsrivastava/kubectl-plugin.git
&lt;span class="nb"&gt;cd &lt;/span&gt;kubectl-plugin/plugins/clean
&lt;span class="nb"&gt;mv &lt;/span&gt;kubectl-clean /usr/local/bin
kubectl clean &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;clean&lt;/code&gt; plugin comes in handy if you're using EKS or GKE and have orphaned pods lying around cluttering the cluster. It cleans them all up in one go.&lt;/p&gt;

&lt;p&gt;To delete all the orphaned pods in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl clean all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or you can clean up a particular namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl clean my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  gke-outdated plugin
&lt;/h3&gt;

&lt;p&gt;Install &lt;a href="https://github.com/shardulsrivastava/kubectl-plugin#kubectl-gke-outdated"&gt;gke-outdated&lt;/a&gt; plugin manually :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:shardulsrivastava/kubectl-plugin.git
&lt;span class="nb"&gt;cd &lt;/span&gt;kubectl-plugin/plugins/gke-outdated
&lt;span class="nb"&gt;mv &lt;/span&gt;kubectl-gke_outdated print-table /usr/local/bin
kubectl gke-outdated &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;gke-outdated&lt;/code&gt; plugin finds all the outdated GKE clusters in your GCP organization's folder.&lt;/p&gt;

&lt;p&gt;To check all the GKE clusters running outdated Kubernetes versions inside the folder with ID &lt;code&gt;907623304376&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl gke-outdated 907623304376 1.22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will list all the GKE clusters inside folder &lt;code&gt;907623304376&lt;/code&gt; that are running a version lower than &lt;code&gt;1.22&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>kubectl</category>
      <category>kubernetes</category>
      <category>plugins</category>
    </item>
    <item>
      <title>Scaling in EKS with Karpenter - Part 1</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Mon, 10 Oct 2022 22:06:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/scaling-in-eks-with-karpenter-part-1-3c7e</link>
      <guid>https://dev.to/aws-builders/scaling-in-eks-with-karpenter-part-1-3c7e</guid>
      <description>&lt;p&gt;Kubernetes has become a de-facto standard because it takes care of a lot of complexities internally. One of those complexities is cluster autoscaling i.e provisioning of nodes based on the increased number of workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler"&gt;Cluster Autoscaler&lt;/a&gt; is a project maintained by a community called &lt;code&gt;sig-autoscaling&lt;/code&gt;, one of the communities under &lt;code&gt;Kubernetes&lt;/code&gt;. Check out more about Kubernetes Communities &lt;a href="https://github.com/kubernetes/community"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Cluster autoscaler&lt;/code&gt; supports a number of &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider"&gt;cloud providers&lt;/a&gt;, including EKS. &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws"&gt;Here&lt;/a&gt; is the guide to set up &lt;code&gt;cluster autoscaler&lt;/code&gt; on EKS, along with various configuration &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws/examples"&gt;examples&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Cluster autoscaler&lt;/code&gt; runs in the cluster as an addon and adds or removes nodes to allow the scheduling of workloads. It kicks in when pods cannot be scheduled due to insufficient resources. Node groups in EKS are backed by &lt;code&gt;EC2 Auto Scaling Groups&lt;/code&gt;, and &lt;code&gt;CA&lt;/code&gt; updates the number of nodes in the ASG dynamically to ensure pods are scheduled. It also supports a &lt;strong&gt;Mixed Instance policy&lt;/strong&gt; for Spot instances, which lets users save costs at the added risk of workload interruption.&lt;/p&gt;

&lt;p&gt;While &lt;code&gt;CA&lt;/code&gt; takes care of the scaling efficiently, there are still issues that EKS users face such as :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When no node in the node group matches the requirements of a pod, the pod remains unscheduled, which could cause an outage.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;While debugging these types of issues a common error message in autoscaler logs is &lt;code&gt;pod didn't trigger scale-up (it wouldn't fit if a new node is added)&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using instances that are too big in node groups, which leads to low resource utilization and increased cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using instances that are too small in node groups, which leads to node groups maxing out and pods remaining unscheduled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No way to identify the optimal choice of instance types based on workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of these issues and many more can be solved with...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmpjvhoxdnpfj79740e8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmpjvhoxdnpfj79740e8.jpeg" alt="khaby" width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Karpenter
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://karpenter.sh/"&gt;Karpenter&lt;/a&gt; is an open-source project by AWS that improves the efficiency and cost of running workloads on &lt;strong&gt;Kubernetes clusters&lt;/strong&gt; (not just EKS, it's designed to have support for multiple cloud providers). &lt;/p&gt;

&lt;p&gt;Karpenter is groupless, i.e. it provisions nodes based on the workload, and the provisioned nodes are not part of any ASG.&lt;/p&gt;

&lt;p&gt;This allows &lt;code&gt;Karpenter&lt;/code&gt; to scale nodes more efficiently, to retry within milliseconds when node capacity is unavailable (instead of waiting minutes for an ASG), and to provision instances from a wide variety of instance types without creating hundreds of node groups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Karpenter Installation
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Karpenter controller&lt;/code&gt; runs on the cluster as a deployment and relies on a CRD called &lt;a href="https://karpenter.sh/v0.17.0/provisioner/"&gt;Provisioner&lt;/a&gt; to configure its behaviour, such as consolidating workloads to save costs, moving them to a node with a different instance type, and scaling down nodes when they are empty.&lt;/p&gt;
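As a rough sketch of what that looks like (using the karpenter.sh/v1alpha5 API of these older releases; the values below are illustrative, not from the original post):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: "100"              # cap the total CPU Karpenter may provision
  ttlSecondsAfterEmpty: 30    # scale a node down 30s after it becomes empty
  provider:
    subnetSelector:
      karpenter.sh/discovery: karpenter-cluster
    securityGroupSelector:
      karpenter.sh/discovery: karpenter-cluster
```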

&lt;h3&gt;
  
  
  Setup an EKS Cluster with Karpenter enabled
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Karpenter&lt;/code&gt; can be installed by following the steps in the &lt;a href="https://karpenter.sh/v0.17.0/getting-started/"&gt;Karpenter - Getting Started Guide&lt;/a&gt;, which involves creating several CloudFormation stacks and IAM role bindings. IMO, the simplest way to get started is to use &lt;code&gt;eksctl&lt;/code&gt;, which supports &lt;code&gt;karpenter&lt;/code&gt; installation out of the box.&lt;/p&gt;

&lt;p&gt;Let's create a cluster with &lt;code&gt;eksctl&lt;/code&gt; with a managed node group and &lt;code&gt;karpenter&lt;/code&gt; installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eksctl.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterConfig&lt;/span&gt;

&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter-cluster&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eu-west-1&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.22"&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter-cluster&lt;/span&gt; &lt;span class="c1"&gt;# Special tag for Karpenter that it uses to discover subnets and security groups&lt;/span&gt;
&lt;span class="na"&gt;iam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;withOIDC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# required&lt;/span&gt;

&lt;span class="na"&gt;karpenter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0.15.0'&lt;/span&gt; &lt;span class="c1"&gt;# Current version of eksctl only supports 0.15.0 of karpenter.&lt;/span&gt;
  &lt;span class="na"&gt;createServiceAccount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; 

&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt; &lt;span class="c1"&gt;# Managed node group for karpenter controller itself, could use fargate too that would be much simpler.&lt;/span&gt;
    &lt;span class="na"&gt;minSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;desiredCapacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;instanceType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t3.large&lt;/span&gt;
    &lt;span class="na"&gt;amiFamily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AmazonLinux2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
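
&lt;p&gt;Assuming the config above is saved as &lt;code&gt;cluster.yaml&lt;/code&gt; (a filename chosen just for this example), the cluster can be created with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create cluster -f cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;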



&lt;p&gt;This creates an EKS cluster &lt;code&gt;karpenter-cluster&lt;/code&gt; in the &lt;code&gt;eu-west-1&lt;/code&gt; region with a managed node group &lt;code&gt;managed-ng-1&lt;/code&gt;, and installs a few components required for &lt;code&gt;karpenter&lt;/code&gt; to work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;eksctl-KarpenterControllerPolicy-karpenter-cluster&lt;/code&gt; - An &lt;a href="https://github.com/aws/karpenter/blob/30a4a5af24fb065471c9ec1203db861d9eb45ac4/website/content/en/v0.15.0/getting-started/getting-started-with-eksctl/cloudformation.yaml#L34-L66"&gt;IAM policy&lt;/a&gt; with permissions to provision nodes and spot instances.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eksctl-karpenter-cluster-iamservice-role&lt;/code&gt; - An IAM role with the &lt;code&gt;eksctl-KarpenterControllerPolicy-karpenter-cluster&lt;/code&gt; policy attached, bound to the service account used by &lt;code&gt;karpenter&lt;/code&gt; so it can provision instances.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eksctl-KarpenterNodeRole-karpenter-cluster&lt;/code&gt; - An &lt;a href="https://github.com/aws/karpenter/blob/30a4a5af24fb065471c9ec1203db861d9eb45ac4/website/content/en/v0.15.0/getting-started/getting-started-with-eksctl/cloudformation.yaml#L15-L33"&gt;IAM role&lt;/a&gt; that provisioned nodes assume in order to function and register with the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eksctl-KarpenterNodeInstanceProfile-karpenter-cluster&lt;/code&gt; - An &lt;a href="https://github.com/aws/karpenter/blob/30a4a5af24fb065471c9ec1203db861d9eb45ac4/website/content/en/v0.15.0/getting-started/getting-started-with-eksctl/cloudformation.yaml#L8-L14"&gt;Instance profile&lt;/a&gt; for instances provisioned by &lt;code&gt;Karpenter&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;karpenter Helm Release&lt;/code&gt; - Installation of &lt;code&gt;karpenter&lt;/code&gt; using &lt;a href="https://github.com/aws/karpenter/tree/main/charts/karpenter"&gt;Helm chart&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Setup Karpenter Provisioner
&lt;/h3&gt;

&lt;p&gt;To configure &lt;code&gt;Karpenter&lt;/code&gt;, you need to create a &lt;a href="https://karpenter.sh/v0.17.0/provisioner/"&gt;Provisioner&lt;/a&gt; resource that defines how &lt;code&gt;Karpenter&lt;/code&gt; provisions nodes and removes them. It has several configuration options, but we will start with a basic &lt;strong&gt;Provisioner&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Provisioner&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;subnetSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter-cluster&lt;/span&gt;
    &lt;span class="na"&gt;securityGroupSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter-cluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;For the sake of simplicity, we are using the &lt;code&gt;provider&lt;/code&gt; definition directly; however, the recommendation is to create another CRD of type &lt;code&gt;AWSNodeTemplate&lt;/code&gt; and refer to it using the &lt;code&gt;providerRef&lt;/code&gt; configuration.&lt;/p&gt;
&lt;/blockquote&gt;
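
&lt;p&gt;For illustration, a minimal sketch of that recommended setup (field names based on the karpenter &lt;code&gt;v1alpha5&lt;/code&gt; docs; verify against the version you run) would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: karpenter-cluster
  securityGroupSelector:
    karpenter.sh/discovery: karpenter-cluster
---
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  providerRef:
    name: default # points at the AWSNodeTemplate above
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;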

&lt;p&gt;Here the &lt;code&gt;subnetSelector&lt;/code&gt; and &lt;code&gt;securityGroupSelector&lt;/code&gt; values allow &lt;code&gt;karpenter&lt;/code&gt; to discover &lt;strong&gt;subnets&lt;/strong&gt; and &lt;strong&gt;security groups&lt;/strong&gt;: &lt;code&gt;karpenter&lt;/code&gt; looks up the subnets and security groups carrying these tags, creates the node in one of the discovered subnets, and attaches the discovered security groups to the node.&lt;/p&gt;
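
&lt;p&gt;Assuming the Provisioner above is saved as &lt;code&gt;provisioner.yaml&lt;/code&gt; (an illustrative filename), apply it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f provisioner.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;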

&lt;p&gt;Let's create a sample deployment for &lt;code&gt;Nginx&lt;/code&gt; with a CPU request of &lt;code&gt;1&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/shardul/nginx:v1&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check if the pods are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0glsctppx7ovfebtn8kg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0glsctppx7ovfebtn8kg.png" alt="pending pods" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Initially, the Nginx pod is pending. &lt;code&gt;karpenter&lt;/code&gt; detects the pending pod and kicks in to create a node to accommodate it. Check the &lt;code&gt;karpenter&lt;/code&gt; controller logs to see the details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; karpenter logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/instance&lt;span class="o"&gt;=&lt;/span&gt;karpenter &lt;span class="nt"&gt;-c&lt;/span&gt; controller

2022-10-09T15:35:31.066Z  INFO  controller.provisioning Computed 1 new node&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; will fit 1 pod&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.253Z  DEBUG controller.provisioning.cloudprovider Discovered subnets: &lt;span class="o"&gt;[&lt;/span&gt;subnet-0bb805312e46db35f &lt;span class="o"&gt;(&lt;/span&gt;eu-west-1b&lt;span class="o"&gt;)&lt;/span&gt; subnet-05bd621d69320799a &lt;span class="o"&gt;(&lt;/span&gt;eu-west-1c&lt;span class="o"&gt;)&lt;/span&gt; subnet-02828f6cf56a0eabc &lt;span class="o"&gt;(&lt;/span&gt;eu-west-1a&lt;span class="o"&gt;)&lt;/span&gt; subnet-0d94022cc49b76aeb &lt;span class="o"&gt;(&lt;/span&gt;eu-west-1a&lt;span class="o"&gt;)&lt;/span&gt; subnet-09ebfc5011dbf69fb &lt;span class="o"&gt;(&lt;/span&gt;eu-west-1b&lt;span class="o"&gt;)&lt;/span&gt; subnet-084a73ba681d60241 &lt;span class="o"&gt;(&lt;/span&gt;eu-west-1c&lt;span class="o"&gt;)]&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.387Z  DEBUG controller.provisioning.cloudprovider Discovered security &lt;span class="nb"&gt;groups&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;sg-03a9dd659d9a0b8fd sg-00a2bf520128db45b] &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.390Z  DEBUG controller.provisioning.cloudprovider Discovered kubernetes version 1.22  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.425Z  DEBUG controller.provisioning.cloudprovider Discovered ami-04f335c3e4d6dcfad &lt;span class="k"&gt;for &lt;/span&gt;query &lt;span class="s2"&gt;"/aws/service/eks/optimized-ami/1.22/amazon-linux-2-arm64/recommended/image_id"&lt;/span&gt;  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.446Z  DEBUG controller.provisioning.cloudprovider Discovered ami-0ed22cc46dcbf16ed &lt;span class="k"&gt;for &lt;/span&gt;query &lt;span class="s2"&gt;"/aws/service/eks/optimized-ami/1.22/amazon-linux-2/recommended/image_id"&lt;/span&gt;  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.483Z  DEBUG controller.provisioning.cloudprovider Discovered launch template Karpenter-karpenter-cluster-5093968976540638239  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:31.523Z  DEBUG controller.provisioning.cloudprovider Discovered launch template Karpenter-karpenter-cluster-9263159034123731516  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:33.829Z  INFO  controller.provisioning.cloudprovider Launched instance: i-0aa11b1f2eb52b92d, &lt;span class="nb"&gt;hostname&lt;/span&gt;: ip-192-168-32-230.eu-west-1.compute.internal, &lt;span class="nb"&gt;type&lt;/span&gt;: t3.medium, zone: eu-west-1c, capacityType: spot &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T15:35:33.837Z  INFO  controller.provisioning Created node with 1 pods requesting &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"cpu"&lt;/span&gt;:&lt;span class="s2"&gt;"1125m"&lt;/span&gt;,&lt;span class="s2"&gt;"pods"&lt;/span&gt;:&lt;span class="s2"&gt;"3"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; from types t4g.micro, t3a.micro, t3.micro, t4g.small, t3a.small and 477 other&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"provisioner"&lt;/span&gt;: &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Immediately, we can see that there is an &lt;strong&gt;additional node&lt;/strong&gt; &lt;code&gt;ip-192-168-32-230.eu-west-1.compute.internal&lt;/code&gt; launched by &lt;code&gt;karpenter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd32ehnon5ejhuk6kjw9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd32ehnon5ejhuk6kjw9q.png" alt="karpenter node provisioning" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and the pod is now running on this node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3ie67un6m2gwnajr5d1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3ie67un6m2gwnajr5d1.png" alt="running pods" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's inspect the node created by &lt;code&gt;karpenter&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get node ip-192-168-32-230.eu-west-1.compute.internal &lt;span class="nt"&gt;-ojson&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.metadata.labels'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5k96cb913tn671rg1yql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5k96cb913tn671rg1yql.png" alt="exlore provisioned node" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that it's a &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html"&gt;spot instance&lt;/a&gt; of type &lt;code&gt;t3.medium&lt;/code&gt;. &lt;/p&gt;
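<p></p>
&lt;p&gt;To pin a Provisioner to on-demand capacity instead, a requirement on the &lt;code&gt;karpenter.sh/capacity-type&lt;/code&gt; label can be added; a sketch based on the &lt;code&gt;v1alpha5&lt;/code&gt; API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type # restricts capacity to on-demand
      operator: In
      values: ["on-demand"]
  provider:
    subnetSelector:
      karpenter.sh/discovery: karpenter-cluster
    securityGroupSelector:
      karpenter.sh/discovery: karpenter-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;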

&lt;p&gt;By default, &lt;code&gt;karpenter&lt;/code&gt; provisions &lt;strong&gt;spot instances&lt;/strong&gt;. However, we can switch to &lt;code&gt;on-demand&lt;/code&gt; instances based on our requirements. Let's delete the deployment to see if &lt;code&gt;karpenter&lt;/code&gt; removes the instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete deployment nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcezbqyqu40n88kifs7y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcezbqyqu40n88kifs7y9.png" alt="no pods" width="780" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pod is gone, but the node is still there:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j1uo2dpth2547l6765g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j1uo2dpth2547l6765g.png" alt="node still there" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ttlSecondsAfterEmpty
&lt;/h3&gt;

&lt;p&gt;With &lt;code&gt;ttlSecondsAfterEmpty&lt;/code&gt;, Karpenter deletes empty nodes after the TTL elapses.&lt;/p&gt;

&lt;p&gt;Let's update the Provisioner to set this value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Provisioner&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ttlSecondsAfterEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;subnetSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter-cluster&lt;/span&gt;
    &lt;span class="na"&gt;securityGroupSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter-cluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now &lt;code&gt;karpenter&lt;/code&gt; will wait for 60 seconds before it cordons, drains, and deletes the empty node. We can check the details in the &lt;code&gt;karpenter&lt;/code&gt; logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; karpenter logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/instance&lt;span class="o"&gt;=&lt;/span&gt;karpenter &lt;span class="nt"&gt;-c&lt;/span&gt; controller

2022-10-09T16:17:08.000Z  INFO  controller.node Triggering termination after 1m0s &lt;span class="k"&gt;for &lt;/span&gt;empty node  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"node"&lt;/span&gt;: &lt;span class="s2"&gt;"ip-192-168-32-230.eu-west-1.compute.internal"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T16:17:08.024Z  INFO  controller.termination  Cordoned node &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"node"&lt;/span&gt;: &lt;span class="s2"&gt;"ip-192-168-32-230.eu-west-1.compute.internal"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
2022-10-09T16:17:08.184Z  INFO  controller.termination  Deleted node  &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"commit"&lt;/span&gt;: &lt;span class="s2"&gt;"3d87474"&lt;/span&gt;, &lt;span class="s2"&gt;"node"&lt;/span&gt;: &lt;span class="s2"&gt;"ip-192-168-32-230.eu-west-1.compute.internal"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
For the nodes it provisions, &lt;code&gt;Karpenter&lt;/code&gt; adds a &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/"&gt;finalizer&lt;/a&gt; for graceful node termination. We can check this by inspecting the node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes ip-192-168-32-230.eu-west-1.compute.internal &lt;span class="nt"&gt;-ojson&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt;  &lt;span class="s1"&gt;'.metadata.finalizers'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"karpenter.sh/termination"&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Understanding Istio Access Logs</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Sun, 06 Mar 2022 10:30:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-istio-access-logs-2k5o</link>
      <guid>https://dev.to/aws-builders/understanding-istio-access-logs-2k5o</guid>
      <description>&lt;p&gt;Istio access logs are very helpful for understanding incoming traffic patterns. These logs are produced by the Envoy proxy and can be viewed in aggregate at the Istio ingress gateway, or on any individual pod injected with the Envoy proxy sidecar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enable Istio Access Logs
&lt;/h2&gt;

&lt;p&gt;Istio access logs are not enabled by default; they can be enabled by setting &lt;code&gt;meshConfig.accessLogFile&lt;/code&gt; in the &lt;code&gt;IstioOperator&lt;/code&gt; resource.&lt;/p&gt;

&lt;p&gt;A basic Istio installation with access logs enabled looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install.istio.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IstioOperator&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;meshConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;accessLogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/stdout&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
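
&lt;p&gt;Assuming the operator spec above is saved as &lt;code&gt;istio-operator.yaml&lt;/code&gt; (an illustrative filename), it can be applied with &lt;code&gt;istioctl&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;istioctl install -f istio-operator.yaml -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;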

&lt;p&gt;Here the value &lt;code&gt;/dev/stdout&lt;/code&gt; sends the access logs to standard output. By default, these access logs are in &lt;strong&gt;TEXT&lt;/strong&gt; format and look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="o"&gt;[&lt;/span&gt;2022-10-23T20:38:15.413Z] &lt;span class="s2"&gt;"GET / HTTP/1.1"&lt;/span&gt; 200 - via_upstream - &lt;span class="s2"&gt;"-"&lt;/span&gt; 0 37 0 0 &lt;span class="s2"&gt;"-"&lt;/span&gt; &lt;span class="s2"&gt;"curl/7.35.0"&lt;/span&gt; &lt;span class="s2"&gt;"3ce3e159-9dba-4617-9e85-feb1106a682c"&lt;/span&gt; &lt;span class="s2"&gt;"nginx"&lt;/span&gt; &lt;span class="s2"&gt;"192.168.42.152:80"&lt;/span&gt; inbound|80|| 127.0.0.6:52375 192.168.42.152:80 192.168.59.90:34854 outbound_.80_._.nginx.default.svc.cluster.local default


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The encoding can be changed to JSON by setting &lt;code&gt;.spec.meshConfig.accessLogEncoding&lt;/code&gt; to &lt;code&gt;JSON&lt;/code&gt;, and the logs change to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"response_code_details"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"via_upstream"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"downstream_remote_address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"192.168.59.90:58478"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"downstream_local_address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"192.168.42.152:80"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"e2873f51-0a42-4da4-a8e8-f711b46dea5c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"authority"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nginx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"requested_server_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"outbound_.80_._.nginx.default.svc.cluster.local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"upstream_cluster"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"inbound|80||"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bytes_sent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"user_agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl/7.35.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"method"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"upstream_local_address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.6:41549"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"x_forwarded_for"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start_time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2022-10-23T22:15:39.663Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"response_code"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"upstream_transport_failure_reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"upstream_service_time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"response_flags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"upstream_host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"192.168.42.152:80"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"route_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bytes_received"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"protocol"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"HTTP/1.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"connection_termination_details"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
 

&lt;h2&gt;
  
  
  Istio Access Logs Format
&lt;/h2&gt;

&lt;p&gt;When Istio access logs are enabled, they use the following format by default:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="o"&gt;[&lt;/span&gt;%START_TIME%] &lt;span class="se"&gt;\"&lt;/span&gt;%REQ&lt;span class="o"&gt;(&lt;/span&gt;:METHOD&lt;span class="o"&gt;)&lt;/span&gt;% %REQ&lt;span class="o"&gt;(&lt;/span&gt;X-ENVOY-ORIGINAL-PATH?:PATH&lt;span class="o"&gt;)&lt;/span&gt;% %PROTOCOL%&lt;span class="se"&gt;\"&lt;/span&gt; %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS%
&lt;span class="se"&gt;\"&lt;/span&gt;%UPSTREAM_TRANSPORT_FAILURE_REASON%&lt;span class="se"&gt;\"&lt;/span&gt; %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP&lt;span class="o"&gt;(&lt;/span&gt;X-ENVOY-UPSTREAM-SERVICE-TIME&lt;span class="o"&gt;)&lt;/span&gt;% &lt;span class="se"&gt;\"&lt;/span&gt;%REQ&lt;span class="o"&gt;(&lt;/span&gt;X-FORWARDED-FOR&lt;span class="o"&gt;)&lt;/span&gt;%&lt;span class="se"&gt;\"&lt;/span&gt; &lt;span class="se"&gt;\"&lt;/span&gt;%REQ&lt;span class="o"&gt;(&lt;/span&gt;USER-AGENT&lt;span class="o"&gt;)&lt;/span&gt;%&lt;span class="se"&gt;\"&lt;/span&gt; &lt;span class="se"&gt;\"&lt;/span&gt;%REQ&lt;span class="o"&gt;(&lt;/span&gt;X-REQUEST-ID&lt;span class="o"&gt;)&lt;/span&gt;%&lt;span class="se"&gt;\"&lt;/span&gt;
&lt;span class="se"&gt;\"&lt;/span&gt;%REQ&lt;span class="o"&gt;(&lt;/span&gt;:AUTHORITY&lt;span class="o"&gt;)&lt;/span&gt;%&lt;span class="se"&gt;\"&lt;/span&gt; &lt;span class="se"&gt;\"&lt;/span&gt;%UPSTREAM_HOST%&lt;span class="se"&gt;\"&lt;/span&gt; %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%&lt;span class="se"&gt;\n&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, &lt;code&gt;%&lt;/code&gt; delimits a field to be printed in the access logs; for example, &lt;code&gt;%RESPONSE_CODE%&lt;/code&gt; prints the response code. Let's look at some of the important fields:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%START_TIME%&lt;/code&gt;&lt;/strong&gt;  - Request start time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%RESPONSE_CODE%&lt;/code&gt;&lt;/strong&gt; - Response code of the request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%RESPONSE_FLAGS%&lt;/code&gt;&lt;/strong&gt; - Additional details about the response.  More details &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators:~:text=typed%20JSON%20logs.-,%25RESPONSE_FLAGS%25,-Additional%20details%20about" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%RESPONSE_CODE_DETAILS%&lt;/code&gt;&lt;/strong&gt; - Response code details. More details &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/response_code_details#response-code-details" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%DURATION%&lt;/code&gt;&lt;/strong&gt; - Total duration of the request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%UPSTREAM_HOST%&lt;/code&gt;&lt;/strong&gt; - The upstream host the request was routed to, such as a pod. It prints the pod's IP and port, e.g. &lt;code&gt;192.168.42.152:80&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%REQ()%&lt;/code&gt;&lt;/strong&gt; - This is used to print request headers. Eg - &lt;code&gt;%REQ(X-FORWARDED-FOR)%&lt;/code&gt; would print the &lt;code&gt;X-FORWARDED-FOR&lt;/code&gt; request header.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;%RESP()%&lt;/code&gt;&lt;/strong&gt; - This is used to print response headers. Eg - &lt;code&gt;%RESP(CONTENT-TYPE)%&lt;/code&gt; would print the &lt;code&gt;CONTENT-TYPE&lt;/code&gt; header in the response.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An exhaustive list of available fields can be found &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;
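&lt;p&gt;To make the default text format concrete, here is a minimal sketch of parsing its first few fields (start time, request line, response code, response flags) with a regular expression. This is illustrative only; the pattern is simplified and does not cover the full format above:&lt;/p&gt;

```python
import re

# Simplified pattern for the leading fields of the default Envoy/Istio
# text access log format: [START_TIME] "METHOD PATH PROTOCOL" CODE FLAGS ...
LOG_PATTERN = re.compile(
    r'^\[(?P<start_time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>\S+)" '
    r'(?P<response_code>\d+) (?P<response_flags>\S+)'
)

def parse_access_log(line):
    """Extract a dict of fields from a default-format access log line."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

# Hypothetical sample line in the default format (truncated).
sample = '[2022-10-23T22:15:39.663Z] "GET / HTTP/1.1" 200 - via_upstream ...'
fields = parse_access_log(sample)
print(fields["method"], fields["response_code"])  # GET 200
```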

&lt;h2&gt;
  
  
  Customize Access Logs Format
&lt;/h2&gt;

&lt;p&gt;If you want to parse your logs with a tool like &lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Loki&lt;/a&gt;, JSON-formatted logs are well suited. The access log format can be changed by setting &lt;code&gt;meshConfig.accessLogFormat&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;For example, to print JSON logs in a custom format:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;meshConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;accessLogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/stdout&lt;/span&gt;
    &lt;span class="na"&gt;accessLogEncoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;JSON&lt;/span&gt;
    &lt;span class="na"&gt;accessLogFormat&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"protocol": "%PROTOCOL%",&lt;/span&gt;
        &lt;span class="s"&gt;"upstream_service_time": "%REQ(X-ENVOY-UPSTREAM_SERVICE_TIME)%",&lt;/span&gt;
        &lt;span class="s"&gt;"upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",&lt;/span&gt;
        &lt;span class="s"&gt;"duration": "%DURATION%",&lt;/span&gt;
        &lt;span class="s"&gt;"upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",&lt;/span&gt;
        &lt;span class="s"&gt;"route_name": "%ROUTE_NAME%",&lt;/span&gt;
        &lt;span class="s"&gt;"downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",&lt;/span&gt;
        &lt;span class="s"&gt;"user_agent": "%REQ(USER-AGENT)%",&lt;/span&gt;
        &lt;span class="s"&gt;"response_code": "%RESPONSE_CODE%",&lt;/span&gt;
        &lt;span class="s"&gt;"response_flags": "%RESPONSE_FLAGS%",&lt;/span&gt;
        &lt;span class="s"&gt;"start_time": "%START_TIME%",&lt;/span&gt;
        &lt;span class="s"&gt;"method": "%REQ(:METHOD)%",&lt;/span&gt;
        &lt;span class="s"&gt;"request_id": "%REQ(X-REQUEST-ID)%",&lt;/span&gt;
        &lt;span class="s"&gt;"upstream_host": "%UPSTREAM_HOST%",&lt;/span&gt;
        &lt;span class="s"&gt;"x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",&lt;/span&gt;
        &lt;span class="s"&gt;"client_ip": "%REQ(TRUE-Client-IP)%",&lt;/span&gt;
        &lt;span class="s"&gt;"requested_server_name": "%REQUESTED_SERVER_NAME%",&lt;/span&gt;
        &lt;span class="s"&gt;"bytes_received": "%BYTES_RECEIVED%",&lt;/span&gt;
        &lt;span class="s"&gt;"bytes_sent": "%BYTES_SENT%",&lt;/span&gt;
        &lt;span class="s"&gt;"upstream_cluster": "%UPSTREAM_CLUSTER%",&lt;/span&gt;
        &lt;span class="s"&gt;"downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",&lt;/span&gt;
        &lt;span class="s"&gt;"authority": "%REQ(:AUTHORITY)%",&lt;/span&gt;
        &lt;span class="s"&gt;"path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",&lt;/span&gt;
        &lt;span class="s"&gt;"response_code_details": "%RESPONSE_CODE_DETAILS%"&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
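&lt;p&gt;Once logs are emitted as one JSON object per line, downstream tooling can consume them directly. As a minimal sketch (with hypothetical sample lines, not real Istio output), filtering for 5xx responses looks like this:&lt;/p&gt;

```python
import json

# Hypothetical access log lines in the JSON format configured above.
log_lines = [
    '{"method": "GET", "path": "/", "response_code": 200, "duration": 0}',
    '{"method": "POST", "path": "/orders", "response_code": 503, "duration": 12}',
    '{"method": "GET", "path": "/health", "response_code": 200, "duration": 1}',
]

def server_errors(lines):
    """Return parsed entries whose response_code is a 5xx."""
    entries = (json.loads(line) for line in lines)
    return [e for e in entries if e["response_code"] // 100 == 5]

errors = server_errors(log_lines)
print(len(errors), errors[0]["path"])  # 1 /orders
```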
&lt;h2&gt;
  
  
  Printing Known Request and Response Headers
&lt;/h2&gt;

&lt;p&gt;EnvoyProxy allows printing request headers and response headers with &lt;code&gt;%REQ(X?Y):Z%&lt;/code&gt; and &lt;code&gt;%RESP(X?Y):Z%&lt;/code&gt; respectively. These can print any request or response header; if the header is not present, an optional alternative header can be specified after &lt;code&gt;?&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%&lt;/code&gt;&lt;/strong&gt; would print the &lt;code&gt;X-ENVOY-ORIGINAL-PATH&lt;/code&gt; header; if it doesn't exist, the &lt;code&gt;:PATH&lt;/code&gt; header would be printed, and if that is also not present, &lt;code&gt;-&lt;/code&gt; would be printed instead.&lt;/p&gt;
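&lt;p&gt;The fallback behaviour of &lt;code&gt;%REQ(X?Y)%&lt;/code&gt; can be modelled in a few lines. This is a sketch of the lookup semantics, not Envoy's actual implementation:&lt;/p&gt;

```python
def req_operator(headers, primary, alternative=None, absent="-"):
    """Mimic Envoy's %REQ(X?Y)% lookup: try the primary header,
    then the alternative, then fall back to a placeholder."""
    if primary in headers:
        return headers[primary]
    if alternative is not None and alternative in headers:
        return headers[alternative]
    return absent

headers = {":path": "/login"}
print(req_operator(headers, "x-envoy-original-path", ":path"))  # /login
print(req_operator(headers, "x-request-id"))                    # -
```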

&lt;p&gt;So we can print all the headers that we know the name of, but what about dynamic headers whose names change with every request?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak2pgf3g92q9uqvgibo9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak2pgf3g92q9uqvgibo9.jpeg" alt="Istio Access logs IDK"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Printing All the Request Headers and Request Body
&lt;/h2&gt;

&lt;p&gt;Istio uses EnvoyProxy, which is extensible by design; the &lt;code&gt;EnvoyFilter&lt;/code&gt; resource provides a mechanism to customize the Envoy configuration.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;HTTP Lua filter&lt;/code&gt; allows Lua scripts to be run during both the request and response flows. This Envoy HTTP Lua filter prints all the request headers and the request body:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EnvoyFilter&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;request-filter&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;configPatches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;applyTo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP_FILTER&lt;/span&gt;
      &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GATEWAY&lt;/span&gt;
        &lt;span class="na"&gt;listener&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;filterChain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy.filters.network.http_connection_manager&lt;/span&gt;
              &lt;span class="na"&gt;subFilter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy.filters.http.router&lt;/span&gt;
      &lt;span class="na"&gt;patch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;operation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INSERT_BEFORE&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy.lua&lt;/span&gt;
          &lt;span class="na"&gt;typed_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;@type'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua&lt;/span&gt;
            &lt;span class="s"&gt;inlineCode&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;-- Called on the request path.&lt;/span&gt;
              &lt;span class="s"&gt;function envoy_on_request(request_handle)&lt;/span&gt;
                  &lt;span class="s"&gt;request_handle:logCritical("Printing Request Headers =&amp;gt; ")&lt;/span&gt;
                  &lt;span class="s"&gt;for key, value in pairs(request_handle:headers()) do&lt;/span&gt;
                    &lt;span class="s"&gt;request_handle:logCritical(key .. " =&amp;gt; " .. value)&lt;/span&gt;
                  &lt;span class="s"&gt;end&lt;/span&gt;
                  &lt;span class="s"&gt;for chunk in request_handle:bodyChunks() do&lt;/span&gt;
                    &lt;span class="s"&gt;request_handle:logCritical("Printing Request body =&amp;gt; " .. chunk:getBytes(0, chunk:length()))&lt;/span&gt;
                  &lt;span class="s"&gt;end&lt;/span&gt;
              &lt;span class="s"&gt;end&lt;/span&gt;
  &lt;span class="na"&gt;workloadSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio-ingressgateway&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; istio-filter.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once applied, all the headers will be printed in the logs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

2022-10-23T23:48:39.750646Z critical  envoy lua script log: Printing Request Headers =&amp;gt; 
2022-10-23T23:48:39.750688Z critical  envoy lua script log: :authority =&amp;gt; 54.77.207.3
2022-10-23T23:48:39.750691Z critical  envoy lua script log: :path =&amp;gt; /
2022-10-23T23:48:39.750693Z critical  envoy lua script log: :method =&amp;gt; GET
2022-10-23T23:48:39.750695Z critical  envoy lua script log: :scheme =&amp;gt; http
2022-10-23T23:48:39.750697Z critical  envoy lua script log: user-agent =&amp;gt; Mozilla/5.0 (Windows NT 6.2;en-US) AppleWebKit/537.32.36 (KHTML, live Gecko) Chrome/56.0.3006.85 Safari/537.32
2022-10-23T23:48:39.750699Z critical  envoy lua script log: accept-encoding =&amp;gt; gzip, deflate
2022-10-23T23:48:39.750701Z critical  envoy lua script log: accept =&amp;gt; */*
2022-10-23T23:48:39.750714Z critical  envoy lua script log: x-forwarded-for =&amp;gt; 192.168.59.163
2022-10-23T23:48:39.750716Z critical  envoy lua script log: x-forwarded-proto =&amp;gt; http
2022-10-23T23:48:39.750717Z critical  envoy lua script log: x-envoy-internal =&amp;gt; true
2022-10-23T23:48:39.750719Z critical  envoy lua script log: x-request-id =&amp;gt; 0d1def76-2d84-4ddb-9aa3-62b2445cc1f8
2022-10-23T23:48:39.750747Z critical  envoy lua script log: x-envoy-peer-metadata-id =&amp;gt; router~192.168.45.105~istio-ingressgateway-94b448d6d-pbsqj.istio-system~istio-system.svc.cluster.local



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Read more about EnvoyFilter &lt;a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Here I am using the &lt;code&gt;request_handle:logCritical&lt;/code&gt; method because the default log level for Istio components is WARN. &lt;code&gt;request_handle:logInfo&lt;/code&gt; can be used if the log level is set to INFO.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>istio</category>
      <category>logs</category>
      <category>envoy</category>
      <category>filter</category>
    </item>
    <item>
      <title>EKS Auth Deep Dive</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Fri, 17 Sep 2021 20:58:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/eks-auth-deep-dive-4fib</link>
      <guid>https://dev.to/aws-builders/eks-auth-deep-dive-4fib</guid>
      <description>&lt;p&gt;If you use EKS then you have found yourself in a situation where a user can't access the cluster despite having all the IAM permissions and gets an &lt;code&gt;Unauthorized&lt;/code&gt; message like &lt;a href="https://friends.fandom.com/wiki/Eddie_Menuek" rel="noopener noreferrer"&gt;Eddie&lt;/a&gt; here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpykbg9lncvvkmduyp9b.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpykbg9lncvvkmduyp9b.jpeg" alt="eddie-locked" width="700" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS EKS uses &lt;code&gt;IAM credentials&lt;/code&gt; for &lt;code&gt;authentication&lt;/code&gt; and &lt;code&gt;Kubernetes RBAC&lt;/code&gt; for &lt;code&gt;authorization&lt;/code&gt;. As per &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html" rel="noopener noreferrer"&gt;EKS docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;EKS uses IAM permissions for authentication of valid entities such as IAM users or roles. All the permissions for interacting with the EKS cluster are managed through Kubernetes RBAC&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Or, simply put, EKS doesn't work the same way as services such as S3, where having &lt;code&gt;AmazonS3FullAccess&lt;/code&gt; lets you access any S3 bucket and create or delete files/folders. In EKS, IAM permissions are only used to verify that the user has valid IAM credentials; permission to run any &lt;code&gt;kubectl&lt;/code&gt; command, such as &lt;code&gt;kubectl get pods&lt;/code&gt;, is managed by the Kubernetes API, which uses &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;RBAC&lt;/a&gt; to control access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq70jx5gnhh9s4wrj27jc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq70jx5gnhh9s4wrj27jc.png" alt="aws-eks-auth" width="500" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, the &lt;code&gt;IAM Role&lt;/code&gt; or &lt;code&gt;IAM User&lt;/code&gt; that was used to create the cluster, is added to the &lt;code&gt;system:masters&lt;/code&gt; group and gets cluster-wide admin permission with &lt;code&gt;cluster-admin&lt;/code&gt; ClusterRole.&lt;/p&gt;

&lt;p&gt;As per the &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles" rel="noopener noreferrer"&gt;Kubernetes documentation&lt;/a&gt;, the &lt;code&gt;system:masters&lt;/code&gt; group is the subject of one of the &lt;code&gt;default&lt;/code&gt; ClusterRoleBindings in a Kubernetes cluster: it is bound to the &lt;code&gt;cluster-admin&lt;/code&gt; ClusterRole, which gives its members admin permissions in the cluster.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Default ClusterRole&lt;/th&gt;
&lt;th&gt;Default ClusterRoleBinding&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;cluster-admin&lt;/td&gt;
&lt;td&gt;system:masters  group&lt;/td&gt;
&lt;td&gt;Allows super-user access to perform any action on any resource.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1hm3xhl2kugcp747tid.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1hm3xhl2kugcp747tid.jpeg" alt="unlimited-power" width="620" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This mapping of creator IAM User or Role to &lt;code&gt;system:masters&lt;/code&gt; group is not visible in any configuration such as &lt;code&gt;aws-auth&lt;/code&gt; configmap.&lt;/p&gt;

&lt;p&gt;EKS allows giving access to other users by adding them to the &lt;code&gt;aws-auth&lt;/code&gt; configmap in the &lt;code&gt;kube-system&lt;/code&gt; namespace. By default, this configmap is empty. However, if you are using &lt;code&gt;eksctl&lt;/code&gt; to create the cluster, it will contain the role &lt;code&gt;eksctl&lt;/code&gt; created for the node group, mapped to the &lt;code&gt;system:bootstrappers&lt;/code&gt; and &lt;code&gt;system:nodes&lt;/code&gt; groups.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;aws-auth&lt;/code&gt; configmap is based on &lt;a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" rel="noopener noreferrer"&gt;aws-iam-authenticator&lt;/a&gt; and has three configuration options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;mapRoles&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;mapUsers&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;mapAccounts&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
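&lt;p&gt;The overall flow can be sketched as two lookups: the &lt;code&gt;aws-auth&lt;/code&gt; mapping turns an IAM role ARN into a Kubernetes username and groups (authentication), and RBAC rules then decide what those groups may do (authorization). The mappings, group names, and verb sets below are hypothetical; this is an illustrative model, not the actual aws-iam-authenticator code:&lt;/p&gt;

```python
# Hypothetical aws-auth-style mapping: IAM role ARN -> Kubernetes identity.
MAP_ROLES = {
    "arn:aws:iam::111122223333:role/eks-admin": {
        "username": "eks-admin",
        "groups": ["system:masters"],
    },
    "arn:aws:iam::111122223333:role/eks-developer": {
        "username": "eks-developer",
        "groups": ["developers"],
    },
}

# Hypothetical RBAC bindings: group -> verbs its members may use.
GROUP_RULES = {
    "system:masters": {"*"},
    "developers": {"get", "list", "watch"},
}

def authenticate(role_arn):
    """Step 1: IAM identity -> Kubernetes username/groups (None if unmapped)."""
    return MAP_ROLES.get(role_arn)

def authorize(identity, verb):
    """Step 2: RBAC allows the verb if any of the identity's groups does."""
    if identity is None:
        return False  # unmapped IAM role: the 'Unauthorized' case
    allowed = set()
    for group in identity["groups"]:
        allowed |= GROUP_RULES.get(group, set())
    return "*" in allowed or verb in allowed

dev = authenticate("arn:aws:iam::111122223333:role/eks-developer")
print(authorize(dev, "get"), authorize(dev, "delete"))  # True False
```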

&lt;h2&gt;
  
  
  Using mapRoles to Map an IAM Role to the Cluster
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;mapRoles&lt;/code&gt; maps an &lt;code&gt;IAM role&lt;/code&gt; to the cluster so that any user or entity assuming that role is allowed to access it. The level of access is defined by the &lt;code&gt;groups&lt;/code&gt; attribute.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mapRoles&lt;/code&gt; has three attributes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;rolearn&lt;/strong&gt; - IAM Role ARN to map to EKS cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;username&lt;/strong&gt; - Username for the IAM Role to map in Kubernetes; this can be a static value like &lt;code&gt;eks-developer&lt;/code&gt; or &lt;code&gt;ci-account&lt;/code&gt;, a templated variable like &lt;code&gt;{{AccountID}}/{{SessionName}}/{{EC2PrivateDNSName}}&lt;/code&gt;, or a combination of both. This value is printed in the authenticator CloudWatch logs if logging is enabled. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;groups&lt;/strong&gt; - List of Kubernetes groups that are referenced in a &lt;code&gt;ClusterRoleBinding/RoleBinding&lt;/code&gt;. For example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Group&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-group"&lt;/span&gt;
    &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create two IAM roles &lt;code&gt;eks-admin&lt;/code&gt; and &lt;code&gt;eks-developer&lt;/code&gt;, and assume the &lt;code&gt;eks-admin&lt;/code&gt; role to create a cluster with one node group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eksctl.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterConfig&lt;/span&gt;

&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-auth-cluster&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.21"&lt;/span&gt;

&lt;span class="na"&gt;availabilityZones&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1a&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1b&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1c&lt;/span&gt;

&lt;span class="na"&gt;cloudWatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterLogging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enableTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authenticator"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt;
    &lt;span class="na"&gt;instanceType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t2.micro&lt;/span&gt;
    &lt;span class="na"&gt;minSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
    &lt;span class="na"&gt;desiredCapacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once created, this cluster would have one NodeGroup and the IAM role associated with this node group would be added to the &lt;code&gt;aws-auth&lt;/code&gt; configmap. &lt;/p&gt;

&lt;p&gt;Check the contents of &lt;code&gt;aws-auth&lt;/code&gt; configMap :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get configmap aws-auth &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;-oyaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mapRoles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;- groups:&lt;/span&gt;
      &lt;span class="s"&gt;- system:bootstrappers&lt;/span&gt;
      &lt;span class="s"&gt;- system:nodes&lt;/span&gt;
      &lt;span class="s"&gt;rolearn: arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eksctl-iam-auth-cluster-nodegroup-NodeInstanceRole-1RNKIEA50ZD0B&lt;/span&gt;
      &lt;span class="s"&gt;username: system:node:{{EC2PrivateDNSName}}&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-auth&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, only the &lt;code&gt;IAM Role&lt;/code&gt; that created the cluster has access to it; any other IAM Role has to be added separately in &lt;code&gt;aws-auth&lt;/code&gt;. Let's assume the &lt;code&gt;eks-developer&lt;/code&gt; IAM role and try to access the cluster with that role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods

error: You must be logged &lt;span class="k"&gt;in &lt;/span&gt;to the server &lt;span class="o"&gt;(&lt;/span&gt;Unauthorized&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As expected, the &lt;code&gt;eks-developer&lt;/code&gt; IAM role is not allowed access. To give it access, add a mapping in the &lt;code&gt;aws-auth&lt;/code&gt; configMap from this role to the &lt;code&gt;eks-developer&lt;/code&gt; Kubernetes user. We can either edit the configMap directly or use &lt;code&gt;eksctl&lt;/code&gt; to add this mapping:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; iam-auth-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s2"&gt;"eks-developer"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would create an entry under the &lt;code&gt;mapRoles&lt;/code&gt; section in the &lt;code&gt;aws-auth&lt;/code&gt; configMap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mapRoles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;- groups:&lt;/span&gt;
      &lt;span class="s"&gt;- system:bootstrappers&lt;/span&gt;
      &lt;span class="s"&gt;- system:nodes&lt;/span&gt;
      &lt;span class="s"&gt;rolearn: arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eksctl-iam-auth-cluster-nodegroup-NodeInstanceRole-1RNKIEA50ZD0B&lt;/span&gt;
      &lt;span class="s"&gt;username: system:node:{{EC2PrivateDNSName}}&lt;/span&gt;

    &lt;span class="s"&gt;- rolearn: arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer&lt;/span&gt;
      &lt;span class="s"&gt;username: eks-developer&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-auth&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's try accessing the cluster again by assuming the &lt;code&gt;eks-developer&lt;/code&gt; IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods

Error from server &lt;span class="o"&gt;(&lt;/span&gt;Forbidden&lt;span class="o"&gt;)&lt;/span&gt;: pods is forbidden: User &lt;span class="s2"&gt;"eks-developer"&lt;/span&gt; cannot list resource &lt;span class="s2"&gt;"pods"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;API group &lt;span class="s2"&gt;""&lt;/span&gt; at the cluster scope
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time we can authenticate to the cluster, but we are not allowed to list pods because the mapped user doesn't have any RBAC permissions yet. RBAC permissions can be assigned to this IAM role in two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. RBAC permissions with Kubernetes User&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. RBAC permissions with Kubernetes Groups&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  RBAC permissions with Kubernetes User
&lt;/h3&gt;

&lt;p&gt;We can assign RBAC permissions to an IAM role by binding the mapped &lt;code&gt;Kubernetes User&lt;/code&gt; in &lt;code&gt;aws-auth&lt;/code&gt;, i.e. &lt;code&gt;eks-developer&lt;/code&gt;, to a &lt;code&gt;ClusterRole&lt;/code&gt;/&lt;code&gt;Role&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;strong&gt;ClusterRole&lt;/strong&gt; &lt;code&gt;eks-developer-cluster-role&lt;/code&gt; with permissions to &lt;code&gt;get&lt;/code&gt;, &lt;code&gt;list&lt;/code&gt;, or &lt;code&gt;watch&lt;/code&gt; &lt;code&gt;pods&lt;/code&gt; resources:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-cluster-role&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Pod is part of Core API Group and "" indicates the core API group&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# pods resource&lt;/span&gt;
    &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# Allow user to get, list of watch the pods.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;We have mapped the IAM role &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer&lt;/code&gt; to the Kubernetes user &lt;code&gt;eks-developer&lt;/code&gt; in &lt;code&gt;aws-auth&lt;/code&gt;; now create a &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt; to bind &lt;code&gt;eks-developer-cluster-role&lt;/code&gt; to the Kubernetes user &lt;code&gt;eks-developer&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-user-cluster-role-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;User&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer&lt;/span&gt; &lt;span class="c1"&gt;# Kubernetes User mapped to the IAM role in aws-auth configmap.&lt;/span&gt;
    &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-cluster-role&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Access the cluster again by assuming the IAM role &lt;code&gt;eks-developer&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-A&lt;/span&gt;

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-f584p             1/1     Running   0          79m
kube-system   coredns-66cb55d4f4-8hjj2   1/1     Running   0          91m
kube-system   coredns-66cb55d4f4-vtf6j   1/1     Running   0          91m
kube-system   kube-proxy-psjk5           1/1     Running   0          79m
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;and this time it works!!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqeckq1k61yci21ddmpo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqeckq1k61yci21ddmpo.jpeg" alt="baby-yess" width="400" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
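
&lt;p&gt;We can also spot-check individual permissions with &lt;code&gt;kubectl auth can-i&lt;/code&gt; while assuming the &lt;code&gt;eks-developer&lt;/code&gt; IAM role; it should allow only the verbs granted by the ClusterRole:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl auth can-i list pods
yes

kubectl auth can-i delete pods
no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;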

&lt;p&gt;We can also verify the access granted to the users by checking the &lt;code&gt;authenticator&lt;/code&gt; logs in CloudWatch:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;br&gt;
 &lt;code&gt;time="2021-09-13T16:29:14Z" level=info msg="access granted" arn="arn:aws:iam::01755xxxxx:role/eks-developer" client="127.0.0.1:50410" groups="[]" method=POST path=/authenticate sts=sts.us-east-1.amazonaws.com uid="heptio-authenticator-aws:01755xxxxx:AROAQIFUWO66PDOXKSLMQ" username=eks-developer&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
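
&lt;p&gt;If control-plane logging is enabled for the &lt;code&gt;authenticator&lt;/code&gt; log type, these entries can also be fetched with the AWS CLI (a sketch; the log group follows the standard &lt;code&gt;/aws/eks/&amp;lt;cluster-name&amp;gt;/cluster&lt;/code&gt; naming convention):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Search the authenticator log streams for "access granted" events
aws logs filter-log-events \
    --region us-east-1 \
    --log-group-name "/aws/eks/iam-auth-cluster/cluster" \
    --log-stream-name-prefix "authenticator" \
    --filter-pattern '"access granted"'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;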

&lt;h3&gt;
  
  
  RBAC permissions with Kubernetes Groups
&lt;/h3&gt;

&lt;p&gt;Assigning permissions directly to the Kubernetes User works fine for most use-cases. However, this approach falls short if you want to audit who is assuming the IAM role to access the cluster and would like that additional information captured in the audit logs.&lt;/p&gt;

&lt;p&gt;One such use-case is &lt;strong&gt;AWS SSO&lt;/strong&gt;, where many users are assigned to a permission set and whenever these users log in using their credentials, they assume the same &lt;code&gt;IAM role&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We can assign RBAC permissions to an IAM role by creating a Kubernetes Group and adding it to the &lt;code&gt;mapRoles.groups&lt;/code&gt; field of the IAM role mapping in &lt;code&gt;aws-auth&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's take the earlier example of the &lt;code&gt;eks-developer&lt;/code&gt; IAM role: create a ClusterRoleBinding that binds the Kubernetes Group &lt;code&gt;developer&lt;/code&gt; to &lt;code&gt;eks-developer-cluster-role&lt;/code&gt;, and add the group &lt;code&gt;developer&lt;/code&gt; to this IAM role's mapping in &lt;code&gt;aws-auth&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;First delete the earlier ClusterRoleBinding &lt;code&gt;eks-developer-user-cluster-role-binding&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete clusterrolebindings eks-developer-user-cluster-role-binding
clusterrolebinding.rbac.authorization.k8s.io &lt;span class="s2"&gt;"eks-developer-user-cluster-role-binding"&lt;/span&gt; deleted
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;As soon as we delete the &lt;code&gt;ClusterRoleBinding&lt;/code&gt;, the &lt;code&gt;eks-developer&lt;/code&gt; IAM role can no longer list pods. Let's check the access by assuming the &lt;code&gt;eks-developer&lt;/code&gt; IAM role:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-A&lt;/span&gt;
Error from server &lt;span class="o"&gt;(&lt;/span&gt;Forbidden&lt;span class="o"&gt;)&lt;/span&gt;: pods is forbidden: User &lt;span class="s2"&gt;"eks-developer"&lt;/span&gt; cannot list resource &lt;span class="s2"&gt;"pods"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;API group &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the namespace &lt;span class="s2"&gt;"default"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Delete the &lt;strong&gt;IAM Mapping&lt;/strong&gt; from &lt;code&gt;aws-auth&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl delete iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--cluster&lt;/span&gt; iam-auth-cluster &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt; to bind Kubernetes Group &lt;code&gt;developer&lt;/code&gt; to cluster role &lt;code&gt;eks-developer-cluster-role&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-group-cluster-role-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Group&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;developer&lt;/span&gt;
    &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-cluster-role&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the Kubernetes Group &lt;code&gt;developer&lt;/code&gt; to the &lt;code&gt;eks-developer&lt;/code&gt; IAM role mapping in &lt;code&gt;aws-auth&lt;/code&gt;, and include the session name in the username using the templated variable &lt;code&gt;{{SessionName}}&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; iam-auth-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s2"&gt;"eks-developer:{{SessionName}}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group&lt;/span&gt; &lt;span class="s2"&gt;"developer"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This would create an entry under the &lt;code&gt;mapRoles&lt;/code&gt; section in the &lt;code&gt;aws-auth&lt;/code&gt; configMap:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;mapRoles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;- groups:&lt;/span&gt;
    &lt;span class="s"&gt;- developer&lt;/span&gt;
    &lt;span class="s"&gt;rolearn: arn:aws:iam::017558828988:role/eks-developer&lt;/span&gt;
    &lt;span class="s"&gt;username: eks-developer:{{SessionName}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check the CloudWatch &lt;code&gt;authenticator&lt;/code&gt; logs again for a user assuming the &lt;code&gt;eks-developer&lt;/code&gt; IAM role; this time the session name is appended to the username in the logs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;br&gt;
 &lt;code&gt;time="2021-09-13T17:57:46Z" level=info msg="access granted" arn="arn:aws:iam::0175XXXXXXXX:role/eks-developer" client="127.0.0.1:48520" groups="[developer]" method=POST path=/authenticate sts=sts.us-east-1.amazonaws.com uid="heptio-authenticator-aws:0175XXXXXXXX:AROAQIFUWO66PDOXKSLMQ" username="eks-developer:eks-developer-session"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the session name contains &lt;code&gt;@&lt;/code&gt;, it is replaced with &lt;code&gt;-&lt;/code&gt;. Let's assume the IAM role &lt;code&gt;eks-developer&lt;/code&gt; with a session name containing &lt;code&gt;@&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws sts assume-role &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--role-arn&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;IAM_ROLE_ARN&amp;gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--role-session-name&lt;/span&gt; &lt;span class="s2"&gt;"my-develper-session@123456789"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--duration-seconds&lt;/span&gt; 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now Cloudwatch logs would have session name printed as &lt;code&gt;eks-developer:my-developer-session-123456789&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;br&gt;
 &lt;code&gt;time="2021-09-14T17:50:25Z" level=info msg="access granted" arn="arn:aws:iam::017558828988:role/eks-developer" client="127.0.0.1:57794" groups="[developer]" method=POST path=/authenticate sts=sts.us-east-1.amazonaws.com uid="heptio-authenticator-aws:017558828988:AROAQIFUWO66PDOXKSLMQ" username="eks-developer:my-develper-session-123456789"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There is one problem here: if your EKS cluster is accessed from &lt;strong&gt;multiple AWS accounts&lt;/strong&gt;, the session name alone doesn't tell you which AWS account the user came from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1k1t6y9yf2462qtwzzq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1k1t6y9yf2462qtwzzq.jpeg" alt="baby-what-now" width="600" height="707"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{{AccountID}}&lt;/code&gt; comes to the rescue: we can use this templated variable to capture the &lt;strong&gt;AWS account ID&lt;/strong&gt; of the user assuming the role, so we can set the username to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws:&lt;span class="o"&gt;{{&lt;/span&gt;AccountID&lt;span class="o"&gt;}}&lt;/span&gt;:eks-developer:&lt;span class="o"&gt;{{&lt;/span&gt;SessionName&lt;span class="o"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that an &lt;code&gt;iamidentitymapping&lt;/code&gt; can't be updated in place with &lt;code&gt;eksctl&lt;/code&gt;, so you have to delete it and create it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
eksctl delete iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--cluster&lt;/span&gt; &lt;span class="s2"&gt;"iam-auth-cluster"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer"&lt;/span&gt;

eksctl create iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--cluster&lt;/span&gt; &lt;span class="s2"&gt;"iam-auth-cluster"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/eks-developer"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s2"&gt;"aws:{{AccountID}}:eks-developer:{{SessionName}}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--group&lt;/span&gt; &lt;span class="s2"&gt;"developer"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we get the &lt;code&gt;AWS Account ID&lt;/code&gt; along with the &lt;code&gt;Session Name&lt;/code&gt; in the CloudWatch logs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;br&gt;
 &lt;code&gt;time="2021-09-14T18:26:33Z" level=info msg="access granted" arn="arn:aws:iam::017558828988:role/eks-developer" client="127.0.0.1:39752" groups="[developer]" method=POST path=/authenticate sts=sts.us-east-1.amazonaws.com uid="heptio-authenticator-aws:0175XXXXXXXX:AROAQIFUWO66PDOXKSLMQ" username="aws:0175XXXXXXXX:eks-developer:my-develper-session-123456789"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Note: If you want the session name in raw format, you can use the templated variable &lt;code&gt;{{SessionNameRaw}}&lt;/code&gt; instead. However, as of EKS 1.21, the two variables &lt;code&gt;{{AccessKeyID}}&lt;/code&gt; and &lt;code&gt;{{SessionNameRaw}}&lt;/code&gt; don't work.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using mapUsers to Map an IAM User to the Cluster
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;mapUsers&lt;/code&gt; allows mapping an &lt;code&gt;IAM User&lt;/code&gt; to the cluster and adding the user to one or more Kubernetes Groups. It has three attributes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;userarn&lt;/strong&gt; - ARN of the IAM user to map to the EKS cluster. This could be an IAM user from the same AWS account or another account.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;username&lt;/strong&gt; - Static Kubernetes username to map this IAM user to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;groups&lt;/strong&gt; - List of Kubernetes groups that are defined in a &lt;code&gt;ClusterRoleBinding/RoleBinding&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note: Templated variables are not supported in the &lt;code&gt;username&lt;/code&gt; field with &lt;code&gt;mapUsers&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To add an IAM user with ARN &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:user/dev-user&lt;/code&gt; in &lt;code&gt;aws-auth&lt;/code&gt; configmap, we can run the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; iam-auth-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:user/dev-user"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s2"&gt;"dev-user"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command would add these lines to the &lt;code&gt;aws-auth&lt;/code&gt; configMap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;mapUsers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;- userarn: arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:user/dev-user&lt;/span&gt;
      &lt;span class="s"&gt;username: dev-user&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we didn't specify any group, &lt;code&gt;dev-user&lt;/code&gt; would be able to authenticate to the cluster, but wouldn't be able to &lt;code&gt;list&lt;/code&gt; or &lt;code&gt;get&lt;/code&gt; any resources.&lt;/p&gt;

&lt;p&gt;For users mapped using &lt;code&gt;mapUsers&lt;/code&gt;, RBAC permissions can be given in two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. RBAC permissions with Kubernetes User&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. RBAC permissions with Kubernetes Groups&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  RBAC permissions with Kubernetes User
&lt;/h3&gt;

&lt;p&gt;We can assign RBAC permissions to an &lt;code&gt;IAM user&lt;/code&gt; by binding the mapped &lt;code&gt;Kubernetes User&lt;/code&gt; in &lt;code&gt;aws-auth&lt;/code&gt;, i.e. &lt;code&gt;dev-user&lt;/code&gt;, to a &lt;code&gt;ClusterRole&lt;/code&gt;/&lt;code&gt;Role&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt; to bind the Kubernetes user &lt;code&gt;dev-user&lt;/code&gt; to &lt;code&gt;eks-developer-cluster-role&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-user-cluster-role-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;User&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-user&lt;/span&gt; &lt;span class="c1"&gt;# Kubernetes User mapped to the IAM user in aws-auth configmap.&lt;/span&gt;
    &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-cluster-role&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Map the IAM user &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:user/dev-user&lt;/code&gt; to the Kubernetes user &lt;code&gt;dev-user&lt;/code&gt; in the &lt;code&gt;aws-auth&lt;/code&gt; configMap:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  eksctl create iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cluster&lt;/span&gt; iam-auth-cluster &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:user/dev-user"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s2"&gt;"dev-user"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this &lt;code&gt;ClusterRoleBinding&lt;/code&gt; is created and the IAM user is mapped in &lt;code&gt;aws-auth&lt;/code&gt;, IAM user &lt;code&gt;dev-user&lt;/code&gt; would be able to get, list, or watch pods in any namespace.&lt;/p&gt;

&lt;h3&gt;
  
  
  RBAC permissions with Kubernetes Groups
&lt;/h3&gt;

&lt;p&gt;If we need to give the same set of permissions to multiple users, then instead of creating multiple &lt;code&gt;ClusterRoleBindings&lt;/code&gt;, we can use Kubernetes Groups and attach that group to the users for whom those permissions are required.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a ClusterRoleBinding to bind the Kubernetes Group &lt;code&gt;dev&lt;/code&gt; to the cluster role &lt;code&gt;eks-developer-cluster-role&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-user-group-cluster-role-binding&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Group&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
    &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks-developer-cluster-role&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Map IAM user &lt;code&gt;arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:user/dev-user&lt;/code&gt; to Kubernetes user &lt;code&gt;dev-user&lt;/code&gt; with &lt;code&gt;dev&lt;/code&gt; group in &lt;code&gt;aws-auth&lt;/code&gt; configMap:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create iamidentitymapping &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; iam-auth-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--arn&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/dev-user"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s2"&gt;"dev-user"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group&lt;/span&gt; &lt;span class="s2"&gt;"dev"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This would create an entry under the &lt;code&gt;mapUsers&lt;/code&gt; section of the &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;mapUsers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;- groups:&lt;/span&gt;
    &lt;span class="s"&gt;- dev&lt;/span&gt;
    &lt;span class="s"&gt;userarn: arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/dev-user&lt;/span&gt;
    &lt;span class="s"&gt;username: dev-user&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
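&lt;p&gt;Putting both entries together, the resulting &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap looks roughly like this (a sketch; the &lt;code&gt;mapRoles&lt;/code&gt; entry for the node group role is created by &lt;code&gt;eksctl&lt;/code&gt; and is cluster-specific, and the placeholders are illustrative):&lt;/p&gt;

```yaml
# Sketch of a complete aws-auth ConfigMap in the kube-system namespace.
# The node group role under mapRoles varies per cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::&lt;AWS_ACCOUNT_ID&gt;:role/&lt;NODE_INSTANCE_ROLE&gt;
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::&lt;AWS_ACCOUNT_ID&gt;:user/dev-user
      username: dev-user
      groups:
        - dev
```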

&lt;h2&gt;
  
  
  Using mapAccounts to Map IAM ARNs in an AWS Account to the Cluster
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;mapAccounts&lt;/code&gt; allows mapping all the &lt;code&gt;IAM Users&lt;/code&gt; or &lt;code&gt;IAM Roles&lt;/code&gt; of an &lt;strong&gt;AWS account&lt;/strong&gt; to the cluster. It accepts a list of AWS account IDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;mapAccounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;- "&amp;lt;AWS_ACCOUNT_ID_1&amp;gt;"&lt;/span&gt;
  &lt;span class="s"&gt;- "&amp;lt;AWS_ACCOUNT_ID_2&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After mapping the AWS accounts to the cluster, we can use &lt;strong&gt;Kubernetes User&lt;/strong&gt; and &lt;strong&gt;Kubernetes Group&lt;/strong&gt; to assign permissions to those IAM entities.&lt;/p&gt;
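&lt;p&gt;For example, an identity mapped via &lt;code&gt;mapAccounts&lt;/code&gt; gets a Kubernetes username equal to its IAM ARN, so an RBAC binding can target that user directly. A minimal sketch, with an illustrative ARN:&lt;/p&gt;

```yaml
# Sketch: grant read-only (view) access to one IAM user from a mapped account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mapped-account-user-view
subjects:
  - kind: User
    # The Kubernetes username equals the IAM ARN for mapAccounts-mapped identities.
    name: arn:aws:iam::&lt;AWS_ACCOUNT_ID_1&gt;:user/dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view   # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```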

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>eks</category>
      <category>authentication</category>
    </item>
    <item>
      <title>EKS IAM Deep Dive</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Thu, 19 Aug 2021 18:56:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/eks-iam-deep-dive-136d</link>
      <guid>https://dev.to/aws-builders/eks-iam-deep-dive-136d</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvai1pgujygh2h9vxoz0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvai1pgujygh2h9vxoz0g.png" alt="aws-eks-iam"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An Amazon EKS cluster consists of &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html" rel="noopener noreferrer"&gt;Managed NodeGroups&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/worker.html" rel="noopener noreferrer"&gt;Self-Managed NodeGroups&lt;/a&gt;, and &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="noopener noreferrer"&gt;Fargate profiles&lt;/a&gt;. NodeGroups are autoscaling groups behind the scenes, while Fargate is serverless and creates one Fargate node per pod.&lt;/p&gt;

&lt;p&gt;IAM permissions in EKS can be defined in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;IAM Role for NodeGroups&lt;/li&gt;
&lt;li&gt;IAM Roles for Service Accounts (IRSA)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  IAM Role for NodeGroups
&lt;/h2&gt;

&lt;p&gt;Whenever we create an EKS cluster using &lt;code&gt;eksctl&lt;/code&gt; with node groups like the one below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eksctl.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterConfig&lt;/span&gt;

&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring-cluster&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.21"&lt;/span&gt;

&lt;span class="na"&gt;availabilityZones&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1a&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1b&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1c&lt;/span&gt;

&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt;
    &lt;span class="na"&gt;instanceType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t3a.medium&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;eksctl&lt;/code&gt; automatically creates an IAM role with the minimum IAM permissions required for the cluster to work and attaches it to the nodes in the node group. All pods running on these nodes inherit these permissions.&lt;/p&gt;

&lt;p&gt;This role has 3 IAM policies attached that give basic access to the node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;AmazonEKSWorkerNodePolicy&lt;/code&gt; - This policy allows EKS worker nodes to connect to EKS clusters.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AmazonEC2ContainerRegistryReadOnly&lt;/code&gt; - This policy gives read-only access to ECR.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AmazonEKS_CNI_Policy&lt;/code&gt; - This policy is required for the &lt;a href="https://github.com/aws/amazon-vpc-cni-k8s#setup" rel="noopener noreferrer"&gt;amazon-vpc-cni&lt;/a&gt; plugin, deployed as the &lt;code&gt;aws-node&lt;/code&gt; daemonset in the &lt;code&gt;kube-system&lt;/code&gt; namespace, to function properly on the nodes.&lt;/li&gt;
&lt;/ol&gt;
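&lt;p&gt;For context, this node group role is assumed by the EC2 instances themselves, so it carries the standard EC2 trust policy (shown here for reference):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```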

&lt;p&gt;These permissions may not be enough if you are running workloads that require various other IAM permissions. &lt;code&gt;eksctl&lt;/code&gt; provides several ways to define additional permissions for the nodes.&lt;/p&gt;
&lt;h3&gt;
  
  
  Attach IAM Policies Using ARNs to the Node Group
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;eksctl&lt;/code&gt; allows you to attach IAM policies to a node group using &lt;code&gt;attachPolicyARNs&lt;/code&gt;. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt;
    &lt;span class="na"&gt;iam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;attachPolicyARNs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Mandatory IAM Policies for NodeGroup.&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly&lt;/span&gt;

      &lt;span class="c1"&gt;# AWS Managed or Customer Managed IAM Policy ARN.&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::aws:policy/AmazonS3FullAccess&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Please note that while specifying IAM policy ARNs using &lt;code&gt;attachPolicyARNs&lt;/code&gt;, it's mandatory to include the above 3 IAM policies as they are required for the node to function properly.&lt;/p&gt;

&lt;p&gt;If you miss specifying these 3 IAM policies, you will get an error like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – &lt;span class="s2"&gt;"The provided role doesn't have the Amazon EKS Managed Policies associated with it. Please ensure the following policies [arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy, arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly] are attached "&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Attach IAM Role and Instance Profile to the Node Group
&lt;/h3&gt;

&lt;p&gt;If you have an existing &lt;code&gt;IAM Role&lt;/code&gt; and &lt;code&gt;Instance Profile&lt;/code&gt;, you can specify them for a node group:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt;
    &lt;span class="na"&gt;iam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;instanceProfileARN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;Instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Profile&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ARN&amp;gt;"&lt;/span&gt;
      &lt;span class="na"&gt;instanceRoleARN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;IAM&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Role&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ARN&amp;gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Attach Addons IAM Policy
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;eksctl&lt;/code&gt; provides out-of-the-box IAM policies for cluster add-ons such as &lt;code&gt;cluster autoscaler&lt;/code&gt;, &lt;code&gt;external DNS&lt;/code&gt;, and &lt;code&gt;cert-manager&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt;
    &lt;span class="na"&gt;iam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;withAddonPolicies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;imageBuilder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;autoScaler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;externalDNS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;certManager&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;appMesh&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;ebs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;fsx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;efs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;albIngress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;xRay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;cloudWatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;imageBuilder&lt;/code&gt; - Full ECR access with &lt;code&gt;AmazonEC2ContainerRegistryPowerUser&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;autoScaler&lt;/code&gt; - IAM policy for &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#iam-policy" rel="noopener noreferrer"&gt;Cluster Autoscaler&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;externalDNS&lt;/code&gt; - IAM policy for &lt;a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#iam-policy" rel="noopener noreferrer"&gt;External DNS&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;certManager&lt;/code&gt; - IAM Policy for &lt;a href="https://cert-manager.io/docs/configuration/acme/dns01/route53/#set-up-an-iam-role" rel="noopener noreferrer"&gt;Cert Manager&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;appMesh&lt;/code&gt; - IAM policy for &lt;a href="https://github.com/aws/aws-app-mesh-controller-for-k8s/blob/master/config/iam/controller-iam-policy.json" rel="noopener noreferrer"&gt;AWS App Mesh&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ebs&lt;/code&gt; - IAM Policy for &lt;a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json" rel="noopener noreferrer"&gt;AWS EBS CSI Driver&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;fsx&lt;/code&gt; - IAM Policy for &lt;a href="https://github.com/kubernetes-sigs/aws-fsx-csi-driver/tree/master/docs#installation" rel="noopener noreferrer"&gt;Amazon FSx for Lustre&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;efs&lt;/code&gt; - IAM Policy for &lt;a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/iam-policy-example.json" rel="noopener noreferrer"&gt;AWS EFS CSI Driver&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;albIngress&lt;/code&gt; - IAM Policy for &lt;a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/install/iam_policy.json" rel="noopener noreferrer"&gt;ALB Ingress Controller&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;xRay&lt;/code&gt; - IAM Policy &lt;code&gt;AWSXRayDaemonWriteAccess&lt;/code&gt; for &lt;a href="https://github.com/aws/aws-xray-daemon" rel="noopener noreferrer"&gt;AWS X-ray Daemon&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cloudWatch&lt;/code&gt; - IAM Policy &lt;code&gt;CloudWatchAgentServerPolicy&lt;/code&gt; for &lt;a href="https://github.com/aws/amazon-cloudwatch-agent" rel="noopener noreferrer"&gt;AWS Cloudwatch Agent&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While all the above options allow you to define IAM permissions for the node group, there is a problem with defining IAM permissions at the node level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot1yz308b4adguc5wcme.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot1yz308b4adguc5wcme.jpeg" alt="problem"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All pods running on the nodes of a node group share these permissions, which violates the &lt;strong&gt;principle of least privilege&lt;/strong&gt;. For example, if we attach &lt;code&gt;EC2 Admin&lt;/code&gt; and &lt;code&gt;Cloudformation&lt;/code&gt; permissions to a node group to run a &lt;code&gt;CI server&lt;/code&gt;, any other pods running on this node group will effectively inherit those permissions.&lt;/p&gt;

&lt;p&gt;One way to overcome this problem is to create a separate node group for the CI server. (Use &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts" rel="noopener noreferrer"&gt;taints&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="noopener noreferrer"&gt;affinity&lt;/a&gt; to ensure only CI pods are scheduled on this node group.)&lt;/p&gt;

&lt;p&gt;However, AWS access is not required only by CI servers; many applications use AWS services such as &lt;code&gt;S3&lt;/code&gt;, &lt;code&gt;SQS&lt;/code&gt;, and &lt;code&gt;KMS&lt;/code&gt; and require fine-grained IAM permissions. Creating one node group for every such application is not an ideal solution and can lead to &lt;strong&gt;maintenance issues&lt;/strong&gt;, &lt;strong&gt;higher cost&lt;/strong&gt;, and &lt;strong&gt;poor resource utilization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1bzi3gm71dyrfkdfvji.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1bzi3gm71dyrfkdfvji.jpeg" alt="solution"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  IAM Roles for Service Accounts
&lt;/h2&gt;

&lt;p&gt;Amazon EKS supports &lt;code&gt;IAM Roles for Service Accounts (IRSA)&lt;/code&gt;, which allows us to map AWS IAM roles to Kubernetes service accounts. With IRSA, instead of defining IAM permissions on the node, we can attach an &lt;strong&gt;IAM role&lt;/strong&gt; to a &lt;strong&gt;Kubernetes Service Account&lt;/strong&gt; and attach the service account to the &lt;strong&gt;pod/deployment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;An IAM role is attached to a Kubernetes service account by adding the annotation &lt;code&gt;eks.amazonaws.com/role-arn&lt;/code&gt; with the &lt;code&gt;IAM Role ARN&lt;/code&gt; as its value.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-serviceaccount&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/role-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;IAM&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Role&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ARN&amp;gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
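&lt;p&gt;The annotated service account is then referenced from the pod spec via &lt;code&gt;serviceAccountName&lt;/code&gt;. A minimal sketch (the pod name and image are illustrative):&lt;/p&gt;

```yaml
# Sketch: a pod that assumes the IAM role via the annotated service account.
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
spec:
  serviceAccountName: my-serviceaccount
  containers:
    - name: aws-cli
      image: amazon/aws-cli
      # Prints the assumed IAM role identity if IRSA is wired up correctly.
      args: ["sts", "get-caller-identity"]
```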
&lt;p&gt;EKS creates a Mutating Admission Webhook, &lt;a href="https://github.com/aws/amazon-eks-pod-identity-webhook/" rel="noopener noreferrer"&gt;pod-identity-webhook&lt;/a&gt;, which intercepts pod creation requests and updates the pod spec to include IAM credentials.&lt;/p&gt;

&lt;p&gt;Check the &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="noopener noreferrer"&gt;Mutating Admission Webhooks&lt;/a&gt; in the cluster:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io

NAME                                             WEBHOOKS   AGE
0500-amazon-eks-fargate-mutation.amazonaws.com   2          21m
pod-identity-webhook                             1          28m
vpc-resource-mutating-webhook                    1          28m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;pod-identity-webhook&lt;/code&gt; supports several configuration options, set via annotations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/role-arn&lt;/code&gt; - IAM Role ARN to attach to the service account.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/audience&lt;/code&gt; - Intended audience of the &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="noopener noreferrer"&gt;token&lt;/a&gt;; defaults to &lt;code&gt;sts.amazonaws.com&lt;/code&gt; if not set.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/sts-regional-endpoints&lt;/code&gt; - AWS STS is a global service; however, regional STS endpoints can be used to reduce latency. Set this to &lt;code&gt;true&lt;/code&gt; to use regional STS endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/token-expiration&lt;/code&gt; - AWS STS token expiration duration; the default is &lt;code&gt;86400&lt;/code&gt; seconds (&lt;code&gt;24 hours&lt;/code&gt;). This value can be overridden by defining it at the &lt;code&gt;pod level&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/skip-containers&lt;/code&gt; - A comma-separated list of containers for which the webhook skips adding the volume and environment variables. &lt;code&gt;This annotation is only supported at the pod level.&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-serviceaccount&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/role-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;IAM&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Role&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ARN&amp;gt;"&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/audience&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts.amazonaws.com"&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/sts-regional-endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/token-expiration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;86400"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
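&lt;p&gt;The pod-level options (the &lt;code&gt;eks.amazonaws.com/token-expiration&lt;/code&gt; override and &lt;code&gt;eks.amazonaws.com/skip-containers&lt;/code&gt;) are set as annotations on the pod itself. A sketch with illustrative pod, container, and image names:&lt;/p&gt;

```yaml
# Sketch: pod-level IRSA annotations (pod/container names and image illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    eks.amazonaws.com/token-expiration: "3600"    # override the 24h default
    eks.amazonaws.com/skip-containers: "sidecar"  # don't inject into this container
spec:
  serviceAccountName: my-serviceaccount
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "3600"]
    - name: sidecar
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "3600"]
```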
&lt;h3&gt;
  
  
  How IRSA Works
&lt;/h3&gt;

&lt;p&gt;When you define the IAM role for a service account using the &lt;code&gt;eks.amazonaws.com/role-arn&lt;/code&gt; annotation and add this service account to a pod, &lt;code&gt;pod-identity-webhook&lt;/code&gt; mutates the pod spec to add &lt;code&gt;environment variables&lt;/code&gt; and a &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="noopener noreferrer"&gt;projected volume&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These are the environment variables added by &lt;code&gt;pod-identity-webhook&lt;/code&gt; in the pod spec:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS_DEFAULT_REGION&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS_REGION&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS_ROLE_ARN&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;IAM&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ROLE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ARN&amp;gt;"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/run/secrets/eks.amazonaws.com/serviceaccount/token"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_DEFAULT_REGION&lt;/code&gt; and &lt;code&gt;AWS_REGION&lt;/code&gt; - Cluster Region &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_ROLE_ARN&lt;/code&gt; - IAM Role ARN.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/code&gt; - Path to the token file.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;pod-identity-webhook&lt;/code&gt; also adds a projected volume for the service account token:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/run/secrets/eks.amazonaws.com/serviceaccount"&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-iam-token&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-iam-token&lt;/span&gt;
    &lt;span class="na"&gt;projected&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;sources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;serviceAccountToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;audience&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts.amazonaws.com"&lt;/span&gt;
          &lt;span class="na"&gt;expirationSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;86400&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;A projected volume named &lt;code&gt;aws-iam-token&lt;/code&gt; is created and mounted into the container at &lt;code&gt;/var/run/secrets/eks.amazonaws.com/serviceaccount&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's test it out.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an EKS cluster with a Managed NodeGroup and an IAM service account &lt;code&gt;s3-reader&lt;/code&gt; in the default namespace with &lt;code&gt;AmazonS3ReadOnlyAccess&lt;/code&gt; IAM permissions:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eksctl.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterConfig&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-cluster&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.21"&lt;/span&gt;
&lt;span class="na"&gt;availabilityZones&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1a&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1b&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;us-east-1c&lt;/span&gt;
&lt;span class="na"&gt;iam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;withOIDC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;serviceAccounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3-reader&lt;/span&gt;
    &lt;span class="na"&gt;attachPolicyARNs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"&lt;/span&gt;
&lt;span class="na"&gt;managedNodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;managed-ng-1&lt;/span&gt;
    &lt;span class="na"&gt;instanceType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t3a.medium&lt;/span&gt;
    &lt;span class="na"&gt;minSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
    &lt;span class="na"&gt;desiredCapacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We can also attach an IAM role directly using &lt;code&gt;attachRoleARN&lt;/code&gt;. However, the role must have the following &lt;code&gt;trust policy&lt;/code&gt; to allow the pods to use it:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Federated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:oidc-provider/oidc.&amp;lt;REGION&amp;gt;.eks.amazonaws.com/&amp;lt;CLUSTER_ID&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRoleWithWebIdentity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"oidc.&amp;lt;REGION&amp;gt;.eks.amazonaws.com/&amp;lt;CLUSTER_ID&amp;gt;:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system:serviceaccount:default:my-serviceaccount"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"StringLike"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"oidc.&amp;lt;REGION&amp;gt;.eks.amazonaws.com/CLUSTER_ID:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system:serviceaccount:default:*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Check the role created by &lt;code&gt;eksctl&lt;/code&gt; for service account &lt;code&gt;s3-reader&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

eksctl get iamserviceaccount s3-reader &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;iam-cluster &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east-1
2021-08-25 02:51:14 &lt;span class="o"&gt;[&lt;/span&gt;ℹ]  eksctl version 0.61.0
2021-08-25 02:51:14 &lt;span class="o"&gt;[&lt;/span&gt;ℹ]  using region us-east-1
NAMESPACE NAME    ROLE ARN
default   s3-reader &amp;lt;IAM ROLE ARN&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Check the service account to verify &lt;code&gt;eks.amazonaws.com/role-arn&lt;/code&gt; annotation:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get sa s3-reader &lt;span class="nt"&gt;-oyaml&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/role-arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;IAM ROLE ARN&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eksctl&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3-reader&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3-reader-token-5hnn7&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We can see that &lt;code&gt;eksctl&lt;/code&gt; has created the IAM role and the service account, and automatically added the annotation to the service account.&lt;/p&gt;
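&lt;p&gt;For reference, the &lt;code&gt;sub&lt;/code&gt; claim that the trust policy matches on always has the form &lt;code&gt;system:serviceaccount:&amp;lt;namespace&amp;gt;:&amp;lt;name&amp;gt;&lt;/code&gt;. A quick sketch of building it in shell, using the names from this walkthrough:&lt;/p&gt;

```shell
# Build the OIDC "sub" claim that IRSA trust policies match on.
NAMESPACE=default
SA_NAME=s3-reader
SUB="system:serviceaccount:${NAMESPACE}:${SA_NAME}"
echo "$SUB"   # system:serviceaccount:default:s3-reader
```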

&lt;ul&gt;
&lt;li&gt;Since &lt;code&gt;eksctl&lt;/code&gt; doesn't support setting the remaining IRSA annotations, let's add them manually:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl annotate &lt;span class="se"&gt;\&lt;/span&gt;
  sa s3-reader &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"eks.amazonaws.com/audience=test"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"eks.amazonaws.com/token-expiration=43200"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As of &lt;code&gt;EKS v1.21&lt;/code&gt;, the &lt;code&gt;eks.amazonaws.com/sts-regional-endpoints&lt;/code&gt; annotation doesn't work due to the issue below:&lt;/p&gt;


&lt;div class="ltag_github-liquid-tag"&gt;
  &lt;h1&gt;
    &lt;a href="https://github.com/aws/amazon-eks-pod-identity-webhook/issues/110" rel="noopener noreferrer"&gt;
      &lt;img class="github-logo" alt="GitHub logo" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg"&gt;
      &lt;span class="issue-title"&gt;
        eks.amazonaws.com/sts-regional-endpoints has no effect
      &lt;/span&gt;
      &lt;span class="issue-number"&gt;#110&lt;/span&gt;
    &lt;/a&gt;
  &lt;/h1&gt;
  &lt;div class="github-thread"&gt;
    &lt;div class="timeline-comment-header"&gt;
      &lt;a href="https://github.com/arunvelsriram" rel="noopener noreferrer"&gt;
        &lt;img class="github-liquid-tag-img" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Favatars.githubusercontent.com%2Fu%2F6568319%3Fv%3D4" alt="arunvelsriram avatar"&gt;
      &lt;/a&gt;
      &lt;div class="timeline-comment-header-text"&gt;
        &lt;strong&gt;
          &lt;a href="https://github.com/arunvelsriram" rel="noopener noreferrer"&gt;arunvelsriram&lt;/a&gt;
        &lt;/strong&gt; posted on &lt;a href="https://github.com/aws/amazon-eks-pod-identity-webhook/issues/110" rel="noopener noreferrer"&gt;&lt;time&gt;Mar 19, 2021&lt;/time&gt;&lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag-github-body"&gt;
      
&lt;p&gt;&lt;strong&gt;What happened&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;eks.amazonaws.com/sts-regional-endpoints: "true"&lt;/code&gt; annotation is not injecting &lt;code&gt;AWS_STS_REGIONAL_ENDPOINTS=regional&lt;/code&gt; to the Pods.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What you expected to happen&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;AWS_STS_REGIONAL_ENDPOINTS=regional&lt;/code&gt; is injected in to the Pods.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How to reproduce it (as minimally and precisely as possible)&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a serviceaccount as mentioned in the README with annotation &lt;code&gt;eks.amazonaws.com/sts-regional-endpoints: "true"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create a Pod, using that serviceaccount&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Anything else we need to know?&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;I noticed that in the README there are two versions of the annotation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/sts-regional-endpoints: "true"&lt;/code&gt; - &lt;a href="https://github.com/aws/amazon-eks-pod-identity-webhook/blob/master/README.md#eks-walkthrough" rel="noopener noreferrer"&gt;https://github.com/aws/amazon-eks-pod-identity-webhook/blob/master/README.md#eks-walkthrough&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eks.amazonaws.com/sts-regional-endpoint: "true"&lt;/code&gt; - &lt;a href="https://github.com/aws/amazon-eks-pod-identity-webhook/blob/master/README.md#aws_sts_regional_endpoints-injection" rel="noopener noreferrer"&gt;https://github.com/aws/amazon-eks-pod-identity-webhook/blob/master/README.md#aws_sts_regional_endpoints-injection&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Which one is correct?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Environment&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AWS Region: ap-south-1&lt;/li&gt;
&lt;li&gt;EKS Platform version (if using EKS, run &lt;code&gt;aws eks describe-cluster --name &amp;lt;name&amp;gt; --query cluster.platformVersion&lt;/code&gt;): eks.1&lt;/li&gt;
&lt;li&gt;Kubernetes version (if using EKS, run &lt;code&gt;aws eks describe-cluster --name &amp;lt;name&amp;gt; --query cluster.version&lt;/code&gt;): 1.19&lt;/li&gt;
&lt;li&gt;Webhook Version: Am using AWS EKS, so not sure how to find the version&lt;/li&gt;
&lt;/ul&gt;


    &lt;/div&gt;
    &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/aws/amazon-eks-pod-identity-webhook/issues/110" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;By default, &lt;code&gt;eks.amazonaws.com/audience&lt;/code&gt; is set to &lt;code&gt;sts.amazonaws.com&lt;/code&gt;, but we can set it to any value. To allow a different &lt;code&gt;audience&lt;/code&gt;, add the value to the &lt;code&gt;Audiences&lt;/code&gt; section of the &lt;code&gt;OIDC Provider&lt;/code&gt; and update the &lt;code&gt;trust policy&lt;/code&gt; of the IAM role to allow this audience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Go to &lt;a href="https://console.aws.amazon.com/iamv2/home" rel="noopener noreferrer"&gt;IAM Console&lt;/a&gt;, click on &lt;code&gt;Identity Providers&lt;/code&gt; and add the audience to the &lt;code&gt;OIDC Identity provider&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dwuj8anj8r01yrc8495.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dwuj8anj8r01yrc8495.png" alt="oidc-audience"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modify the trust policy of the IAM Role used with the service account to add this &lt;code&gt;audience&lt;/code&gt; :&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Federated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:oidc-provider/oidc.eks.&amp;lt;REGION&amp;gt;.amazonaws.com/id/&amp;lt;CLUSTER_ID&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRoleWithWebIdentity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"oidc.eks.&amp;lt;REGION&amp;gt;.amazonaws.com/id/&amp;lt;CLUSTER_ID&amp;gt;:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system:serviceaccount:default:s3-reader"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"oidc.eks.us-east-1.amazonaws.com/id/&amp;lt;CLUSTER_ID&amp;gt;:aud"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
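&lt;p&gt;To avoid hand-editing the placeholders, a trust policy like the one above can be generated from shell variables and sanity-checked as JSON before attaching it. The account ID and OIDC provider path below are illustrative values, not real identifiers:&lt;/p&gt;

```shell
# Illustrative values; substitute your own account ID and OIDC provider path.
AWS_ACCOUNT_ID=111122223333
OIDC_PROVIDER="oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
SA_SUB="system:serviceaccount:default:s3-reader"
AUDIENCE=test

# Render the trust policy with the variables expanded.
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "${SA_SUB}",
          "${OIDC_PROVIDER}:aud": "${AUDIENCE}"
        }
      }
    }
  ]
}
EOF

# Sanity-check that the generated document is valid JSON.
python3 -m json.tool trust-policy.json > /dev/null && echo "valid"
```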

&lt;ul&gt;
&lt;li&gt;Create a pod to test the IAM permissions:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-test&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Skip injecting credentials to the sidecar container.&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/skip-containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sidecar-busybox-container"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3-reader&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;iam-test&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amazon/aws-cli&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get-caller-identity"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sidecar-busybox-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;radial/busyboxplus:curl&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; iam-test.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Once the pod is ready, check the environment variables in the pod:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get pods iam-test &lt;span class="nt"&gt;-ojson&lt;/span&gt;|jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.spec.containers[0].env'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS_DEFAULT_REGION"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS_REGION"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS_ROLE_ARN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;IAM ROLE ARN&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS_WEB_IDENTITY_TOKEN_FILE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/var/run/secrets/eks.amazonaws.com/serviceaccount/token"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Check the &lt;code&gt;aws-iam-token&lt;/code&gt; volume in the pod:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get pods iam-test &lt;span class="nt"&gt;-ojson&lt;/span&gt;|jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.spec.volumes[]| select(.name=="aws-iam-token")'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aws-iam-token"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"projected"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"defaultMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;420&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sources"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"serviceAccountToken"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"audience"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"expirationSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;43200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"token"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can see that &lt;code&gt;expirationSeconds&lt;/code&gt; is &lt;code&gt;43200&lt;/code&gt;, as specified in the &lt;code&gt;eks.amazonaws.com/token-expiration&lt;/code&gt; annotation.&lt;/p&gt;
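&lt;p&gt;As a quick sanity check on the number, the annotation value is specified in seconds, so &lt;code&gt;43200&lt;/code&gt; corresponds to a 12-hour token lifetime:&lt;/p&gt;

```shell
# eks.amazonaws.com/token-expiration is specified in seconds.
TOKEN_EXPIRATION=43200
echo "$((TOKEN_EXPIRATION / 3600)) hours"   # 12 hours
```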

&lt;ul&gt;
&lt;li&gt;Check the &lt;code&gt;volumeMounts&lt;/code&gt; in the pod:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get pods iam-test &lt;span class="nt"&gt;-ojson&lt;/span&gt;|jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.spec.containers[0].volumeMounts'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mountPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/var/run/secrets/kubernetes.io/serviceaccount"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"kube-api-access-xjjqv"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"readOnly"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mountPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/var/run/secrets/eks.amazonaws.com/serviceaccount"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aws-iam-token"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"readOnly"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Check the logs of the pod &lt;code&gt;iam-test&lt;/code&gt; to see the role assumed by the pod:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl logs iam-test &lt;span class="nt"&gt;-c&lt;/span&gt; iam-test


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>iam</category>
      <category>eks</category>
    </item>
    <item>
      <title>Running Apache Spark on EKS Fargate</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Sat, 14 Aug 2021 20:18:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/running-apache-spark-on-eks-fargate-1l55</link>
      <guid>https://dev.to/aws-builders/running-apache-spark-on-eks-fargate-1l55</guid>
<description>&lt;p&gt;Apache Spark is one of the most popular big data frameworks, letting you process data at any scale.&lt;/p&gt;

&lt;p&gt;Spark jobs can run on a Kubernetes cluster; native support for the Kubernetes scheduler has been generally available since &lt;a href="https://spark.apache.org/releases/spark-release-3-1-1.html"&gt;release 3.1.1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Spark comes with a &lt;code&gt;spark-submit&lt;/code&gt; script that provides a single interface for submitting Spark applications to any supported cluster manager, with no need to customize the script for each one.&lt;/p&gt;

&lt;p&gt;On a Kubernetes cluster, &lt;code&gt;spark-submit&lt;/code&gt; works as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spark creates a Spark driver running within a Kubernetes pod.&lt;/li&gt;
&lt;li&gt;The driver creates executors which are also running within Kubernetes pods and connects to them and executes application code.&lt;/li&gt;
&lt;li&gt;When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists logs and remains in “completed” state in the Kubernetes API until it’s eventually garbage collected or manually cleaned up.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxarfncezl1094ravi9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxarfncezl1094ravi9h.png" alt="spark-eks" width="761" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To submit a Spark job on a Kubernetes cluster using &lt;code&gt;spark-submit&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./bin/spark-submit &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--master&lt;/span&gt; k8s://https://&amp;lt;k8s-apiserver-host&amp;gt;:&amp;lt;k8s-apiserver-port&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--deploy-mode&lt;/span&gt; cluster &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--name&lt;/span&gt; spark-pi &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--class&lt;/span&gt; org.apache.spark.examples.SparkPi &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--conf&lt;/span&gt; spark.executor.instances&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--conf&lt;/span&gt; spark.kubernetes.container.image&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;spark-image&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nb"&gt;local&lt;/span&gt;:///path/to/examples.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While &lt;code&gt;spark-submit&lt;/code&gt; provides support for several Kubernetes features such as &lt;code&gt;secrets&lt;/code&gt;, &lt;code&gt;persistentVolumes&lt;/code&gt;, and &lt;code&gt;rbac&lt;/code&gt; via &lt;a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#configuration"&gt;configuration parameters&lt;/a&gt;, it still lacks many features, which makes it hard to use effectively in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Spark on K8s Operator
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator"&gt;Spark on K8s Operator&lt;/a&gt; is a project from Google that allows submitting spark applications on Kubernetes cluster using CustomResource Definition &lt;a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/api-docs.md#sparkoperator.k8s.io/v1beta2.SparkApplication"&gt;SparkApplication&lt;/a&gt;.&lt;br&gt;
It uses mutating admission webhook to modify the pod spec and add the features not officially supported by &lt;code&gt;spark-submit&lt;/code&gt;.&lt;/p&gt;
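&lt;p&gt;With the operator installed, a job is described declaratively instead of through &lt;code&gt;spark-submit&lt;/code&gt; flags. Below is a minimal, illustrative &lt;code&gt;SparkApplication&lt;/code&gt; manifest; the image and jar path are placeholders in the style of the upstream examples, not values verified here:&lt;/p&gt;

```shell
# Write a minimal SparkApplication manifest to a local file.
# Image and jar path are illustrative placeholders.
cat > spark-pi.yaml <<'EOF'
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: spark
spec:
  type: Scala
  mode: cluster
  image: gcr.io/spark-operator/spark:v3.1.1
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    serviceAccount: spark
  executor:
    cores: 1
    instances: 2
EOF

# Confirm the manifest declares the expected resource kind.
grep -c 'kind: SparkApplication' spark-pi.yaml
```

The manifest would then be submitted with &lt;code&gt;kubectl apply -f spark-pi.yaml&lt;/code&gt; once the operator and its CRDs are installed.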

&lt;p&gt;The Kubernetes Operator for Apache Spark consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A &lt;code&gt;SparkApplication&lt;/code&gt; controller that watches events of creation, updates, and deletion of &lt;code&gt;SparkApplication&lt;/code&gt; objects and acts on the watch events,&lt;/li&gt;
&lt;li&gt;A submission runner that runs &lt;code&gt;spark-submit&lt;/code&gt; for submissions received from the controller,&lt;/li&gt;
&lt;li&gt;A Spark pod monitor that watches for Spark pods and sends pod status updates to the controller,&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/"&gt;Mutating Admission Webhook&lt;/a&gt; that handles customizations for Spark driver and executor pods based on the annotations on the pods added by the controller,&lt;/li&gt;
&lt;li&gt;A command-line tool named &lt;code&gt;sparkctl&lt;/code&gt; for working with the operator.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following diagram shows how different components interact and work together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizwol4kmf92ybjpx0p4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizwol4kmf92ybjpx0p4g.png" alt="spark-operator" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Spark on K8s Operator on EKS Fargate
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Setup an EKS cluster using &lt;code&gt;eksctl&lt;/code&gt; with a Fargate profile for the &lt;code&gt;default&lt;/code&gt;, &lt;code&gt;kube-system&lt;/code&gt;, and &lt;code&gt;spark&lt;/code&gt; namespaces.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    eksctl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: spark-cluster
      region: us-east-1
      version: "1.21"
    availabilityZones: 
      - us-east-1a
      - us-east-1b
      - us-east-1c
    fargateProfiles:
      - name: fp-all
        selectors:
          - namespace: kube-system
          - namespace: default
          - namespace: spark
&lt;/span&gt;&lt;span class="no"&gt;    EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install &lt;code&gt;Spark on K8s Operator&lt;/code&gt; using Helm 3 in the &lt;code&gt;spark&lt;/code&gt; namespace.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
    helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        spark-operator &lt;span class="se"&gt;\&lt;/span&gt;
        spark-operator/spark-operator &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--namespace&lt;/span&gt; spark &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--set&lt;/span&gt; webhook.enable&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;sparkJobNamespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;spark &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--set&lt;/span&gt; serviceAccounts.spark.name&lt;span class="o"&gt;=&lt;/span&gt;spark &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;logLevel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--version&lt;/span&gt; 1.1.6 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify Operator installation
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; spark
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Submit SparkPi on EKS Cluster
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Submit the &lt;code&gt;SparkPi&lt;/code&gt; application to the EKS cluster
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
    apiVersion: "sparkoperator.k8s.io/v1beta2"
    kind: SparkApplication
    metadata:
      name: spark-pi
      namespace: spark
    spec:
      type: Scala
      mode: cluster
      image: "gcr.io/spark-operator/spark:v3.1.1"
      imagePullPolicy: Always
      mainClass: org.apache.spark.examples.SparkPi
      mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
      sparkVersion: "3.1.1"
      restartPolicy:
        type: Never
      driver:
        cores: 1
        coreLimit: "1200m"
        memory: "512m"
        labels:
          version: 3.1.1
        serviceAccount: spark
      executor:
        cores: 1
        instances: 1
        memory: "512m"
        labels:
          version: 3.1.1
&lt;/span&gt;&lt;span class="no"&gt;    EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath"&gt;hostPath&lt;/a&gt; volume mounts are not supported in Fargate.&lt;/p&gt;
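&lt;p&gt;If the job needs scratch space, one workaround on Fargate is an &lt;code&gt;emptyDir&lt;/code&gt; volume instead of &lt;code&gt;hostPath&lt;/code&gt;. The fragment below is a minimal sketch against the &lt;code&gt;SparkApplication&lt;/code&gt; API; the volume name and mount path are placeholders, not values from this article:&lt;/p&gt;

```yaml
# Illustrative SparkApplication fragment: use an emptyDir volume for
# scratch space on Fargate (volume name and mountPath are placeholders).
spec:
  volumes:
  - name: spark-local-dir-1
    emptyDir: {}
  driver:
    volumeMounts:
    - name: spark-local-dir-1
      mountPath: /tmp/spark-local
  executor:
    volumeMounts:
    - name: spark-local-dir-1
      mountPath: /tmp/spark-local
```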

&lt;ul&gt;
&lt;li&gt;Check the status of &lt;code&gt;SparkApplication&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; spark describe sparkapplications.sparkoperator.k8s.io spark-pi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Access Spark UI by port-forwarding to the &lt;code&gt;spark-pi-ui-svc&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; spark port-forward svc/spark-pi-ui-svc 4040:4040
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsoqlp8l157tkci21kry9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsoqlp8l157tkci21kry9.png" alt="spark-pi" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing SparkApplication with sparkctl
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;sparkctl&lt;/code&gt; is a CLI tool for creating, listing, checking the status of, getting logs of, and deleting &lt;code&gt;SparkApplications&lt;/code&gt; running on a Kubernetes cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build &lt;code&gt;sparkctl&lt;/code&gt; from source:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git clone git@github.com:GoogleCloudPlatform/spark-on-k8s-operator.git
   &lt;span class="nb"&gt;cd &lt;/span&gt;spark-on-k8s-operator/sparkctl
   go build &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/sparkctl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;List SparkApplication objects in &lt;code&gt;spark&lt;/code&gt; namespace:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   sparkctl list &lt;span class="nt"&gt;-n&lt;/span&gt; spark
   +----------+-----------+----------------+-----------------+
   |   NAME   |   STATE   | SUBMISSION AGE | TERMINATION AGE |
   +----------+-----------+----------------+-----------------+
   | spark-pi | COMPLETED | 1h             | 1h              |
   +----------+-----------+----------------+-----------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the status of SparkApplication &lt;code&gt;spark-pi&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   sparkctl status spark-pi &lt;span class="nt"&gt;-n&lt;/span&gt; spark

    application state:
    +-----------+----------------+----------------+-----------------+--------------------+--------------------+-------------------+
    |   STATE   | SUBMISSION AGE | COMPLETION AGE |   DRIVER POD    |     DRIVER UI      | SUBMISSIONATTEMPTS | EXECUTIONATTEMPTS |
    +-----------+----------------+----------------+-----------------+--------------------+--------------------+-------------------+
    | COMPLETED | 1h             | 1h             | spark-pi-driver | 10.100.97.206:4040 |                  1 |                 1 |
    +-----------+----------------+----------------+-----------------+--------------------+--------------------+-------------------+
    executor state:
    +----------------------------------+-----------+
    |           EXECUTOR POD           |   STATE   |
    +----------------------------------+-----------+
    | spark-pi-418ac87b48d177c9-exec-1 | COMPLETED |
    +----------------------------------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check SparkApplication &lt;code&gt;spark-pi&lt;/code&gt; logs:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   sparkctl log spark-pi &lt;span class="nt"&gt;-n&lt;/span&gt; spark 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Port-forward to Spark UI:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   sparkctl forward spark-pi &lt;span class="nt"&gt;-n&lt;/span&gt; spark
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now access the Spark UI at &lt;a href="http://localhost:4040"&gt;http://localhost:4040&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>spark</category>
      <category>eks</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Canary Deployment with Istio on EKS</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Sat, 14 Aug 2021 12:27:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/canary-deployment-with-istio-on-eks-2m98</link>
      <guid>https://dev.to/aws-builders/canary-deployment-with-istio-on-eks-2m98</guid>
<description>&lt;p&gt;Canary deployment is a way of deploying an application in a phased manner. In this pattern, we deploy a new version of the application alongside the production version, then roll out the change to a small subset of servers. &lt;br&gt;
Once the new version has been validated by real users, the change is rolled out to the rest of the servers.&lt;/p&gt;

&lt;p&gt;Canary deployments can be complex and involve testing in production and manual verification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo7w0fwc5iwyza1e36d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo7w0fwc5iwyza1e36d.png" alt="canary-release" width="593" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To demonstrate canary deployments, we will set up an EKS cluster, install Istio, deploy a sample application, and set up a canary release of a new version of the application.&lt;/p&gt;

&lt;p&gt;You can follow the steps below or use the script &lt;a href="https://github.com/shardulsrivastava/eks-istio-canary/blob/main/auto/setup-cluster"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup EKS Cluster
&lt;/h2&gt;

&lt;p&gt;Setup an EKS cluster of version &lt;code&gt;1.21&lt;/code&gt; in the &lt;code&gt;us-east-1&lt;/code&gt; region with a managed node group &lt;code&gt;default-pool&lt;/code&gt; of machine type &lt;code&gt;t3a.medium&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and install the latest version of &lt;code&gt;eksctl&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   curl &lt;span class="nt"&gt;--silent&lt;/span&gt; &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;_amd64.tar.gz"&lt;/span&gt; | &lt;span class="nb"&gt;tar &lt;/span&gt;xz &lt;span class="nt"&gt;-C&lt;/span&gt; /tmp
   &lt;span class="nb"&gt;sudo mv&lt;/span&gt; /tmp/eksctl /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For macOS and other operating systems, follow the steps &lt;a href="https://eksctl.io/introduction/#installation"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create EKS cluster
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   eksctl create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; canary-cluster &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--version&lt;/span&gt; 1.21 &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--nodegroup-name&lt;/span&gt; default-pool &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--node-type&lt;/span&gt; t3a.medium &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--nodes-min&lt;/span&gt; 0 &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--nodes-max&lt;/span&gt; 4 &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--managed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once the control plane is ready, you should see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   2021-08-13 23:59:52 &lt;span class="o"&gt;[&lt;/span&gt;ℹ]  waiting &lt;span class="k"&gt;for &lt;/span&gt;the control plane availability...
   2021-08-13 23:59:52 &lt;span class="o"&gt;[&lt;/span&gt;✔]  saved kubeconfig as &lt;span class="s2"&gt;"/Users/shardulsrivastava/.kube/config"&lt;/span&gt;
   2021-08-13 23:59:52 &lt;span class="o"&gt;[&lt;/span&gt;ℹ]  no tasks
   2021-08-13 23:59:52 &lt;span class="o"&gt;[&lt;/span&gt;✔]  all EKS cluster resources &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"canary-cluster"&lt;/span&gt; have been created
   2021-08-14 00:01:57 &lt;span class="o"&gt;[&lt;/span&gt;ℹ]  kubectl &lt;span class="nb"&gt;command &lt;/span&gt;should work with &lt;span class="s2"&gt;"/Users/shardulsrivastava/.kube/config"&lt;/span&gt;, try &lt;span class="s1"&gt;'kubectl get nodes'&lt;/span&gt;
   2021-08-14 00:01:57 &lt;span class="o"&gt;[&lt;/span&gt;✔]  EKS cluster &lt;span class="s2"&gt;"canary-cluster"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt; region is ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: You would need a minimum of &lt;a href="https://eksctl.io/usage/minimum-iam-policies/"&gt;these permissions&lt;/a&gt; to run the eksctl commands above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and install &lt;code&gt;kubectl&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   curl &lt;span class="nt"&gt;-LO&lt;/span&gt; &lt;span class="s2"&gt;"https://dl.k8s.io/release/&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; https://dl.k8s.io/release/stable.txt&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/bin/linux/amd64/kubectl"&lt;/span&gt;
   &lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; root &lt;span class="nt"&gt;-g&lt;/span&gt; root &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 kubectl /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the pods running in the cluster:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, your cluster should have only the &lt;code&gt;core-dns&lt;/code&gt;, &lt;code&gt;kube-proxy&lt;/code&gt;, and &lt;code&gt;amazon-vpc-cni&lt;/code&gt; plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pb71izuokju0bvtyt5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pb71izuokju0bvtyt5p.png" alt="eks-cluster" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Istio
&lt;/h2&gt;

&lt;p&gt;Istio provides a convenient binary &lt;code&gt;istioctl&lt;/code&gt; to set up and interact with Istio components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;code&gt;istioctl&lt;/code&gt; and, if you're running these commands from an EC2 instance, add the installation directory to the system path:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://istio.io/downloadIstio | sh -
   &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;:/home/ec2-user/istio-1.11.0/bin"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Istio has multiple configuration &lt;a href="https://istio.io/latest/docs/setup/additional-setup/config-profiles/"&gt;profiles&lt;/a&gt;; these profiles customize the Istio control plane and the sidecars of the Istio data plane.
The &lt;code&gt;default&lt;/code&gt; profile is recommended for production deployments.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   istioctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;profile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can generate the manifests and apply them using kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    istioctl manifest generate &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;profile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; generated-manifest.yaml
    kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; generated-manifest.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output like this after a successful installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wc5pk2vn1ura373qt5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wc5pk2vn1ura373qt5r.png" alt="istioctl-output" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: If you get the error below, check the instance type you are using, as there is a limit on the number of pods that can be scheduled on a node based on its instance type. See &lt;a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt"&gt;here&lt;/a&gt; for the list of instance types and their maximum pod counts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xqhm7lhc7fsb8bzrkr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xqhm7lhc7fsb8bzrkr7.png" alt="istioctl-error" width="800" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Istio injects an &lt;a href="https://github.com/envoyproxy/envoy"&gt;Envoy proxy&lt;/a&gt; sidecar container into pods to intercept their traffic; this behavior is enabled with the &lt;code&gt;istio-injection=enabled&lt;/code&gt; label at the namespace level and the &lt;code&gt;sidecar.istio.io/inject=true&lt;/code&gt; label at the pod level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add this label to the &lt;code&gt;default&lt;/code&gt; namespace to instruct Istio to automatically inject Envoy sidecar proxies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl label namespace default istio-injection&lt;span class="o"&gt;=&lt;/span&gt;enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read more about Istio sidecar injection &lt;a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/"&gt;here&lt;/a&gt;.&lt;/p&gt;
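&lt;p&gt;For completeness, the pod-level label mentioned above is set in the pod's metadata; a minimal sketch (the pod name and image here are placeholders, not part of this walkthrough):&lt;/p&gt;

```yaml
# Minimal sketch: per-pod control of sidecar injection via the
# sidecar.istio.io/inject label (overrides the namespace-level setting).
apiVersion: v1
kind: Pod
metadata:
  name: injected-pod               # hypothetical pod name
  labels:
    sidecar.istio.io/inject: "true"
spec:
  containers:
  - name: app
    image: nginx                   # placeholder image
```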

&lt;ul&gt;
&lt;li&gt;Install &lt;code&gt;Kiali&lt;/code&gt;, &lt;code&gt;Prometheus&lt;/code&gt;, and &lt;code&gt;Grafana&lt;/code&gt; to monitor and visualize the mesh
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/1.10.3/samples/addons/prometheus.yaml
   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/1.10.3/samples/addons/grafana.yaml
   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/1.10.3/samples/addons/kiali.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: If you get an error like this &lt;code&gt;unable to recognize "https://raw.githubusercontent.com/istio/istio/1.10.3/samples/addons/kiali.yaml": no matches for kind "MonitoringDashboard" in version "monitoring.kiali.io/v1alpha1"&lt;/code&gt;, then re-run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/1.10.3/samples/addons/kiali.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup Sample Application
&lt;/h2&gt;

&lt;p&gt;To demonstrate how canary deployment works, let's set up version &lt;code&gt;v1&lt;/code&gt; of an Nginx-based sample application and set up a service to expose it at port &lt;code&gt;80&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy the &lt;code&gt;v1&lt;/code&gt; version of the sample application:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-v1&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
     &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
         &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
     &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
           &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
       &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/shardul/nginx:v1&lt;/span&gt;
           &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-v1&lt;/span&gt;
           &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
           &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
             &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
           &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
               &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
               &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
           &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
               &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
               &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Expose the application as a service:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
       &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
     &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Setup a test pod to test connectivity to the nginx service:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl run &lt;span class="nt"&gt;-it&lt;/span&gt; test-connection &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;radial/busyboxplus:curl &lt;span class="nt"&gt;--&lt;/span&gt; sh
   &lt;span class="o"&gt;[&lt;/span&gt; root@test-connection:/ &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;curl nginx &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
   &lt;/span&gt;You&lt;span class="s1"&gt;'re at the root of nginx server v1
   You'&lt;/span&gt;re at the root of nginx server v1
   You&lt;span class="s1"&gt;'re at the root of nginx server v1
   You'&lt;/span&gt;re at the root of nginx server v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Canary Deployment with Istio
&lt;/h2&gt;

&lt;p&gt;With Istio, traffic routing and replica deployment are completely independent of each other. Istio &lt;a href="https://istio.io/latest/docs/concepts/traffic-management/#routing-rules"&gt;routing rules&lt;/a&gt; provide fine-grained control over how traffic is routed based on &lt;code&gt;host&lt;/code&gt;, &lt;code&gt;port&lt;/code&gt;, &lt;code&gt;headers&lt;/code&gt;, &lt;code&gt;uri&lt;/code&gt;, &lt;code&gt;method&lt;/code&gt;, and &lt;code&gt;source labels&lt;/code&gt;, and over how traffic is distributed across versions.&lt;/p&gt;
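&lt;p&gt;As an illustration of weighted routing, a 90/10 canary split for the nginx service might look like the sketch below, assuming a &lt;code&gt;DestinationRule&lt;/code&gt; that defines &lt;code&gt;v1&lt;/code&gt; and &lt;code&gt;v2&lt;/code&gt; subsets based on the &lt;code&gt;version&lt;/code&gt; label:&lt;/p&gt;

```yaml
# Hedged sketch: route 90% of traffic to subset v1 and 10% to subset v2.
# Assumes a DestinationRule mapping subsets v1/v2 to the version label.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - nginx
  http:
  - route:
    - destination:
        host: nginx
        subset: v1
      weight: 90
    - destination:
        host: nginx
        subset: v2
      weight: 10
```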

&lt;ul&gt;
&lt;li&gt;Deploy another version &lt;code&gt;v2&lt;/code&gt; of the same application that uses image &lt;code&gt;quay.io/shardul/nginx:v2&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-v2&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
     &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
         &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
     &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
           &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
       &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/shardul/nginx:v2&lt;/span&gt;
           &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-v2&lt;/span&gt;
           &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
           &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
             &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
           &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
               &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
               &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
           &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
               &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
               &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the &lt;code&gt;v2&lt;/code&gt; version is deployed, hitting the service again produces the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; test-connection &lt;span class="nt"&gt;--&lt;/span&gt; sh
   &lt;span class="o"&gt;[&lt;/span&gt; root@test-connection:/ &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;curl nginx &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
   &lt;/span&gt;You&lt;span class="s1"&gt;'re at the root of nginx server v2
   You'&lt;/span&gt;re at the root of nginx server v2
   You&lt;span class="s1"&gt;'re at the root of nginx server v1
   You'&lt;/span&gt;re at the root of nginx server v1
   You&lt;span class="s1"&gt;'re at the root of nginx server v2
   You'&lt;/span&gt;re at the root of nginx server v1
   You&lt;span class="s1"&gt;'re at the root of nginx server v1
   You'&lt;/span&gt;re at the root of nginx server v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since two deployments with different versions are exposed through the same service, each request hits one of the versions in a round-robin manner, which explains the mixed output above.&lt;/p&gt;
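
&lt;p&gt;This behavior follows from the Kubernetes Service selecting only on the shared &lt;code&gt;app: nginx&lt;/code&gt; label, so it matches pods from both deployments. A minimal sketch of such a Service (the exact manifest used here is assumed, not shown above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   apiVersion: v1
   kind: Service
   metadata:
     name: nginx
   spec:
     # No "version" label here, so pods from both
     # nginx-v1 and nginx-v2 are selected.
     selector:
       app: nginx
     ports:
     - name: http
       port: 80
       targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;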

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://istio.io/latest/docs/reference/config/networking/gateway/"&gt;Istio Gateway&lt;/a&gt; acts as a load balancer receiving incoming and outgoing HTTP/TCP connections and it's bound to the &lt;code&gt;istio-ingressgateway&lt;/code&gt; resource created during installation as a &lt;code&gt;LoadBalancer&lt;/code&gt; service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set up a &lt;code&gt;default-gateway&lt;/code&gt; for all incoming traffic on port &lt;code&gt;80&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default-gateway&lt;/span&gt;
     &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;istio-system&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;istio&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingressgateway&lt;/span&gt;
     &lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
       &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
         &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
         &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ek9snqayacgiz0s5w14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ek9snqayacgiz0s5w14.png" alt="istio-ingressgateway" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/"&gt;Destination Rule&lt;/a&gt; allows you to define &lt;a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#Subset"&gt;subsets&lt;/a&gt; of an application based on a set of labels. For example, we have deployed two subsets &lt;code&gt;v1&lt;/code&gt; and &lt;code&gt;v2&lt;/code&gt; of an application, and they are identified by the label &lt;code&gt;version&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set up a DestinationRule &lt;code&gt;nginx-dest-rule&lt;/code&gt; to define two subsets, &lt;code&gt;v1&lt;/code&gt; and &lt;code&gt;v2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DestinationRule&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-dest-rule&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
     &lt;span class="na"&gt;subsets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
       &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
       &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/"&gt;Virtual Service&lt;/a&gt; acts just like an Ingress resource and matches traffic and directs it to a service based on the HTTP routing rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It can operate on internal as well as external services and can match traffic based on HTTP host, path (with full regular expression support), method, headers, port, and query parameters.&lt;/p&gt;
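
&lt;p&gt;For example, a VirtualService could send only requests carrying a specific header to the canary subset, while all other traffic stays on &lt;code&gt;v1&lt;/code&gt; (a sketch; the header name &lt;code&gt;x-canary&lt;/code&gt; is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   apiVersion: networking.istio.io/v1alpha3
   kind: VirtualService
   metadata:
     name: nginx-header-routing
   spec:
     hosts:
     - nginx
     http:
     # Requests with the (illustrative) header "x-canary: true"
     # go to subset v2; everything else falls through to v1.
     - match:
       - headers:
           x-canary:
             exact: "true"
       route:
       - destination:
           host: nginx
           subset: v2
     - route:
       - destination:
           host: nginx
           subset: v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;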

&lt;p&gt;Set up a VirtualService &lt;code&gt;nginx-virtual-svc&lt;/code&gt; for the host &lt;code&gt;nginx&lt;/code&gt; that receives incoming traffic on &lt;code&gt;default-gateway&lt;/code&gt; and routes 90% of the traffic to subset &lt;code&gt;v1&lt;/code&gt; and 10% to subset &lt;code&gt;v2&lt;/code&gt;, as defined in &lt;code&gt;nginx-dest-rule&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualService&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-virtual-svc&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
     &lt;span class="na"&gt;gateways&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;istio-system/default-gateway&lt;/span&gt;
     &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
           &lt;span class="na"&gt;subset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
         &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;90&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
           &lt;span class="na"&gt;subset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
         &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying this, &lt;code&gt;90%&lt;/code&gt; of the traffic will be routed to version &lt;code&gt;v1&lt;/code&gt; and &lt;code&gt;10%&lt;/code&gt; to &lt;code&gt;v2&lt;/code&gt; of the &lt;code&gt;nginx&lt;/code&gt; service. Access the nginx service via the &lt;code&gt;istio-ingressgateway&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; test-connection &lt;span class="nt"&gt;--&lt;/span&gt; sh
&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;curl &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Host: nginx"&lt;/span&gt; istio-ingressgateway.istio-system &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can visualize the traffic flow using the Kiali dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;istioctl dashboard kiali
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yjm6y5gbcoyonzplrps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yjm6y5gbcoyonzplrps.png" alt="istio-canary-routing" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In production, after testing the canary version to make sure it works as expected, we can update the VirtualService to route 100% of the traffic to it and roll out the newer version to all users.&lt;/p&gt;
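
&lt;p&gt;That final promotion step could look like the following sketch of the updated &lt;code&gt;nginx-virtual-svc&lt;/code&gt;, now routing all traffic to subset &lt;code&gt;v2&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   apiVersion: networking.istio.io/v1alpha3
   kind: VirtualService
   metadata:
     name: nginx-virtual-svc
   spec:
     hosts:
     - nginx
     gateways:
     - istio-system/default-gateway
     http:
     - route:
       # Single destination: all traffic now goes to v2.
       - destination:
           host: nginx
           subset: v2
         weight: 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;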

</description>
      <category>kubernetes</category>
      <category>canarydeployments</category>
      <category>istio</category>
      <category>eks</category>
    </item>
    <item>
      <title>Cloud Native Chaos Engineering with Chaos Mesh</title>
      <dc:creator>Shardul Srivastava</dc:creator>
      <pubDate>Mon, 09 Aug 2021 19:51:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloud-native-chaos-engineering-with-chaos-mesh-3a96</link>
      <guid>https://dev.to/aws-builders/cloud-native-chaos-engineering-with-chaos-mesh-3a96</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgs1tq6sy5ex8xiw4x5qt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgs1tq6sy5ex8xiw4x5qt.jpg" alt="chaos-engineering" width="750" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the cloud, distributed architectures have grown even more complex, and with complexity comes uncertainty about how a system can fail.&lt;/p&gt;

&lt;p&gt;Chaos Engineering aims to test system resiliency by injecting faults to identify weaknesses before they cause massive outages such as improper fallback settings for a service, cascading failures due to a single point of failure, or retry storms due to misconfigured timeouts.&lt;/p&gt;

&lt;h2&gt;
  
  
  History
&lt;/h2&gt;

&lt;p&gt;Chaos Engineering started at Netflix back in 2010, when Netflix moved from on-premises servers to AWS and needed a way to test the resiliency of its infrastructure. &lt;/p&gt;

&lt;p&gt;In 2012, Netflix open-sourced &lt;a href="https://github.com/Netflix/chaosmonkey"&gt;ChaosMonkey&lt;/a&gt; under the Apache 2.0 license; it randomly terminates instances to ensure that services are resilient to instance failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Native Chaos Engineering in CNCF Landscape
&lt;/h2&gt;

&lt;p&gt;CNCF defines Cloud Native Chaos Engineering as engineering practices focused on (and built on) Kubernetes environments, applications, microservices, and infrastructure.&lt;/p&gt;

&lt;p&gt;Cloud Native Chaos Engineering has 4 core principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open source&lt;/li&gt;
&lt;li&gt;CRDs for Chaos Management &lt;/li&gt;
&lt;li&gt;Extensible and pluggable&lt;/li&gt;
&lt;li&gt;Broad Community adoption&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CNCF has two sandbox projects for Cloud Native Chaos Engineering:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/chaos-mesh/chaos-mesh"&gt;ChaosMesh&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/litmuschaos/litmus"&gt;Litmus Chaos&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmb5uh3kd7q6izwjsf3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmb5uh3kd7q6izwjsf3i.png" alt="cncf-chaos-engineering" width="768" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Chaos Mesh
&lt;/h2&gt;

&lt;p&gt;Chaos Mesh is a cloud-native Chaos Engineering platform that orchestrates chaos in Kubernetes environments. It is based on the &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Kubernetes Operator pattern&lt;/a&gt; and provides a Chaos Operator to inject faults into applications and Kubernetes infrastructure in a manageable way.&lt;/p&gt;

&lt;p&gt;The Chaos Operator uses Custom Resource Definitions (CRDs) to define chaos objects. It provides a variety of these CRDs for fault injection, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/"&gt;PodChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes"&gt;NetworkChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes"&gt;DNSChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes"&gt;HTTPChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes"&gt;StressChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes"&gt;IOChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes"&gt;TimeChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes"&gt;KernelChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-aws-chaos"&gt;AWSChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-gcp-chaos"&gt;GCPChaos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://chaos-mesh.org/docs/simulate-jvm-application-chaos"&gt;JVMChaos&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Chaos Mesh Installation
&lt;/h3&gt;

&lt;p&gt;Chaos Mesh can be installed quickly using the &lt;a href="https://chaos-mesh.org/docs/quick-start#quick-installation"&gt;installation script&lt;/a&gt;. However, it's recommended to use the Helm 3 chart in production environments.&lt;/p&gt;

&lt;p&gt;To install Chaos Mesh using Helm :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add the Chaos Mesh chart repository to Helm:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm repo add chaos-mesh https://charts.chaos-mesh.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;It's recommended to install Chaos Mesh in a separate namespace, so either create the namespace &lt;code&gt;chaos-testing&lt;/code&gt; manually or let Helm create it automatically if it doesn't exist:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        chaos-mesh &lt;span class="se"&gt;\&lt;/span&gt;
        chaos-mesh/chaos-mesh &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-n&lt;/span&gt; chaos-testing &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--version&lt;/span&gt; v2.0.0 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: If you're using GKE or EKS with &lt;code&gt;containerd&lt;/code&gt;, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        chaos-mesh &lt;span class="se"&gt;\&lt;/span&gt;
        chaos-mesh/chaos-mesh &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-n&lt;/span&gt; chaos-testing &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--set&lt;/span&gt; chaosDaemon.runtime&lt;span class="o"&gt;=&lt;/span&gt;containerd &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--set&lt;/span&gt; chaosDaemon.socketPath&lt;span class="o"&gt;=&lt;/span&gt;/run/containerd/containerd.sock &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--version&lt;/span&gt; v2.0.0 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Verify that the pods are running:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; chaos-testing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run First Chaos Mesh Experiment
&lt;/h3&gt;

&lt;p&gt;A Chaos Experiment describes what type of fault is injected and how.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up an nginx pod and expose it on port 80:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl run nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"app=nginx"&lt;/span&gt; &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get the IP of the nginx pod:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl get pods nginx &lt;span class="nt"&gt;-ojsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.status.podIP}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open another terminal and set up a test pod to check connectivity to the nginx pod:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl run &lt;span class="nt"&gt;-it&lt;/span&gt; test-connection &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;radial/busyboxplus:curl &lt;span class="nt"&gt;--&lt;/span&gt; sh
  ping &amp;lt;IP of the Nginx Pod&amp;gt; &lt;span class="nt"&gt;-c&lt;/span&gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should show the time it takes to ping the IP:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpjo9rn4tw0gy2rb6mao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpjo9rn4tw0gy2rb6mao.png" alt="nginx-pod-connectivity" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create your first Chaos Experiment by running :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
  apiVersion: chaos-mesh.org/v1alpha1
  kind: NetworkChaos
  metadata:
    name: nginx-network-delay
  spec:
    action: delay
    mode: one
    selector:
      namespaces:
        - default
      labelSelectors:
        'app': 'nginx'
    delay:
      latency: '1s'
    duration: '60s'
&lt;/span&gt;&lt;span class="no"&gt;  EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a custom resource of type &lt;code&gt;NetworkChaos&lt;/code&gt; that introduces a latency of &lt;code&gt;1 second&lt;/code&gt; into the network of pods with the label &lt;code&gt;app:nginx&lt;/code&gt; (i.e., the nginx pod) for the next &lt;code&gt;60 seconds&lt;/code&gt;.&lt;/p&gt;
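
&lt;p&gt;While the experiment is active, it can be inspected like any other Kubernetes resource (assuming it was created in the default namespace, as above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   # Show the experiment's spec and current status
   kubectl get networkchaos nginx-network-delay -o yaml

   # Show events and conditions for the experiment
   kubectl describe networkchaos nginx-network-delay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;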

&lt;ul&gt;
&lt;li&gt;Ping the nginx pod again to observe the added delay of &lt;code&gt;1 second&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0m58z0ya82bmf6hiwix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0m58z0ya82bmf6hiwix.png" alt="nginx-pod-connectivity-with-delay" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Run HTTPChaos Experiment
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;HTTPChaos&lt;/code&gt; allows you to inject faults into the requests and responses of an HTTP server. It supports the &lt;code&gt;abort&lt;/code&gt;, &lt;code&gt;delay&lt;/code&gt;, &lt;code&gt;replace&lt;/code&gt;, and &lt;code&gt;patch&lt;/code&gt; fault types.&lt;/p&gt;

&lt;p&gt;Note: Before proceeding, delete the NetworkChaos experiment created earlier.&lt;/p&gt;
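
&lt;p&gt;Assuming it was created in the default namespace as above, it can be deleted with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl delete networkchaos nginx-network-delay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;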

&lt;ol&gt;
&lt;li&gt;Check the response time of the nginx pod:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; test-connection &lt;span class="nt"&gt;--&lt;/span&gt; sh
   &lt;span class="nb"&gt;time &lt;/span&gt;curl &amp;lt;IP of the Nginx Pod&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftspvf7p8ynhxh5gdnsfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftspvf7p8ynhxh5gdnsfj.png" alt="nginx-pod-httpchaos" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an &lt;code&gt;HTTPChaos&lt;/code&gt; experiment by running:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
   apiVersion: chaos-mesh.org/v1alpha1
   kind: HTTPChaos
   metadata:
     name: nginx-http-delay
   spec:
     mode: all
     selector:
       labelSelectors:
         app: nginx
     target: Request
     port: 80
     delay: 1s
     method: GET
     path: /
     duration: 5m
&lt;/span&gt;&lt;span class="no"&gt;   EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a custom resource of type &lt;code&gt;HTTPChaos&lt;/code&gt; that introduces a latency of &lt;code&gt;1 second&lt;/code&gt; to requests sent to pods with the label &lt;code&gt;app:nginx&lt;/code&gt; (i.e., the nginx pod) on port 80 for the next &lt;code&gt;5 minutes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note: If you get an error like &lt;code&gt;admission webhook "vauth.kb.io" denied the request&lt;/code&gt;, as of version 2.0 there is an open issue (&lt;a href="https://github.com/chaos-mesh/chaos-mesh/issues/2187"&gt;2187&lt;/a&gt;), and a temporary fix is to delete the validating webhook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io validate-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Test the response time of the nginx pod:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;time &lt;/span&gt;curl &amp;lt;IP of the Nginx Pod&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see an additional &lt;code&gt;1 second&lt;/code&gt; of latency in the response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhscss8vmlksy2sv9zp3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhscss8vmlksy2sv9zp3e.png" alt="nginx-pod-httpchaos-delay" width="800" height="706"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>chaosengineering</category>
      <category>chaosmesh</category>
      <category>awscommunitybuilder</category>
    </item>
  </channel>
</rss>
