<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Binaya Sharma</title>
    <description>The latest articles on DEV Community by Binaya Sharma (@bs14).</description>
    <link>https://dev.to/bs14</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1188484%2F6ccaab56-1e0b-4c8f-9139-36faae1b8bfc.jpeg</url>
      <title>DEV Community: Binaya Sharma</title>
      <link>https://dev.to/bs14</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bs14"/>
    <language>en</language>
    <item>
      <title>Managing Secrets in Terraform with BitWarden</title>
      <dc:creator>Binaya Sharma</dc:creator>
      <pubDate>Tue, 22 Jul 2025 06:03:44 +0000</pubDate>
      <link>https://dev.to/bs14/managing-secrets-in-terraform-with-bitwarden-233a</link>
      <guid>https://dev.to/bs14/managing-secrets-in-terraform-with-bitwarden-233a</guid>
      <description>&lt;p&gt;Bitwarden has a Secrets Manager that has been specifically designed for storing secrets that can be used for machine-to-machine interaction.&lt;/p&gt;

&lt;p&gt;In this demo, we will integrate Terraform with Bitwarden. For this, there are some prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bws command line utility&lt;/li&gt;
&lt;li&gt;BitWarden Account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bws command-line utility can be installed using the &lt;a href="https://bws.bitwarden.com/install" rel="noopener noreferrer"&gt;bash script&lt;/a&gt;. We also need a Bitwarden account with the Secrets Manager activated, and a machine account, which will be used to authenticate with Bitwarden.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtg668caf6i4cv24l892.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtg668caf6i4cv24l892.png" alt="Machine Secrets" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep in mind that the access token is not saved anywhere after creation, so you will need to store it somewhere safe yourself. Now let us configure the providers:&lt;/p&gt;

&lt;p&gt;providers.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    bitwarden-secrets = {
      source  = "sebastiaan-dev/bitwarden-secrets"
      version = "0.1.2"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.6.3"
    }
  }
}

provider "random" {}

provider "bitwarden-secrets" {
  access_token = "#Acces_Token_From_the_Machine_Accounts"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to bitwarden-secrets, I have also included random, since we will generate a random strong password and store it in Bitwarden.&lt;/p&gt;

&lt;p&gt;Now we can go ahead and create a project and a secret within it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a project managed by Terraform
resource "bitwarden-secrets_project" "project" {
  name = "MyAwesomeProject"
}

# Create Random Password 
resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "!#$%&amp;amp;*()-_=+[]{}&amp;lt;&amp;gt;:?"
}

# Create a secret in project managed by Terraform
resource "bitwarden-secrets_secret" "password" {
  key        = "password"
  value      = random_password.password.result
  project_id = bitwarden-secrets_project.project.id
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
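
&lt;p&gt;With both files in place, the usual Terraform workflow applies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init    # downloads the bitwarden-secrets and random providers
$ terraform apply   # creates the project, generates the password, stores the secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;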



&lt;p&gt;Screenshot from the UI:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxajiybzcley7nm5bmbig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxajiybzcley7nm5bmbig.png" alt="Secrets" width="800" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This way we can leverage Bitwarden to store secrets and share them with the team.&lt;/p&gt;
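
&lt;p&gt;To consume the stored secret from another machine, the same machine account token works with the CLI. A sketch, assuming the bws secret subcommands behave as in current releases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List secrets visible to the machine account to find the secret ID
$ bws secret list

# Fetch a single secret by its ID (placeholder shown)
$ bws secret get &amp;lt;secret-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;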

</description>
      <category>terraform</category>
      <category>security</category>
    </item>
    <item>
      <title>Basic Authentication in Nginx with docker-compose</title>
      <dc:creator>Binaya Sharma</dc:creator>
      <pubDate>Sun, 22 Jun 2025 10:59:47 +0000</pubDate>
      <link>https://dev.to/bs14/basic-authentication-in-nginx-with-docker-compose-2d3o</link>
      <guid>https://dev.to/bs14/basic-authentication-in-nginx-with-docker-compose-2d3o</guid>
      <description>&lt;p&gt;In this tutorial we are going to secure a deployment in which we are exposing application using reverse proxy. We will be using nginx in-front of nextjs application, and authenticating using simple auth. There could be multiple reason we would like it to do such as limiting access to the certain environment (may be development before big release to whole world)&lt;/p&gt;

&lt;p&gt;Let us start with a trivial docker-compose file. I am taking the example from this &lt;a href="https://medium.com/@bnay14/dockerize-a-next-js-app-79a1a379a3f8" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;docker-compose.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.multistage
    container_name: nextjs_app
    ports:
      - "3000:3000"
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
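
&lt;p&gt;You can bring this up and confirm the app answers directly on port 3000, with no auth yet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker compose up -d --build   # or docker-compose with the standalone binary
$ curl -I http://localhost:3000  # expect HTTP 200 straight from Next.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;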



&lt;p&gt;Now we would like to introduce nginx in front of the Next.js application, with the basic auth mechanism built into it.&lt;/p&gt;

&lt;p&gt;Now let us generate a password (htpasswd produces an MD5-based hash by default). We can use a temporary Docker container to generate the htpasswd file, which will later be copied into the nginx image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run --rm httpd htpasswd -nb myuser mypassword &amp;gt; htpasswd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
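
&lt;p&gt;The resulting file contains one user:hash line per user; the $apr1$ prefix marks Apache's MD5-based scheme (your hash will differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat htpasswd
myuser:$apr1$...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;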



&lt;p&gt;Also, let us create an nginx.conf.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            auth_basic "Restricted Access";
            auth_basic_user_file /etc/nginx/.htpasswd;

            proxy_pass http://nextjs:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dockerfile for nginx&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:latest

# Remove the default Nginx configuration file
RUN rm /etc/nginx/conf.d/default.conf

# Copy the custom Nginx configuration file
COPY nginx.conf /etc/nginx/nginx.conf

# Copy the htpasswd file
COPY htpasswd /etc/nginx/.htpasswd

# Expose port 80
EXPOSE 80

# Start Nginx when the container launches
CMD ["nginx", "-g", "daemon off;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the docker-compose file will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  nginx:
    build:
      context: . 
      dockerfile: Dockerfile.nginx
    container_name: nginx
    ports:
      - "8181:80"

  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.multistage
    container_name: nextjs_app
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we should see a login dialog before accessing the Next.js application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1cbfftql427auf2k9an.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1cbfftql427auf2k9an.webp" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to add additional users?
&lt;/h2&gt;

&lt;p&gt;You can simply run the following command to create a new user; note the &amp;gt;&amp;gt; so the entry is appended rather than overwriting the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run --rm httpd htpasswd -nb user2 password &amp;gt;&amp;gt; htpasswd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This appends the new user to the file. Since the htpasswd file is copied into the nginx image at build time, rebuild and re-run docker-compose for the change to take effect:&lt;/p&gt;
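
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker compose up -d --build   # rebuilds the nginx image with the updated htpasswd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;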

</description>
      <category>nginx</category>
      <category>devops</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Migrate Self-Managed add-on to EKS Managed</title>
      <dc:creator>Binaya Sharma</dc:creator>
      <pubDate>Wed, 18 Jun 2025 16:32:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrate-self-managed-add-on-to-eks-managed-4hjp</link>
      <guid>https://dev.to/aws-builders/migrate-self-managed-add-on-to-eks-managed-4hjp</guid>
      <description>&lt;p&gt;AWS introduced the concepts of &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html" rel="noopener noreferrer"&gt;Amazon EKS add-ons&lt;/a&gt; to ease the management of cluster add-on with the release of k8s 1.19.&lt;/p&gt;

&lt;p&gt;In this particular use-case we had a cluster where the add-ons were self-managed, meaning installed using Helm. Therefore, with each cluster upgrade, the add-ons also needed to be upgraded to a new release. We had to go through changelogs and other prerequisites to make sure there were no breaking changes, and if there were, we had to confirm the cluster still worked fine after the upgrade.&lt;/p&gt;

&lt;p&gt;This management overhead could have been easily mitigated by migrating to EKS managed add-ons. In this tutorial we will migrate the following add-ons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon VPC CNI&lt;/li&gt;
&lt;li&gt;CoreDNS&lt;/li&gt;
&lt;li&gt;Kube-Proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Verify add-ons status
&lt;/h2&gt;

&lt;p&gt;First we need to verify which add-ons are already configured as managed add-ons. For this, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks list-addons --cluster-name $CLUSTER_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output may differ in your case. In mine, aws-efs-csi-driver and aws-guardduty-agent were already managed add-ons, so the output looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks list-addons --cluster-name demo-eks

{
    "addons": [
        "aws-efs-csi-driver",
        "aws-guardduty-agent"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Migrating Amazon VPC CNI Plugin
&lt;/h2&gt;

&lt;p&gt;Amazon VPC CNI is responsible for creating Elastic Network Interfaces (ENIs) and attaching them to your worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html" rel="noopener noreferrer"&gt;More about VPC CNI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s first check the version configured in the cluster and create a backup of the configuration, so that we can easily restore it if something goes wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version of VPC CNI Plugin&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3

v1.19.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify the VPC CNI plugin is currently self-managed&lt;/strong&gt; (the ResourceNotFoundException below confirms no managed add-on exists yet)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks describe-addon --cluster-name demo-eks --addon-name vpc-cni --query addon.addonVersion --output text

An error occurred (ResourceNotFoundException) when calling the DescribeAddon operation: No addon: vpc-cni found in cluster: demo-eks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a Backup of Configuration&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get daemonset aws-node -n kube-system -o yaml &amp;gt; aws-k8s-cni-backup.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have a backup and know the installed version. Next, let's create an IAM role with the AmazonEKS_CNI_Policy attached, using IAM Roles for Service Accounts (IRSA).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -euo pipefail

# ==== CONFIGURATION ====
CLUSTER_NAME="your-cluster-name"   # Replace with your EKS cluster name
REGION="your-region"               # Replace with your AWS region (e.g., ap-south-1)
ENV="demo"                          # Environment or suffix
SERVICE_ACCOUNT_NAME="aws-node"
NAMESPACE="kube-system"
ROLE_NAME="AmazonEKSVPCCNIRole-${ENV}"
POLICY_ARN="arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

# ==== DERIVED VALUES ====
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_URL=$(aws eks describe-cluster \
  --name "$CLUSTER_NAME" \
  --region "$REGION" \
  --query "cluster.identity.oidc.issuer" \
  --output text)

OIDC_PROVIDER=$(echo "$OIDC_URL" | sed 's|https://||')
OIDC_PROVIDER_ARN="arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"

echo "Creating IAM role for $SERVICE_ACCOUNT_NAME in $NAMESPACE..."
echo "OIDC URL: $OIDC_URL"
echo "OIDC Provider ARN: $OIDC_PROVIDER_ARN"

# ==== CREATE TRUST POLICY JSON ====
cat &amp;gt; trust-policy.json &amp;lt;&amp;lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${OIDC_PROVIDER_ARN}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF

# ==== CREATE IAM ROLE ====
aws iam create-role \
  --role-name "$ROLE_NAME" \
  --assume-role-policy-document file://trust-policy.json

# ==== ATTACH POLICY ====
aws iam attach-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-arn "$POLICY_ARN"

echo "IAM role '$ROLE_NAME' created and policy attached."

# ==== CLEANUP ====
rm trust-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
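
&lt;p&gt;Save the script and run it once per cluster (hypothetical filename):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ chmod +x create-vpc-cni-role.sh
$ ./create-vpc-cni-role.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;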



&lt;p&gt;This can also be done using Terraform. Sample Terraform code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "env" {
  default = "demo"
}

variable "cluster_oidc_provider_arn" {}
variable "cluster_oidc_provider_url" {}

resource "aws_iam_role" "eks_cni_role" {
  name = "AmazonEKSVPCCNIRole-${var.env}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Federated = var.cluster_oidc_provider_arn
        },
        Action = "sts:AssumeRoleWithWebIdentity",
        Condition = {
          StringEquals = {
            "${replace(var.cluster_oidc_provider_url, "https://", "")}:sub" = "system:serviceaccount:kube-system:aws-node"
          }
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "cni_policy_attach" {
  role       = aws_iam_role.eks_cni_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set cluster_oidc_provider_arn and cluster_oidc_provider_url from your EKS cluster, or wire them in from the resource/module block for EKS. Use the issuer URL returned below as cluster_oidc_provider_url, and the ARN of the matching IAM OIDC provider as cluster_oidc_provider_arn.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-cluster --name &amp;lt;CLUSTER&amp;gt; --query "cluster.identity.oidc.issuer" --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to create (or update) the aws-node ServiceAccount with the appropriate IAM role annotation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:role/AmazonEKSVPCCNIRole-&amp;lt;ENV&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
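
&lt;p&gt;Apply the manifest and restart the DaemonSet so the pods pick up the new IRSA annotation (a sketch, assuming the manifest is saved as aws-node-sa.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f aws-node-sa.yaml
$ kubectl -n kube-system rollout restart daemonset aws-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;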



&lt;p&gt;Finally, we migrate the self-managed add-on to an EKS managed one. Keep in mind the keyword here is &lt;strong&gt;OVERWRITE&lt;/strong&gt;, which tells EKS to take over the existing self-managed resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks create-addon \
  --cluster-name $CLUSTER \
  --addon-name vpc-cni \
  --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKSVPCCNIRole-${ENV} \
  --resolve-conflicts OVERWRITE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
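
&lt;p&gt;Confirm the add-on reaches the ACTIVE state before moving on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks describe-addon --cluster-name $CLUSTER --addon-name vpc-cni --query addon.status --output text
# Expected: ACTIVE (may show CREATING briefly)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;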



&lt;h2&gt;
  
  
  Migrating CoreDNS
&lt;/h2&gt;

&lt;p&gt;CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html" rel="noopener noreferrer"&gt;More about CoreDNS&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
Let’s first check the version configured in the cluster and create a backup of the configuration, so we can easily restore it if something goes wrong.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe deployment coredns --namespace kube-system | grep coredns: | cut -d : -f 3

v1.11.4-eksbuild.2

$ kubectl get deployment coredns -n kube-system -o yaml &amp;gt; aws-k8s-coredns-backup.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we migrate the self-managed CoreDNS to EKS managed. Again, keep in mind the keyword here is &lt;strong&gt;OVERWRITE&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks create-addon --cluster-name $CLUSTER --addon-name coredns --resolve-conflicts OVERWRITE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
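
&lt;p&gt;After the migration, check that the CoreDNS pods are healthy (in EKS they carry the k8s-app=kube-dns label):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n kube-system get pods -l k8s-app=kube-dns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;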



&lt;h2&gt;
  
  
  Migrating kube-proxy
&lt;/h2&gt;

&lt;p&gt;The kube-proxy add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. It maintains network rules on your nodes and enables network communication to your Pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html" rel="noopener noreferrer"&gt;More about kube-proxy&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
Let’s first check the version configured in the cluster and create a backup of the configuration, so we can easily restore it if something goes wrong.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe daemonset kube-proxy -n kube-system | grep Image
602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/kube-proxy:v1.32.0-eksbuild.2

$ kubectl get daemonset kube-proxy -n kube-system -o yaml &amp;gt; aws-k8s-kube-proxy-backup.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we migrate the self-managed kube-proxy to EKS managed. Again, keep in mind the keyword here is &lt;strong&gt;OVERWRITE&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks create-addon --cluster-name $CLUSTER --addon-name kube-proxy --resolve-conflicts OVERWRITE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;Finally, you can verify that all add-ons are now managed using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws eks list-addons --cluster-name demo-eks

{
    "addons": [
        "coredns",
        "kube-proxy",
        "vpc-cni",
        "aws-efs-csi-driver",
        "aws-guardduty-agent"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
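
&lt;p&gt;To also check each migrated add-on's version and health in one pass, a small loop over describe-addon works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for ADDON in vpc-cni coredns kube-proxy; do
  aws eks describe-addon --cluster-name demo-eks --addon-name "$ADDON" \
    --query "addon.[addonName,addonVersion,status]" --output text
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;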



&lt;p&gt;Happy Migration ! ! !&lt;/p&gt;

</description>
      <category>eks</category>
      <category>addons</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
