<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Santhosh S</title>
    <description>The latest articles on DEV Community by Santhosh S (@santhosh_004).</description>
    <link>https://dev.to/santhosh_004</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3358754%2Fcc775e57-f301-452e-aa62-e7c5b0c4c568.png</url>
      <title>DEV Community: Santhosh S</title>
      <link>https://dev.to/santhosh_004</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/santhosh_004"/>
    <language>en</language>
    <item>
      <title>Deploy Multiple NGINX Ingress Controllers on EKS</title>
      <dc:creator>Santhosh S</dc:creator>
      <pubDate>Thu, 23 Oct 2025 09:55:13 +0000</pubDate>
      <link>https://dev.to/santhosh_004/deploy-mutliple-nginx-ingress-on-eks-2nc5</link>
      <guid>https://dev.to/santhosh_004/deploy-mutliple-nginx-ingress-on-eks-2nc5</guid>
      <description>&lt;p&gt;To deploy multiple NGINX Ingress Controllers on Amazon EKS with separate ingress classes for internal and external traffic, you'll need to:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create two Helm value files (values-internal.yaml and values-external.yaml)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;values-internal.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
controller:
  ingressClass: nginx-internal
  ingressClassResource:
    name: nginx-internal
    controllerValue: "k8s.io/ingress-nginx-internal"  # Matches your IngressClass spec
    enabled: false  #Prevent Helm from creating or managing the IngressClass
  ingressClassByName: true
  watchIngressWithoutClass: false
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-################4,subnet-######################"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: You have to add the annotations below so that each controller follows the correct NGINX ingress class and controller.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Apply the appropriate annotations to each controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IngressClass
  ingressClassByName: true
  watchIngressWithoutClass: false
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"`


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;values-external.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller:
  ingressClass: nginx-external
  ingressClassResource:
    name: nginx-external
    controllerValue: "k8s.io/ingress-nginx-external"  # Matches your IngressClass spec
    enabled: false  #Prevent Helm from creating or managing the IngressClass
  ingressClassByName: true
  watchIngressWithoutClass: false
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-09d############,subnet-009#########"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Define separate ingress classes (nginx-internal and nginx-external)
&lt;/h2&gt;

&lt;p&gt;Create external-class.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-external
spec:
  controller: k8s.io/ingress-nginx-external
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create internal-class.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx-internal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the files are created, apply the custom ingress classes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f external-class.yaml
kubectl apply -f internal-class.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Deploy each controller using Helm with its respective values file:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add the ingress-nginx repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Deploy internal ingress
helm install nginx-internal ingress-nginx/ingress-nginx \
  --namespace ingress-internal --create-namespace \
  -f values-internal.yaml

# Deploy external ingress
helm install nginx-external ingress-nginx/ingress-nginx \
  --namespace ingress-external --create-namespace \
  -f values-external.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
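
&lt;p&gt;With both controllers running, an Ingress chooses a controller through its &lt;code&gt;ingressClassName&lt;/code&gt;. As a quick sketch (the hostname and service name below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
spec:
  ingressClassName: nginx-internal   # use nginx-external for public traffic
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app         # placeholder service name
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;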



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Deploying separate NGINX Ingress Controllers for internal and external traffic on EKS enhances security, scalability, and traffic management. By defining distinct ingress classes and customizing Helm values, you gain fine-grained control over how services are exposed—whether privately within your VPC or publicly to the internet.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying EFK stack on Kubernetes cluster using Helm Charts</title>
      <dc:creator>Santhosh S</dc:creator>
      <pubDate>Tue, 30 Sep 2025 10:20:03 +0000</pubDate>
      <link>https://dev.to/santhosh_004/f-3ffe</link>
      <guid>https://dev.to/santhosh_004/f-3ffe</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowyk0ybypwj1lc3k0wuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowyk0ybypwj1lc3k0wuw.png" alt=" " width="694" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A running Kubernetes cluster (e.g., Minikube, EKS, GKE).&lt;/li&gt;
&lt;li&gt;kubectl CLI installed and configured.&lt;/li&gt;
&lt;li&gt;Helm installed on your local machine.&lt;/li&gt;
&lt;li&gt;Sufficient cluster resources (memory and CPU) for Elasticsearch&lt;/li&gt;
&lt;li&gt;Your applications are deployed on Kubernetes using Helm charts; if not, you can adapt the steps, as we will be focusing on a Helm-based solution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create a Storage Class for Elasticsearch
&lt;/h2&gt;

&lt;p&gt;To set up the necessary storage class, you can create one using the following YAML configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
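
&lt;p&gt;Because the class uses &lt;code&gt;WaitForFirstConsumer&lt;/code&gt;, a PersistentVolumeClaim against it stays Pending until a pod references it. A minimal claim might look like this (name and size are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;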



&lt;h2&gt;
  
  
  Step 2: Installing the AWS EBS CSI Driver using Helm Chart
&lt;/h2&gt;

&lt;p&gt;To proceed with the installation, follow these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Installing Elasticsearch with Helm Chart
&lt;/h2&gt;

&lt;p&gt;Now is the time to install the Helm chart for Elasticsearch, but first, let’s create a namespace for our logging stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace efk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We create a dedicated namespace called efk to keep our Elasticsearch and other logging components organized within the Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, add the Elasticsearch repository to your Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add elastic https://helm.elastic.co
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This command adds the Elastic Helm chart repository, allowing you to access and install Elasticsearch with Helm.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, you can install Elasticsearch using the following Helm chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install elasticsearch \
 --set service.type=LoadBalancer \
 --set volumeClaimTemplate.storageClassName=ebs-sc \
 --set persistence.labels.enabled=true elastic/elasticsearch -n efk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After the installation, you can retrieve the username and password for Elasticsearch using the following commands:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;These commands fetch the credentials required to access Elasticsearch securely. The username and password are stored in a Kubernetes secret called elasticsearch-master-credentials within the efk namespace. The base64 -d part is used to decode the base64-encoded values for human-readable access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;you can obtain the LoadBalancer IP for the service and test Elasticsearch using the following commands:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc -n efk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for the service with the name associated with Elasticsearch, and under the “EXTERNAL-IP” column, you should see the LoadBalancer IP. It might be pending at first but will eventually get an IP assigned.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://&amp;lt;LoadBalancer-IP&amp;gt;:9200&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
When you access this URL, your browser should prompt you for the username and password. Enter the credentials you obtained earlier, and you should be able to access Elasticsearch securely over HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt1ruhnnahnzlo6hw5of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt1ruhnnahnzlo6hw5of.png" alt=" " width="720" height="394"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Kibana Installation
&lt;/h2&gt;

&lt;p&gt;Step 1: Installing Kibana with LoadBalancer Service Type&lt;/p&gt;

&lt;p&gt;To set up Kibana and make it accessible through a LoadBalancer service type, you can use the following Helm installation command:&lt;/p&gt;

&lt;p&gt;The elasticsearch repo also provides kibana so we just need to install it by the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install kibana --set service.type=LoadBalancer elastic/kibana -n efk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;watch all pods in the efk namespace:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods --namespace=efk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Retrieve the Kibana service account token:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secrets --namespace=efk kibana-kibana-es-token -ojsonpath=’{.data.token}’ | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you can obtain the LoadBalancer IP for the service and open the Kibana UI using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc -n efk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After obtaining the LoadBalancer IP, you can open Kibana in your browser. Kibana typically runs on port 5601. Open your web browser and enter the following URL, replacing &amp;lt;LoadBalancer-IP&amp;gt; with the actual LoadBalancer IP:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;LoadBalancer-IP&amp;gt;:5601&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fef1pxdykw2l1bbw3j66n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fef1pxdykw2l1bbw3j66n.png" alt=" " width="720" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the username and password you obtained for Elasticsearch above.&lt;/p&gt;

&lt;p&gt;After that, you should see the following dashboard:&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nhwccywjro1rz0mf6ob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nhwccywjro1rz0mf6ob.png" alt=" " width="720" height="423"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Fluent Bit Installation:
&lt;/h2&gt;

&lt;p&gt;To set up Fluent Bit for log processing, follow these steps:&lt;/p&gt;

&lt;p&gt;Add the Fluent Bit Helm Repository:&lt;br&gt;
You need to add the Fluent Bit Helm repository to access the necessary Helm charts.&lt;br&gt;
Run the following command to add the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add fluent https://fluent.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Configure Fluent Bit:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before installing the Helm chart for Fluent Bit, it’s essential to configure the Fluent Bit to correctly access Elasticsearch, which may be part of your EFK (Elasticsearch, Fluent Bit, Kibana) stack. To do this, you need to set up a configuration file. Here’s how to obtain the values file for Fluent Bit and save it in YAML format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm show values fluent/fluent-bit &amp;gt; fluentbit-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command fetches the default configuration values for Fluent Bit and saves them in a file named &lt;code&gt;fluentbit-values.yaml&lt;/code&gt;. You can modify this file to customize Fluent Bit's settings for your specific Elasticsearch setup and logging needs. After making your changes, you can install Fluent Bit using Helm with your customized configuration.&lt;/p&gt;

&lt;p&gt;And update the following changes in the &lt;code&gt;fluentbit-values.yaml&lt;/code&gt; file:&lt;/p&gt;

&lt;p&gt;1. Modify the Elasticsearch username and password in &lt;code&gt;fluentbit-values.yaml&lt;/code&gt;, for example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    HTTP_User elastic
    HTTP_Passwd e7uafnP9WARJEaZX
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On


  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Index fluent-bit
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix logstash
        Retry_Limit False
        Suppress_Type_Name On

    [OUTPUT]
        Name es
        Match host.*
        Index fluent-bit
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit False
        Suppress_Type_Name On

......
......
......
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We can use a Lua script with a simple Lua filter to add an index field to the log event itself, and then reference that field in the output plugin via &lt;code&gt;Logstash_Prefix_Key&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.........
.........
.........
luaScripts:
  setIndex.lua: |
    function set_index(tag, timestamp, record)
        index = "somePrefix-"
        if record["kubernetes"] ~= nil then
            if record["kubernetes"]["namespace_name"] ~= nil then
                if record["kubernetes"]["container_name"] ~= nil then
                    record["es_index"] = index
                        .. record["kubernetes"]["namespace_name"]
                        .. "-"
                        .. record["kubernetes"]["container_name"]
                    return 1, timestamp, record
                end
                record["es_index"] = index
                    .. record["kubernetes"]["namespace_name"]
                return 1, timestamp, record
            end
        end
        return 1, timestamp, record
    end
## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file
config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        multiline.parser docker, cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [FILTER]
        Name lua
        Match kube.*
        script /fluent-bit/scripts/setIndex.lua
        call set_index

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix logstash
        Logstash_Prefix_Key es_index
        Retry_Limit False
        Suppress_Type_Name On

    [OUTPUT]
        Name es
        Match host.*
        Type  _doc
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd e7uafnP9WARJEaZX
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit False
        Suppress_Type_Name On

......
......
......
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the Fluent Bit Helm chart using the custom values file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install fluent-bit fluent/fluent-bit -f fluentbit-values.yaml -n efk

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By default, Fluent Bit will start gathering container logs from all the pods in the cluster and push them to the newly deployed Elasticsearch cluster.&lt;/p&gt;

&lt;p&gt;It also listens to the systemd metrics and pushes them to the same ES cluster.&lt;/p&gt;

&lt;p&gt;For exploring the logs, first, verify whether the newly created indices are showing on Kibana.&lt;/p&gt;

&lt;p&gt;Go to Kibana → Stack Management → Index Management, and under the Indices tab you should see the two newly created indices, named logstash-yyyy.MM.dd and node-yyyy.MM.dd.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b3r2pqas8d1bje0fpkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b3r2pqas8d1bje0fpkf.png" alt=" " width="720" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For checking your application logs on Kibana, you need to create an Index Pattern for your app (one-time activity).&lt;/p&gt;

&lt;p&gt;To create an index pattern, go to Kibana → Stack Management → Data Views → Create data view → specify your index pattern, select a timestamp field, and save the data view.&lt;br&gt;
Then check your logs in Discover → select the newly created index pattern from the dropdown → search, and your logs will appear.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>🛡️Implementing Pod Security Admission in Kubernetes</title>
      <dc:creator>Santhosh S</dc:creator>
      <pubDate>Tue, 30 Sep 2025 08:06:22 +0000</pubDate>
      <link>https://dev.to/santhosh_004/1-implementing-pod-security-admission-in-kubernetes-5e3n</link>
      <guid>https://dev.to/santhosh_004/1-implementing-pod-security-admission-in-kubernetes-5e3n</guid>
      <description>&lt;p&gt;This guide covers:&lt;/p&gt;

&lt;p&gt;Why PSA replaces PodSecurityPolicies (PSPs)&lt;/p&gt;

&lt;p&gt;How PSA works using namespace labels&lt;/p&gt;

&lt;p&gt;The three enforcement modes: enforce, audit, and warn&lt;/p&gt;

&lt;p&gt;Real-world examples of applying PSA to production and dev environments&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“PSA allows cluster administrators to enforce standardized controls without relying on third-party tools or custom configurations.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Audit Mode:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label namespace &amp;lt;ns-name&amp;gt;l pod-security.kubernetes.io/enforce=restricted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It does not block pod creation.&lt;br&gt;
It does not show warnings to users.&lt;br&gt;
It does log violations in the Kubernetes audit logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enforce Mode (Blocks non-compliant pods):
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label namespace &amp;lt;your-namespace&amp;gt; \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Warn Mode (Allows pods but shows warnings):
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label namespace &amp;lt;your-namespace&amp;gt; \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  To test enforce and warn modes, run a sample workload in a test namespace
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: insecure-pod
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        runAsUser: 0  # Violates restricted policy
        allowPrivilegeEscalation: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
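
&lt;p&gt;For comparison, a pod that satisfies the restricted profile might look like this (the image and name are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  containers:
    - name: nginx
      image: nginxinc/nginx-unprivileged  # runs as a non-root user
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;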



&lt;h2&gt;
  
  
  Note:
&lt;/h2&gt;

&lt;p&gt;PSA applies only to newly created pods; existing workloads are unaffected until they are recreated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Pod Security Admission (PSA) is a powerful built-in Kubernetes feature that replaces the deprecated PodSecurityPolicy (PSP) mechanism. By using namespace-level labels and enforcement modes (enforce, audit, warn), PSA enables cluster administrators to apply consistent, standards-based security controls across workloads—without relying on external tools.&lt;/p&gt;

&lt;p&gt;Implementing PSA helps:&lt;/p&gt;

&lt;p&gt;Strengthen pod-level security posture&lt;/p&gt;

&lt;p&gt;Simplify policy management using native Kubernetes constructs&lt;/p&gt;

&lt;p&gt;Gradually roll out restrictions using audit and warn modes before enforcing&lt;/p&gt;

&lt;p&gt;Whether you're securing production workloads or sandboxing development environments, PSA offers a flexible and transparent way to enforce best practices. As Kubernetes continues to evolve, PSA is the recommended path forward for pod security.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>Karpenter Deployment - on EKS NodeGroup</title>
      <dc:creator>Santhosh S</dc:creator>
      <pubDate>Thu, 25 Sep 2025 11:18:21 +0000</pubDate>
      <link>https://dev.to/santhosh_004/karpenter-deployment-plan-phased-approach-31fc</link>
      <guid>https://dev.to/santhosh_004/karpenter-deployment-plan-phased-approach-31fc</guid>
      <description>&lt;p&gt;Karpenter is an open-source, high-performance Kubernetes cluster autoscaler developed by AWS. Amazon Elastic Kubernetes Service (EKS) provides a powerful and flexible platform for running containerized applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 : Create an IAM Role
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Role Name : KarpenterNodeRole-santhosh" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach the following policies to the role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AmazonEKSWorkerNodePolicy
AmazonEKS_CNI_Policy
AmazonEC2ContainerRegistryReadOnly
AmazonSSMManagedInstanceCore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
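
&lt;p&gt;Worker nodes assume this role via EC2, so the role also needs the standard EC2 trust relationship (a minimal sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;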



&lt;h2&gt;
  
  
  Step 2 : Create a Controller IAM Role
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Role Name : KarpenterControllerRole-santhosh&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Modify the trust relationship, substituting your OIDC provider ID and AWS account ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.eks.ap-south-1.amazonaws.com/id/6B407ED9BFC9CE681546033D7AD4156A"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.ap-south-1.amazonaws.com/id/6B407ED9BFC9CE681546033D7AD4156A:aud": "sts.amazonaws.com",
                    "oidc.eks.ap-south-1.amazonaws.com/id/6B407ED9BFC9CE681546033D7AD4156A:sub": "system:serviceaccount:karpenter:karpenter"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Create a KarpenterControllerPolicy-santhosh policy:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Karpenter",
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ec2:DescribeImages",
        "ec2:RunInstances",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeInstanceTypeOfferings",
        "ec2:DeleteLaunchTemplate",
        "ec2:CreateTags",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateFleet",
        "ec2:DescribeSpotPriceHistory",
        "pricing:GetProducts"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ConditionalEC2Termination",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/karpenter.sh/nodepool": "*"
        }
      }
    },
    {
      "Sid": "PassNodeIAMRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::961489959441:role/MF-MS-PRE-PROD-CLUSTER-NodeInstanceRole"
    },
    {
      "Sid": "EKSClusterEndpointLookup",
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:ap-south-1:961489959441:cluster/MF-MS-PRE-PROD-CLUSTER"
    },
    {
      "Sid": "AllowScopedInstanceProfileCreationActions",
      "Effect": "Allow",
      "Action": [
        "iam:CreateInstanceProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/kubernetes.io/cluster/MF-MS-PRE-PROD-CLUSTER": "owned",
          "aws:RequestTag/topology.kubernetes.io/region": "ap-south-1"
        },
        "StringLike": {
          "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedInstanceProfileTagActions",
      "Effect": "Allow",
      "Action": [
        "iam:TagInstanceProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/MF-MS-PRE-PROD-CLUSTER": "owned",
          "aws:ResourceTag/topology.kubernetes.io/region": "ap-south-1",
          "aws:RequestTag/kubernetes.io/cluster/MF-MS-PRE-PROD-CLUSTER": "owned",
          "aws:RequestTag/topology.kubernetes.io/region": "ap-south-1"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*",
          "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedInstanceProfileActions",
      "Effect": "Allow",
      "Action": [
        "iam:AddRoleToInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteInstanceProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/MF-MS-PRE-PROD-CLUSTER": "owned",
          "aws:ResourceTag/topology.kubernetes.io/region": "ap-south-1"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowInstanceProfileReadActions",
      "Effect": "Allow",
      "Action": "iam:GetInstanceProfile",
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
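
&lt;p&gt;This policy can then be created and attached to the controller role from the CLI. A sketch, assuming the JSON above is saved as &lt;code&gt;controller-policy.json&lt;/code&gt; (file name and account ID are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS_ACCOUNT_ID="123456789012"   # illustrative; replace with your account ID
POLICY_NAME="KarpenterControllerPolicy-santhosh"
ROLE_NAME="KarpenterControllerRole-santhosh"
POLICY_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${POLICY_NAME}"
echo "$POLICY_ARN"
# aws iam create-policy --policy-name "$POLICY_NAME" \
#   --policy-document file://controller-policy.json
# aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn "$POLICY_ARN"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
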



&lt;h2&gt;
  
  
  Step 4: Add tags to subnets and security groups
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-tags --tags "Key=karpenter.sh/discovery,Value=MF-MS-PRE-PROD-CLUSTER" --resources "sg-06ee27bb7de43cf6e"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
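
&lt;p&gt;The heading also covers subnets: Karpenter discovers them through the same &lt;code&gt;karpenter.sh/discovery&lt;/code&gt; tag. A sketch looping over the node group's subnets (the subnet IDs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CLUSTER_NAME="MF-MS-PRE-PROD-CLUSTER"
TAG="Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}"
# Subnet IDs below are placeholders; use the subnets your node group runs in.
for SUBNET_ID in subnet-aaaa1111 subnet-bbbb2222; do
  echo "tagging ${SUBNET_ID} with ${TAG}"
  # aws ec2 create-tags --tags "$TAG" --resources "$SUBNET_ID"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
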



&lt;h2&gt;
  
  
  Step 5: Update the aws-auth ConfigMap
&lt;/h2&gt;

&lt;p&gt;We need to allow nodes that are using the node IAM role we just created to join the cluster. To do that we have to modify the aws-auth ConfigMap in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit configmap aws-auth -n kube-system

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need to add a section to the mapRoles that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- groups:
  - system:bootstrappers
  - system:nodes
  # - eks:kube-proxy-windows
  rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-santhosh
  username: system:node:{{EC2PrivateDNSName}}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full aws-auth configmap should have two groups. One for your Karpenter node role and one for your existing node group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Deploy Karpenter
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install karpenter oci://public.ecr.aws/karpenter/karpenter  --namespace "karpenter" --create-namespace \
    --set "settings.clusterName=santhosh" \
    --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterControllerRole-santhosh" \
    --set controller.resources.requests.cpu=1 \
    --set controller.resources.requests.memory=1Gi \
    --set controller.resources.limits.cpu=1 \
    --set controller.resources.limits.memory=1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[ec2-user@ip-172-31-0-244 ~]$  kubectl get po -A
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
karpenter     karpenter-7d4c9cbd84-vpbfw   1/1     Running   0          29m
karpenter     karpenter-7d4c9cbd84-zjwz4   1/1     Running   0          29m
kube-system   aws-node-889mt               2/2     Running   0          16m
kube-system   aws-node-rnzsk               2/2     Running   0          51m
kube-system   coredns-6c55b85fbb-4cj87     1/1     Running   0          54m
kube-system   coredns-6c55b85fbb-nxwrg     1/1     Running   0          54m
kube-system   kube-proxy-8jmbr             1/1     Running   0          16m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Fetch AMI ID
&lt;/h2&gt;

&lt;p&gt;We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads.&lt;/p&gt;

&lt;p&gt;You can retrieve the image ID of the latest recommended Amazon EKS optimized Amazon Linux AMI with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2-arm64/recommended/image_id --query Parameter.Value --output text

aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2/recommended/image_id --query Parameter.Value --output text

aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2-gpu/recommended/image_id --query Parameter.Value --output text

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
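
&lt;p&gt;The three commands differ only in the AMI-flavor segment of the SSM parameter path, so the lookup can be parameterized; the variable names here are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;K8S_VERSION="1.30"
AMI_FLAVOR="amazon-linux-2"   # or amazon-linux-2-arm64 / amazon-linux-2-gpu
SSM_PARAM="/aws/service/eks/optimized-ami/${K8S_VERSION}/${AMI_FLAVOR}/recommended/image_id"
echo "$SSM_PARAM"
# AMI_ID=$(aws ssm get-parameter --name "$SSM_PARAM" --query Parameter.Value --output text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
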



&lt;h2&gt;
  
  
  Step 8: Create NodeClass with KMS encryption
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: karpenter-core-class
spec:
  amiFamily: AL2023
  role: arn:aws:iam::961489959441:role/Axis-MF-MSIL-PRE-PROD-CLUSTER-NodeInstanceRole
  subnetSelectorTerms:
    - tags:
        kubernetes.io/cluster/Axis-MF-MSIL-PRE-PROD-CLUSTER: owned
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: Axis-MF-MSIL-PRE-PROD-CLUSTER
  amiSelectorTerms:
    - id: ami-060175f7c2d4690ba
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeSize: 50Gi
        volumeType: gp3
        encrypted: true
        kmsKeyID: arn:aws:kms:ap-south-1:3668421:key/c2b329ac-bf-47cd-9e054379b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 9: Create NodePool
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: karpenter-core-pool
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-type
          operator: In
          values:
            - m5.xlarge
            - r5.xlarge
            - m5.2xlarge
            - r5.2xlarge
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: karpenter-core-class
      expireAfter: 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 2m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
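
&lt;p&gt;To confirm the NodePool actually provisions nodes, a common smoke test is a pause-container "inflate" deployment scaled up until it no longer fits on existing nodes. This is a hypothetical example, not part of the original setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFLATE_YAML='apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1'
printf '%s\n' "$INFLATE_YAML" | grep -c 'kind: Deployment'
# printf '%s\n' "$INFLATE_YAML" | kubectl apply -f -
# kubectl scale deployment inflate --replicas=5
# kubectl get nodes -w   # a Karpenter-provisioned node should appear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
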



&lt;h2&gt;
  
  
  🚀 Conclusion: Why Karpenter Stands Out?
&lt;/h2&gt;

&lt;p&gt;Karpenter offers a modern, intelligent approach to Kubernetes node provisioning that significantly enhances cluster performance and operational efficiency. Compared to traditional solutions like Cluster Autoscaler, Karpenter delivers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚙️ Improved Node Scheduling Efficiency: By dynamically selecting the optimal instance types and sizes, Karpenter ensures your workloads run on the most suitable infrastructure, eliminating the rigidity of predefined node groups.&lt;/p&gt;

&lt;p&gt;⚡ Faster Scaling: Karpenter reacts to cluster changes in seconds, enabling rapid scaling during traffic spikes or workload surges, far outpacing the slower response times of Cluster Autoscaler.&lt;/p&gt;

&lt;p&gt;💰 Cost Optimization: With its ability to choose cost-effective instance types and avoid over-provisioning, Karpenter helps organizations reduce cloud spend while maintaining performance.&lt;/p&gt;

&lt;p&gt;🛠️ Simpler Configuration: Developers benefit from a streamlined setup process, as Karpenter removes the need to manage complex node group configurations and handles provisioning automatically.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>karpenter</category>
    </item>
    <item>
      <title>Karpenter Deployment Plan – Phased Approach</title>
      <dc:creator>Santhosh S</dc:creator>
      <pubDate>Thu, 25 Sep 2025 11:15:00 +0000</pubDate>
      <link>https://dev.to/santhosh_004/karpenter-deployment-plan-phased-approach-34j9</link>
      <guid>https://dev.to/santhosh_004/karpenter-deployment-plan-phased-approach-34j9</guid>
      <description>&lt;p&gt;Step 1: Karpenter controller role create and  attach policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Karpenter",
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ec2:DescribeImages",
        "ec2:RunInstances",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeInstanceTypeOfferings",
        "ec2:DeleteLaunchTemplate",
        "ec2:CreateTags",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateFleet",
        "ec2:DescribeSpotPriceHistory",
        "pricing:GetProducts"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ConditionalEC2Termination",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/karpenter.sh/nodepool": "*"
        }
      }
    },
    {
      "Sid": "PassNodeIAMRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::961489959441:role/Axis-MF-MSIL-PRE-PROD-CLUSTER-NodeInstanceRole"
    },
    {
      "Sid": "EKSClusterEndpointLookup",
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:ap-south-1:961489959441:cluster/Axis-MF-MSIL-PRE-PROD-CLUSTER"
    },
    {
      "Sid": "AllowScopedInstanceProfileCreationActions",
      "Effect": "Allow",
      "Action": [
        "iam:CreateInstanceProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/kubernetes.io/cluster/Axis-MF-MSIL-PRE-PROD-CLUSTER": "owned",
          "aws:RequestTag/topology.kubernetes.io/region": "ap-south-1"
        },
        "StringLike": {
          "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedInstanceProfileTagActions",
      "Effect": "Allow",
      "Action": [
        "iam:TagInstanceProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/Axis-MF-MSIL-PRE-PROD-CLUSTER": "owned",
          "aws:ResourceTag/topology.kubernetes.io/region": "ap-south-1",
          "aws:RequestTag/kubernetes.io/cluster/Axis-MF-MSIL-PRE-PROD-CLUSTER": "owned",
          "aws:RequestTag/topology.kubernetes.io/region": "ap-south-1"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*",
          "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedInstanceProfileActions",
      "Effect": "Allow",
      "Action": [
        "iam:AddRoleToInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteInstanceProfile"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/Axis-MF-MSIL-PRE-PROD-CLUSTER": "owned",
          "aws:ResourceTag/topology.kubernetes.io/region": "ap-south-1"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
        }
      }
    },
    {
      "Sid": "AllowInstanceProfileReadActions",
      "Effect": "Allow",
      "Action": "iam:GetInstanceProfile",
      "Resource": "*"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Tag the cluster's security group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-tags --tags "Key=karpenter.sh/discovery,Value=MF-MS-PRE-PROD-CLUSTER" --resources "sg-06ee27bb7de43cf6e"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Jenkins on EKS using EFS</title>
      <dc:creator>Santhosh S</dc:creator>
      <pubDate>Thu, 25 Sep 2025 08:08:21 +0000</pubDate>
      <link>https://dev.to/santhosh_004/jenkins-on-eks-using-efs-21ci</link>
      <guid>https://dev.to/santhosh_004/jenkins-on-eks-using-efs-21ci</guid>
      <description>&lt;p&gt;In this guide, we’ll walk through deploying Jenkins on Amazon EKS with persistent storage backed by AWS EFS using the CSI driver. This setup ensures scalable, durable, and shared storage for Jenkins builds.&lt;/p&gt;

&lt;p&gt;Step 1: Install AWS EFS CSI Driver&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Set Up AWS Resources&lt;/p&gt;

&lt;p&gt;Get VPC ID&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-cluster \
  --name hulk-santhosh-cluster \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text \
  --region ap-south-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get VPC CIDR Range&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-vpcs \
  --vpc-ids vpc-07937adc3227e4b54 \
  --query "Vpcs[].CidrBlock" \
  --output text \
  --region ap-south-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create Security Group&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-security-group \
  --description efs-test-sg \
  --group-name efs-sg \
  --vpc-id vpc-07937adc3227e4b54 \
  --region ap-south-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Authorize Ingress&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress \
  --group-id sg-0be281b6c437376c5 \
  --protocol tcp \
  --port 2049 \
  --cidr 192.168.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 3: Create EFS File System&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
aws efs create-file-system \
  --creation-token eks-efs \
  --region ap-south-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create Mount Target&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws efs create-mount-target \
  --file-system-id fs-04ec113cee81e30b2 \
  --subnet-id subnet-0a6d27e06ff1e24ed \
  --security-groups sg-0be281b6c437376c5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
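
&lt;p&gt;EFS needs one mount target per availability zone the cluster's nodes run in, so the command above is typically repeated per subnet. A sketch (the subnet IDs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FS_ID="fs-04ec113cee81e30b2"
SG_ID="sg-0be281b6c437376c5"
# One mount target per AZ; the subnet IDs here are placeholders.
for SUBNET_ID in subnet-aaaa1111 subnet-bbbb2222; do
  echo "mount target for ${FS_ID} in ${SUBNET_ID}"
  # aws efs create-mount-target --file-system-id "$FS_ID" \
  #   --subnet-id "$SUBNET_ID" --security-groups "$SG_ID"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
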



&lt;p&gt;Step 4: Kubernetes Storage Setup&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-04ec113cee81e30b2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
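
&lt;p&gt;The PV above references &lt;code&gt;storageClassName: efs-sc&lt;/code&gt;, which is not defined elsewhere in these steps. For static EFS provisioning a minimal StorageClass like the following is assumed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SC_YAML='kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com'
printf '%s\n' "$SC_YAML"
# printf '%s\n' "$SC_YAML" | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
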



&lt;p&gt;PersistentVolumeClaim&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 5: RBAC for Jenkins&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
  # Add relevant rules here
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 6: Jenkins Service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  labels:
    app: jenkins
spec:
  type: ClusterIP
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
    - name: slave
      port: 50000
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: jenkins

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 7: Jenkins Deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
      - name: volume-permission-fix
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        - containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-claim

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 8: Jenkins Credentials&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it &amp;lt;jenkins-pod-name&amp;gt; -n jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 9: Service Account Token&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: jenkins-token
  namespace: jenkins
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
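
&lt;p&gt;The credential Jenkins needs is the decoded token from this Secret. A sketch of retrieving it (the kubectl line is the real step; the live lines below only illustrate the base64 decode):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In the cluster:
# kubectl get secret jenkins-token -n jenkins -o jsonpath='{.data.token}' | base64 --decode
# Local illustration of the decode step:
ENCODED=$(printf 'illustrative-token' | base64)
printf '%s' "$ENCODED" | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
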



&lt;p&gt;Step 10: Configure Jenkins Kubernetes Cloud&lt;br&gt;
Kubernetes URL: &lt;a href="https://kubernetes.default.svc.cluster.local" rel="noopener noreferrer"&gt;https://kubernetes.default.svc.cluster.local&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Namespace: jenkins&lt;/p&gt;

&lt;p&gt;Credentials: Service account token&lt;/p&gt;

&lt;p&gt;Jenkins URL: &lt;a href="http://jenkins.jenkins.svc.cluster.local:8080" rel="noopener noreferrer"&gt;http://jenkins.jenkins.svc.cluster.local:8080&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jenkins Tunnel: jenkins.jenkins.svc.cluster.local:50000&lt;/p&gt;

&lt;p&gt;Test connection — it should say Connected to Kubernetes.&lt;/p&gt;

&lt;p&gt;Pod Template for Jenkins Agents&lt;br&gt;
Name: jenkins-agent&lt;/p&gt;

&lt;p&gt;Namespace: jenkins&lt;/p&gt;

&lt;p&gt;Labels: jenkins-agent&lt;/p&gt;

&lt;p&gt;Usage: Only build jobs with matching label&lt;/p&gt;

&lt;p&gt;Container Template&lt;br&gt;
Name: jnlp&lt;/p&gt;

&lt;p&gt;Image: jenkins/inbound-agent:latest&lt;/p&gt;

&lt;p&gt;Working Dir: /home/jenkins/agent&lt;/p&gt;

&lt;p&gt;Allocate pseudo-TTY: &lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
