<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vinod Kumar</title>
    <description>The latest articles on DEV Community by Vinod Kumar (@vinod827).</description>
    <link>https://dev.to/vinod827</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F631801%2F4a443ef4-45e9-4f4a-8638-1c1d3b7fb14c.jpeg</url>
      <title>DEV Community: Vinod Kumar</title>
      <link>https://dev.to/vinod827</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vinod827"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Tue, 01 Jul 2025 01:21:35 +0000</pubDate>
      <link>https://dev.to/vinod827/-10gp</link>
      <guid>https://dev.to/vinod827/-10gp</guid>
      <description>&lt;p&gt;Boosted: &lt;a href="https://dev.to/vinod827/getting-started-with-amazon-eks-auto-mode-simplifying-kubernetes-node-management-h1i"&gt;🧠 Getting Started with Amazon EKS Auto Mode: Simplifying Kubernetes Node Management&lt;/a&gt; (2 min read, by Vinod Kumar)&lt;/p&gt;</description>
      <category>aws</category>
      <category>eks</category>
      <category>containers</category>
      <category>fargate</category>
    </item>
    <item>
      <title>🧠 Getting Started with Amazon EKS Auto Mode: Simplifying Kubernetes Node Management</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Mon, 30 Jun 2025 13:56:50 +0000</pubDate>
      <link>https://dev.to/vinod827/getting-started-with-amazon-eks-auto-mode-simplifying-kubernetes-node-management-h1i</link>
      <guid>https://dev.to/vinod827/getting-started-with-amazon-eks-auto-mode-simplifying-kubernetes-node-management-h1i</guid>
      <description>&lt;p&gt;With Kubernetes adoption growing fast, managing worker nodes, capacity scaling, and right-sizing clusters is becoming increasingly complex. Enter Amazon EKS Auto Mode – a powerful new capability designed to simplify cluster operations by automating infrastructure provisioning and scaling.&lt;/p&gt;

&lt;h2&gt;
  
  
  In this blog, we'll dive into:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What is EKS Auto Mode?&lt;/li&gt;
&lt;li&gt;How it works under the hood&lt;/li&gt;
&lt;li&gt;How to create an EKS Auto Mode cluster&lt;/li&gt;
&lt;li&gt;Deploying workloads on it&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚀 What is Amazon EKS Auto Mode?
&lt;/h2&gt;

&lt;p&gt;Amazon EKS Auto Mode is a fully managed option for running Kubernetes clusters without managing node groups or Fargate profiles yourself. AWS automatically provisions, scales, and maintains EC2-based compute that matches your pod specifications, so you never define worker nodes by hand.&lt;/p&gt;

&lt;p&gt;It’s ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams looking to reduce infrastructure overhead&lt;/li&gt;
&lt;li&gt;Rapid autoscaling workloads&lt;/li&gt;
&lt;li&gt;Cost-efficient, just-in-time compute&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔧 How EKS Auto Mode Works
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36th3kxafk0myp8ginbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36th3kxafk0myp8ginbs.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Behind the scenes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS Auto Mode builds on Karpenter, the open-source node autoscaler originally created by AWS.&lt;/li&gt;
&lt;li&gt;It provisions EC2 compute based on pod resource requests.&lt;/li&gt;
&lt;li&gt;It automatically handles zone selection, instance types, and right-sizing.&lt;/li&gt;
&lt;li&gt;You get ephemeral, AWS-managed nodes with no node-group configuration to maintain.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️ Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI configured&lt;/li&gt;
&lt;li&gt;eksctl installed (brew install eksctl)&lt;/li&gt;
&lt;li&gt;IAM role with EKS and EC2 permissions&lt;/li&gt;
&lt;li&gt;kubectl installed and configured&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  🧪 Creating an EKS Cluster in Auto Mode
&lt;/h2&gt;

&lt;p&gt;eksctl supports Auto Mode through the --enable-auto-mode flag. Here's how to launch a cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster --name eks-auto-cluster \
  --region us-west-2 \
  --auto-mode \
  --kubernetes-version 1.29
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A control plane&lt;/li&gt;
&lt;li&gt;VPC, subnets, security groups&lt;/li&gt;
&lt;li&gt;Auto-managed capacity that scales with your workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No need to define node groups!&lt;/p&gt;
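&lt;p&gt;Once the cluster is ready, update your kubeconfig and make sure kubectl can reach it (the cluster name and region match the example above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --name eks-auto-cluster --region us-west-2
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Don't be surprised if &lt;code&gt;kubectl get nodes&lt;/code&gt; comes back empty at first: in Auto Mode, nodes appear only once workloads need them.&lt;/p&gt;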

&lt;h2&gt;
  
  
  📦 Deploying a Sample App
&lt;/h2&gt;

&lt;p&gt;Let’s deploy a basic NGINX app to test Auto Mode scheduling:&lt;/p&gt;

&lt;p&gt;Create a Deployment YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 As soon as you apply this, EKS Auto Mode will dynamically provision compute based on your resource requests. It can choose optimized EC2 instances behind the scenes.&lt;/p&gt;

&lt;h2&gt;
  
  
  📈 Observe Auto Scaling in Action
&lt;/h2&gt;

&lt;p&gt;You can watch pods being scheduled and nodes being created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -w
kubectl get nodes -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use the EKS console to view events and node activity.&lt;/p&gt;

&lt;h2&gt;
  
  
  💸 Pricing Note
&lt;/h2&gt;

&lt;p&gt;You're billed for the EC2 instances Auto Mode provisions, plus an Auto Mode management fee on those instances (on top of the standard EKS control-plane charge). Spot instances may also be used to optimize cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  📌 Limitations (as of 2025)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Only available in select AWS regions&lt;/li&gt;
&lt;li&gt;Some custom scheduling features (like affinity) might need tweaks&lt;/li&gt;
&lt;li&gt;Works best with recent Kubernetes versions (&amp;gt;=1.28)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧹 Clean Up
&lt;/h2&gt;

&lt;p&gt;To delete the cluster:&lt;br&gt;
&lt;code&gt;eksctl delete cluster --name eks-auto-cluster --region us-west-2&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎉 Conclusion
&lt;/h2&gt;

&lt;p&gt;Amazon EKS Auto Mode is a game-changer for teams who want to focus on workloads, not infrastructure. It delivers Kubernetes scalability with zero node group management. Just define your pods and let AWS handle the rest.&lt;/p&gt;

&lt;p&gt;If you're deploying microservices, running dev/test environments, or just starting with Kubernetes – EKS Auto Mode is worth trying out.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>containers</category>
      <category>fargate</category>
    </item>
    <item>
      <title>Securing Kubernetes Workloads with Falco</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Sun, 11 May 2025 06:49:20 +0000</pubDate>
      <link>https://dev.to/vinod827/securing-kubernetes-workloads-with-falco-1mod</link>
      <guid>https://dev.to/vinod827/securing-kubernetes-workloads-with-falco-1mod</guid>
      <description>&lt;p&gt;In today’s cloud-native world, security is paramount. It’s not enough to secure applications only during deployment; runtime security is crucial. Falco, a cloud-native security tool, helps detect threats in real-time across hosts, containers, Kubernetes, and cloud environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Falco?
&lt;/h2&gt;

&lt;p&gt;Falco is a cloud-native runtime security tool that detects threats in real time across hosts, containers, Kubernetes, and cloud environments.&lt;/p&gt;

&lt;p&gt;Falco uses eBPF to continuously monitor system events (such as Linux syscalls) and alerts your team to suspicious activity: abnormal behaviour, potential security threats, and compliance violations. It's an open-source CNCF project originally created by Sysdig.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Falco?
&lt;/h2&gt;

&lt;p&gt;Falco offers several advantages for runtime security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time threat detection&lt;/strong&gt;: It can detect threats like reverse shells, RBAC abuse, and file access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight and powerful&lt;/strong&gt;: Falco leverages eBPF for efficient monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highly customizable&lt;/strong&gt;: You can tailor Falco rules to your specific needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versatile&lt;/strong&gt;: It supports various event sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy integration&lt;/strong&gt;: Falco integrates seamlessly with alerting systems like Slack and webhooks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Does Falco Work?
&lt;/h2&gt;

&lt;p&gt;Falco continuously monitors its event sources and reports suspicious activity based on the defined Falco rules. When the Falco engine evaluates a rule to true, the event is emitted as &lt;strong&gt;Falco output&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By default, Falco writes these events to standard output; they can also be forwarded to Slack (over HTTP/S), gRPC endpoints, files, and other destinations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jgt8tcu10ogloomzh9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jgt8tcu10ogloomzh9b.png" alt="Image description" width="720" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Falco Event Sources
&lt;/h2&gt;

&lt;p&gt;Falco intelligently monitors events from various sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🧠 &lt;strong&gt;Linux SysCalls&lt;/strong&gt;: Observes low-level system activity using Kernel Modules or eBPF.&lt;/li&gt;
&lt;li&gt;☸️ &lt;strong&gt;Kubernetes Audit Logs&lt;/strong&gt;: Detects unauthorized access attempts and resource manipulation.&lt;/li&gt;
&lt;li&gt;📦 &lt;strong&gt;Container Lifecycle&lt;/strong&gt;: Monitors container start, stop, and execution events.&lt;/li&gt;
&lt;li&gt;☁️ &lt;strong&gt;Cloud Provider Logs&lt;/strong&gt;: Ingests logs from cloud providers like AWS CloudTrail.&lt;/li&gt;
&lt;li&gt;🔧 &lt;strong&gt;Custom Event Sources&lt;/strong&gt;: Supports additional custom event sources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Falco Rules
&lt;/h2&gt;

&lt;p&gt;Falco Rules are the core of its detection mechanism. These YAML-based rules define the suspicious anomalies to detect. The Falco Rule Engine evaluates these rules, raising alerts based on syscalls and metadata. You can use predefined rules or customize them to meet your specific security requirements.&lt;/p&gt;

&lt;p&gt;A Falco rule consists of several components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;rule&lt;/strong&gt;: Name of the rule&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;desc&lt;/strong&gt;: Description of the behavior&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;condition&lt;/strong&gt;: Logical expression based on syscalls and fields&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;output&lt;/strong&gt;: Alert message format&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;priority&lt;/strong&gt;: Severity Level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tags&lt;/strong&gt;: For organizing the rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sample Falco rule:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv6q24j1vegmskumk6ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv6q24j1vegmskumk6ev.png" alt="Image description" width="720" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Falco Outputs
&lt;/h2&gt;

&lt;p&gt;Falco generates outputs that can be used for alerting and further analysis. By default, alerts are written to standard output.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonz2t9nfem0p7wsktdho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonz2t9nfem0p7wsktdho.png" alt="Image description" width="720" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Falcosidekick
&lt;/h2&gt;

&lt;p&gt;Falcosidekick is a companion tool that acts as a central endpoint for Falco instances, forwarding events to external systems. It can run as a daemon process or be deployed in a Kubernetes cluster.&lt;/p&gt;
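&lt;p&gt;On Kubernetes, one common way to deploy Falco together with Falcosidekick and its UI is the official Helm chart (the values below come from the falcosecurity/falco chart; add your own output settings, such as a Slack webhook, as needed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;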

&lt;h2&gt;
  
  
  Falcosidekick UI
&lt;/h2&gt;

&lt;p&gt;Falcosidekick UI provides a centralized, user-friendly dashboard for real-time monitoring and analysis of Falco events. It offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time event monitoring&lt;/li&gt;
&lt;li&gt;Advanced search and filtering options&lt;/li&gt;
&lt;li&gt;Categorized views by priority, source, and rule&lt;/li&gt;
&lt;li&gt;Integration support with third-party services like Slack&lt;/li&gt;
&lt;li&gt;Simplified incident response workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo of Falco
&lt;/h2&gt;

&lt;p&gt;In this demo, a custom Falco rule is configured to detect a pod reading sensitive files in a Kubernetes cluster. When a suspicious pod violates the rule, the Falco engine detects the event and Falcosidekick forwards it to Slack.&lt;br&gt;
All of these events can also be visualized on the falcosidekick-ui dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhurk7lvorgxcw69fby55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhurk7lvorgxcw69fby55.png" alt="Image description" width="720" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The recorded demo is available at the link below, and the source code is in the GitHub repository listed under Reference Links.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://drive.google.com/file/d/1Svv0lSg4_Xgf6cVzdXJeduvUmRqtkQRy/view?pli=1" rel="noopener noreferrer"&gt;https://drive.google.com/file/d/1Svv0lSg4_Xgf6cVzdXJeduvUmRqtkQRy/view?pli=1&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and Challenges of Falco
&lt;/h2&gt;

&lt;p&gt;While powerful, Falco has some limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🧠 &lt;strong&gt;Kernel dependency&lt;/strong&gt;: It relies on the Linux kernel.&lt;/li&gt;
&lt;li&gt;⛔ &lt;strong&gt;Detection, not prevention&lt;/strong&gt;: Falco only detects anomalies, it doesn’t prevent them.&lt;/li&gt;
&lt;li&gt;🔔 &lt;strong&gt;Potentially noisy alerts&lt;/strong&gt;: Rules need careful tuning to avoid excessive alerts.&lt;/li&gt;
&lt;li&gt;🪟 &lt;strong&gt;Limited Windows support&lt;/strong&gt;: Primarily focused on Linux.&lt;/li&gt;
&lt;li&gt;🛠️ &lt;strong&gt;Rule management&lt;/strong&gt;: Custom rule writing requires expertise.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reference Links
&lt;/h2&gt;

&lt;p&gt;Falco Documentation: &lt;a href="https://falco.org/" rel="noopener noreferrer"&gt;https://falco.org&lt;/a&gt;&lt;br&gt;
GitHub Projects: &lt;a href="https://github.com/falcosecurity/falco" rel="noopener noreferrer"&gt;https://github.com/falcosecurity/falco&lt;/a&gt;&lt;br&gt;
Falco Rules: &lt;a href="https://falco.org/docs/concepts/rules/" rel="noopener noreferrer"&gt;https://falco.org/docs/concepts/rules/&lt;/a&gt;&lt;br&gt;
Falco Installation on Kubernetes: &lt;a href="https://falco.org/docs/setup/kubernetes/" rel="noopener noreferrer"&gt;https://falco.org/docs/setup/kubernetes/&lt;/a&gt;&lt;br&gt;
Falcosidekick: &lt;a href="https://github.com/falcosecurity/falcosidekick" rel="noopener noreferrer"&gt;https://github.com/falcosecurity/falcosidekick&lt;/a&gt;&lt;br&gt;
Falcosidekick UI: &lt;a href="https://github.com/falcosecurity/falcosidekick-ui" rel="noopener noreferrer"&gt;https://github.com/falcosecurity/falcosidekick-ui&lt;/a&gt;&lt;br&gt;
Demo Recording &amp;amp; Code: &lt;a href="https://github.com/vinod827/cloudkit-lab/tree/main/iac/k8s/falco" rel="noopener noreferrer"&gt;https://github.com/vinod827/cloudkit-lab/tree/main/iac/k8s/falco&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>falco</category>
      <category>cncf</category>
    </item>
    <item>
      <title>AI Agents and the Future of App Development: A Beginner's Guide (101)</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Sun, 20 Apr 2025 02:55:23 +0000</pubDate>
      <link>https://dev.to/vinod827/ai-agents-and-the-future-of-app-development-a-beginners-guide-101-597c</link>
      <guid>https://dev.to/vinod827/ai-agents-and-the-future-of-app-development-a-beginners-guide-101-597c</guid>
      <description>&lt;p&gt;Software development is evolving—and fast. The latest game-changer? AI agents. These are not just fancy chatbots; they are smart digital coworkers who can think, reason, and act. If you're a developer, tech enthusiast, or just curious about the future of apps, this beginner's guide will walk you through the basics—with no jargon.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Old Way of Building Apps: The 3-Tier Model
&lt;/h2&gt;

&lt;p&gt;Before AI came into play, most apps followed a basic three-part architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: The user interface (web, mobile).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: The logic or brain (business rules, workflows).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;: Where all the data is stored.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0240zv8ub1c6zzyd82tw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0240zv8ub1c6zzyd82tw.png" alt="Image description" width="501" height="615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This model worked well but had a limitation: everything had to be explicitly coded. Apps couldn't "think" or adapt—they just did what they were programmed to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Meet AI Agents: Smarter, More Flexible Workers
&lt;/h2&gt;

&lt;p&gt;AI agents are like intelligent digital interns. They can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand natural language commands.&lt;/li&gt;
&lt;li&gt;Break down vague tasks into concrete steps.&lt;/li&gt;
&lt;li&gt;Use tools (like APIs or databases) to take action.&lt;/li&gt;
&lt;li&gt;Learn and adapt from feedback.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They don’t just follow strict rules—they figure things out, often in creative ways. Give them the right context and tools, and they can handle everything from customer support to DevOps.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Tools: Giving AI Agents Superpowers
&lt;/h2&gt;

&lt;p&gt;To actually perform tasks, AI agents need tools. Think of these as plugins or APIs that allow them to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetch data (from databases or APIs)&lt;/li&gt;
&lt;li&gt;Send notifications (via email, Slack, etc.)&lt;/li&gt;
&lt;li&gt;Perform transactions (process payments, file reports)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nq7dyseejy52rv3rgmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nq7dyseejy52rv3rgmr.png" alt="Image description" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each tool includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A clear purpose (e.g., get today's weather).&lt;/li&gt;
&lt;li&gt;Input/output definitions.&lt;/li&gt;
&lt;li&gt;Boundaries to keep everything safe and controlled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don’t need to rewrite your whole app—just give the agent the tools it needs.&lt;/p&gt;
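&lt;p&gt;Stripped of any particular framework, a tool definition usually boils down to a small schema like this (illustrative only; the names and fields are made up for this example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "get_order_status",
  "description": "Fetch the status of a customer order by its ID",
  "input": { "order_id": "string" },
  "output": { "status": "string", "eta": "string" },
  "constraints": ["read-only", "no payment data in responses"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;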

&lt;blockquote&gt;
&lt;p&gt;🛠️ Want to try building your own AI agent? Frameworks like &lt;a href="https://github.com/langchain-ai/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt; and &lt;a href="https://github.com/langchain-ai/langgraph" rel="noopener noreferrer"&gt;LangGraph&lt;/a&gt; give you building blocks for AI workflows. LangChain connects agents to tools and data. LangGraph lets you design agent-based systems using stateful workflows, memory, retries, and branching logic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. What is MCP (Model Context Protocol)?
&lt;/h2&gt;

&lt;p&gt;If you’re assigning 10, 50, or 100 tools to an AI agent, you need a way to manage the connections. That’s where &lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;MCP is like a universal translator that sits between your AI agent and all its tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezoqu5fkbem0ef2myz2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezoqu5fkbem0ef2myz2i.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect agents to tools and data sources easily.&lt;/li&gt;
&lt;li&gt;Keep things consistent, secure, and reusable.&lt;/li&gt;
&lt;li&gt;Avoid writing custom glue code for every new integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s like USB for AI apps—plug and play.&lt;/p&gt;
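&lt;p&gt;Concretely, an MCP-capable client is usually pointed at its servers through a small configuration file; a typical shape looks like this (field names vary by client, and the server command here is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "orders": {
      "command": "python",
      "args": ["orders_mcp_server.py"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;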

&lt;h2&gt;
  
  
  5. Real-World Use Cases
&lt;/h2&gt;

&lt;p&gt;Here are just a few examples of how AI agents are already transforming apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customer Support&lt;/strong&gt;: Answer questions, fetch order status, escalate issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Finance Apps&lt;/strong&gt;: Monitor spending and suggest savings tips.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the right design, AI agents can truly become part of your team.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Why This Matters: The Future of Apps
&lt;/h2&gt;

&lt;p&gt;With AI agents + tools + MCP, we’re moving toward apps that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are faster and cheaper to build.&lt;/li&gt;
&lt;li&gt;Understand users more naturally.&lt;/li&gt;
&lt;li&gt;Adapt to change without full rewrites.&lt;/li&gt;
&lt;li&gt;Act like collaborative teammates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you’re building a chatbot, customer experience flow, or backend automation, this model is more modular, intelligent, and flexible than traditional development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;AI agents are no longer a futuristic idea—they’re here and evolving fast. Combined with frameworks like &lt;strong&gt;LangChain&lt;/strong&gt; and &lt;strong&gt;LangGraph&lt;/strong&gt;, and unified by MCP, they are changing how we design, build, and operate applications.&lt;/p&gt;

&lt;p&gt;Want to start exploring? Begin by thinking about what tools your app could offer to an agent—and what tasks you'd love to automate or simplify.&lt;/p&gt;

&lt;p&gt;The future of app development isn’t just code. It’s collaboration—between humans and intelligent agents.&lt;/p&gt;

</description>
      <category>agenticai</category>
      <category>mcp</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Secure S3 Access for AWS EKS Pods via IAM Role</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Mon, 31 Mar 2025 09:21:23 +0000</pubDate>
      <link>https://dev.to/vinod827/secure-s3-access-for-aws-eks-pods-via-iam-role-2ena</link>
      <guid>https://dev.to/vinod827/secure-s3-access-for-aws-eks-pods-via-iam-role-2ena</guid>
      <description>&lt;p&gt;The &lt;strong&gt;AWS IAM (Identity &amp;amp; Access Management)&lt;/strong&gt; service allows AWS services to interact with each other based on the policies given to its attached role(s).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nrvr4m9t3wllt3t9hgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nrvr4m9t3wllt3t9hgi.png" alt="AWS IAM" width="355" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also use an &lt;strong&gt;IAM role&lt;/strong&gt; with a &lt;strong&gt;Kubernetes (k8s) native Service Account (SA)&lt;/strong&gt;, which allows Pods running in a Kubernetes cluster or &lt;strong&gt;AWS Elastic Kubernetes Service (EKS)&lt;/strong&gt; to talk to AWS service(s).&lt;br&gt;
In this blog, we will see how to allow a Pod running in AWS EKS to list the objects in an AWS S3 bucket by using an IAM role with a Kubernetes native Service Account.&lt;br&gt;
Go to the AWS S3 service, create a bucket, and then add a few objects to it.&lt;/p&gt;
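&lt;p&gt;If you prefer the CLI, the same can be done with the AWS CLI; this is just a sketch, where &lt;code&gt;k8s-nest&lt;/code&gt; is the bucket name used later in this post and the sample file is purely illustrative:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://k8s-nest --region us-east-1
aws s3 cp ./sample.txt s3://k8s-nest/
aws s3 ls s3://k8s-nest/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;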

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yfddbgekoz2j86ufgv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yfddbgekoz2j86ufgv5.png" alt="Image description" width="720" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w8r0pxdj74cdrb54qa3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w8r0pxdj74cdrb54qa3.png" alt="Image description" width="720" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fvs5u8yp36dmq4oafui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fvs5u8yp36dmq4oafui.png" alt="Image description" width="720" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an Identity Provider:-&lt;/strong&gt;&lt;br&gt;
a) &lt;strong&gt;Copy the OIDC (OpenID Connect) provider URL&lt;/strong&gt; from the existing AWS EKS cluster; for instance, in my case the URL is &lt;a href="https://oidc.eks.us-east-1.amazonaws.com/id/8B7D06AD395F38CE1EE8EF0AF2922255" rel="noopener noreferrer"&gt;https://oidc.eks.us-east-1.amazonaws.com/id/8B7D06AD395F38CE1EE8EF0AF2922255&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe93i2co4ylfwzu4y4urc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe93i2co4ylfwzu4y4urc.png" alt="Image description" width="720" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b) Go to &lt;strong&gt;IAM&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Identity Providers&lt;/strong&gt; and click on the ‘&lt;strong&gt;Add providers&lt;/strong&gt;’ button and then select ‘&lt;strong&gt;OpenID Connect&lt;/strong&gt;’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdn56ndg0xebc2j7mc7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdn56ndg0xebc2j7mc7l.png" alt="Image description" width="720" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste the copied OIDC URL (from EKS) under the ‘&lt;strong&gt;Provider URL&lt;/strong&gt;’ option and click the ‘&lt;strong&gt;Get thumbprint&lt;/strong&gt;’ button and put &lt;code&gt;sts.amazonaws.com&lt;/code&gt; under the ‘&lt;strong&gt;Audience&lt;/strong&gt;’ option as shown below. Give tags as necessary and click the ‘&lt;strong&gt;Add provider&lt;/strong&gt;’ button at the end to add the identity provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqto7nyi6o4etufx73oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqto7nyi6o4etufx73oy.png" alt="Image description" width="720" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an IAM Policy to read objects in the S3 bucket:-&lt;/strong&gt;&lt;br&gt;
Go to IAM Policies and create a custom policy with the following JSON to read objects from the S3 bucket:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 “Version”: “2012–10–17”,
 “Statement”: [
    {
      “Effect”: “Allow”,
      “Action”: [
               “s3:ListBucket”,
               “s3:GetObject”
       ],
       “Resource”: “arn:aws:s3:::k8s-nest”
    }
 ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
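&lt;p&gt;The same policy can also be created from the CLI, assuming the JSON above is saved as &lt;code&gt;s3-read-policy.json&lt;/code&gt;; the policy name here is illustrative:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-policy --policy-name custom-read-s3-bucket-objects-policy \
  --policy-document file://s3-read-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;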



&lt;p&gt;&lt;strong&gt;Create an IAM Role with Trust Relationship with Identity Provider:-&lt;/strong&gt;&lt;br&gt;
Go to IAM and create a new role with the ‘&lt;strong&gt;Web identity&lt;/strong&gt;’ trust type. Select the Identity Provider we created earlier (from the dropdown), set the Audience to &lt;code&gt;sts.amazonaws.com&lt;/code&gt;, and click the ‘&lt;strong&gt;Next: Permissions&lt;/strong&gt;’ button as shown below:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhhciw20ci3u5k2qi6k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhhciw20ci3u5k2qi6k1.png" alt="Image description" width="720" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Attach the previously created custom policy to read the S3 bucket objects:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm15r80vy4wm2zjtxu87s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm15r80vy4wm2zjtxu87s.png" alt="Image description" width="720" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give a name to the role and create it.&lt;/p&gt;

&lt;p&gt;Then open the same role again and edit its trust relationship to make sure that only a &lt;strong&gt;specific Kubernetes Service Account&lt;/strong&gt; can assume this Role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjil0vdrfg6xcslmspkyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjil0vdrfg6xcslmspkyq.png" alt="Image description" width="720" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoe2ua1w4s7kvtl6bcxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoe2ua1w4s7kvtl6bcxf.png" alt="Image description" width="720" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvefsc6wp0d6p0ob7z66i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvefsc6wp0d6p0ob7z66i.png" alt="Image description" width="720" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following trust policy will &lt;strong&gt;only allow&lt;/strong&gt; the Kubernetes service account named ‘&lt;strong&gt;my-sa&lt;/strong&gt;’ (in the namespace ‘&lt;strong&gt;dev&lt;/strong&gt;’) to assume the role ‘&lt;strong&gt;custom-read-s3-bucket-objects&lt;/strong&gt;’ via &lt;strong&gt;AWS STS AssumeRoleWithWebIdentity&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::195725532069:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/8B7D06AD395F38CE1EE8EF0AF2922255"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/8B7D06AD395F38CE1EE8EF0AF2922255:sub": "system:serviceaccount:dev:my-sa"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
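&lt;p&gt;If you prefer the CLI, the trust relationship on the existing role can be updated in one command, assuming the trust policy above is saved as &lt;code&gt;trust-policy.json&lt;/code&gt;:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam update-assume-role-policy --role-name custom-read-s3-bucket-objects \
  --policy-document file://trust-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;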



&lt;p&gt;All configuration on the AWS side is done. Now head to your CLI (Command Line Interface) to create a Service Account and a Pod in the same AWS EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a namespace ‘dev’ in Kubernetes:-&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create ns dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a Service Account named ‘my-sa’:-&lt;/strong&gt;&lt;br&gt;
Create a service account annotated with the IAM Role ARN created earlier, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzak5x74hzv4yuzznv5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzak5x74hzv4yuzznv5f.png" alt="Image description" width="720" height="108"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
 name: my-sa
 namespace: dev
 annotations:
   eks.amazonaws.com/role-arn:  arn:aws:iam::195725532069:role/custom-read-s3-bucket-objects
kubectl create -f my-sa.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3rz0i6k8lkqg7cp6yje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3rz0i6k8lkqg7cp6yje.png" alt="Image description" width="720" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schedule a new Pod using this Service Account:-&lt;/strong&gt;&lt;br&gt;
Instead of the default service account, use the one created above, i.e. &lt;strong&gt;my-sa&lt;/strong&gt;, and attach it to the Pod as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-pod
  name: my-pod
  namespace: dev
spec:
  serviceAccountName: my-sa
  initContainers:
  - image: amazon/aws-cli
    name: my-aws-cli
    command: ['aws', 's3', 'ls', 's3://k8s-nest/']
  containers:
  - image: nginx
    name: my-pod
    ports:
    - containerPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl create -f my-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m2tv38pr7m8aqq6qids.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m2tv38pr7m8aqq6qids.png" alt="Image description" width="720" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now check the logs of the init container. You will notice that the Pod was able to assume the role and communicate with AWS S3 to list all the objects in the bucket, securely and with the &lt;strong&gt;principle of least privilege&lt;/strong&gt;, reducing the blast radius.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs my-pod my-aws-cli -n dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
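&lt;p&gt;As an optional sanity check: when a Pod uses a service account annotated with a role ARN, the EKS pod identity webhook injects the web identity settings into its containers as environment variables. You can confirm this with:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -n dev my-pod -c my-pod -- env | grep AWS_
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see variables such as &lt;code&gt;AWS_ROLE_ARN&lt;/code&gt; and &lt;code&gt;AWS_WEB_IDENTITY_TOKEN_FILE&lt;/code&gt; in the output.&lt;/p&gt;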



&lt;p&gt;Hope you liked this article :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this blog, we learned how to use an AWS IAM Role, create a trust relationship with a Kubernetes Service Account, and use that service account with Pods running inside AWS EKS to make calls to AWS S3 with the principle of least privilege.&lt;br&gt;
Behind the scenes, OIDC federation allows the Pod to assume the IAM role via the Kubernetes Service Account. Kubernetes issues projected service account tokens, which are valid OIDC tokens (JSON Web Tokens, or JWTs): each Pod receives a signed token that AWS STS verifies against the OIDC provider to establish the Pod's identity, exchanging the OIDC token for temporary IAM role credentials using AssumeRoleWithWebIdentity.&lt;br&gt;
As usual, you will find the complete source code in my GitHub repository. Please feel free to fork it and add more IaC (Infrastructure as Code) to it:-&lt;br&gt;
&lt;a href="https://github.com/vinod827/cloudkit-lab/tree/main/iac/k8s/iam-roles-with-k8s-service-account" rel="noopener noreferrer"&gt;https://github.com/vinod827/cloudkit-lab/tree/main/iac/k8s/iam-roles-with-k8s-service-account&lt;/a&gt;&lt;/p&gt;

</description>
      <category>awss3</category>
      <category>aws</category>
      <category>awsiam</category>
      <category>iamrole</category>
    </item>
    <item>
      <title>Enforcing Kubernetes Deployments: A Deep Dive into Policy as Code with Kyverno</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Wed, 11 Sep 2024 16:22:08 +0000</pubDate>
      <link>https://dev.to/vinod827/enforcing-kubernetes-deployments-a-deep-dive-into-policy-as-code-with-kyverno-5ch6</link>
      <guid>https://dev.to/vinod827/enforcing-kubernetes-deployments-a-deep-dive-into-policy-as-code-with-kyverno-5ch6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This is an introductory blog on the Cloud Native Computing Foundation's (CNCF's) incubating project &lt;strong&gt;&lt;a href="https://www.cncf.io/projects/kyverno/" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt;&lt;/strong&gt; (a Greek word meaning ‘&lt;strong&gt;govern&lt;/strong&gt;’). In this blog, we will learn what Kyverno is and how we can leverage it in our Kubernetes cluster to ensure the security and governance of our application deployments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdoiemgkq43h0o0gbfhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdoiemgkq43h0o0gbfhz.png" alt="Image description" width="383" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Authorization in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Before heading to Kyverno, here is a quick refresher on how authorization (or AuthZ) works internally in Kubernetes.&lt;br&gt;
If you are a Kubernetes administrator, you can choose and customize different authorization modes in the Kube API Server configuration. Below it is &lt;strong&gt;AlwaysAllow&lt;/strong&gt; (&lt;code&gt;--authorization-mode=AlwaysAllow&lt;/code&gt;), however we can always change it to a different mode like &lt;strong&gt;AlwaysDeny&lt;/strong&gt;, &lt;strong&gt;Node Authorizer&lt;/strong&gt;, &lt;strong&gt;ABAC&lt;/strong&gt; (attribute-based), &lt;strong&gt;RBAC&lt;/strong&gt; or &lt;strong&gt;Webhook&lt;/strong&gt;. You can read more about each mode &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, but we are interested in the &lt;strong&gt;Webhook&lt;/strong&gt; mode, where authorization works with the help of popular third-party engines like &lt;strong&gt;Kyverno&lt;/strong&gt; or &lt;strong&gt;&lt;a href="https://kubernetes.io/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/" rel="noopener noreferrer"&gt;OPA&lt;/a&gt; (Open Policy Agent)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ge2tzrm9b5ts15fh23h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ge2tzrm9b5ts15fh23h.png" alt="Kube API server configuration" width="763" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;Admission Controller&lt;/strong&gt; is a critical module in Kubernetes that decides how an incoming request from a user or machine is validated before permission is granted. Beyond validation, Admission Controllers are also useful for implementing better security measures to protect the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F776rsm3we2eg5ux61m2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F776rsm3we2eg5ux61m2g.png" alt="Admission Controller" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Admission Controller invokes the &lt;strong&gt;ValidatingAdmissionWebhook&lt;/strong&gt; and &lt;strong&gt;MutatingAdmissionWebhook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Following is the flow when Kyverno is used in the cluster:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfph8skkgkd8edozyp3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfph8skkgkd8edozyp3b.png" alt="Admission Controller using Kyverno" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During these phases (&lt;strong&gt;Mutating&lt;/strong&gt;/&lt;strong&gt;Validating Admission&lt;/strong&gt;), the Admission Controller can consult third-party policy enforcement engines like &lt;strong&gt;Kyverno&lt;/strong&gt; to perform the mutation or validation based on the defined rules (or policies).&lt;/p&gt;
&lt;h2&gt;
  
  
  Kyverno Overview
&lt;/h2&gt;

&lt;p&gt;Kyverno is a Kubernetes-native policy engine designed to validate, mutate, and generate configurations for Kubernetes resources. Unlike other popular enforcement engines like &lt;strong&gt;OPA&lt;/strong&gt;, you do not need to learn a separate language like &lt;strong&gt;Rego&lt;/strong&gt; to write policies. You can define policies (as code) natively in YAML format, just like other Kubernetes resources.&lt;/p&gt;

&lt;p&gt;Kyverno operates as an admission controller, enforcing custom policies when a resource is created, updated, or deleted. You can use Kyverno to enforce security best practices, manage resource quotas, apply custom validation rules, or even automate Kubernetes operations.&lt;/p&gt;
&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu94y88d66t08140ewawn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu94y88d66t08140ewawn.png" alt="Architecture" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Kyverno Policies and Rules
&lt;/h2&gt;

&lt;p&gt;A policy in Kyverno is a &lt;strong&gt;set of rule(s)&lt;/strong&gt; where each rule consists of a &lt;strong&gt;match&lt;/strong&gt; declaration with an optional &lt;strong&gt;exclude&lt;/strong&gt; declaration and only one of &lt;strong&gt;validate&lt;/strong&gt;, &lt;strong&gt;mutate&lt;/strong&gt;, &lt;strong&gt;generate&lt;/strong&gt; or &lt;strong&gt;verifyImages&lt;/strong&gt; as child declarations, as shown below:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfnsrnil5eaxbk8kwm33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfnsrnil5eaxbk8kwm33.png" alt="Image description" width="494" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Kyverno Policy can be applied at cluster level (with kind as &lt;code&gt;ClusterPolicy&lt;/code&gt;) or a namespace level (with kind as &lt;code&gt;Policy&lt;/code&gt;). Once a policy has been applied then Kyverno enforces the best practices ensuring the governance in the cluster during the application deployments.&lt;/p&gt;
&lt;h2&gt;
  
  
  Application and Testing of Kyverno
&lt;/h2&gt;

&lt;p&gt;This sample Kyverno policy is a cluster-level policy (its kind is &lt;code&gt;ClusterPolicy&lt;/code&gt;) which ensures that all Deployments created in the cluster carry the specific label &lt;strong&gt;app&lt;/strong&gt;. If not, the request is rejected with an appropriate error message because &lt;code&gt;validationFailureAction=Enforce&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note: when &lt;code&gt;validationFailureAction=Audit&lt;/code&gt;, Kyverno only warns and does not fail the deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-app-deployment-label
spec:
  validationFailureAction: Enforce 
  rules:
    - name: check-for-label
      match:
        resources:
          kinds:            
            - Deployment
      validate:
        message: "You must have the label, 'app' for all deployments."
        pattern:
          metadata:
            labels:
              app: "?*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is just one simple example; however, you can create more complex policies, such as pulling images only from authorized sources, enforcing a minimum replica count, disallowing the latest tag on images, etc. Refer to the &lt;a href="https://kyverno.io/policies/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h3&gt;
  
  
  Applying the Policy
&lt;/h3&gt;

&lt;p&gt;To apply this policy, you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First install Kyverno in your Kubernetes cluster. Your best friend here is the official installation documentation; head to this &lt;a href="https://kyverno.io/docs/installation/methods/" rel="noopener noreferrer"&gt;link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Apply the policy using kubectl
&lt;code&gt;kubectl apply -f enforce-app-deployment-label.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Verify that the Kyverno cluster policy has been applied successfully
&lt;/li&gt;
&lt;/ol&gt;
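&lt;p&gt;For reference, the installation in step 1 is commonly done with Helm; the commands below follow the official chart repository, but verify them against the installation docs linked above:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;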

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ » kubectl get clusterpolicy                                                                                                                                                                                                                     
NAME                           ADMISSION   BACKGROUND   VALIDATE ACTION   READY   AGE    MESSAGE
enforce-app-deployment-label   true        true         Enforce           True    107s   Ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing the Policy
&lt;/h3&gt;

&lt;p&gt;Let us now test the policy by creating a Deployment without the required label:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: vinod827/myawesomeapp:3.0
        name: my-simple-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you try to create it, the Kube API server will reject the request with an error message because the Deployment is missing the required &lt;code&gt;app&lt;/code&gt; label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ » kubectl create -f deploy.yaml                                                                                                                                                                                                                 
Error from server: error when creating "deploy.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/default/my-app-deploy was blocked due to the following policies

enforce-app-deployment-label:
  check-for-label: 'validation error: You must have the label, ''app'' for all deployments.
    rule check-for-label failed at path /metadata/labels/'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
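&lt;p&gt;To make this Deployment pass the policy, add the required label to the Deployment's own &lt;code&gt;metadata&lt;/code&gt; (the policy pattern checks the Deployment object itself, not just the Pod template). A corrected sketch:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deploy
  labels:
    app: my-app   # required by the ClusterPolicy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: vinod827/myawesomeapp:3.0
        name: my-simple-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;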



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By using Kyverno for webhook-based admission control, Kubernetes administrators can easily define and enforce custom policies on Kubernetes resources. It offers an intuitive, Kubernetes-native approach to managing policies like label enforcement, resource validation, and security policies, all while simplifying governance in cloud-native environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/reference/access-authn-authz/authorization/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kyverno.io/docs/installation/" rel="noopener noreferrer"&gt;https://kyverno.io/docs/installation/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kyverno</category>
      <category>aws</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Harnessing Apache Kafka on Kubernetes with Strimzi</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Thu, 29 Aug 2024 13:40:46 +0000</pubDate>
      <link>https://dev.to/vinod827/harnessing-apache-kafka-on-kubernetes-with-strimzi-5fjg</link>
      <guid>https://dev.to/vinod827/harnessing-apache-kafka-on-kubernetes-with-strimzi-5fjg</guid>
      <description>&lt;p&gt;&lt;strong&gt;Strimzi&lt;/strong&gt; is a CNCF incubating project (moved on Feb 08, 2024) that allows us to run Apache Kafka on the Kubernetes cluster.&lt;br&gt;
The Apache Kafka is a leading tool for building the real-time, event-driven applications. You can also develop and run highly scalable &amp;amp; fault-tolerant data streaming, and data pipelines using Kafka. However, deploying and managing the Kafka infrastructure is always a bit tricky &amp;amp; complicated and this is where Strimzi comes into the picture where it simplifies the whole experience of Kafka on Kubernetes.&lt;br&gt;
In this blog, we will understand what Strimzi is all about and how it is helpful to both the developers &amp;amp; the organization in our Engineering domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3ugmhdr36ommtg72o0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3ugmhdr36ommtg72o0e.png" alt="Strimzi logo" width="601" height="157"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Strimzi to run Kafka on Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Strimzi enables running an Apache Kafka cluster on Kubernetes with various deployment configurations. It uses the operator pattern, which makes it easy to run Kafka on Kubernetes.&lt;br&gt;
As a developer, you can leverage Strimzi to run a local Kafka cluster on Minikube for end-to-end application development without relying on real Kafka servers running in the cloud or on-premises, saving potential cost for the company.&lt;br&gt;
For production, you can customize the Strimzi configuration as per your requirements: for instance, rack awareness to distribute Kafka brokers across AZs (Availability Zones) in the cloud, or Kubernetes-native taints and tolerations to run Kafka on dedicated nodes. Kafka can also be exposed outside Kubernetes using NodePort, LoadBalancer, or Ingress, all of which can be secured using TLS certificates.&lt;br&gt;
Strimzi's Kubernetes-native management extends beyond the brokers to include Kafka topics, users, and Kafka Connect through Custom Resources. This allows you to manage complete Kafka applications using your existing Kubernetes processes and tools like kubectl.&lt;/p&gt;
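&lt;p&gt;For example, once Strimzi is installed (assuming, as set up later in this post, a namespace named &lt;code&gt;kafka&lt;/code&gt; and a cluster named &lt;code&gt;my-cluster&lt;/code&gt;), the Kafka resources can be inspected with the same kubectl workflow as any other Kubernetes object:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the Kafka cluster and its managed topics and users
kubectl get kafka -n kafka
kubectl get kafkatopics,kafkausers -n kafka

# Inspect the cluster definition like any other resource
kubectl describe kafka my-cluster -n kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;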
&lt;h2&gt;
  
  
  Core Principles of Strimzi
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High Security:&lt;/strong&gt; Ensures a secure Kafka cluster with support for TLS, certificate management, and OAuth authentication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity and Flexibility:&lt;/strong&gt; Offers a straightforward yet highly configurable setup, enabling Kafka access through Kubernetes-native NodePort, Ingress, and LoadBalancer options&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated Node Deployment:&lt;/strong&gt; Runs Kafka on dedicated nodes within a Kubernetes cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Integration:&lt;/strong&gt; Allows interaction and management of the cluster using Kubernetes-native tools like kubectl, operator-based approaches, and GitOps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Integration:&lt;/strong&gt; Integrates smoothly with other projects like OpenTelemetry, Prometheus, OPA, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Architecture - Strimzi Operators
&lt;/h2&gt;

&lt;p&gt;Strimzi provides three operators to manage Kafka on Kubernetes, broadly classified as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Operator&lt;/strong&gt; - This is the main operator, responsible for deploying the Kafka cluster and managing broker configurations. It also manages Kafka version upgrades by rolling the newer version out one broker at a time, and supports other operands such as Kafka Connect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topic Operator&lt;/strong&gt; - Using the KafkaTopic custom resource, this operator creates, deletes, and manages the topics that users define on the Kafka cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Operator&lt;/strong&gt; - Manages the cluster users and their ACL (Access Control List) permissions on topics using the KafkaUser custom resource.&lt;/li&gt;
&lt;/ul&gt;
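&lt;p&gt;As an illustration, a topic managed by the Topic Operator is declared as a KafkaTopic custom resource. Here is a minimal sketch (the topic name and settings are illustrative; the &lt;code&gt;strimzi.io/cluster&lt;/code&gt; label must match your Kafka cluster's name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka cluster name
spec:
  partitions: 3
  replicas: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;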

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0qr5xnail0z86tvr842.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0qr5xnail0z86tvr842.png" alt="Strimzi Architecture" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Advantages of using Strimzi
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local Development Support:&lt;/strong&gt; Developers can use Strimzi with Minikube on their local machines to quickly set up a Kafka cluster for development and testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Security:&lt;/strong&gt; Strimzi offers TLS encryption and strong authentication/authorization to safeguard data streams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exceptional Performance:&lt;/strong&gt; Kafka, backed by Strimzi, delivers high throughput and low latency, enabling efficient real-time data processing and analytics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robust Data Handling:&lt;/strong&gt; Strimzi ensures data resilience with features like message ordering, replay capabilities, and message compaction, providing reliable data storage and processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Deployment:&lt;/strong&gt; Strimzi streamlines Kafka cluster management with custom resources, allowing easy configuration through YAML files. This approach speeds up deployment and minimizes errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability and Reliability:&lt;/strong&gt; Leveraging Kafka's scalability and fault tolerance, Strimzi supports seamless data streaming as your infrastructure grows.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Setting up Kafka using Strimzi on Minikube
&lt;/h2&gt;

&lt;p&gt;As prerequisites, you need Docker Desktop and Minikube configured on your local machine. First start Minikube using the &lt;code&gt;minikube start&lt;/code&gt; command on the terminal, and then follow the steps below to set up Kafka on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Create a new Kubernetes namespace for Kafka deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace kafka 
kubectl config set-context --current --namespace=kafka # To default to kafka ns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Deploy the Strimzi operator on this newly created namespace, kafka.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Now, deploy the Apache Kafka Cluster using the Strimzi CRD (Custom Resource Definition)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://strimzi.io/examples/latest/kafka/kraft/kafka-single-node.yaml -n kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
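&lt;p&gt;The cluster takes a minute or two to come up. You can optionally block until it is ready (this assumes the single-node manifest above, which names the cluster &lt;code&gt;my-cluster&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;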



&lt;p&gt;To verify the Kafka setup on Minikube, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ » kubectl get po -n kafka                                                                                                                                                                                       vinod827@Vinods-MacBook-Pro NAME                                          READY   STATUS    RESTARTS   AGE kafka-consumer                                1/1     Running   0          4m33s my-cluster-dual-role-0                        1/1     Running   0          7m22s my-cluster-entity-operator-6b5c9f5764-s5xbc   2/2     Running   0          6m58s strimzi-cluster-operator-865f986d89-tplcs     1/1     Running   0          8m47s                                                                                                                                                                                                              vinod827@Vinods-MacBook-Pro ~ » kubectl get kafka -n kafka                                                                                                                                                                                    vinod827@Vinods-MacBook-Pro NAME         DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS   READY   METADATA STATE   WARNINGS my-cluster                                                  True    KRaft (base) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila! It is all done in a very simple manner.&lt;/p&gt;

&lt;p&gt;Let's test the connectivity by sending and receiving a message from this Kafka cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Producer, sending a message:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ » kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.43.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic If you don't see a command prompt, try pressing enter. &amp;gt;hello, Vinod! Demo for Strimzi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Consumer, receiving the message:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ » kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.43.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning If you don't see a command prompt, try pressing enter. hello, Vinod! Demo for Strimzi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Strimzi simplifies deploying and managing Apache Kafka on Kubernetes, making it easier for organizations and developers to handle complex data streaming needs. Its Kubernetes-native approach enhances Kafka's flexibility, scalability, and reliability, whether for local development or large-scale production. By embracing Strimzi, you can harness the full power of Kafka on Kubernetes, ensuring your data infrastructure is both robust and future-proof. Start exploring Strimzi today to see how it can transform your Kafka deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://strimzi.io/" rel="noopener noreferrer"&gt;https://strimzi.io/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://strimzi.io/docs/operators/latest/overview" rel="noopener noreferrer"&gt;https://strimzi.io/docs/operators/latest/overview&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>strimzi</category>
      <category>cncf</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Creating custom VPC on AWS using OpenTofu</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Mon, 13 May 2024 02:36:58 +0000</pubDate>
      <link>https://dev.to/vinod827/creating-custom-vpc-on-aws-using-opentofu-3760</link>
      <guid>https://dev.to/vinod827/creating-custom-vpc-on-aws-using-opentofu-3760</guid>
      <description>&lt;p&gt;OpenTofu is a Linux Foundation project and a fully open-source Infrastructure as Code tool, an alternative to the popular Terraform. This essentially means it natively supports Terraform’s HCL (HashiCorp Configuration Language) for writing infrastructure as code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovt31e8aowdlfg2gvc0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovt31e8aowdlfg2gvc0t.png" alt="OpenTofu"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we will see how we can use OpenTofu as an Infrastructure as Code (IaC) tool to provision a custom Virtual Private Cloud (VPC) on Amazon Web Services.&lt;/p&gt;

&lt;p&gt;Following is the internal architecture of our custom VPC that we are going to provision on AWS using the OpenTofu:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9xt1hgq3jlhdt8fd879.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9xt1hgq3jlhdt8fd879.png" alt="AWS Virtual Private Cloud (Custom) in Northern Virginia Region"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, let us first compare OpenTofu with other popular IaC tools like AWS CloudFormation and Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsrbj9smne9gqb2vs4fb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsrbj9smne9gqb2vs4fb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now let us go ahead and create the VPC using OpenTofu.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1.&lt;/strong&gt; Install OpenTofu in your system first.&lt;/p&gt;

&lt;p&gt;You can execute the following commands in the macOS terminal to install the OpenTofu binary. For other operating systems, refer to the official OpenTofu installation documentation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

brew update
brew install opentofu


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
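&lt;p&gt;You can verify the installation afterwards by printing the installed version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tofu version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;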

&lt;p&gt;&lt;strong&gt;Step 2.&lt;/strong&gt; Set up the AWS provider configuration&lt;/p&gt;

&lt;p&gt;Create a file, say &lt;code&gt;00_provider.tf&lt;/code&gt;, and copy the following code. Here we use a variable for the AWS region, defaulting to us-east-1 (Northern Virginia), and declare AWS as the required provider. Also, to connect to our account, we map the provider to the profile named myaws (defined in the local &lt;code&gt;$HOME/.aws/credentials&lt;/code&gt; file).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

variable "aws_region" {
    default = "us-east-1"
    description = "AWS region where the resources will be provisioned"
}


# Configure the AWS Provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}

# Configure region and profile
provider "aws" {
  region = var.aws_region
  profile = "myaws"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3.&lt;/strong&gt; Create a custom VPC configuration and save it in a file, say &lt;code&gt;01_vpc.tf&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_vpc" "mycustomvpc" {
    cidr_block = "10.0.0.0/16"
    enable_dns_support = true
    enable_dns_hostnames = true

    tags = {
        "owner" = "vinod"
        "Name" = "my custom VPC"
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
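&lt;p&gt;Optionally, you can expose useful attributes as outputs so they are printed after &lt;code&gt;tofu apply&lt;/code&gt;. The following is a hypothetical &lt;code&gt;02_outputs.tf&lt;/code&gt;, assuming the resource names used in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  value       = aws_vpc.mycustomvpc.id
  description = "ID of the custom VPC"
}

output "vpc_cidr" {
  value       = aws_vpc.mycustomvpc.cidr_block
  description = "CIDR block of the custom VPC"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;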

&lt;p&gt;&lt;strong&gt;Step 4.&lt;/strong&gt; Create Internet Gateway and attach it to the VPC&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_internet_gateway" "igw" { 
    vpc_id = aws_vpc.mycustomvpc.id
    tags = {
        "owner" = "vinod"
        "Name" = "IGW"
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 5.&lt;/strong&gt; Create Subnets for the VPC&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_subnet" "private-us-east-1a" {
  vpc_id     = aws_vpc.mycustomvpc.id
  cidr_block = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    "subnet" = "private-us-east-1a"
    "Name" = "Private Subnet"
  }
}

resource "aws_subnet" "private-us-east-1b" {
  vpc_id     = aws_vpc.mycustomvpc.id
  cidr_block = "10.0.2.0/24"
  availability_zone = "us-east-1b"

  tags = {
    "subnet" = "private-us-east-1b"
    "Name" = "Private Subnet"
  }
}

resource "aws_subnet" "public-us-east-1a" {
  vpc_id     = aws_vpc.mycustomvpc.id
  cidr_block = "10.0.3.0/24"
  availability_zone = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    "subnet" = "public-us-east-1a"
    "Name" = "Public Subnet"
  }
}

resource "aws_subnet" "public-us-east-1b" {
  vpc_id     = aws_vpc.mycustomvpc.id
  cidr_block = "10.0.4.0/24"
  availability_zone = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    "subnet" = "public-us-east-1b"
    "Name" = "Public Subnet"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I have created 4 subnets (2 private and 2 public).&lt;/p&gt;
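&lt;p&gt;After applying, the four subnets can be verified with the AWS CLI (assuming the same myaws profile; replace the placeholder with the VPC ID from your apply output):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=&amp;lt;your-vpc-id&amp;gt;" \
  --query "Subnets[].{AZ:AvailabilityZone,CIDR:CidrBlock,Public:MapPublicIpOnLaunch}" \
  --profile myaws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;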

&lt;p&gt;&lt;strong&gt;Step 6.&lt;/strong&gt; Create a NAT Gateway and EIP configuration&lt;/p&gt;

&lt;p&gt;Create a NAT gateway and attach it to a public subnet. The NAT Gateway allows instances running in the private subnets to reach the public internet for operating system and other software patch updates.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_eip" "nat" {
  vpc = true # deprecated in AWS provider v5; prefer domain = "vpc"

  tags = {
    "Name" = "EIP"
    "Owner" = "Vinod"
  }

}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public-us-east-1a.id

  tags = {
    "Name" = "NAT Gateway"
    "Owner" = "Vinod"
  }

  # To ensure proper ordering, it is recommended to add an explicit dependency
  # on the Internet Gateway for the VPC.
  depends_on = [aws_internet_gateway.igw]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 7.&lt;/strong&gt; Create route configuration and its association with subnets&lt;/p&gt;

&lt;p&gt;Create two route tables (one private and one public) with default routes to the NAT Gateway and the Internet Gateway, respectively. Then associate them with their respective private and public subnets.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_route_table" "privateroute" {
  vpc_id = aws_vpc.mycustomvpc.id

  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }

  tags = {
    Name = "private"
  }
}

resource "aws_route_table" "publicroute" {
  vpc_id = aws_vpc.mycustomvpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = "public"
  }
}

resource "aws_route_table_association" "privateassociation_a" {
  subnet_id      = aws_subnet.private-us-east-1a.id
  route_table_id = aws_route_table.privateroute.id
}
resource "aws_route_table_association" "privateassociation_b" {
  subnet_id      = aws_subnet.private-us-east-1b.id
  route_table_id = aws_route_table.privateroute.id
}
resource "aws_route_table_association" "publicassociation_a" {
  subnet_id      = aws_subnet.public-us-east-1a.id
  route_table_id = aws_route_table.publicroute.id
}
resource "aws_route_table_association" "publicassociation_b" {
  subnet_id      = aws_subnet.public-us-east-1b.id
  route_table_id = aws_route_table.publicroute.id
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Initialize the tofu project to install all dependencies, modules, etc. by executing the following in the same directory where all the above .tf files are present:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

tofu init


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To validate our configuration and do a dry run (without actually provisioning any resources), execute:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

tofu validate
tofu plan


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will output a summary plan of our change, like the one below, for us to review:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # aws_eip.nat will be created
  + resource "aws_eip" "nat" {
      + allocation_id        = (known after apply)
      + arn                  = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + ptr_record           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name"  = "EIP"
          + "Owner" = "Vinod"
        }
      + tags_all             = {
          + "Name"  = "EIP"
          + "Owner" = "Vinod"
        }
      + vpc                  = true
    }

  # aws_internet_gateway.igw will be created
  + resource "aws_internet_gateway" "igw" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Name"  = "IGW"
          + "owner" = "vinod"
        }
      + tags_all = {
          + "Name"  = "IGW"
          + "owner" = "vinod"
        }
      + vpc_id   = (known after apply)
    }

  # aws_nat_gateway.nat will be created
  + resource "aws_nat_gateway" "nat" {
      + allocation_id                      = (known after apply)
      + association_id                     = (known after apply)
      + connectivity_type                  = "public"
      + id                                 = (known after apply)
      + network_interface_id               = (known after apply)
      + private_ip                         = (known after apply)
      + public_ip                          = (known after apply)
      + secondary_private_ip_address_count = (known after apply)
      + secondary_private_ip_addresses     = (known after apply)
      + subnet_id                          = (known after apply)
      + tags                               = {
          + "Name"  = "NAT Gateway"
          + "Owner" = "Vinod"
        }
      + tags_all                           = {
          + "Name"  = "NAT Gateway"
          + "Owner" = "Vinod"
        }
    }

  # aws_route_table.privateroute will be created
  + resource "aws_route_table" "privateroute" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + carrier_gateway_id         = ""
              + cidr_block                 = "0.0.0.0/0"
              + core_network_arn           = ""
              + destination_prefix_list_id = ""
              + egress_only_gateway_id     = ""
              + gateway_id                 = ""
              + ipv6_cidr_block            = ""
              + local_gateway_id           = ""
              + nat_gateway_id             = (known after apply)
              + network_interface_id       = ""
              + transit_gateway_id         = ""
              + vpc_endpoint_id            = ""
              + vpc_peering_connection_id  = ""
            },
        ]
      + tags             = {
          + "Name" = "private"
        }
      + tags_all         = {
          + "Name" = "private"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table.publicroute will be created
  + resource "aws_route_table" "publicroute" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + carrier_gateway_id         = ""
              + cidr_block                 = "0.0.0.0/0"
              + core_network_arn           = ""
              + destination_prefix_list_id = ""
              + egress_only_gateway_id     = ""
              + gateway_id                 = (known after apply)
              + ipv6_cidr_block            = ""
              + local_gateway_id           = ""
              + nat_gateway_id             = ""
              + network_interface_id       = ""
              + transit_gateway_id         = ""
              + vpc_endpoint_id            = ""
              + vpc_peering_connection_id  = ""
            },
        ]
      + tags             = {
          + "Name" = "public"
        }
      + tags_all         = {
          + "Name" = "public"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table_association.privateassociation_a will be created
  + resource "aws_route_table_association" "privateassociation_a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.privateassociation_b will be created
  + resource "aws_route_table_association" "privateassociation_b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.publicassociation_a will be created
  + resource "aws_route_table_association" "publicassociation_a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.publicassociation_b will be created
  + resource "aws_route_table_association" "publicassociation_b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_subnet.private-us-east-1a will be created
  + resource "aws_subnet" "private-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"   = "Private Subnet"
          + "subnet" = "private-us-east-1a"
        }
      + tags_all                                       = {
          + "Name"   = "Private Subnet"
          + "subnet" = "private-us-east-1a"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.private-us-east-1b will be created
  + resource "aws_subnet" "private-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.2.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"   = "Private Subnet"
          + "subnet" = "private-us-east-1b"
        }
      + tags_all                                       = {
          + "Name"   = "Private Subnet"
          + "subnet" = "private-us-east-1b"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1a will be created
  + resource "aws_subnet" "public-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.3.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"   = "Public Subnet"
          + "subnet" = "public-us-east-1a"
        }
      + tags_all                                       = {
          + "Name"   = "Public Subnet"
          + "subnet" = "public-us-east-1a"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1b will be created
  + resource "aws_subnet" "public-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.4.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"   = "Public Subnet"
          + "subnet" = "public-us-east-1b"
        }
      + tags_all                                       = {
          + "Name"   = "Public Subnet"
          + "subnet" = "public-us-east-1b"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_vpc.mycustomvpc will be created
  + resource "aws_vpc" "mycustomvpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name"  = "my custom VPC"
          + "owner" = "vinod"
        }
      + tags_all                             = {
          + "Name"  = "my custom VPC"
          + "owner" = "vinod"
        }
    }

Plan: 14 to add, 0 to change, 0 to destroy.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, execute the following command to create the custom VPC on AWS:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

tofu apply



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will need to confirm with &lt;code&gt;yes&lt;/code&gt; when prompted on the terminal. If you wish to skip that prompt, use the &lt;code&gt;--auto-approve&lt;/code&gt; flag as shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

tofu apply --auto-approve


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Voilà! It's all done :-)&lt;/p&gt;

&lt;p&gt;All your custom VPC resources will be created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1t79ezw4fvzbox4thv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1t79ezw4fvzbox4thv6.png" alt="AWS VPC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you wish to delete all the resources of the custom VPC, then execute:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

tofu destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this blog, we have seen what OpenTofu is, how it compares as an open-source project with other popular IaC tools, and how we can install it on our system and use it to create a custom VPC on Amazon Web Services.&lt;/p&gt;

&lt;p&gt;Hope you like the article. Please do share your feedback.&lt;/p&gt;

&lt;p&gt;As always, you will find all the source code used in this blog as a reference at this GitHub project. You can star the repository to get all the updates on this active project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/vinod827/k8s-nest/tree/main/iac/aws/terraform/creating-custom-vpc" rel="noopener noreferrer"&gt;https://github.com/vinod827/k8s-nest/tree/main/iac/aws/terraform/creating-custom-vpc&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/vinod827/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://twitter.com/vinod827" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>opentofu</category>
      <category>infrastructureascode</category>
      <category>aws</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Building a Custom VPC and EKS Cluster on AWS with Terraform</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Wed, 01 May 2024 11:54:40 +0000</pubDate>
      <link>https://dev.to/vinod827/building-a-custom-vpc-and-eks-cluster-on-aws-with-terraform-4e16</link>
      <guid>https://dev.to/vinod827/building-a-custom-vpc-and-eks-cluster-on-aws-with-terraform-4e16</guid>
      <description>&lt;p&gt;In today’s rapidly evolving cloud infrastructure landscape, managing resources efficiently and securely is paramount. Infrastructure as Code (IaC) tools like Terraform empower developers and operators to automate the provisioning and management of cloud resources. In this tech blog, we will delve into how Terraform can be utilized to create a custom Virtual Private Cloud (VPC) and an Amazon Elastic Kubernetes Service (EKS) cluster atop it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Configuration
&lt;/h2&gt;

&lt;p&gt;You can visit the HashiCorp site to install Terraform on your machine, depending on your operating system.&lt;/p&gt;

&lt;p&gt;In this blog, we are going to provision a custom VPC with a Security Group and then provision an EKS (Elastic Kubernetes Service) cluster with autoscaling on AWS.&lt;/p&gt;

&lt;p&gt;Now, let’s take a closer look at the Terraform script used to orchestrate our infrastructure.&lt;/p&gt;

&lt;p&gt;It is always recommended to split these resources into multiple .tf (Terraform) files, as they are easier to manage and debug later.&lt;/p&gt;
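&lt;p&gt;One such layout, matching the files created later in this blog, could be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── variables.tf   # input variables and AWS provider
├── vpc.tf         # custom VPC and security group
├── eks.tf         # EKS cluster and managed node groups
├── outputs.tf     # values to refer to after apply
└── versions.tf    # required provider versions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;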

&lt;p&gt;&lt;strong&gt;Defining variables&lt;/strong&gt; — Variables like the EKS cluster version (1.29), the VPC CIDR range, and the AWS region can be defined in a &lt;code&gt;variables.tf&lt;/code&gt; file and used across the other Terraform configuration files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Variables
variable "eks_version" {
  default     = 1.29
  description = "EKS version"
}

variable "vpc_cidr" {
  default     = "10.0.0.0/16"
  description = "CIDR range of the VPC"
}

variable "aws_region" {
  default = "us-east-1"
  description = "AWS Region"
}

# Configure the AWS Provider
provider "aws" {
  region = var.aws_region
  profile = "myaws"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Custom VPC and Security Group&lt;/strong&gt; — The script starts by defining a custom VPC using the terraform-aws-modules/vpc/aws module. It specifies VPC characteristics like CIDR block, availability zones, public and private subnets, and DNS settings. An AWS security group is created to manage inbound and outbound traffic for EKS worker nodes. Ingress and egress rules are defined to control traffic flow. Copy the below script into &lt;code&gt;vpc.tf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## VPC setup
data "aws_availability_zones" "available" {}

locals {
  cluster_name = "myekscluster-${random_string.suffix.result}"
}

resource "random_string" "suffix" {
  length  = 9
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.7.0"

  name                 = "myvpc-for-eks"
  cidr                 = var.vpc_cidr
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "owner" = "Vinod"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }

}

## Security group setup
resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group_rule" "all_worker_mgmt_ingress" {
  description       = "Allow inbound traffic from eks"
  from_port         = 0
  protocol          = "-1"
  to_port           = 0
  security_group_id = aws_security_group.all_worker_mgmt.id
  type              = "ingress"
  cidr_blocks = [
    "10.0.0.0/8",
    "172.16.0.0/12",
    "192.168.0.0/16",
  ]
}

resource "aws_security_group_rule" "all_worker_mgmt_egress" {
  description       = "Allow outbound traffic to anywhere"
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.all_worker_mgmt.id
  to_port           = 0
  type              = "egress"
  cidr_blocks       = ["0.0.0.0/0"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EKS Cluster&lt;/strong&gt; — The EKS cluster is provisioned using the terraform-aws-modules/eks/aws module. A file &lt;code&gt;eks.tf&lt;/code&gt; can be created with the below content, specifying the cluster name, version, subnet IDs, and other configurations such as managed node groups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## EKS setup with autoscaling on worker nodes
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = var.eks_version
  subnet_ids      = module.vpc.private_subnets

  enable_irsa = true

  tags = {
    cluster = "demo-cluster"
  }

  vpc_id = module.vpc.vpc_id

  eks_managed_node_group_defaults = {
    ami_type               = "AL2_x86_64"
    instance_types         = ["t3.medium"]
    vpc_security_group_ids = [aws_security_group.all_worker_mgmt.id]
  }

  eks_managed_node_groups = {
    node_group = {
      min_size     = 1
      max_size     = 3
      desired_size = 2
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Outputs&lt;/strong&gt; — The outputs are optional but very useful for referring later to the resources created through Terraform, like the cluster endpoint, cluster id, etc. They can be defined with the following contents in an &lt;code&gt;outputs.tf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Outputs
output "cluster_id" {
  description = "EKS cluster id"
  value       = module.eks.cluster_id
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.aws_region
}

output "oidc_provider_arn" {
  description = "ARN of OIDC Provider"
  value = module.eks.oidc_provider_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, you need a &lt;code&gt;versions.tf&lt;/code&gt; file with the following script:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~&amp;gt; 3.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "&amp;gt;=2.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~&amp;gt; 2.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~&amp;gt; 3.1.0"
    }
    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~&amp;gt; 2.2.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have all these Terraform files ready, you can run the following command to initialize the Terraform provider plugins (AWS in this example) and modules.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;br&gt;
Perform a dry run using the following command to see all the changes that would be applied:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And then execute the following to finally create the resources on AWS:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply --auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By leveraging Terraform, we have automated the creation of a custom VPC and an EKS cluster, simplifying infrastructure management and ensuring consistency across environments. This script serves as a foundation for building scalable and resilient Kubernetes clusters on AWS. Whether you’re deploying a development sandbox or a production-grade environment, Terraform provides the flexibility and control needed to meet your requirements.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>infrastructureascode</category>
      <category>terraform</category>
      <category>eks</category>
    </item>
    <item>
      <title>Designing a scalable Webhook using AWS Serverless Stack</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Sun, 03 Mar 2024 14:08:59 +0000</pubDate>
      <link>https://dev.to/vinod827/designing-a-scalable-webhook-using-aws-serverless-stack-hd5</link>
      <guid>https://dev.to/vinod827/designing-a-scalable-webhook-using-aws-serverless-stack-hd5</guid>
      <description>&lt;p&gt;We have often used Webhook a lot during our interactions with the systems unknowingly but do we know what exactly a Webhook is and how it is better than other system design patterns like short/long polling?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmykyqny8ybudxxlah5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmykyqny8ybudxxlah5d.png" alt="Webhook" width="360" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Webhook is an HTTP/S call triggered between two systems: the source system fires a specific event, which leads to a notification being delivered to the target application. A very common example is when you swipe your credit card at a merchant for a Point of Sale (POS) transaction, and you receive a notification on your phone about the transaction you made.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ajmsgxi7wxz15yhd8hr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ajmsgxi7wxz15yhd8hr.png" alt="A typical point-of-sale transaction using a Webhook Pattern" width="720" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now think of the same Point of Sale (POS) scenario built on a short/long polling pattern, where the mobile application or client (in the above example) has to constantly poll your bank account for the latest transactions. This leads to wasted compute power, costing the company a huge amount of money.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of using Webhooks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Realtime communication&lt;/li&gt;
&lt;li&gt;Loose-coupled architecture&lt;/li&gt;
&lt;li&gt;No compute resource wastage unlike in the case of short/long polling&lt;/li&gt;
&lt;li&gt;Encryption of the communication in transit over HTTPS&lt;/li&gt;
&lt;li&gt;Tokenized, only authorized system to invoke the Webhook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, I have designed a Webhook system using the AWS serverless services: Highly Scalable, Cost-effective, Resilient, and Highly Available system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0762je3jyf7asot9776.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0762je3jyf7asot9776.png" alt="Serverless Webhook design pattern" width="800" height="303"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Note: You can also use your own Auth system to secure this Webhook Endpoint using a serverless AWS Cognito or SAML-based system (not shown in the architecture) at the API Gateway&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does this serverless Webhook work?
&lt;/h2&gt;

&lt;p&gt;The DNS (Domain Name System) for the Webhook Endpoint is configured with the &lt;strong&gt;AWS Route 53&lt;/strong&gt; service with an alias record pointing to the endpoint of the API Gateway.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;API Gateway&lt;/strong&gt; is a regional service from AWS and serves as the entry point for incoming requests efficiently routing them to the AWS SQS (Simple Queue Service). The API Gateway has a default rate limit of &lt;strong&gt;10,000 requests per second (RPS) and a default burst limit of 5,000 requests&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The SQS queue used here is &lt;strong&gt;FIFO (First In First Out)&lt;/strong&gt; based, as the ordering of every request is critical to the Webhook system. In addition, &lt;strong&gt;SQS FIFO provides content-based deduplication out of the box&lt;/strong&gt;: if a request from the producer system is triggered twice by mistake within a 5-minute span, it is treated as a duplicate and never reaches the queue. SQS also acts as a reliable buffer, decoupling components and ensuring no data loss during traffic spikes or service failures, allowing the system to retry.&lt;/p&gt;

&lt;p&gt;All the messages received in the SQS FIFO will be processed in the same order by the &lt;strong&gt;AWS Lambda&lt;/strong&gt; which can &lt;strong&gt;scale out by itself up to 1,000 concurrent executions of Lambda every 10 seconds&lt;/strong&gt;. The AWS Lambda will be used for the actual processing of the Webhook business logic here. When handling potentially high volumes of requests, Lambda functions further enhance scalability by allowing for event-driven, serverless execution of code, automatically scaling up or down based on demand.&lt;/p&gt;

&lt;p&gt;All the logs generated by the Lambda will be streamed to &lt;strong&gt;AWS CloudWatch Logs&lt;/strong&gt; for monitoring and debugging purposes.&lt;/p&gt;

&lt;p&gt;If a message is not processed in time, or if the SQS queue receives a bad message, it will be moved to the &lt;strong&gt;Dead Letter Queue (DLQ)&lt;/strong&gt; for later re-processing.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AWS Certificate Manager&lt;/strong&gt; will be used to manage the SSL certificates, keeping them valid and rotated before their expiration date so that all HTTPS requests remain encrypted in transit.&lt;/p&gt;

&lt;p&gt;This combination of API Gateway, SQS, and Lambda enables robust, cost-effective, resilient, and scalable architectures capable of handling varying workloads and maintaining high availability under challenging conditions.&lt;/p&gt;
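&lt;p&gt;As a rough sketch, the queue-to-Lambda wiring described above could look like this in Terraform (the resource names, the Lambda function, and the retry count are illustrative assumptions, not a definitive implementation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dead Letter Queue for messages that repeatedly fail processing
resource "aws_sqs_queue" "webhook_dlq" {
  name       = "webhook-events-dlq.fifo"
  fifo_queue = true
}

# Main FIFO queue; content-based deduplication drops duplicate
# payloads arriving within the 5-minute deduplication window
resource "aws_sqs_queue" "webhook_events" {
  name                        = "webhook-events.fifo"
  fifo_queue                  = true
  content_based_deduplication = true

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.webhook_dlq.arn
    maxReceiveCount     = 3 # after 3 failed receives, move to the DLQ
  })
}

# Lambda polls the queue and processes messages in order;
# "webhook_processor" is an assumed function defined elsewhere
resource "aws_lambda_event_source_mapping" "webhook_consumer" {
  event_source_arn = aws_sqs_queue.webhook_events.arn
  function_name    = aws_lambda_function.webhook_processor.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;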

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, implementing a scalable Webhook system using AWS serverless services offers numerous advantages in terms of real-time communication, loose-coupled architecture, and efficient resource utilization. By leveraging AWS API Gateway, SQS, and Lambda, we can design a robust, cost-effective, resilient, and highly available architecture capable of handling varying workloads and maintaining high availability under challenging conditions. By adopting this architecture, organizations can streamline their workflows, improve system reliability, and reduce operational costs associated with traditional polling-based approaches. Embracing serverless technologies on AWS empowers developers to focus on building and innovating, while AWS handles the underlying infrastructure management, enabling scalable and resilient solutions for diverse use cases.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>lambda</category>
      <category>sqs</category>
      <category>webhooks</category>
    </item>
    <item>
      <title>Scale your apps using KEDA in AWS EKS</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Sun, 11 Feb 2024 12:51:09 +0000</pubDate>
      <link>https://dev.to/vinod827/scale-your-apps-using-keda-in-kubernetes-4i3h</link>
      <guid>https://dev.to/vinod827/scale-your-apps-using-keda-in-kubernetes-4i3h</guid>
      <description>&lt;p&gt;&lt;strong&gt;KEDA&lt;/strong&gt; (or, &lt;strong&gt;Kubernetes Event-Driven Autoscaling&lt;/strong&gt;) is a Kubernetes-based event-driven auto-scaler for Pods. With KEDA, we can scale out our application easily and then scale back to 0 which is not possible when it comes to the default HPA (Horizontal Pod Autoscaler) of Kubernetes. With HPA, we can only bring it down to 1 pod and not 0 with metric support of only CPU &amp;amp; Memory. Whereas, KEDA has a huge support of external metrics/services that can act as a source of events providing the event data to scale the apps. For instance, we can have KEDA Scalers like AWS SQS, Datadog, RabbitMQ, Kafka, CloudWatch, DynamoDB, Elasticsearch, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuo6htx8ez9gwovxwtk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuo6htx8ez9gwovxwtk6.png" alt="KEDA" width="720" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The major advantages of going with KEDA over the plain Kubernetes HPA are:-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Supports multiple metrics/events for scaling&lt;/li&gt;
&lt;li&gt;Pods can be scaled down to Zero&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architecture:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cp6i4es3k9weiuv50ay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cp6i4es3k9weiuv50ay.png" alt="Image description" width="720" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will see how we can use KEDA to scale our application running in the AWS EKS (Elastic Kubernetes Service) based on the message received in the AWS SQS (Simple Queue Service).&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create an EKS Cluster (optional, in case the cluster is not available)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
 name: eks-keda-test
 region: us-east-2
 version: '1.21'
managedNodeGroups:
 - name: ng
   instanceType: m4.xlarge
   minSize: 1
   maxSize: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Install KEDA in Kubernetes using Helm
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Adding the Helm repo
helm repo add kedacore https://kedacore.github.io/charts
#Update the Helm repo
helm repo update
#Install Keda helm chart
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check if the KEDA operator and metrics API server are up in the keda namespace:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod -n keda
NAME                                               READY   STATUS    RESTARTS   AGE
keda-operator-68cd48977c-swztq                     1/1     Running   0          23h
keda-operator-metrics-apiserver-7d888bf9b5-s2fqs   1/1     Running   0          23h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Deploy a test application in a Kubernetes cluster
&lt;/h3&gt;

&lt;p&gt;Here, I’m running the nginx deployment with 1 replica&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
 labels:
  app: my-nginx
 name: my-nginx
spec:
 replicas: 1
 selector:
  matchLabels:
   app: my-nginx
 strategy: {}
 template:
   metadata:
     labels:
       app: my-nginx
   spec:
     containers:
     - image: nginx
       name: nginx
       resources: {}
status: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Create an AWS SQS Queue (Standard Queue)
&lt;/h3&gt;

&lt;p&gt;Head to the AWS Console -&amp;gt; AWS SQS and create a Standard Queue. Copy its region and Queue URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v97izomypvln6uk7i46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v97izomypvln6uk7i46.png" alt="Image description" width="720" height="342"&gt;&lt;/a&gt;&lt;/p&gt;
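&lt;p&gt;If you prefer to keep this in code instead of the console, the same standard queue can be sketched in Terraform (the queue name matches this example; the provider and region setup is assumed to exist):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Standard (non-FIFO) queue used as the KEDA event source
resource "aws_sqs_queue" "my_sqs_keda" {
  name = "my-sqs-keda"
}

# The Queue URL needed by the ScaledObject
output "queue_url" {
  value = aws_sqs_queue.my_sqs_keda.url
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;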

&lt;h3&gt;
  
  
  Step 5: Create KEDA Scaler (SQS) using the ScaledObject
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;(Note: this example uses the SQS-based KEDA Scaler; for other types of scaler integrations (say CloudWatch, Datadog, etc.), please refer to the official KEDA documentation.)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
 name: aws-sqs-queue-scaledobject
 namespace: default
spec:
 scaleTargetRef:
   name: my-nginx
 pollingInterval: 5 #Interval for polling
 cooldownPeriod: 10
 idleReplicaCount: 0 # When idle, scale-in to 0 pods
 minReplicaCount: 1
 maxReplicaCount: 3
 fallback: # Fallback strategy when metrics are unavailable for the app
   failureThreshold: 5 # after 5 failed metric reads, fall back to the replicas below
   replicas: 2 # Keep this desired state when metrics are unavailable
 triggers:
 - type: aws-sqs-queue
   authenticationRef:
     name: keda-trigger-auth-aws-credentials
   metadata:
     queueURL: https://sqs.us-east-2.amazonaws.com/711164302624/my-sqs-keda
     queueLength: "5" # batch size
     awsRegion: "us-east-2"
     identityOwner: operator # when the node role has the required permission
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;KEDA has support for both ScaledObjects (like Kubernetes Deployment, StatefulSet, Custom Resource) and ScaledJobs (like Kubernetes Job). The above example is based on ScaledObject for Deployment.&lt;/p&gt;

&lt;p&gt;A few things to note in the above ScaledObject:-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The ScaledObject will be created in the default namespace&lt;/li&gt;
&lt;li&gt;It will manage the deployment called ‘&lt;strong&gt;my-nginx&lt;/strong&gt;’&lt;/li&gt;
&lt;li&gt;The event source is AWS SQS with the queue URL &lt;a href="https://sqs.us-east-2.amazonaws.com/711164302624/my-sqs-keda" rel="noopener noreferrer"&gt;https://sqs.us-east-2.amazonaws.com/711164302624/my-sqs-keda&lt;/a&gt; in region us-east-2. Note that you can give the Queue name as well; the URL is better in case of any ambiguity.&lt;/li&gt;
&lt;li&gt;The queue length is 5 which means it will get triggered once the batch size of 5 messages arrives in the queue.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;identityOwner&lt;/strong&gt; is the operator (the default is pod), which means the EKS Node role must have permission to communicate with SQS; the required policy is &lt;strong&gt;sqs:GetQueueAttributes&lt;/strong&gt; attached to its node role.&lt;/li&gt;
&lt;li&gt;The idleReplicaCount is 0, which scales the deployment down to 0 pods when idle&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 6: Creating KEDA TriggerAuthentication (optional)
&lt;/h3&gt;

&lt;p&gt;If the KEDA operator needs to authenticate with the event source (in my case, SQS), then we need to follow one of the following authentication mechanisms:-&lt;/p&gt;

&lt;p&gt;6.1) Using an IAM role attached to the EKS Node — This is the simplest way: create an IAM policy with &lt;strong&gt;sqs:GetQueueAttributes&lt;/strong&gt; and attach it to an existing/new IAM role of the Node Instance. We need to ensure that identityOwner is kept as operator in the ScaledObject (my above example follows this). This gives KEDA access to SQS to fetch the metric data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpza8dzi5gmjrkaczecb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpza8dzi5gmjrkaczecb.png" alt="Image description" width="720" height="344"&gt;&lt;/a&gt;&lt;/p&gt;
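&lt;p&gt;A minimal IAM policy for this could look like the following (the queue ARN is built from the region, account id, and queue name used in this example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:GetQueueAttributes",
      "Resource": "arn:aws:sqs:us-east-2:711164302624:my-sqs-keda"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;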

&lt;p&gt;6.2) Using IAM User credentials — In this, we need to create an IAM user in AWS, create a secret in Kubernetes, create a TriggerAuthentication in Kubernetes, and use this TriggerAuthentication in the ScaledObject in Kubernetes as shown below:-&lt;/p&gt;

&lt;p&gt;(a) Create an IAM User in AWS with a policy to access the SQS queue. Note down the Base64-encoded versions of its access key and secret access key and use them to create the secret.&lt;/p&gt;

&lt;p&gt;(b) Create a secret in Kubernetes with the IAM user’s BASE64 encoded access and secret access keys&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: iam-user-secret
  namespace: default
data:
  AWS_ACCESS_KEY_ID: &amp;lt;base64-encoded-key&amp;gt;
  AWS_SECRET_ACCESS_KEY: &amp;lt;base64-encoded-secret-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(c) Create a Trigger Authentication&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-aws-credentials
  namespace: default
spec:
  secretTargetRef:
  - parameter: awsAccessKeyID     # Required.
    name: iam-user-secret         # Required.
    key: AWS_ACCESS_KEY_ID        # Required.
  - parameter: awsSecretAccessKey # Required.
    name: iam-user-secret         # Required.
    key: AWS_SECRET_ACCESS_KEY    # Required.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(d) Create a ScaledObject using the TriggerAuthentication&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: aws-sqs-queue-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-nginx
  minReplicaCount: 0
  maxReplicaCount: 2
  triggers:
  - type: aws-sqs-queue
    authenticationRef:
      name: keda-trigger-auth-aws-credentials
    metadata:
      queueURL: https://sqs.us-east-1.amazonaws.com/012345678912/Queue
      queueLength: "5"
      awsRegion: "us-east-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila! It is all ready now. Let’s test it :)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo Time:-&lt;/strong&gt;&lt;br&gt;
Step 1 — Keep the deployment in watch mode&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deploy --watch
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   0/0     0            0           4h52m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2 — Add some messages to the SQS queue (put more than 5 messages in the queue, since queueLength is set to 5 in the ScaledObject)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fealeevxkgm25f6xq0ukk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fealeevxkgm25f6xq0ukk.png" alt="Image description" width="720" height="225"&gt;&lt;/a&gt;&lt;/p&gt;
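&lt;p&gt;The messages in step 2 can also be pushed from the AWS CLI. A minimal sketch, assuming the hypothetical queue URL from the ScaledObject above; the send commands are only printed so they can be reviewed first:&lt;/p&gt;

```shell
# Queue URL from the ScaledObject above (hypothetical account/queue; substitute your own).
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/012345678912/Queue"

# queueLength is 5 in the ScaledObject, so 6 messages are enough to trigger a scale-out.
send_test_messages() {
  local n="$1" i
  for i in $(seq 1 "$n"); do
    # Drop the leading 'echo' to actually invoke the AWS CLI.
    echo aws sqs send-message --queue-url "$QUEUE_URL" --message-body "test-message-$i"
  done
}

send_test_messages 6
```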

&lt;h3&gt;
  
  
  Result:-
&lt;/h3&gt;

&lt;p&gt;KEDA scaled the application to 2 replicas automatically as messages arrived in the queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   0/0     0            0           4h52m
my-nginx   0/1     0            0           4h54m
my-nginx   0/1     0            0           4h54m
my-nginx   0/1     0            0           4h54m
my-nginx   0/1     1            0           4h54m
my-nginx   1/1     1            1           4h54m
my-nginx   1/2     1            1           4h54m
my-nginx   1/2     1            1           4h54m
my-nginx   1/2     1            1           4h54m
my-nginx   1/2     2            1           4h54m
my-nginx   2/2     2            2           4h54m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
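&lt;p&gt;Under the hood, the target replica count is roughly the number of visible messages divided by queueLength, rounded up and clamped between minReplicaCount and maxReplicaCount. A quick sanity check of that arithmetic in shell, assuming 7 visible messages:&lt;/p&gt;

```shell
# Simplified model of the scaling math KEDA drives via the HPA.
# Assumes 7 visible messages; queue_length, min and max come from the ScaledObject above.
messages=7
queue_length=5
min_replicas=0
max_replicas=2

# ceil division: (a + b - 1) / b
replicas=$(( (messages + queue_length - 1) / queue_length ))
if [ "$replicas" -lt "$min_replicas" ]; then replicas=$min_replicas; fi
if [ "$replicas" -gt "$max_replicas" ]; then replicas=$max_replicas; fi

echo "$replicas"
```

With 7 messages and a queueLength of 5, that yields 2 replicas, matching the watch output above.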



&lt;p&gt;After a while, once the queue is idle, the deployment scales back down to 0 automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-nginx   0/0     0            0           4h57m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Summary:
&lt;/h3&gt;

&lt;p&gt;In this article, we have seen how to use KEDA to scale our apps from 0 to ‘n’ replicas based on external metrics/events, in particular the number of messages in an AWS SQS queue.&lt;/p&gt;

&lt;p&gt;As always, you can find the entire source code used in this article in this GitHub link:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/vinod827/k8s-nest/tree/main/iac/aws/eks/keda" rel="noopener noreferrer"&gt;https://github.com/vinod827/k8s-nest/tree/main/iac/aws/eks/keda&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to fork this project, add more IaC (Infrastructure as Code), and raise feedback and issues on it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reference:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://keda.sh/docs/2.7/deploy/" rel="noopener noreferrer"&gt;https://keda.sh/docs/2.7/deploy/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>keda</category>
      <category>sqs</category>
      <category>eks</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>A 3-tier web application using AWS Serverless Application Model (SAM) tool</title>
      <dc:creator>Vinod Kumar</dc:creator>
      <pubDate>Sun, 11 Feb 2024 12:15:43 +0000</pubDate>
      <link>https://dev.to/vinod827/a-3-tier-web-application-using-aws-serverless-application-model-sam-tool-3n2j</link>
      <guid>https://dev.to/vinod827/a-3-tier-web-application-using-aws-serverless-application-model-sam-tool-3n2j</guid>
      <description>&lt;p&gt;With &lt;strong&gt;AWS SAM (Serverless Application Model)&lt;/strong&gt;, we can create serverless resources on AWS to build &amp;amp; develop serverless applications for instance serverless resources like AWS Lambda, AWS API Gateway, etc. Not just that instead of writing the Lambda directly on AWS Console, we can use SAM to develop, build and test Lambda locally before it can be pushed to AWS Cloud, this saves a lot of time in debugging the Lambda at the local system itself and ensures productivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqw704w9cdlabwtd1prl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqw704w9cdlabwtd1prl.png" alt="AWS Serverless Application Model" width="149" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will see how we can use AWS SAM to build a simple 3-tier serverless web application and, with the help of a few CLI (Command Line Interface) commands, deploy it to the AWS Cloud from our local system and then access it at the domain &lt;a href="https://acloudtiger.com"&gt;https://acloudtiger.com&lt;/a&gt;.&lt;br&gt;
(Please note I will be decommissioning the resources by the time this blog is live, so the serverless app will not be accessible at this domain.)&lt;/p&gt;
&lt;h2&gt;
  
  
  Advantages of going with AWS SAM:-
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Extension of AWS CloudFormation&lt;/li&gt;
&lt;li&gt;Develop, build, debug and test locally&lt;/li&gt;
&lt;li&gt;Supports integration with CI/CD servers like Jenkins, Atlassian Bamboo, etc., via a single deployment configuration YAML file&lt;/li&gt;
&lt;li&gt;Native support for other AWS tools/resources like the AWS Cloud9 IDE, CodeBuild, CodeDeploy, and CodePipeline&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Our 3-tier Serverless Web Application Architecture:-
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipuv8t3zh8oabii2ecj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipuv8t3zh8oabii2ecj9.png" alt="An AWS 3-tier typical Serverless Web Architecture" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Ensure that AWS CLI &amp;amp; SAM CLI is installed &amp;amp; configured
&lt;/h2&gt;

&lt;p&gt;Visit the AWS documentation at &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html&lt;/a&gt; and install the binaries on your local system depending on your operating system. You also need to ensure that an IAM user is configured in your AWS CLI with an access key and secret access key.&lt;br&gt;
(To configure the IAM user’s access/secret access keys with your CLI, refer to &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html&lt;/a&gt;)&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Initialize a SAM application
&lt;/h2&gt;

&lt;p&gt;Use the following command to initialize a SAM application:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam init --runtime nodejs14.x --name &amp;lt;application name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, I have chosen Node.js as the runtime; however, you are free to choose other runtimes like Python, etc. You also need to give an application name via the &lt;strong&gt;--name&lt;/strong&gt; flag.&lt;br&gt;
Once executed, it will present a few starter-project options on the terminal. You can select the basic Hello World example.&lt;br&gt;
Once done, it generates a skeleton project structure locally with a &lt;strong&gt;template.yaml&lt;/strong&gt; file.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create AWS resources using the SAM template.yaml file
&lt;/h2&gt;

&lt;p&gt;To create a Lambda function using the template, add the below configuration inside the Resources section, as shown below:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  MyAppFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: my-app/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      MemorySize: !Ref DefaultLambdaMemory
      Timeout: !Ref DefaultLambdaTimeout
      Role: !GetAtt MyAppLambdaExecutionRole.Arn
      Environment:
        Variables:
          RUNTIME_DDB_TABLE_NAME: !Ref DynamoDBTable
      Architectures:
        - x86_64
      Events:
        MyApp:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note, &lt;strong&gt;MyAppFunction&lt;/strong&gt; is the logical name given to the Lambda. You can give it any name and refer to the resource from any other AWS resource inside this template via its ARN (Amazon Resource Name), i.e. &lt;strong&gt;!GetAtt MyAppFunction.Arn&lt;/strong&gt;.&lt;br&gt;
The filename of my Lambda is app.js inside the folder called my-app (refer to CodeUri: my-app/ in the above template.yaml definition). You can write your own logic inside that Lambda function. For demo purposes, I have put in the logic below, which writes an item to the DynamoDB table whenever this Lambda is invoked, as shown below:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require("aws-sdk");

//All Cold Start Code of Lambda goes here
const dynamodb = new AWS.DynamoDB.DocumentClient();
const ddbTable = process.env.RUNTIME_DDB_TABLE_NAME;

//All Warm Start Code of Lambda goes within the lambdaHandler
exports.lambdaHandler = async (event, context) =&amp;gt; {
    let body;
    let statusCode = 200;
    const headers = {
      'Content-Type': 'application/json',     
      'Access-Control-Allow-Headers': 'Content-Type',
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET'
    };

    try {
        // Adding random data to DynamoDB table - Adding static data just for this Sample App otherwise data will be populated based on parsing the Request
        await dynamodb
          .put({
            TableName: ddbTable,
            Item: {
              'id': 'E1001',             
              'name': 'Vinod Kumar',
              'company': 'My Company',
              'email': 'vinod827@yahoo.com'
            }
          })
          .promise();

        // Retrieving data from DynamoDB table - Just from demo purpose only otherwise data will be fetched based on given employee id sent dynamically through headers
        body = await dynamodb.get({ TableName: ddbTable, Key: { id: 'E1001' }}).promise(); // To retrieve 1 item

    } catch (err) {
        console.log(err);
        statusCode = 400;
        body = err.message;
        return err;

    } finally {
        body = JSON.stringify(body);

    }

    //return response
    return {
        statusCode,
        body,
        headers
    };

};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, to add another serverless resource, such as a DynamoDB table, add the below in the same template:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  DynamoDBTable:
      Type: AWS::DynamoDB::Table
      Properties: 
        AttributeDefinitions: 
          - AttributeName: id
            AttributeType: S
        KeySchema: 
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput: 
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        Tags: # Optional
          - Key: Namespace
            Value: '@MyApp'
          - Key: Name
            Value: !Sub
            - '${ExportPrefix_}DynamoDBTable'
            - ExportPrefix_: !If
              - HasExportPrefix
              - !Join ['-', [!Ref ExportPrefix, !Ref Environment, '']]
              - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment, '']]
          - Key: Environment
            Value: !FindInMap [Environment, FullForm, !Ref Environment]
        TimeToLiveSpecification: !If
          - HasTTLAttribute
          - AttributeName: !Ref TTLAttribute
            Enabled: true
          - !Ref 'AWS::NoValue'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
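&lt;p&gt;The !If/!Join expressions above only build a naming prefix. A rough shell equivalent of how the ExportPrefix_ value resolves, assuming the default ExportPrefix of myapp and the dev environment (a sketch, not CloudFormation itself):&lt;/p&gt;

```shell
# Mirrors the template's ExportPrefix_ resolution (illustrative sketch only).
EXPORT_PREFIX="myapp"              # the ExportPrefix parameter (default)
ENVIRONMENT="dev"                  # the Environment parameter
STACK_NAME="acloudtiger-sam-stack" # fallback source when no prefix is given

if [ -n "$EXPORT_PREFIX" ]; then
  # HasExportPrefix branch: !Join with a trailing '' yields a trailing hyphen
  PREFIX="${EXPORT_PREFIX}-${ENVIRONMENT}-"
else
  # Fallback branch: first hyphen-separated token of the stack name
  PREFIX="$(printf '%s' "$STACK_NAME" | cut -d- -f1)-${ENVIRONMENT}-"
fi

echo "${PREFIX}DynamoDBTable"   # prints myapp-dev-DynamoDBTable
```

So the table's Name tag resolves to myapp-dev-DynamoDBTable for the defaults used in this article.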



&lt;p&gt;To complete our 3-tier serverless web application architecture, below is the full version of the template.yaml file, which creates AWS resources such as Lambda, DynamoDB, API Gateway, a CloudFront distribution, a Route 53 public hosted zone, etc. You can simply copy and paste it into your template.yaml file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: &amp;gt;
  AWS Serverless Application Model app
  Sample application

Globals:
  Function:
    Timeout: !Ref DefaultLambdaTimeout # Limit 900 seconds (15 minutes)
    MemorySize: !Ref DefaultLambdaMemory # Limit 128 MB to 3,008 MB, in 64 MB increments

  Api:
    Cors:
      AllowMethods: "'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT'"
      AllowHeaders: "'Content-Type,X-Amz-Date,X-Amz-Security-Token,Authorization,X-Api-Key,X-Requested-With,Accept,Access-Control-Allow-Methods,Access-Control-Allow-Origin,Access-Control-Allow-Headers'"
      AllowOrigin: "'*'"    

Parameters:
  Environment:
    Type: String
    Default: dev
    Description: Deployment Environment
    AllowedValues:
      - sbx
      - dev
      - qa
      - stage
      - prod
  ExportPrefix:
    Type: String
    Default: 'myapp'
    Description: &amp;gt;-
      Prefix for the managed resources and cloudformation exports.
      Provide it in `namespace-environment` format,
      where namespace can be product UPI or leave blank for defaults.
    AllowedPattern: ^[a-zA-Z]+(-?[a-zA-Z0-9]+)*$
    ConstraintDescription: &amp;gt;-
      Only hyphen (-) separated alphanumeric string is allowed. Should start with a letter.
    MinLength: 3
    MaxLength: 30
  BillingMode:
    AllowedValues:
    - PAY_PER_REQUEST
    - PROVISIONED
    Default: 'PROVISIONED'
    Description: &amp;gt;-
      Specify how you are charged for read and write throughput and how you manage capacity.
      Set to PROVISIONED for predictable workloads and PAY_PER_REQUEST for unpredictable workloads.
      PAY_PER_REQUEST sets the billing mode to On-Demand Mode
    Type: String
  DefaultLambdaTimeout:
    Default: 3
    MinValue: 3
    MaxValue: 900
    Description: &amp;gt;-
      The amount of time that Lambda allows a function to run before stopping it.
      The default is 3 seconds.
      The maximum allowed value is 900 seconds.
    Type: Number
  DefaultLambdaMemory:
    Default: 128
    MinValue: 128
    MaxValue: 3008
    Description: &amp;gt;-
      The amount of memory that your function has access to.
      Increasing the function's memory also increases its CPU allocation.
      The default value is 128 MB. The value must be a multiple of 64 MB.
    Type: Number
  TTLAttribute:
    Default: None
    Description: &amp;gt;-
      Attribute name for time to live. Provide None or leave blank for if not required.
    Type: String
  CDNHostedZoneId:
    Default: 'Z2FDTNDATAQYW2'
    Description: &amp;gt;-
      CloudFront hosted zone id. 
      Always Static Value. 
      Refer AWS Documentation here
      https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-aliastarget.html#cfn-route53-aliastarget-hostedzoneid
    Type: String
  DomainName:
    Default: 'acloudtiger.com'
    Description: &amp;gt;-
      Registered domain name with Top Level Domain as .com
    Type: String


# --------------------------------------------------------------- #
#                                   Mappings                                   #
# --------------------------------------------------------------- #

Mappings:
  Environment:
    FullForm:
      sbx: sandbox
      dev: development
      stage: staging
      qa: qa
      prod: production



# ------------------------------------------------------------ #
#                                  Conditions                                  #
# ------------------------------------------------------------ #

Conditions:
  HasExportPrefix: !Not [!Equals [!Ref ExportPrefix, '']]
  HasNoTTLAttribute: !Or [!Equals [!Ref TTLAttribute, ''], !Equals [!Ref TTLAttribute, 'None']]
  HasTTLAttribute: !Not [!Condition HasNoTTLAttribute]

# -------------------------------------------------------------- #
#                                  Resources                                   #
# -------------------------------------------------------------- #

Resources:
# ---------------------------------------------------------------- #
#  MyApp Lambda Execution Role for reading writing to DynamoDB, CloudWatch, etc #
# ---------------------------------------------------------------- #
  MyAppLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - "sts:AssumeRole"
            Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
        Version: 2012-10-17
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        - arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      Policies:
        - PolicyName: !Sub
          - "${ExportPrefix_}MyAppGetLambdaPolicy"
          - ExportPrefix_: !If
            - HasExportPrefix
            - !Join ['-', [!Ref ExportPrefix, !Ref Environment, '']]
            - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment, '']]
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Sid: MyAppLambdaExecutionAndXRayTracing
                Effect: Allow
                Action:
                  - cloudwatch:PutMetricData
                  - dynamodb:UpdateItem
                  - dynamodb:ConditionCheckItem
                  - dynamodb:Scan
                  - dynamodb:BatchWriteItem
                  - dynamodb:PutItem
                  - dynamodb:GetItem
                  - dynamodb:DescribeTable
                  - dynamodb:Query
                  - dynamodb:BatchGetItem
                  - dynamodb:DeleteItem
                  - s3:*
                  - lambda:*
                Resource: "*"

# -------------------------------------------------------------- #
#                          MyApp Lambda Function                                #
# -------------------------------------------------------------- #
  MyAppFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: my-app/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      MemorySize: !Ref DefaultLambdaMemory
      Timeout: !Ref DefaultLambdaTimeout
      Role: !GetAtt MyAppLambdaExecutionRole.Arn
      Environment:
        Variables:
          RUNTIME_DDB_TABLE_NAME: !Ref DynamoDBTable
      Architectures:
        - x86_64
      Events:
        MyApp:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get           


# ------------------------------------------------------------ #
#                        DynamoDB Table                              #
# ------------------------------------------------------------ #
  DynamoDBTable:
      Type: AWS::DynamoDB::Table
      Properties: 
        AttributeDefinitions: 
          - AttributeName: id
            AttributeType: S
        KeySchema: 
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput: 
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        Tags: # Optional
          - Key: Namespace
            Value: '@MyApp'
          - Key: Name
            Value: !Sub
            - '${ExportPrefix_}DynamoDBTable'
            - ExportPrefix_: !If
              - HasExportPrefix
              - !Join ['-', [!Ref ExportPrefix, !Ref Environment, '']]
              - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment, '']]
          - Key: Environment
            Value: !FindInMap [Environment, FullForm, !Ref Environment]
        TimeToLiveSpecification: !If
          - HasTTLAttribute
          - AttributeName: !Ref TTLAttribute
            Enabled: true
          - !Ref 'AWS::NoValue'

# --------------------------------------------------------------- #
#             S3 bucket for static website hosting                             #
# ---------------------------- ---------------------------------- #
  FEStaticWebsiteS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: acloudtiger.com
      Tags: # Optional
          - Key: Namespace
            Value: '@MyApp'
          - Key: Name
            Value: !Sub
            - '${ExportPrefix_}FEStaticWebsiteS3Bucket'
            - ExportPrefix_: !If
              - HasExportPrefix
              - !Join ['-', [!Ref ExportPrefix, !Ref Environment, '']]
              - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment, '']]
          - Key: Environment
            Value: !FindInMap [Environment, FullForm, !Ref Environment]

  S3BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref FEStaticWebsiteS3Bucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: 's3:GetObject'
            Resource:
              - !Sub "arn:aws:s3:::${FEStaticWebsiteS3Bucket}/*"
            Principal:              
              AWS: !Sub "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${MyCDNOAI}"


# ------------------------------------------------------------- #
#                       CDN and its distribution                    #
# ------------------------------------------------------------- #
  MyCDNOAI:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: 'Serverless website OA'

  MyCDN:
    Type: 'AWS::CloudFront::Distribution'
    Properties:
      Tags: # Optional
          - Key: Namespace
            Value: '@MyApp'
          - Key: Name
            Value: !Sub
            - '${ExportPrefix_}MyCDN'
            - ExportPrefix_: !If
              - HasExportPrefix
              - !Join ['-', [!Ref ExportPrefix, !Ref Environment, '']]
              - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment, '']]
          - Key: Environment
            Value: !FindInMap [Environment, FullForm, !Ref Environment]
      DistributionConfig:
        Comment: "CDN configuration for s3 static website"
        ViewerCertificate:
          AcmCertificateArn: arn:aws:acm:us-east-1:195725532069:certificate/742155d5-0040-4a85-a24d-ac248d93324e #Cert for my personal domain acloudtiger.com
          MinimumProtocolVersion: TLSv1.1_2016
          SslSupportMethod: sni-only
        DefaultRootObject: 'index.html'
        Aliases:
          - !Ref DomainName
          #- !Sub '*.${DomainName}'
          - !Sub 'www.${DomainName}'
        Enabled: true
        HttpVersion: http2
        IPV6Enabled: true
        Origins:
          - Id: my-s3-static-website
            DomainName: !GetAtt FEStaticWebsiteS3Bucket.DomainName
            S3OriginConfig:
              OriginAccessIdentity: 
                Fn::Sub: 'origin-access-identity/cloudfront/${MyCDNOAI}'
        DefaultCacheBehavior:
          Compress: 'true'
          AllowedMethods: 
            - DELETE
            - GET
            - HEAD
            - OPTIONS
            - PATCH
            - POST
            - PUT
          CachedMethods: 
            - GET
            - HEAD
            - OPTIONS
          ForwardedValues:
            QueryString: false
          TargetOriginId: my-s3-static-website
          ViewerProtocolPolicy : redirect-to-https

# ---------------------------------------------------#
#  Zone file for TLD - Public Hosted Zone            #
# ---------------------------------------------------#

  # PublicHostedZoneForZoneFile:
  #   Type: AWS::Route53::HostedZone
  #   Properties: 
  #     HostedZoneConfig: 
  #       Comment: Hosted zone acloudtiger.com
  #     Name: !Ref DomainName

# -------------------------------------------------------------#
#                Record sets for Public Hosted Zone            #
# -------------------------------------------------------------#
  # DNSAliasRecordForIPV4:
  #   Type: AWS::Route53::RecordSet
  #   DependsOn: MyCDN
  #   Properties:
  #     HostedZoneId: !Ref PublicHostedZoneForZoneFile
  #     Name: !Ref DomainName
  #     Type: A
  #     AliasTarget:
  #       DNSName: !GetAtt MyCDN.DomainName
  #       HostedZoneId: !Ref CDNHostedZoneId

  # DNSAliasRecordForIPV6:
  #   Type: AWS::Route53::RecordSet
  #   DependsOn: MyCDN
  #   Properties:
  #     HostedZoneId: !Ref PublicHostedZoneForZoneFile
  #     Name: !Ref DomainName
  #     Type: AAAA
  #     AliasTarget:
  #       DNSName: !GetAtt MyCDN.DomainName
  #       HostedZoneId: !Ref CDNHostedZoneId

  # DNSIP4RecordForMyApp:
  #   Type: AWS::Route53::RecordSet
  #   DependsOn: MyCDN
  #   Properties:
  #     HostedZoneId: !Ref PublicHostedZoneForZoneFile
  #     Name: www.acloudtiger.com
  #     Type: A
  #     AliasTarget:
  #       DNSName: !GetAtt MyCDN.DomainName
  #       HostedZoneId: !Ref CDNHostedZoneId

  # DNSIPV6RecordForMyApp:
  #   Type: AWS::Route53::RecordSet
  #   DependsOn: MyCDN
  #   Properties:
  #     HostedZoneId: !Ref PublicHostedZoneForZoneFile
  #     Name: www.acloudtiger.com
  #     Type: AAAA
  #     AliasTarget:
  #       DNSName: !GetAtt MyCDN.DomainName
  #       HostedZoneId: !Ref CDNHostedZoneId  

# ------------------------------------------------------------#
#                          Outputs                                             #
# ----------------------------------------------------------- #
Outputs:
  MyAppApi:
    Description: "API Gateway endpoint URL for Prod stage for this function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"

  MyAppFunction:
    Description: "Lambda Function ARN"
    Value: !GetAtt MyAppFunction.Arn

  # MyAppFunctionIamRole:
  #   Description: "Implicit IAM Role created for function"
  #   Value: !GetAtt MyAppFunctionRole.Arn

  DynamoDBTable:
    Description: Dynamodb table
    Export:
      Name: !Sub
        - ${ExportPrefix_}:${AWS::Region}:myapp:DynamoDBTable
        - ExportPrefix_: !If
          - HasExportPrefix
          - !Join ['-', [!Ref ExportPrefix, !Ref Environment]]
          - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment]]
    Value: !Ref DynamoDBTable

  DynamoDBTableArn:
    Description: Dynamodb table Arn
    Export:
      Name: !Sub
        - ${ExportPrefix_}:${AWS::Region}:myapp:DynamoDBTable:Arn
        - ExportPrefix_: !If
          - HasExportPrefix
          - !Join ['-', [!Ref ExportPrefix, !Ref Environment]]
          - !Join ['-', [!Select [0, !Split ["-", !Ref "AWS::StackName"]], !Ref Environment]]
    Value: !GetAtt DynamoDBTable.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let’s do the deployment now :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are two ways to deploy your resources: interactive mode (recommended if you are new) or non-interactive mode (if you are a pro or want to integrate with CI/CD servers like Jenkins).&lt;br&gt;
Run the commands below and follow the prompts for the interactive-mode deployment:-&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sam build&lt;br&gt;
sam deploy --guided&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Or, for &lt;strong&gt;non-interactive mode&lt;/strong&gt;, execute the below commands in your terminal:-&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sam build&lt;br&gt;
sam package --template-file template.yaml --output-template-file deploy.yaml --s3-bucket acloudtiger-sam-template-cli&lt;br&gt;
sam deploy --template-file deploy.yaml --stack-name acloudtiger-sam-stack&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Please note, for the above step, you need to ensure that the S3 bucket (in my case, acloudtiger-sam-template-cli) already exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jcmvbaiw343xmzz5mbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jcmvbaiw343xmzz5mbo.png" alt="Image description" width="700" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see the resources getting created in your terminal, and once complete, you will have an application ready to be accessed at the public hosted zone domain you specified in the template.yaml. In my case, it is &lt;a href="https://acloudtiger.com"&gt;https://acloudtiger.com&lt;/a&gt;.&lt;br&gt;
Besides the above steps, if you want to test or debug your Lambda functions locally before deploying them to the AWS Cloud, you can execute the commands below and keep them handy.&lt;br&gt;
If you want to execute your Lambda function with a pre-configured event:-&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sam local invoke &amp;lt;Lambda function name&amp;gt; --event &amp;lt;event.json&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If you do not want to pass any event to your Lambda function:-&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sam local invoke &amp;lt;Lambda function name&amp;gt; --no-event&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If a debugger is configured in your IDE (say, VS Code) and listening on a particular port (in my case, 1391), then execute the below command:-&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sam local invoke &amp;lt;Lambda function name&amp;gt; -d 1391 --event ./events/event_sqs.json&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hope you like this article :)&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary:
&lt;/h2&gt;

&lt;p&gt;In this blog, we have seen how we can use AWS SAM to build and create serverless resources like an AWS Lambda function, an API Gateway, and a DynamoDB table, along with other AWS global services like a Route 53 public hosted zone (with its record sets) and a CloudFront distribution.&lt;br&gt;
Further, we can use this SAM template (serverless YAML file) with CI/CD servers like Atlassian Bamboo or Jenkins to build an end-to-end integration pipeline and automate the whole process of setting up serverless apps on an ad-hoc basis.&lt;br&gt;
As usual, you will find the code in this GitHub repository:&lt;br&gt;
&lt;a href="https://github.com/vinod827/k8s-nest/tree/main/iac/aws/sam"&gt;https://github.com/vinod827/k8s-nest/tree/main/iac/aws/sam&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>lambda</category>
      <category>s3</category>
      <category>dynamodb</category>
    </item>
  </channel>
</rss>
