<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abhishek Korde</title>
    <description>The latest articles on DEV Community by Abhishek Korde (@abhishek_korde_31).</description>
    <link>https://dev.to/abhishek_korde_31</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2949324%2F4fd1c454-08e4-4c53-8558-d0904983085a.jpg</url>
      <title>DEV Community: Abhishek Korde</title>
      <link>https://dev.to/abhishek_korde_31</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abhishek_korde_31"/>
    <language>en</language>
    <item>
      <title>👉 🚀 GitOps in Action: Deploy Applications on Kubernetes using Argo CD</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Sun, 25 Jan 2026 20:28:13 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/deploy-application-using-gitops-tool-argocd-48f0</link>
      <guid>https://dev.to/abhishek_korde_31/deploy-application-using-gitops-tool-argocd-48f0</guid>
      <description>&lt;p&gt;&lt;strong&gt;🎤 INTRO&lt;/strong&gt;&lt;br&gt;
Hi everyone, welcome back to my channel!&lt;br&gt;
Today we’re going to deploy an application using GitOps with Argo CD, one of the most popular continuous delivery tools for Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  If you’re learning DevOps or Kubernetes, this is a must-have skill.
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;🧠 WHAT IS GITOPS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitOps is a deployment approach where Git is the single source of truth.&lt;br&gt;
Whatever is written in Git is exactly what runs in the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Argo CD continuously watches your Git repository and automatically syncs changes to the cluster.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;🧰 PREREQUISITES&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before starting, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minikube running&lt;/li&gt;
&lt;li&gt;kubectl installed&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m using Minikube on Windows for this demo.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;🚀 STEP 1: VERIFY MINIKUBE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, let’s check the Minikube status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the control plane, kubelet, and API server are running.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🚀 STEP 2: INSTALL ARGO CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we’ll install Argo CD in the Kubernetes cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd

kubectl apply -n argocd \
-f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs all the Argo CD components: the API server, application controller, repo server, and Redis.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzty87gzjla2f8t9xsa7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzty87gzjla2f8t9xsa7.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;🚀 STEP 3: VERIFY ARGO CD PODS&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All pods should be in the Running state before moving forward.&lt;br&gt;
&lt;strong&gt;Note: wait until every pod reports Running before continuing.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4t6cpl9nh9d8r6f5zdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4t6cpl9nh9d8r6f5zdu.png" alt=" " width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;🚀 STEP 4: EXPOSE ARGO CD UI&lt;/strong&gt;&lt;br&gt;
Argo CD runs inside the cluster by default, so we need to expose it.&lt;br&gt;
First, list all the Argo CD services with the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v0skhowrzyjj4uirk2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v0skhowrzyjj4uirk2v.png" alt=" " width="800" height="152"&gt;&lt;/a&gt;&lt;br&gt;
As you can see in the snapshot above, there is an &lt;em&gt;argocd-server&lt;/em&gt; service. Change its service type to NodePort:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit svc argocd-server -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0aydpcr1zy01041icee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0aydpcr1zy01041icee.png" alt=" " width="800" height="795"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service argocd-server -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives us the Argo CD UI URL.&lt;br&gt;
On Minikube we also need a tunnel to get a locally reachable address for argocd-server.&lt;br&gt;
To open the tunnel, run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service argocd-server -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will give you https and http ip address, using this ip you can use ArgoCD UI.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvf9teeu2houzrz19oqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvf9teeu2houzrz19oqu.png" alt=" " width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzuyhc4ei3lyoqnxtkiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzuyhc4ei3lyoqnxtkiw.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;🚀 STEP 5: LOGIN TO ARGO CD&lt;/strong&gt;&lt;br&gt;
The username is &lt;strong&gt;admin&lt;/strong&gt;.&lt;br&gt;
To get the password:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8l0kc41xyv28dd6o4e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8l0kc41xyv28dd6o4e8.png" alt=" " width="799" height="226"&gt;&lt;/a&gt;&lt;br&gt;
To get the password, open the argocd-initial-admin-secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit secret argocd-initial-admin-secret -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the password value.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qke2l7vifxrfpkbc4a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qke2l7vifxrfpkbc4a3.png" alt=" " width="800" height="306"&gt;&lt;/a&gt;&lt;br&gt;
This value is base64-encoded (not encrypted), so we need to decode it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[System.Text.Encoding]::UTF8.GetString(

)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
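&lt;p&gt;On Linux or macOS the same decoding can be done in the shell. A minimal sketch (the encoded string below is a placeholder, not a real Argo CD password):&lt;/p&gt;

```shell
# Decode a base64-encoded Kubernetes secret value by hand.
# "cGFzc3dvcmQ=" is a placeholder, not a real Argo CD password.
ENCODED="cGFzc3dvcmQ="
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```

&lt;p&gt;With a live cluster, the read-and-decode can be done in one step using the one-liner from the Argo CD getting-started guide: &lt;code&gt;kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d&lt;/code&gt;.&lt;/p&gt;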



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpfcicjzdfraasgyy5o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpfcicjzdfraasgyy5o0.png" alt=" " width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This decodes the password; use it together with the admin username to log in to Argo CD.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb52qfq8ymipon9kyfdwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb52qfq8ymipon9kyfdwo.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🚀 STEP 6: DEPLOY APPLICATION USING GITOPS&lt;/strong&gt;&lt;br&gt;
Now comes the GitOps part.&lt;/p&gt;

&lt;p&gt;I already have a Git repository containing Kubernetes manifests like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployment&lt;/li&gt;
&lt;li&gt;Service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the Argo CD UI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click New Application&lt;/li&gt;
&lt;li&gt;Provide the Git repo URL&lt;/li&gt;
&lt;li&gt;Select the branch and path&lt;/li&gt;
&lt;li&gt;Choose the destination cluster and namespace&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjs06q79eguezounyqe6.png" alt=" " width="800" height="417"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyiszb784rylv4mia5jtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyiszb784rylv4mia5jtw.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmkywz9t83p58iby17yr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmkywz9t83p58iby17yr.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbmrgebp1tqqqhycknwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbmrgebp1tqqqhycknwu.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Argo CD automatically syncs Git with the cluster 🎯&lt;/p&gt;
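&lt;p&gt;The same application can also be defined declaratively as YAML instead of through the UI. A minimal sketch of an Argo CD Application manifest (the repo URL, path, and target namespace below are placeholders, not from this demo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-user/your-repo.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual changes back to the Git state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Applying this manifest with kubectl creates the same application you would get from the New Application form.&lt;/p&gt;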




&lt;p&gt;&lt;strong&gt;🔄 AUTO SYNC &amp;amp; DRIFT DETECTION&lt;/strong&gt;&lt;br&gt;
If someone changes resources manually in Kubernetes, Argo CD detects the drift.&lt;/p&gt;

&lt;p&gt;Git always wins — that’s the power of GitOps.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;✅ FINAL RESULT&lt;/strong&gt;&lt;br&gt;
Our application is now deployed successfully using Argo CD.&lt;/p&gt;

&lt;p&gt;Any future change pushed to Git will automatically update the cluster.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🎯 CONCLUSION&lt;/strong&gt;&lt;br&gt;
Argo CD makes Kubernetes deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative&lt;/li&gt;
&lt;li&gt;Version controlled&lt;/li&gt;
&lt;li&gt;Secure&lt;/li&gt;
&lt;li&gt;Fully automated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is how modern DevOps teams deploy to production.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>cloud</category>
      <category>opensource</category>
    </item>
    <item>
      <title>🚀 Different Types of Deployment Strategies in DevOps</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Thu, 15 Jan 2026 08:22:44 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/different-types-of-deployment-strategies-in-devops-2m6n</link>
      <guid>https://dev.to/abhishek_korde_31/different-types-of-deployment-strategies-in-devops-2m6n</guid>
      <description>&lt;p&gt;When deploying applications to production, choosing the right deployment strategy is critical. A good strategy minimizes downtime, reduces risk, and ensures a smooth user experience.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore the most commonly used deployment strategies in DevOps, how they work, and when to use each one.&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sqd6xmmbrkwg3tljwen.png" alt=" " width="800" height="533"&gt;&lt;/p&gt;

&lt;p&gt;🔹 1. Recreate Deployment&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The old version of the application is stopped completely&lt;/li&gt;
&lt;li&gt;The new version is deployed after shutdown&lt;/li&gt;
&lt;li&gt;Causes downtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple and easy to implement&lt;/li&gt;
&lt;li&gt;No additional infrastructure needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application downtime&lt;/li&gt;
&lt;li&gt;Not suitable for production-critical apps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Internal tools&lt;/li&gt;
&lt;li&gt;Development or test environments&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🔹 2. Rolling Deployment&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New version is deployed gradually&lt;/li&gt;
&lt;li&gt;Instances are updated one by one&lt;/li&gt;
&lt;li&gt;Old and new versions run together temporarily&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero or minimal downtime&lt;/li&gt;
&lt;li&gt;Controlled rollout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Harder to rollback&lt;/li&gt;
&lt;li&gt;Mixed versions running simultaneously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes Deployments&lt;/li&gt;
&lt;li&gt;Microservices-based applications&lt;/li&gt;
&lt;/ul&gt;
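&lt;p&gt;In Kubernetes, a rolling deployment is the default strategy for a Deployment; the rollout pace is controlled by maxSurge and maxUnavailable. A minimal sketch (the app name and image are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod above the desired count
      maxUnavailable: 1  # at most one pod down during the rollout
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;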




&lt;p&gt;🔹 3. Blue-Green Deployment&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two environments: Blue (current) and Green (new)&lt;/li&gt;
&lt;li&gt;New version is deployed to Green&lt;/li&gt;
&lt;li&gt;Traffic is switched instantly after testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero downtime&lt;/li&gt;
&lt;li&gt;Easy rollback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires double infrastructure&lt;/li&gt;
&lt;li&gt;Higher cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production systems&lt;/li&gt;
&lt;li&gt;Applications requiring high availability&lt;/li&gt;
&lt;/ul&gt;
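&lt;p&gt;In Kubernetes, the instant traffic switch can be sketched as a Service whose selector points at one environment at a time (the labels below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue   # change to "green" to switch traffic after testing
  ports:
    - port: 80
      targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Rolling back is just flipping the selector back to the previous label.&lt;/p&gt;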




&lt;p&gt;🔹 4. Canary Deployment&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New version is released to a small subset of users&lt;/li&gt;
&lt;li&gt;Gradually increased if no issues are found&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced risk&lt;/li&gt;
&lt;li&gt;Real-user testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex monitoring setup&lt;/li&gt;
&lt;li&gt;Slower full rollout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large-scale systems&lt;/li&gt;
&lt;li&gt;Feature validation in production&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🔹 5. A/B Testing Deployment&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two versions are deployed simultaneously&lt;/li&gt;
&lt;li&gt;Traffic is split between versions&lt;/li&gt;
&lt;li&gt;User behavior is analyzed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data-driven decisions&lt;/li&gt;
&lt;li&gt;Great for UI/UX testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex routing logic&lt;/li&gt;
&lt;li&gt;Not ideal for backend-heavy changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Marketing experiments&lt;/li&gt;
&lt;li&gt;UI/UX optimization&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🔹 6. Shadow Deployment&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New version receives real traffic&lt;/li&gt;
&lt;li&gt;Responses are ignored&lt;/li&gt;
&lt;li&gt;Used only for testing performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero risk to users&lt;/li&gt;
&lt;li&gt;Real-world testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High resource usage&lt;/li&gt;
&lt;li&gt;No user-visible benefit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance benchmarking&lt;/li&gt;
&lt;li&gt;Load testing new systems&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🔹 7. Feature Toggle (Feature Flag)&lt;br&gt;
📌 How it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code is deployed with features disabled&lt;/li&gt;
&lt;li&gt;Features are enabled dynamically&lt;/li&gt;
&lt;li&gt;No redeployment required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instant rollback&lt;/li&gt;
&lt;li&gt;Safe feature releases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Technical debt if flags aren’t cleaned&lt;/li&gt;
&lt;li&gt;Added code complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Use case&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous delivery pipelines&lt;/li&gt;
&lt;li&gt;SaaS platforms&lt;/li&gt;
&lt;/ul&gt;
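&lt;p&gt;The idea can be sketched in a few lines of shell: the code ships with the new path dormant, and an environment variable turns it on without a redeploy. The flag name FEATURE_NEW_CHECKOUT is a hypothetical example:&lt;/p&gt;

```shell
# Minimal feature-flag sketch (POSIX shell). The flag name
# FEATURE_NEW_CHECKOUT is a hypothetical example.
feature_enabled() {
  [ "$(printenv "FEATURE_$1")" = "true" ]
}

FEATURE_NEW_CHECKOUT=true
export FEATURE_NEW_CHECKOUT

if feature_enabled NEW_CHECKOUT; then
  echo "new checkout flow"
else
  echo "old checkout flow"
fi
```

&lt;p&gt;Real systems typically move the flag lookup behind a service such as LaunchDarkly or Unleash, but the control flow is the same.&lt;/p&gt;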




&lt;p&gt;📊 Comparison Table&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Strategy&lt;/th&gt;&lt;th&gt;Downtime&lt;/th&gt;&lt;th&gt;Rollback&lt;/th&gt;&lt;th&gt;Complexity&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Recreate&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Easy&lt;/td&gt;&lt;td&gt;Low&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Rolling&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Blue-Green&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Easy&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Canary&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Easy&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;A/B Testing&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Shadow&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;N/A&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Feature Toggle&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Easy&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;




&lt;p&gt;🎯 Conclusion&lt;/p&gt;

&lt;p&gt;There is no one-size-fits-all deployment strategy.&lt;br&gt;
The best choice depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application criticality&lt;/li&gt;
&lt;li&gt;Infrastructure cost&lt;/li&gt;
&lt;li&gt;Risk tolerance&lt;/li&gt;
&lt;li&gt;User impact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Modern DevOps teams often combine Canary + Feature Toggles or Blue-Green + Automation for safer deployments.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>cicd</category>
    </item>
    <item>
      <title>📊 AWS S3 + AWS Glue + Athena + Grafana — End-to-End Analytics Project</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Fri, 05 Dec 2025 20:26:08 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/aws-s3-aws-glue-athena-grafana-end-to-end-analytics-project-58cc</link>
      <guid>https://dev.to/abhishek_korde_31/aws-s3-aws-glue-athena-grafana-end-to-end-analytics-project-58cc</guid>
      <description>&lt;p&gt;🎯 Overview&lt;/p&gt;

&lt;p&gt;In this project, I built a complete analytics pipeline using AWS services.&lt;br&gt;
The goal was simple:&lt;/p&gt;

&lt;p&gt;Read CSV files from S3 → Convert to tables using Glue → Query using Athena → Visualize in Grafana&lt;/p&gt;

&lt;p&gt;This is a real-world data analytics workflow used widely in Cloud + DevOps environments.&lt;/p&gt;



&lt;p&gt;🚀 Architecture Diagram&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvkoe9fjd14d5ndaysys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvkoe9fjd14d5ndaysys.png" alt=" " width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72on8frmrcdjaqw93hhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72on8frmrcdjaqw93hhb.png" alt=" " width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw6gvniz5vn18wjrzu02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw6gvniz5vn18wjrzu02.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;🏗️ Services Used&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Service&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;S3&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Stores raw CSV files&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Glue Crawler&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Detects schema &amp;amp; creates Athena table&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Glue Data Catalog&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Manages table metadata&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Athena&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;SQL queries on S3 files&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Grafana&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Visualizes Athena queries&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;



&lt;p&gt;🧩 Step 1 — Upload CSV Data to S3&lt;/p&gt;

&lt;p&gt;I created a folder structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3://s3-athena-analytics-abhishek31/
└── raw/
    ├── bd-dec22-births-deaths-natural-increase.csv
    ├── bd-dec22-deaths-by-sex-and-age.csv
    └── electronic-card-transactions.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Upload the files using the AWS CLI (note the &lt;code&gt;s3://&lt;/code&gt; prefix on the destination):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp your_file_name.csv s3://s3-athena-analytics-abhishek31/raw/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🤖 Step 2 — Create AWS Glue Crawler&lt;/p&gt;

&lt;p&gt;The crawler:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Points to the S3 folder&lt;/li&gt;
&lt;li&gt;Detects the CSV schema&lt;/li&gt;
&lt;li&gt;Creates a table inside the AWS Glue Data Catalog&lt;/li&gt;
&lt;li&gt;Database name: s3_log_db&lt;/li&gt;
&lt;li&gt;Table name: s3_athena_analytics_abhishek31&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After the crawler ran, I validated the detected schema.&lt;/p&gt;



&lt;p&gt;🔍 Step 3 — Query Data in Athena&lt;/p&gt;

&lt;p&gt;Athena reads the S3 data using SQL.&lt;/p&gt;

&lt;p&gt;Example query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT 
    series_reference,
    period,
    series_title_2,
    value
FROM s3_log_db.s3_athena_analytics_abhishek31
ORDER BY series_title_2 ASC;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
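&lt;p&gt;Athena also supports aggregations directly over the S3 data. A sketch that totals value per period (this assumes the value column was inferred as a string by the Glue crawler, which is common for CSV sources, hence the cast):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT
    period,
    SUM(CAST(value AS double)) AS total_value
FROM s3_log_db.s3_athena_analytics_abhishek31
GROUP BY period
ORDER BY period;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;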



&lt;p&gt;This verified that the data is properly structured.&lt;/p&gt;

&lt;p&gt;📊 Step 4 — Connect Grafana to Athena&lt;/p&gt;

&lt;p&gt;In Grafana:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a data source → AWS Athena&lt;/li&gt;
&lt;li&gt;Configure:
&lt;ul&gt;
&lt;li&gt;AWS Region&lt;/li&gt;
&lt;li&gt;S3 query results bucket&lt;/li&gt;
&lt;li&gt;Workgroup&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Test connection → Success&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;📈 Step 5 — Build Dashboards in Grafana&lt;/p&gt;

&lt;p&gt;I created multiple visualizations using raw queries (not JSON imports):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bar chart for age-group distribution&lt;/li&gt;
&lt;li&gt;Time series analysis&lt;/li&gt;
&lt;li&gt;Gender-wise population trends&lt;/li&gt;
&lt;li&gt;Total population per year&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8am5tlepb3zzasg3r43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8am5tlepb3zzasg3r43.png" alt=" " width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🗂️ Project Folder Structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS-Athena-S3-Grafana-Analytics/
│── docs/
│    └── README.md
│── s3/
│    └── README.md
│── sql/
│    └── athena_queries.sql
│── grafana/
│    └── README.md
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;📌 Key Learnings&lt;/p&gt;

&lt;p&gt;✔ How to automate schema detection with Glue&lt;br&gt;
✔ How Athena queries S3 without a database server&lt;br&gt;
✔ How to integrate AWS &amp;amp; Grafana&lt;br&gt;
✔ How to visualize analytics with Live SQL&lt;/p&gt;

&lt;p&gt;Great project for Cloud + DevOps profile! 💯&lt;/p&gt;




&lt;p&gt;🔗 GitHub Repository:&lt;br&gt;
&lt;a href="https://github.com/abhikorde31/aws-s3-athena-grafana-analytics" rel="noopener noreferrer"&gt;https://github.com/abhikorde31/aws-s3-athena-grafana-analytics&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🏁 Conclusion&lt;/p&gt;

&lt;p&gt;This project shows how to build a production-grade analytics pipeline using AWS serverless services + Grafana.&lt;/p&gt;

&lt;p&gt;If you want a visualization dashboard, real-time updates with Lambda, or Terraform automation — I can help you extend it.&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>monitoring</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>🧩 Understanding Custom Resources in Kubernetes: Extending the Power of Your Cluster</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Wed, 29 Oct 2025 11:22:59 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/understanding-custom-resources-in-kubernetes-extending-the-power-of-your-cluster-3nop</link>
      <guid>https://dev.to/abhishek_korde_31/understanding-custom-resources-in-kubernetes-extending-the-power-of-your-cluster-3nop</guid>
      <description>&lt;p&gt;Kubernetes is incredibly powerful — it manages containers, networking, and scaling with ease. But what makes it truly flexible is its ability to go beyond built-in objects like Pods, Services, and Deployments.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Custom Resources (CRs)&lt;/strong&gt; come in.&lt;/p&gt;

&lt;p&gt;In this post, we’ll break down what custom resources are, why they exist, and how you can create one to extend Kubernetes just like a pro.&lt;/p&gt;




&lt;p&gt;🔍 What Are Custom Resources?&lt;/p&gt;

&lt;p&gt;In simple terms, a &lt;strong&gt;Custom Resource (CR)&lt;/strong&gt; is a user-defined API object that extends the Kubernetes API.&lt;/p&gt;

&lt;p&gt;By default, Kubernetes comes with built-in resource types like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pods&lt;/li&gt;
&lt;li&gt;deployments&lt;/li&gt;
&lt;li&gt;services&lt;/li&gt;
&lt;li&gt;configmaps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But what if you want Kubernetes to understand your own type of resource — say, Database, Cache, or CronJobTemplate?&lt;/p&gt;

&lt;p&gt;That’s exactly what a &lt;strong&gt;Custom Resource Definition (CRD)&lt;/strong&gt; allows you to do.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🧱 CustomResourceDefinition (CRD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A CRD is a YAML definition that tells Kubernetes about your new resource type — its name, structure, and how it should behave.&lt;/p&gt;

&lt;p&gt;Once a CRD is created, you can use kubectl commands to create, read, update, and delete your new resource — just like any built-in one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example CRD:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
      - ct
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you apply this CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f crd-crontab.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes now understands a new object type called CronTab.&lt;/p&gt;
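&lt;p&gt;As a quick sanity check, you can confirm the new type is registered with the API server (the resource names here match the example CRD above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get crd crontabs.stable.example.com
kubectl api-resources | grep crontab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;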




&lt;p&gt;&lt;strong&gt;🚀 Creating a Custom Resource&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After defining your CRD, you can create actual instances of that resource.&lt;/p&gt;

&lt;p&gt;Here’s an example of a &lt;strong&gt;Custom Resource (CR)&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-cron
spec:
  schedule: "*/5 * * * *"
  image: my-cron-image
  replicas: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f my-cron.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get crontabs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME       AGE
my-cron    10s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just like you would for pods or services — except this is your custom resource!&lt;/p&gt;
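&lt;p&gt;Because the CRD above declared the short name &lt;strong&gt;ct&lt;/strong&gt;, the usual kubectl verbs all work on it, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ct
kubectl describe crontab my-cron
kubectl delete crontab my-cron
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;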




&lt;p&gt;&lt;strong&gt;⚙️ How Custom Resources Work Behind the Scenes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you create a CRD:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kubernetes API server dynamically adds new REST endpoints (like /apis/stable.example.com/v1/crontabs).&lt;/li&gt;
&lt;li&gt;The Kubernetes control plane starts accepting your new object type.&lt;/li&gt;
&lt;li&gt;You can manage these objects just like native Kubernetes ones.&lt;/li&gt;
&lt;/ol&gt;
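&lt;p&gt;You can even hit those new endpoints directly — for the example group above, the raw API path would look like this (assuming the CR lives in the default namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get --raw /apis/stable.example.com/v1/namespaces/default/crontabs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;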

&lt;p&gt;However, &lt;strong&gt;a CRD alone doesn’t perform any action.&lt;/strong&gt;&lt;br&gt;
If you want automation (like creating pods or databases when a CR is created), you need a &lt;strong&gt;Controller or Operator&lt;/strong&gt; that watches your CRs and acts accordingly.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🤖 Custom Resources and Operators&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Operators are like “smart agents” that extend Kubernetes behavior using CRDs.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus Operator → adds CRDs like ServiceMonitor, Alertmanager.&lt;/li&gt;
&lt;li&gt;Cert-Manager → adds CRDs like Certificate and Issuer.&lt;/li&gt;
&lt;li&gt;ArgoCD → uses CRDs like Application to manage GitOps workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These operators continuously monitor custom resources and automatically take actions — just like Kubernetes manages pods.&lt;/p&gt;
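&lt;p&gt;To make this concrete: Argo CD’s &lt;strong&gt;Application&lt;/strong&gt; is itself just a custom resource. A minimal sketch might look like the following (the repo URL and path are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-repo.git
    path: manifests
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Argo CD operator watches Applications like this one and syncs the cluster to whatever the Git repository declares.&lt;/p&gt;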




&lt;p&gt;&lt;strong&gt;🧠 Why Use Custom Resources?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;: Add new resource types without changing Kubernetes core code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Combine with controllers to automate complex tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Declarative Management&lt;/strong&gt;: Use YAML manifests to define custom workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: Ideal for building Operators, CI/CD systems, or internal DevOps tools.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;🧭 Conclusion&lt;/strong&gt;&lt;br&gt;
Custom Resources turn Kubernetes from just a container orchestrator into a &lt;strong&gt;powerful platform for automation.&lt;/strong&gt;&lt;br&gt;
They’re the foundation for Operators, GitOps, and advanced DevOps workflows.&lt;br&gt;
If you understand how CRDs and CRs work, you can extend Kubernetes to manage almost anything — not just containers.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🚀 Quick Recap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CRD (CustomResourceDefinition):&lt;/strong&gt; Defines your new resource type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CR (Custom Resource):&lt;/strong&gt; An instance of that type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller/Operator:&lt;/strong&gt; Automates behavior based on your CRs.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cicd</category>
      <category>automation</category>
    </item>
    <item>
      <title>Mastering Kubernetes Services: ClusterIP, NodePort, and LoadBalancer Explained.</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Tue, 07 Oct 2025 17:12:37 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/mastering-kubernetes-services-clusterip-nodeport-and-loadbalancer-explained-1iep</link>
      <guid>https://dev.to/abhishek_korde_31/mastering-kubernetes-services-clusterip-nodeport-and-loadbalancer-explained-1iep</guid>
      <description>&lt;p&gt;🧩 Manage Kubernetes Services and Their Types&lt;/p&gt;

&lt;p&gt;Kubernetes is a powerful container orchestration platform, but what truly makes it shine is how it manages networking and services. In Kubernetes, Services are responsible for enabling communication between components inside and outside the cluster.&lt;/p&gt;

&lt;p&gt;This blog explains what Kubernetes Services are, the different types of Services, and how to manage them effectively.&lt;/p&gt;




&lt;p&gt;🚀 What Is a Kubernetes Service?&lt;/p&gt;

&lt;p&gt;In Kubernetes, a Service is an abstraction that defines a logical set of Pods and a policy to access them.&lt;/p&gt;

&lt;p&gt;Why do we need it?&lt;br&gt;
Because Pods are ephemeral — they can be created, destroyed, or replaced anytime. A Service provides a stable network endpoint (a fixed IP or DNS name) so other applications can reach the Pods reliably, even as Pods change.&lt;/p&gt;



&lt;p&gt;🔧 Types of Kubernetes Services&lt;/p&gt;

&lt;p&gt;Kubernetes supports different types of Services, depending on how you want your application to be accessed.&lt;/p&gt;

&lt;p&gt;Let’s explore each one 👇&lt;/p&gt;

&lt;p&gt;1️⃣ ClusterIP (Default)&lt;/p&gt;

&lt;p&gt;Purpose:&lt;br&gt;
Provides internal access between applications inside the cluster.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;Creates a virtual IP (ClusterIP) accessible only within the cluster.&lt;/p&gt;

&lt;p&gt;Distributes traffic between multiple Pods using round-robin load balancing.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Case:&lt;br&gt;
Backend-to-database communication or internal microservice communication.&lt;/p&gt;
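&lt;p&gt;Inside the cluster, the Service is reachable through its DNS name as well as its virtual IP. For example, from any Pod (assuming the Service above lives in the default namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://backend-service
curl http://backend-service.default.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;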

&lt;p&gt;2️⃣ NodePort&lt;/p&gt;

&lt;p&gt;Purpose:&lt;br&gt;
Expose your application outside the cluster on a specific port.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;Opens a static port (between 30000–32767) on each Kubernetes node.&lt;/p&gt;

&lt;p&gt;Redirects traffic from that port to your service.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 3000
      nodePort: 30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access:&lt;br&gt;
You can access the app using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;NodeIP&amp;gt;:30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Case:&lt;br&gt;
Local testing and development setups (like Minikube).&lt;/p&gt;
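&lt;p&gt;On Minikube specifically, you can let the CLI resolve the node IP and port for you — a handy shortcut for local testing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service frontend-service --url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;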

&lt;p&gt;3️⃣ LoadBalancer&lt;/p&gt;

&lt;p&gt;Purpose:&lt;br&gt;
Expose the application to the internet using a cloud load balancer.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;Works on top of NodePort and ClusterIP.&lt;/p&gt;

&lt;p&gt;Automatically provisions a cloud provider’s external load balancer (e.g., AWS ELB, Azure LB, GCP LoadBalancer).&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
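&lt;p&gt;Once the cloud provider provisions the load balancer, its address appears in the EXTERNAL-IP column (it stays &amp;lt;pending&amp;gt; on clusters without a cloud integration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service web-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use Case:&lt;br&gt;
Exposing production services to the internet on cloud-managed clusters.&lt;/p&gt;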



&lt;p&gt;4️⃣ ExternalName (Special Case)&lt;/p&gt;

&lt;p&gt;Purpose:&lt;br&gt;
Map a Kubernetes service name to an external DNS name.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;No proxying or load balancing — it just returns a CNAME record.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Case:&lt;br&gt;
Connecting internal services to external resources (like managed databases or APIs).&lt;/p&gt;
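&lt;p&gt;Since ExternalName only returns a CNAME, you can verify it with a DNS lookup from inside a Pod that has nslookup available (for example, a busybox debug Pod), assuming the default namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nslookup external-db.default.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;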




&lt;p&gt;🎯 Conclusion&lt;/p&gt;

&lt;p&gt;Kubernetes Services simplify communication between Pods and make applications stable, scalable, and easy to manage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use ClusterIP for internal traffic.&lt;/li&gt;
&lt;li&gt;Use NodePort for local or limited external access.&lt;/li&gt;
&lt;li&gt;Use LoadBalancer for production environments.&lt;/li&gt;
&lt;li&gt;Use ExternalName for connecting to external systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these, you can manage and expose any application confidently — from development to production.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Setting Up Kubernetes on Windows with Minikube (Step-by-Step Guide)</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Sat, 20 Sep 2025 22:00:25 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/setting-up-kubernetes-on-windows-with-minikube-step-by-step-guide-460j</link>
      <guid>https://dev.to/abhishek_korde_31/setting-up-kubernetes-on-windows-with-minikube-step-by-step-guide-460j</guid>
      <description>&lt;p&gt;&lt;strong&gt;🌀 Prerequisites: Kubernetes, Docker, Containers, Cloud&lt;/strong&gt;&lt;br&gt;
There are many tools for managing the cluster in kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minikube → Local dev cluster (single-node).&lt;/li&gt;
&lt;li&gt;Kind (Kubernetes in Docker) → Lightweight, fast local clusters in Docker containers.&lt;/li&gt;
&lt;li&gt;k3s → Lightweight Kubernetes distribution for edge/IoT.&lt;/li&gt;
&lt;li&gt;Kops → Production cluster provisioning (mostly AWS).&lt;/li&gt;
&lt;li&gt;kubeadm → Official tool to bootstrap a Kubernetes cluster (often used for bare-metal or custom setups).&lt;/li&gt;
&lt;li&gt;Rancher → GUI and management for multiple Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;OpenShift → Red Hat’s enterprise Kubernetes platform.&lt;/li&gt;
&lt;li&gt;Managed services → EKS (AWS), GKE (Google Cloud), AKS (Azure).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some of them are used for production, while others are mainly for local development.&lt;br&gt;
Today, we are using Minikube (obviously) for our local development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Minikube&lt;/strong&gt;&lt;br&gt;
The first step is to install Minikube using the link below, or by searching “Minikube install” in your browser.&lt;br&gt;
Installation link: &lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Fwindows%2Fx86-64%2Fstable%2F.exe+download" rel="noopener noreferrer"&gt;Minikube Install&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Install kubectl&lt;/strong&gt;&lt;br&gt;
The next step is to install kubectl. kubectl is the command-line tool for Kubernetes, and it allows us to interact directly with the cluster.&lt;br&gt;
To install kubectl, use the link below or search for &lt;em&gt;kubectl install&lt;/em&gt; in your browser.&lt;br&gt;
Installation link: &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl Install&lt;/a&gt;&lt;br&gt;
Choose the installation instructions for your platform (Linux, macOS, or Windows).&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Install Docker Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minikube runs the Kubernetes cluster inside a Docker container (when you use the --driver=docker flag).&lt;/li&gt;
&lt;li&gt;You can install Docker Desktop either from the official website (browser download) or from the Microsoft Store.&lt;/li&gt;
&lt;li&gt;Docker Desktop is a free platform (but not fully open source).&lt;/li&gt;
&lt;li&gt;If your system does not support Hyper-V, you can use WSL2 as the backend for Docker.&lt;/li&gt;
&lt;li&gt;In Docker Desktop → Settings → Resources → WSL Integration → Clicking “Refetch distros” helps Docker detect available Linux distributions.&lt;/li&gt;
&lt;li&gt;If no distro is listed, you need to install a Linux distro like Ubuntu from the Microsoft Store (or manually). 
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoixos5q6fpohdz01svk.png" alt=" " width="800" height="420"&gt;
&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Start Minikube&lt;/strong&gt;&lt;br&gt;
Now that the basic setup is complete, we can create the Kubernetes cluster using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a single-node Kubernetes cluster.&lt;br&gt;
On Windows, instead of VirtualBox/Hyper-V, it runs inside Docker containers.&lt;/p&gt;
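&lt;p&gt;You can check that the cluster components came up correctly with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;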

&lt;p&gt;⚠️ Sometimes Minikube fails because it cannot pull images from registry.k8s.io.&lt;br&gt;
In that case, point it at a faster mirror with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --driver=docker --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This flag tells Minikube to use the Alibaba Cloud mirror instead, which is often faster and more reliable.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 5: Verify the Cluster&lt;/strong&gt;&lt;br&gt;
To list all Pods in the cluster (including system Pods), use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output looks like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjijpjw0otuegtc4qhx90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjijpjw0otuegtc4qhx90.png" alt=" " width="800" height="183"&gt;&lt;/a&gt;&lt;br&gt;
These are the default system pods that Minikube runs inside the kube-system namespace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;coredns → Handles DNS (service discovery inside the cluster).&lt;/li&gt;
&lt;li&gt;etcd-minikube → Key-value database that stores the entire Kubernetes cluster state.&lt;/li&gt;
&lt;li&gt;kube-apiserver-minikube → The main API server; all kubectl commands talk to this.&lt;/li&gt;
&lt;li&gt;kube-controller-manager-minikube → Ensures desired state (like creating pods if one fails).&lt;/li&gt;
&lt;li&gt;kube-proxy → Manages networking rules for pod-to-pod and pod-to-service communication.&lt;/li&gt;
&lt;li&gt;kube-scheduler → Assigns new pods to nodes.&lt;/li&gt;
&lt;li&gt;storage-provisioner → Automatically provisions storage volumes when needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ All are in Running status → means your Minikube cluster is healthy and ready to run workloads.&lt;/p&gt;

&lt;p&gt;To list all the nodes in your cluster and their status, use the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of the above command looks like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ry78ydy0hz52u3xo06q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ry78ydy0hz52u3xo06q.png" alt=" " width="662" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NAME → minikube → This is your Kubernetes node (a VM/Container created by Minikube).&lt;/li&gt;
&lt;li&gt;STATUS → Ready → The node is healthy and can run pods.&lt;/li&gt;
&lt;li&gt;ROLES → control-plane → This node acts as the master/control plane (manages the cluster).&lt;/li&gt;
&lt;li&gt;AGE → 5m3s → Node has been running for about 5 minutes.&lt;/li&gt;
&lt;li&gt;VERSION → v1.34.0 → The Kubernetes version running on this node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ In short → your Minikube cluster has 1 control-plane node, it’s healthy, and ready to schedule pods. With the node confirmed up and running, we can move on to Pods.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 6: What is a Pod?&lt;/strong&gt;&lt;br&gt;
Now we need to create a Pod; our application will run inside it.&lt;/p&gt;

&lt;p&gt;First, let’s understand what a Pod is.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Pod is the smallest deployable unit in Kubernetes.&lt;/li&gt;
&lt;li&gt;A Pod is essentially a definition of how to run one or more containers.&lt;/li&gt;
&lt;li&gt;The specification is written in a YAML manifest (like pod.yml), similar to how you give instructions in docker run.&lt;/li&gt;
&lt;li&gt;Most Pods contain a single container, but some also include:
&lt;ul&gt;
&lt;li&gt;Sidecar containers (for logging, monitoring, etc.).&lt;/li&gt;
&lt;li&gt;Init containers (run before the main container).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes assigns an IP address to the Pod, not to each individual container. All containers in the Pod share the same Pod IP and network namespace.&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Deploy Your First Application (Nginx)&lt;/strong&gt;&lt;br&gt;
We will now deploy our first Nginx application. The first step is to create a Pod by writing a pod.yml file.&lt;br&gt;
As mentioned earlier, a Pod is defined in a YAML manifest, so we’ll start with a basic file to understand the structure.&lt;br&gt;
If you are on Windows, use the following command and paste the YAML shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notepad pod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use Notepad because vi does not work on Windows by default (it ships with Linux and macOS systems).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔎 In short:&lt;/p&gt;

&lt;p&gt;You’re creating a Pod called nginx.&lt;br&gt;
Inside it, you run 1 container using the image nginx:1.14.2.&lt;br&gt;
That container exposes port 80 (default HTTP port).&lt;br&gt;
✅ This YAML defines the smallest deployable unit in Kubernetes (a Pod) running Nginx.&lt;br&gt;
After writing the YAML file, create the Pod using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f pod.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After creating the application, check whether it is running using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want the full details of the Pods, including each Pod’s IP address, use the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43yohdd561vuk25l7prc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43yohdd561vuk25l7prc.png" alt=" " width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 8: Access the Minikube Cluster&lt;/strong&gt;&lt;br&gt;
If you want to log in to the cluster where the application is running, use the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;It opens an SSH session into the VM / Docker container that Minikube created to run your Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;This lets you log in directly inside the Minikube node (the machine acting as your Kubernetes cluster).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛠 What you can do inside:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Linux commands (because Minikube runs on a Linux environment).&lt;/li&gt;
&lt;li&gt;Check processes, logs, or system info.&lt;/li&gt;
&lt;li&gt;Use tools like docker ps (to see the containers Kubernetes is running internally).&lt;/li&gt;
&lt;li&gt;Troubleshoot networking or storage inside the cluster.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa1d1wjzonqs0fthwiv0.png" alt=" " width="516" height="55"&gt;
After that, test whether the Pod’s internal IP is reachable and serving traffic by using curl:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl &amp;lt;POD_IP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the IP address of the Pod where your application is running.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lz3keisbypp8yexzwdk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lz3keisbypp8yexzwdk.png" alt=" " width="800" height="575"&gt;&lt;/a&gt;&lt;br&gt;
If you see the Nginx welcome page, 🎉 congratulations—your app is running successfully!&lt;/p&gt;
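&lt;p&gt;If you prefer not to SSH into the node, kubectl can also tunnel traffic from your Windows machine straight to the Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward pod/nginx 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then open http://localhost:8080 in your browser to see the same Nginx welcome page.&lt;/p&gt;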




&lt;p&gt;&lt;strong&gt;✅ Conclusion&lt;/strong&gt;&lt;br&gt;
You now have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minikube installed locally.&lt;/li&gt;
&lt;li&gt;kubectl configured to interact with your cluster.&lt;/li&gt;
&lt;li&gt;A working Nginx Pod running in Kubernetes.
This setup is perfect for learning and local development before moving on to production tools like Kops, EKS, GKE, or AKS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To explore all commonly used kubectl commands, refer to the official Kubernetes cheat sheet:&lt;br&gt;
&lt;a href="https://kubernetes.io/pt-br/docs/reference/kubectl/cheatsheet/" rel="noopener noreferrer"&gt;kubectl cheat sheet&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>containers</category>
      <category>cloud</category>
    </item>
    <item>
      <title>🔎 Kubernetes Architecture Demystified: A Beginner-Friendly Guide</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Wed, 17 Sep 2025 20:49:14 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/understanding-kubernetes-architecture-a-complete-guide-h7n</link>
      <guid>https://dev.to/abhishek_korde_31/understanding-kubernetes-architecture-a-complete-guide-h7n</guid>
      <description>&lt;p&gt;Kubernetes has become the de facto standard for container orchestration. Whether you’re a DevOps engineer, cloud enthusiast, or software developer, understanding the &lt;strong&gt;Kubernetes architecture&lt;/strong&gt; is crucial for deploying and managing containerized applications at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🌐 What is Kubernetes?&lt;/strong&gt;&lt;br&gt;
Kubernetes (often abbreviated as K8s) is an open-source platform that automates the deployment, scaling, and management of containerized applications. It abstracts away the complexities of managing containers, allowing teams to focus on delivering applications faster and more reliably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 Kubernetes vs. Docker: What’s the Advantage?&lt;/strong&gt;&lt;br&gt;
Before comparing, let’s clarify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; → A containerization platform (build, package, and run containers).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes (K8s)&lt;/strong&gt; → A container orchestration platform (manages and scales containers across clusters).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You often use them together: Docker to build/run containers, Kubernetes to orchestrate them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔑 Advantages of Kubernetes over Docker&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker alone can run containers, but scaling across multiple servers is manual and complex.&lt;/li&gt;
&lt;li&gt;Kubernetes provides auto-scaling based on CPU, memory, or custom metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Example: If traffic spikes, Kubernetes automatically adds more Pods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. High Availability &amp;amp; Self-Healing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker: If a container crashes, you need manual intervention (or use Docker Swarm, which has limited features).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restarts failed Pods automatically.&lt;/li&gt;
&lt;li&gt;Re-schedules Pods on healthy nodes if one node fails.&lt;/li&gt;
&lt;li&gt;Ensures the desired state (always keeps the right number of Pods running).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Load Balancing &amp;amp; Service Discovery&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker: Requires manual setup for container-to-container networking.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides built-in service discovery.&lt;/li&gt;
&lt;li&gt;Automatically load-balances traffic between Pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Multi-Cloud &amp;amp; Hybrid Support&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker: Mostly tied to the host machine or a single environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud-agnostic (works on AWS, Azure, GCP, on-premises).&lt;/li&gt;
&lt;li&gt;Supports hybrid deployments and seamless migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Rolling Updates &amp;amp; Rollbacks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker: Updating containers without downtime is tricky.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles zero-downtime deployments using rolling updates.&lt;/li&gt;
&lt;li&gt;Can instantly rollback to a previous version if something breaks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Infrastructure as Code &amp;amp; Automation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker: Focused on containers, not cluster-level automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;- Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full automation of deployments, scaling, and monitoring.&lt;/li&gt;
&lt;li&gt;Integrates well with CI/CD pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;📊 Kubernetes Architecture Diagram&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9tzzviaz8gxrkrlas2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9tzzviaz8gxrkrlas2m.png" alt=" " width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚙️ Components of Kubernetes Architecture&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Control Plane Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These components maintain the desired state of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Server (kube-apiserver):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The entry point for all Kubernetes commands (kubectl, UI, API calls).&lt;/li&gt;
&lt;li&gt;Acts as a communication hub between components.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;etcd:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A distributed key-value store.&lt;/li&gt;
&lt;li&gt;Stores cluster state, configurations, secrets, and metadata.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scheduler (kube-scheduler):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decides which worker node should run a newly created Pod.&lt;/li&gt;
&lt;li&gt;Scheduling decisions are based on resource availability, policies, and constraints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Controller Manager (kube-controller-manager):&lt;/strong&gt;&lt;br&gt;
Runs controllers that handle routine tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node Controller (monitors nodes)&lt;/li&gt;
&lt;li&gt;ReplicaSet Controller (ensures correct number of Pods)&lt;/li&gt;
&lt;li&gt;Job Controller, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cloud Controller Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrates with cloud providers (AWS, GCP, Azure).&lt;/li&gt;
&lt;li&gt;Manages cloud-specific resources like load balancers and storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Worker Node Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Worker nodes run your application workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubelet:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent running on each node.&lt;/li&gt;
&lt;li&gt;Ensures containers are running as expected inside Pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Kube-proxy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles networking inside the cluster.&lt;/li&gt;
&lt;li&gt;Manages communication between Pods and Services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Container Runtime (Docker, containerd, CRI-O)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The engine that actually runs containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Kubernetes Objects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are the building blocks of Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Pod&lt;/strong&gt;&lt;br&gt;
 → The smallest deployable unit (one or more containers).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Service&lt;/strong&gt;&lt;br&gt;
 → Exposes Pods to the network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Deployment&lt;/strong&gt;&lt;br&gt;
 → Ensures desired state of Pods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- ConfigMap &amp;amp; Secret&lt;/strong&gt;&lt;br&gt;
 → Store configuration and sensitive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Namespace&lt;/strong&gt;&lt;br&gt;
 → Logical separation within the cluster.&lt;/p&gt;
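&lt;p&gt;To see how these objects fit together, here is a minimal Deployment manifest; the names and image below are illustrative, not from any particular cluster:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # desired state: three Pods
  selector:
    matchLabels:
      app: web
  template:            # Pod template the Deployment stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

A Service would then select the `app: web` label to expose these Pods on the network.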

&lt;p&gt;🔄 &lt;strong&gt;How Kubernetes Works (Step-by-Step Flow)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You submit a deployment request using kubectl apply.&lt;/li&gt;
&lt;li&gt;The API Server receives the request and stores the desired state in etcd.&lt;/li&gt;
&lt;li&gt;The Scheduler assigns Pods to specific worker nodes.&lt;/li&gt;
&lt;li&gt;The Kubelet on each node communicates with the control plane to run containers via the container runtime.&lt;/li&gt;
&lt;li&gt;The Kube-proxy ensures networking and service discovery so Pods can talk to each other and external users.&lt;/li&gt;
&lt;li&gt;Controllers continuously monitor and adjust the system to match the desired state.&lt;/li&gt;
&lt;/ol&gt;
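&lt;p&gt;The control loop in step 6 can be sketched as a toy Python model (illustrative only, not real Kubernetes code): a controller repeatedly compares the desired state with the observed state and computes corrective actions.&lt;/p&gt;

```python
# Toy reconciliation loop: compare desired vs. observed replica counts
# and compute the scaling action a controller would take.
def reconcile(desired, observed):
    """desired/observed map a deployment name to a replica count.
    Returns (name, delta) pairs: a positive delta means create Pods,
    a negative delta means delete Pods."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if want != have:
            actions.append((name, want - have))
    return actions

# "web" has 1 Pod but should have 3; "db" already matches
print(reconcile({"web": 3, "db": 1}, {"web": 1, "db": 1}))  # [('web', 2)]
```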

&lt;p&gt;&lt;strong&gt;🌟 Key Benefits of Kubernetes Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self-healing (auto-restarts failed Pods)&lt;/li&gt;
&lt;li&gt;Auto-scaling based on load&lt;/li&gt;
&lt;li&gt;Load balancing across nodes&lt;/li&gt;
&lt;li&gt;Rolling updates and rollbacks&lt;/li&gt;
&lt;li&gt;Cloud-agnostic (runs on any platform)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>AWS Cost Optimization: Identify Stale Resources Using a Lambda Function</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Tue, 01 Jul 2025 13:13:04 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/aws-cost-optimization-using-lambda-function-20m1</link>
      <guid>https://dev.to/abhishek_korde_31/aws-cost-optimization-using-lambda-function-20m1</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is a Lambda function in AWS?&lt;/strong&gt;&lt;br&gt;
AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. It executes code in response to events and automatically manages the computing resources. You upload your code as a Lambda function and it runs only when triggered by an event, scaling automatically. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem Statement:&lt;/strong&gt; &lt;br&gt;
When a user creates an EC2 instance, an associated EBS volume is also created. The user typically takes snapshots of this volume for backup purposes. However, after a few months, the user deletes the EC2 instance and the volume but forgets to delete the associated snapshot. As a result, the unused snapshots continue to incur storage costs, leading to unnecessary and increasing expenses over time. As a DevOps engineer, it is essential to address this by implementing cost optimization strategies that identify and clean up unused snapshots to reduce AWS costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; &lt;br&gt;
To solve this problem we use a Lambda function that fetches all EBS snapshots owned by the account and also retrieves the list of active EC2 instances. For each snapshot, it checks whether the associated volume (if it still exists) is attached to any active instance. If it finds a stale snapshot, it deletes it, effectively optimizing storage costs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetch all the EBS snapshots.&lt;/li&gt;
&lt;li&gt;Filter out the snapshots that are stale.&lt;/li&gt;
&lt;li&gt;Delete the stale snapshots.&lt;/li&gt;
&lt;/ol&gt;
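&lt;p&gt;The decision rule in steps 2 and 3 can be separated from the AWS calls. Here is a small pure-Python sketch of that rule (the names are illustrative; it mirrors the logic of the boto3 function used in this post):&lt;/p&gt;

```python
def find_stale_snapshots(snapshots, volume_attachments, active_instance_ids):
    """snapshots: list of dicts with 'SnapshotId' and an optional 'VolumeId'.
    volume_attachments: maps a volume id to the instance id it is attached to
    (a missing key means the volume was deleted).
    active_instance_ids: set of ids of running EC2 instances.
    Returns the snapshot ids that are considered stale."""
    stale = []
    for snap in snapshots:
        vol = snap.get("VolumeId")
        attached_to = volume_attachments.get(vol) if vol else None
        # Stale when there is no volume, the volume is gone or unattached,
        # or the attached instance is no longer running
        if attached_to not in active_instance_ids:
            stale.append(snap["SnapshotId"])
    return stale

# Illustrative data: one live volume on a running instance, one deleted volume
snaps = [{"SnapshotId": "snap-1", "VolumeId": "vol-live"},
         {"SnapshotId": "snap-2", "VolumeId": "vol-gone"},
         {"SnapshotId": "snap-3"}]
print(find_stale_snapshots(snaps, {"vol-live": "i-1"}, {"i-1"}))  # ['snap-2', 'snap-3']
```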

&lt;p&gt;The first step is to create an EC2 instance; while creating the EC2 instance, a volume is also created. In my case, the volume created is shown below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1snk6k2hhkidypjfroco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1snk6k2hhkidypjfroco.png" alt=" " width="800" height="330"&gt;&lt;/a&gt;&lt;br&gt;
After the EC2 instance is created successfully, we manually create a snapshot of the volume.&lt;br&gt;
A snapshot is simply a copy of your volume.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogtx1t3fj4jmyi01fptn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogtx1t3fj4jmyi01fptn.png" alt=" " width="800" height="294"&gt;&lt;/a&gt;&lt;br&gt;
Now we first create the Lambda function.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflc986d224nc5qyxghu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflc986d224nc5qyxghu5.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;br&gt;
After creating the Lambda function, go to the Code section and write the code below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get all EBS snapshots
    response = ec2.describe_snapshots(OwnerIds=['self'])

    # Get all active EC2 instance IDs
    instances_response = ec2.describe_instances(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
    active_instance_ids = set()

    for reservation in instances_response['Reservations']:
        for instance in reservation['Instances']:
            active_instance_ids.add(instance['InstanceId'])

    # Iterate through each snapshot and delete it if its source volume is gone
    # or the volume is not attached to a running instance
    for snapshot in response['Snapshots']:
        snapshot_id = snapshot['SnapshotId']
        volume_id = snapshot.get('VolumeId')

        if not volume_id:
            # Delete the snapshot if it's not attached to any volume
            ec2.delete_snapshot(SnapshotId=snapshot_id)
            print(f"Deleted EBS snapshot {snapshot_id} as it was not attached to any volume.")
        else:
            # Check whether the volume still exists and is attached to a running instance
            try:
                volume_response = ec2.describe_volumes(VolumeIds=[volume_id])
                attachments = volume_response['Volumes'][0]['Attachments']
                if not any(att['InstanceId'] in active_instance_ids for att in attachments):
                    ec2.delete_snapshot(SnapshotId=snapshot_id)
                    print(f"Deleted EBS snapshot {snapshot_id} as it was taken from a volume not attached to any running instance.")
            except ec2.exceptions.ClientError as e:
                if e.response['Error']['Code'] == 'InvalidVolume.NotFound':
                    # The volume associated with the snapshot is not found (it might have been deleted)
                    ec2.delete_snapshot(SnapshotId=snapshot_id)
                    print(f"Deleted EBS snapshot {snapshot_id} as its associated volume was not found.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After writing the above code, click Deploy and configure a test event.&lt;br&gt;
Then click Test, but it will fail as shown below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbgfzaf5sdbvt76gf7a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbgfzaf5sdbvt76gf7a9.png" alt=" " width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;
It fails because the Lambda execution timeout is 3 seconds by default, and there is a permission error as well.&lt;br&gt;
We will solve these issues one by one.&lt;br&gt;
First, go to the Configuration tab and increase the Lambda execution timeout to 10 seconds. &lt;br&gt;
Next, go to the attached IAM role and attach the policy below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F344tvly2oqh4l16l4v6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F344tvly2oqh4l16l4v6l.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
After attaching the policy, come back and execute the Lambda function again. &lt;br&gt;
This time the snapshot is not deleted, because the snapshot belongs to a volume that is attached to a running EC2 instance.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts30tk0iv528714ghg5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts30tk0iv528714ghg5u.png" alt=" " width="800" height="361"&gt;&lt;/a&gt;&lt;br&gt;
Now manually delete the EC2 instance; deleting the EC2 instance deletes the volume as well.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feveff6l1yuiv1krk7pgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feveff6l1yuiv1krk7pgn.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;br&gt;
Now I execute the Lambda function again.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozjnnauyip29hbbb49a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozjnnauyip29hbbb49a6.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Deleted EBS snapshot snap-02c1afa1d20b2c81b as its associated volume was not found.&lt;/strong&gt; is shown in the screenshot above.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplvqnr2w8emcwlsbd15p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplvqnr2w8emcwlsbd15p.png" alt=" " width="800" height="355"&gt;&lt;/a&gt;&lt;br&gt;
As you can see, the snapshot is also deleted once I delete the EC2 instance. &lt;/p&gt;
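&lt;p&gt;For reference, the IAM policy attached to the Lambda role needs to allow the four EC2 calls the function makes. A minimal custom policy might look roughly like this (illustrative, not necessarily the exact policy from the screenshot):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSnapshots",
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    }
  ]
}
```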

&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;br&gt;
By using an AWS Lambda function, we can automate the identification and cleanup of stale EBS snapshots, helping to optimize AWS costs by eliminating unused resources. This is a practical DevOps automation use case to maintain clean, efficient cloud infrastructure.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting Started with AWS IAM: Managing Users, Groups, and Policies for Secure Access Control.</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Mon, 19 May 2025 17:46:36 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/getting-started-with-aws-iam-managing-users-groups-and-policies-for-secure-access-control-jnl</link>
      <guid>https://dev.to/abhishek_korde_31/getting-started-with-aws-iam-managing-users-groups-and-policies-for-secure-access-control-jnl</guid>
      <description>&lt;p&gt;AWS IAM (Identity and Access Management) is a service that helps you securely control access to your AWS resources. It allows you to manage users, their permissions, and how they interact with AWS services. Essentially, it's the foundation for security and access control within your AWS account.&lt;/p&gt;

&lt;p&gt;Log in to the AWS console with your root username and password; after logging in, search for IAM in the search bar.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0245vbroouwex2qztuhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0245vbroouwex2qztuhi.png" alt=" " width="800" height="351"&gt;&lt;/a&gt;&lt;br&gt;
Basically, IAM has four main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Users&lt;/li&gt;
&lt;li&gt;Policies&lt;/li&gt;
&lt;li&gt;Groups&lt;/li&gt;
&lt;li&gt;Roles&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First we will go with Users. Creating a user is about authentication, much like a bank verifying who you are before letting you enter. A newly created user has no authority over any service, which is where Policies play a big role: using policies we can grant any kind of authority to a user. AWS ships many ready-made policies to help DevOps and cloud engineers, but we can also write our own custom policies as JSON documents.&lt;/p&gt;

&lt;p&gt;Next are Groups. If your organization has thousands of employees, with many leaving and new ones joining, it is very difficult to manage every employee's policies individually, and efficiency suffers. AWS therefore introduced groups: instead of changing each employee's policies, you create groups and add users to them according to their role, so changing a group's policies applies to all of its members. For example, if your organization has three roles (Developer, QA, and DB Admin), you only change the three groups' policies, and each user sits in the group matching their role.&lt;/p&gt;

&lt;p&gt;Now let's see how to create users, policies, and groups in AWS. Go to the IAM section and you will see the window below:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5p4zurzerl3oclsfpa17.png" alt=" " width="800" height="299"&gt;&lt;br&gt;
Click the Users section, then click Create user:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzd15589noai68tplbzc.png" alt=" " width="800" height="371"&gt;&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fgeo66g66d76unjov29.png" alt=" " width="800" height="367"&gt;&lt;br&gt;
Note: always use an autogenerated password; the user then has to set a new password at first login, which is more secure.&lt;br&gt;
While creating the user you can attach policies directly, or you can apply them later:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c90ekmk6doxjlyy62kr.png" alt=" " width="800" height="356"&gt;&lt;br&gt;
After clicking Next, you can review and create the user by clicking Create user:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kwedznkt8va71cgnfcj.png" alt=" " width="800" height="356"&gt;&lt;br&gt;
After this you will get confirmation and your user is created:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfbxabfhk2au09cxbkgw.png" alt=" " width="800" height="288"&gt;&lt;br&gt;
To log in with the IAM user, download the .csv file to your local storage. This .csv file contains the following:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8foaa6vurc579lscf23i.png" alt=" " width="800" height="103"&gt;&lt;br&gt;
Use this information to log in with the IAM user. Now I log out of my root user and log in with the IAM user:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cqfha632l30wyrs6ea2.png" alt=" " width="800" height="103"&gt;&lt;br&gt;
After logging in you have to reset the password, replacing the autogenerated one.&lt;br&gt;
If you look carefully, you will notice your username in the top left corner, which means you are logged in with your user:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkn0surgew14amltalzw0.png" alt=" " width="800" height="227"&gt;&lt;br&gt;
But so far we have not added any policies, so after logging in as test-user-01 every permission shows as denied.&lt;br&gt;
To give the user permissions, first log out of test-user-01 and log back in with your root user, grant the required permissions, and then log in with the IAM user again; you can then access the particular services the applied policies allow.&lt;br&gt;
In my case I applied the full_access_S3 policy to the user test-user-01, so test-user-01 can access my S3 bucket list:&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32mwibblbjlnp64q232q.png" alt=" " width="800" height="291"&gt;&lt;/p&gt;
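&lt;p&gt;For reference, a custom policy granting full S3 access would look roughly like the JSON below (illustrative; AWS's managed AmazonS3FullAccess policy is similar in effect):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```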

&lt;p&gt;In this blog, we explored the fundamentals of AWS Identity and Access Management (IAM), a powerful tool for managing access to AWS resources securely. We learned how to create users, assign permissions through policies, and organize users into groups for streamlined access control. IAM not only helps in maintaining security but also simplifies user management, especially in large organizations. By assigning the right permissions using predefined or custom policies, you ensure that users have exactly the access they need—no more, no less. Mastering IAM is essential for any DevOps or cloud engineer aiming to implement best security practices in AWS.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
      <category>linux</category>
    </item>
    <item>
      <title>What Are Bind Mounts and Volumes in Docker? Hands-On with Docker Storage: Volumes &amp; Bind Mounts Demo</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Mon, 21 Apr 2025 11:10:30 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/what-is-bind-mount-and-volume-in-docker-hands-on-with-docker-storage-volumes-bind-mounts-demo-53gn</link>
      <guid>https://dev.to/abhishek_korde_31/what-is-bind-mount-and-volume-in-docker-hands-on-with-docker-storage-volumes-bind-mounts-demo-53gn</guid>
      <description>&lt;p&gt;Bind mounts may be stored anywhere on the host system; they may even be important system files or directories, and non-Docker processes on the Docker host or inside a container can modify them at any time. You can create a file on the host system and mount it into the container when you want its state to persist, and you can also use bind mounts for stateful applications.&lt;br&gt;
&lt;strong&gt;Use of Bind Mounts&lt;/strong&gt;:&lt;br&gt;
Bind mounts are appropriate for the following types of use case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sharing source code or build artifacts between a development environment on the Docker host and a container.&lt;/li&gt;
&lt;li&gt;When you want to create or generate files in a container and persist the files onto the host's filesystem.&lt;/li&gt;
&lt;li&gt;Sharing configuration files from the host machine to containers. This is how Docker provides DNS resolution to containers by default, by mounting /etc/resolv.conf from the host machine into each container.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bind mounts are also available for builds: you can bind mount source code from the host into the build container to test, lint, or compile a project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Docker Volume?&lt;/strong&gt;&lt;br&gt;
Applications can be run independently using Docker containers. By default, when a container is deleted or recreated, all of the modifications (data) within it are lost. Docker volumes can be useful if we wish to save data in between runs. In order to protect data created by the running container, Docker volumes are file systems mounted on Docker containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing a volume in a container&lt;/strong&gt;:&lt;br&gt;
The first step is to create the Docker volume using the following command:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpamqonoue0usqymszk1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpamqonoue0usqymszk1y.png" alt=" " width="800" height="53"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the volume, we can dedicate it to one or multiple containers.&lt;br&gt;
To get detailed information about the volume, we can run the command below:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvk511ioz1yyllxjv9h7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvk511ioz1yyllxjv9h7.png" alt=" " width="800" height="200"&gt;&lt;/a&gt;&lt;br&gt;
If you want to delete the volume you created, then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Docker volume rm abhishek
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I want to mount the volume on a container. For that we need a Docker image, and to create the Docker image we have to write a Dockerfile.&lt;br&gt;
So first, build the image from the Dockerfile:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbdhqd0rm2mi8a1cnj8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbdhqd0rm2mi8a1cnj8c.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;br&gt;
My Dockerfile is in the same directory, so I use a dot (the build context) instead of a path; if your Dockerfile is in a different directory, point to it explicitly. &lt;br&gt;
Now I mount the volume in a container:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhv0iuqxrng4dhoeo5ib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhv0iuqxrng4dhoeo5ib.png" alt=" " width="800" height="21"&gt;&lt;/a&gt;&lt;br&gt;
After mounting the volume, we check the running containers using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmzvid5ghziggyfk0d19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmzvid5ghziggyfk0d19.png" alt=" " width="800" height="44"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;32a9d862a19a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, check the container information to verify whether the volume is properly mounted on the container. To check, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect 32a9d862a19a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will show the volume details in the Mounts section, as shown below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdromk7lanky8ypq1mi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdromk7lanky8ypq1mi8.png" alt=" " width="800" height="307"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; To delete the volume, we have to stop the running container, then delete the container, and then we can delete the volume.&lt;/p&gt;
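&lt;p&gt;The Mounts section that docker inspect prints is plain JSON, so the check can also be scripted. A small sketch in Python (the sample data below is illustrative, trimmed to the fields we care about and mirroring Docker's inspect schema):&lt;/p&gt;

```python
import json

# Sample of what `docker inspect CONTAINER_ID` returns, trimmed down;
# the container id and volume name are illustrative
inspect_output = json.loads("""
[
  {
    "Id": "32a9d862a19a",
    "Mounts": [
      {"Type": "volume", "Name": "abhishek", "Destination": "/app", "RW": true},
      {"Type": "bind", "Source": "/home/user/src", "Destination": "/src", "RW": true}
    ]
  }
]
""")

def mounted_volumes(inspect_data):
    """Return the names of named volumes mounted in the container,
    skipping bind mounts (which have a Source path but no Name)."""
    mounts = inspect_data[0].get("Mounts", [])
    return [m["Name"] for m in mounts if m.get("Type") == "volume"]

print(mounted_volumes(inspect_output))  # ['abhishek']
```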

&lt;p&gt;📝 Vlog Summary:&lt;br&gt;
In this vlog, I’ll walk you through the concept of Docker Volumes — what they are, why they matter, and how to use them to persist data in Docker containers. We’ll start by creating a Docker volume, learn how to mount it to a container, and inspect the container to verify the mount. I’ll also demonstrate how to build a Docker image and run a container with volume mapping. Finally, we’ll cover how to properly clean up volumes and containers. If you’re new to Docker or want to manage data persistence better, this one’s for you!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What Is Jenkins and How to Configure Docker as an Agent on Jenkins</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Sun, 13 Apr 2025 15:42:19 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/what-is-jenkins-and-how-to-configure-docker-as-agent-on-jenkins-4fg1</link>
      <guid>https://dev.to/abhishek_korde_31/what-is-jenkins-and-how-to-configure-docker-as-agent-on-jenkins-4fg1</guid>
      <description>&lt;p&gt;First, let's understand what Jenkins is and why it is used in DevOps.&lt;br&gt;
&lt;strong&gt;🚀 What is Jenkins?&lt;/strong&gt;&lt;br&gt;
Jenkins is an open-source automation server used primarily for Continuous Integration (CI) and Continuous Delivery (CD) in software development.&lt;br&gt;
It helps automate the process of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building code&lt;/li&gt;
&lt;li&gt;Testing applications&lt;/li&gt;
&lt;li&gt;Deploying software&lt;/li&gt;
&lt;li&gt;Monitoring pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Jenkins is one of the most popular tools in the DevOps toolchain.&lt;br&gt;
&lt;strong&gt;🛠️ Why is Jenkins Used in DevOps?&lt;/strong&gt;&lt;br&gt;
In DevOps, the goal is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Develop faster&lt;/li&gt;
&lt;li&gt;Test more often&lt;/li&gt;
&lt;li&gt;Deliver software reliably and continuously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Jenkins makes this possible by automating everything.&lt;br&gt;
&lt;strong&gt;What is the use of a Docker agent in Jenkins?&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Old Jenkins Architecture:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felaupuvx5s9xgj080kxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felaupuvx5s9xgj080kxc.png" alt=" " width="800" height="762"&gt;&lt;/a&gt;&lt;br&gt;
If you look at the old Jenkins architecture, organizations created an EC2 instance as the Jenkins master, but with many development and DevOps teams the master takes on a heavy load. To offload this, the master should be used only for scheduling, so organizations created a Jenkins master plus Jenkins worker nodes, with different applications running on different worker nodes. In the past this architecture worked well, but with the advancement of microservices there are many applications, each potentially needing a different OS, so with this approach builds become difficult while some worker nodes sit idle. To solve these problems, a new architecture came into the picture.&lt;br&gt;
&lt;strong&gt;New Jenkins Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5f9cbf32hmj4pdlrnjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5f9cbf32hmj4pdlrnjm.png" alt=" " width="695" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the latest approach, we use Jenkins with Docker as the agent: the Jenkins pipeline runs inside Docker containers which, as we know, are lightweight compared to virtual machines.&lt;/p&gt;
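&lt;p&gt;As a sketch, a declarative Jenkins pipeline that uses a Docker container as its agent looks roughly like this (the image name and stage contents are illustrative):&lt;/p&gt;

```groovy
pipeline {
    agent {
        docker {
            // Any image with the tools your build needs; node:16-alpine is illustrative
            image 'node:16-alpine'
        }
    }
    stages {
        stage('Build') {
            steps {
                // Runs inside a fresh container spun up just for this build
                sh 'node --version'
            }
        }
    }
}
```

With this setup Jenkins pulls the image, runs the stage inside a container, and discards it afterwards, so no dedicated worker node sits idle.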

&lt;p&gt;&lt;strong&gt;Now that you understand what Jenkins is and what a Docker agent does, let's configure the Docker agent in Jenkins.&lt;/strong&gt;&lt;br&gt;
First, log in to the EC2 instance from an external terminal using the command below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiev4sj3dfkysjfkhn4y8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiev4sj3dfkysjfkhn4y8.png" alt=" " width="800" height="63"&gt;&lt;/a&gt;&lt;br&gt;
Jenkins is a Java-based application, so first install Java as a prerequisite:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Java (JDK)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the below commands to install Java and Jenkins&lt;/p&gt;

&lt;p&gt;Install Java&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;sudo apt update&lt;br&gt;
sudo apt install openjdk-17-jre&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify Java is Installed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;java -version&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, you can proceed with installing Jenkins&lt;br&gt;
&lt;strong&gt;curl -fsSL &lt;a href="https://pkg.jenkins.io/debian/jenkins.io-2023.key" rel="noopener noreferrer"&gt;https://pkg.jenkins.io/debian/jenkins.io-2023.key&lt;/a&gt; | sudo tee \&lt;br&gt;
  /usr/share/keyrings/jenkins-keyring.asc &amp;gt; /dev/null&lt;br&gt;
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \&lt;br&gt;
  &lt;a href="https://pkg.jenkins.io/debian" rel="noopener noreferrer"&gt;https://pkg.jenkins.io/debian&lt;/a&gt; binary/ | sudo tee \&lt;br&gt;
  /etc/apt/sources.list.d/jenkins.list &amp;gt; /dev/null&lt;br&gt;
sudo apt-get update&lt;br&gt;
sudo apt-get install jenkins&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; By default, Jenkins will not be accessible to the external world due to the inbound traffic restriction by AWS. Open port 8080 in the inbound traffic rules as shown below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EC2 &amp;gt; Instances &amp;gt; Click on the instance&lt;/li&gt;
&lt;li&gt;In the bottom tabs -&amp;gt; Click on Security&lt;/li&gt;
&lt;li&gt;Security groups&lt;/li&gt;
&lt;li&gt;Add inbound traffic rules as shown in the image (you can allow just TCP 8080 as well; in my case, I allowed All traffic).&lt;/li&gt;
&lt;/ol&gt;
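&lt;p&gt;If you prefer the CLI over the console, the same inbound rule can be added with a command along these lines. This is a sketch: the security group ID below is a placeholder, and restricting the source CIDR to your own IP is safer than 0.0.0.0/0:&lt;/p&gt;

```shell
# Open TCP 8080 on the instance's security group.
# sg-0123456789abcdef0 is a placeholder; use your instance's actual group ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```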

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvdo83nhq3rlra6l7k42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvdo83nhq3rlra6l7k42.png" alt=" " width="800" height="133"&gt;&lt;/a&gt;&lt;br&gt;
Login to Jenkins using the below URL:&lt;br&gt;
http://&amp;lt;ec2-instance-public-ip-address&amp;gt;:8080 [You can get the ec2-instance-public-ip-address from your AWS EC2 console page]&lt;/p&gt;

&lt;p&gt;Note: If you are not interested in allowing All Traffic to your EC2 instance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delete the All Traffic inbound rule for your instance&lt;/li&gt;
&lt;li&gt;Edit the inbound rules to allow only custom TCP port 8080&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After you log in to Jenkins, run the command below to copy the Jenkins admin password, then enter it as the Administrator password:&lt;br&gt;
&lt;strong&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1d08vgiq4yfn5gtir2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1d08vgiq4yfn5gtir2k.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;br&gt;
Click on Install suggested plugins&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vcqob7hqz3x99wg72r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vcqob7hqz3x99wg72r2.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;br&gt;
Wait for the Jenkins to Install suggested plugins&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4i9n8ebgq1xtofj40ho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4i9n8ebgq1xtofj40ho.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create First Admin User or Skip the step [If you want to use this Jenkins instance for future use-cases as well, better to create admin user]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8j97z20mrdiv11xa697.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8j97z20mrdiv11xa697.png" alt=" " width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jenkins installation is successful. You can now start using Jenkins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kng7jtejewvuhditd6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kng7jtejewvuhditd6u.png" alt=" " width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install the Docker Pipeline plugin in Jenkins:&lt;br&gt;
Log in to Jenkins.&lt;br&gt;
Go to Manage Jenkins &amp;gt; Manage Plugins.&lt;br&gt;
In the Available tab, search for "Docker Pipeline".&lt;br&gt;
Select the plugin and click the Install button.&lt;br&gt;
Restart Jenkins after the plugin is installed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvuh91qrbgichixgu67x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvuh91qrbgichixgu67x.png" alt=" " width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the Jenkins to be restarted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Slave Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the below commands to install Docker:&lt;br&gt;
&lt;strong&gt;sudo apt update&lt;br&gt;
sudo apt install docker.io&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Grant the Jenkins user and the Ubuntu user permission to access the Docker daemon.&lt;br&gt;
&lt;strong&gt;sudo su - &lt;br&gt;
usermod -aG docker jenkins&lt;br&gt;
usermod -aG docker ubuntu&lt;br&gt;
systemctl restart docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you are done with the above steps, it is better to restart Jenkins:&lt;br&gt;
http://&amp;lt;ec2-instance-public-ip-address&amp;gt;:8080/restart&lt;br&gt;
The Docker agent configuration is now complete.&lt;/p&gt;
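&lt;p&gt;As a quick verification (an assumed check, not from the original post), you can confirm that the jenkins user is in the docker group and can reach the Docker daemon before running your first pipeline:&lt;/p&gt;

```shell
# Confirm group membership: the output should include "docker".
id -nG jenkins

# Confirm the jenkins user can talk to the daemon:
# this should print the container table without a permission error.
sudo -u jenkins docker ps
```

&lt;p&gt;If &lt;strong&gt;docker ps&lt;/strong&gt; reports a permission error, re-run the usermod commands and restart Jenkins so the new group membership takes effect.&lt;/p&gt;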

</description>
      <category>jenkins</category>
      <category>devops</category>
      <category>cloud</category>
      <category>docker</category>
    </item>
    <item>
      <title>Configuration using Ansible</title>
      <dc:creator>Abhishek Korde</dc:creator>
      <pubDate>Sat, 05 Apr 2025 10:54:19 +0000</pubDate>
      <link>https://dev.to/abhishek_korde_31/configuration-usng-ansible-3iah</link>
      <guid>https://dev.to/abhishek_korde_31/configuration-usng-ansible-3iah</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/abhishek_korde_31/how-to-configure-passwordless-authentication-using-ansible-3boj"&gt;How to configure passwordless authentication using Ansible&lt;/a&gt;&lt;/p&gt;


</description>
      <category>devops</category>
      <category>ansible</category>
      <category>cloud</category>
      <category>security</category>
    </item>
  </channel>
</rss>
