<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ankit Raj</title>
    <description>The latest articles on DEV Community by Ankit Raj (@rajankit2295).</description>
    <link>https://dev.to/rajankit2295</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3382446%2F0cb616e8-7925-48b7-aee3-87917dd274de.jpg</url>
      <title>DEV Community: Ankit Raj</title>
      <link>https://dev.to/rajankit2295</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rajankit2295"/>
    <language>en</language>
    <item>
      <title>Building a Complete DevOps Pipeline: From Infrastructure to Deployment</title>
      <dc:creator>Ankit Raj</dc:creator>
      <pubDate>Fri, 15 Aug 2025 15:34:16 +0000</pubDate>
      <link>https://dev.to/rajankit2295/building-a-complete-devops-pipeline-from-infrastructure-to-deployment-3j56</link>
      <guid>https://dev.to/rajankit2295/building-a-complete-devops-pipeline-from-infrastructure-to-deployment-3j56</guid>
      <description>&lt;p&gt;Hii&lt;/p&gt;

&lt;p&gt;In this blog, I'll walk you through my journey of building a complete DevOps pipeline for a frontend application. This project covers everything from setting up Infrastructure on AWS to automatically deploying applications on Kubernetes using Jenkins.&lt;/p&gt;

&lt;p&gt;What this project is about:&lt;/p&gt;

&lt;p&gt;~ Infrastructure Provisioning using Terraform&lt;br&gt;
~ Server Configuration with Ansible&lt;br&gt;
~ Containerization with Docker&lt;br&gt;
~ Kubernetes Deployment for orchestration&lt;br&gt;
~ CI/CD Pipeline with Jenkins&lt;br&gt;
~ Automated Deployment with GitHub webhooks&lt;/p&gt;

&lt;p&gt;By the end of this project, every time I push code to GitHub, it automatically gets deployed to Kubernetes. Pretty cool, right? 😎&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's how everything flows together in my project:&lt;/p&gt;

&lt;p&gt;Code Push to GitHub &lt;br&gt;
             ↓&lt;br&gt;
 GitHub Webhook Triggers Jenkins &lt;br&gt;
             ↓&lt;br&gt;
 Jenkins Pipeline Starts &lt;br&gt;
             ↓ &lt;br&gt;
 Build Docker Image &lt;br&gt;
             ↓ &lt;br&gt;
 Deploy to Kubernetes &lt;br&gt;
             ↓ &lt;br&gt;
 Application Running! &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The complete workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Infrastructure Setup → Terraform provisions AWS EC2 instances&lt;/p&gt;

&lt;p&gt;2. Configuration → Ansible installs and configures all required tools&lt;/p&gt;

&lt;p&gt;3. Containerization → A Dockerfile containerizes the application&lt;/p&gt;

&lt;p&gt;4. Kubernetes Deployment → K8s deployment and service files&lt;/p&gt;

&lt;p&gt;5. CI/CD Pipeline → A Jenkinsfile automates everything&lt;/p&gt;

&lt;p&gt;6. Automation → A GitHub webhook triggers the pipeline on code changes&lt;/p&gt;

&lt;p&gt;Let's dive into each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Infrastructure Provisioning with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First things first - I needed servers to work with. Instead of manually creating EC2 instances every time, I used Terraform to automate this process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started by launching a single EC2 instance from AWS console and installed Terraform on it. This became my "control center" for managing infrastructure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# SSH into the EC2 instance and install Terraform

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Creating Terraform Configuration Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now came the interesting part - writing Terraform code to create the infrastructure automatically!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Variables File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/variable.tf"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/variable.tf&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This file defines all the variables I can customize - like region, instance types, and key pair names. Think of it as a settings file where I can change values without modifying the main code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;~ Defines AWS region (I chose us-east-1)&lt;br&gt;
~ Sets instance types for different servers&lt;br&gt;
~ Specifies the SSH key pair for accessing servers&lt;/p&gt;
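&lt;p&gt;To make that concrete, a variables file along these lines would cover those settings (the variable names and defaults here are illustrative, not copied from the repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variable.tf (sketch - adjust names and defaults for your setup)
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "instance_type" {
  description = "EC2 instance type for the servers"
  type        = string
  default     = "t2.medium"
}

variable "key_name" {
  description = "SSH key pair name for accessing the servers"
  type        = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;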

&lt;p&gt;&lt;strong&gt;2. Main Configuration File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/main.tf"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/main.tf&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the heart of my infrastructure. It creates:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;~ VPC with custom networking&lt;br&gt;
~ Security Groups with proper firewall rules&lt;br&gt;
~ Three EC2 instances: Jenkins Master, Jenkins Agent (with Kubernetes), and Ansible Server&lt;br&gt;
~ All networking components like subnets, internet gateways, and route tables&lt;/p&gt;

&lt;p&gt;How they work together:&lt;/p&gt;

&lt;p&gt;Jenkins Master - Controls the CI/CD pipeline&lt;br&gt;
Jenkins Agent - Runs Kubernetes and deploys applications&lt;br&gt;
Ansible Server - Configures all other servers&lt;/p&gt;
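&lt;p&gt;To give a feel for the shape of main.tf, here's a heavily trimmed sketch (the resource names, AMI variable, and CIDR range are illustrative - the real file in the repo has the full networking setup and all three instances):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf (trimmed sketch - not the full repo file)
provider "aws" {
  region = var.aws_region
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "devops_sg" {
  vpc_id = aws_vpc.main.id
  # ... ingress rules for SSH (22), Jenkins (8080), NodePort (30080) ...
}

resource "aws_instance" "jenkins_master" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.devops_sg.id]
  tags                   = { Name = "jenkins-master" }
}

# ... similar aws_instance blocks for jenkins_agent and ansible_server ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;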

&lt;p&gt;&lt;strong&gt;3. Outputs File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/outputs.tf"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/outputs.tf&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After Terraform creates everything, this file shows me the important information I need - like the public IP addresses of all servers. That comes in handy for connecting to them later!&lt;/li&gt;
&lt;/ul&gt;
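&lt;p&gt;An outputs file for this typically looks like the following (it assumes the EC2 resources are named jenkins_master, jenkins_agent, and ansible_server - purely illustrative names):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# outputs.tf (sketch)
output "jenkins_master_public_ip" {
  value = aws_instance.jenkins_master.public_ip
}

output "jenkins_agent_public_ip" {
  value = aws_instance.jenkins_agent.public_ip
}

output "ansible_server_public_ip" {
  value = aws_instance.ansible_server.public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;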

&lt;p&gt;&lt;strong&gt;Deploying the Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Time to make it all happen! Here are the commands I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize Terraform (it downloads required plugins)
terraform init

# Preview what Terraform plans to create
terraform plan

# Actually create the infrastructure
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running these commands, I had three new EC2 instances ready to use!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg5ivf6irjlrjl9zzgb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg5ivf6irjlrjl9zzgb8.png" alt="This output showing creation of EC2 instances" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the creation of the EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdaknolx5hg0jxsjtkta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdaknolx5hg0jxsjtkta.png" alt="This Image shows the Servers provisioned by Terraform." width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configuration Management with Ansible&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now I had servers, but they were blank. I needed to install and configure all the required tools. Instead of manually SSH'ing into each server and running commands, I used Ansible to configure all of them automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;# Connect to the Ansible server using its public IP&lt;br&gt;
ssh -i your-key.pem ubuntu@ansible-server-ip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Ansible Configuration Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.Inventory File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.tourl"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/ansible-files/inventory.ini&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What is an inventory file?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Think of it as Ansible's address book! It tells Ansible which servers exist and how to connect to them. I listed all my servers here with their IP addresses and connection details.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's inside:&lt;/p&gt;

&lt;p&gt;~ Jenkins Master server details&lt;br&gt;
~ Jenkins Agent server details&lt;br&gt;
~ SSH connection information&lt;br&gt;
~ Server groupings for easy management&lt;br&gt;
~ The .pem key file used to connect to all of them&lt;/p&gt;
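&lt;p&gt;For reference, an inventory.ini with that information typically looks like this (the IPs and key path are placeholders - use the public IPs that Terraform printed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inventory.ini (sketch - IPs and key path are placeholders)
[jenkins_master]
54.12.34.56 ansible_user=ubuntu ansible_ssh_private_key_file=~/your-key.pem

[jenkins_agent]
54.12.34.57 ansible_user=ubuntu ansible_ssh_private_key_file=~/your-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;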

&lt;p&gt;&lt;strong&gt;2.Playbook File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.tourl"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/ansible-files/playbook.yaml&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is where the magic happens! The playbook is like a recipe that tells Ansible exactly what to install and configure on each server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What this playbook does:&lt;/p&gt;

&lt;p&gt;~ Jenkins Master: Installs Jenkins, Java, Docker, and sets up initial configuration&lt;br&gt;
~ Jenkins Agent: Installs Docker, Kubernetes (k3s), kubectl, and connects to Jenkins Master&lt;br&gt;
~ Creates users and permissions for everything to work smoothly&lt;br&gt;
~ Configures security settings and networking&lt;/p&gt;
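&lt;p&gt;A stripped-down playbook with that structure might look like this (the task list is shortened to the essentials; the module names are standard Ansible, the rest is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# playbook.yaml (sketch - heavily abbreviated)
- name: Configure Jenkins Master
  hosts: jenkins_master
  become: true
  tasks:
    - name: Install Java
      apt:
        name: openjdk-17-jdk
        state: present
        update_cache: yes
    - name: Install Docker
      apt:
        name: docker.io
        state: present

- name: Configure Jenkins Agent
  hosts: jenkins_agent
  become: true
  tasks:
    - name: Install k3s (lightweight Kubernetes)
      shell: curl -sfL https://get.k3s.io | sh -
      args:
        creates: /usr/local/bin/k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;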

&lt;p&gt;Why it's awesome:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instead of manually running 50+ commands on each server, I just run one Ansible command and it configures everything perfectly!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Running Ansible Configuration&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Test connection to all servers
ansible all -i inventory.ini -m ping

# Run the playbook to configure everything
ansible-playbook -i inventory.ini playbook.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this step, all my servers were fully configured and ready for action!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs2ipfmq2ygnmb39jfku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs2ipfmq2ygnmb39jfku.png" alt="This image describe the connection failure or success" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows whether each connection succeeded or failed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyn0900u94ycrbqwn0zn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyn0900u94ycrbqwn0zn.png" alt="This image showing the configuration where all of the tools are configuiring on Jenkins-Master Server" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the tools being configured on the Jenkins-Master server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64gp0dpon6utm49iv3x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64gp0dpon6utm49iv3x8.png" alt="This image showing the configuration where all of the tools are configuiring on Jenkins-Agent Server" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the tools being configured on the Jenkins-Agent server.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Step 3: Kubernetes Deployment Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Connect to Jenkins Agent server&lt;br&gt;
ssh -i your-key.pem ubuntu@jenkins-agent-ip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Kubernetes Configuration Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Deployment File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.tourl"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/k8s/deployment.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What this file does:&lt;/p&gt;

&lt;p&gt;~ Tells Kubernetes how to run my frontend application&lt;br&gt;
~ Defines how many copies (replicas) to run&lt;br&gt;
~ Sets up health checks to ensure the app is working&lt;br&gt;
~ Configures resource limits (CPU and memory)&lt;br&gt;
~ Exposes the application on port 80&lt;/p&gt;

&lt;p&gt;Some Cool features:&lt;/p&gt;

&lt;p&gt;~ Auto-healing: If a pod crashes, Kubernetes automatically creates a new one&lt;br&gt;
~ Rolling updates: Updates happen without downtime&lt;br&gt;
~ Health monitoring: Kubernetes checks if the app is healthy&lt;/p&gt;
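&lt;p&gt;In outline, a deployment like that looks as follows (the image name, labels, and probe path are placeholders - see the repo for the real file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deployment.yaml (sketch - names and image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smart-frontened-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: smart-frontened
  template:
    metadata:
      labels:
        app: smart-frontened
    spec:
      containers:
      - name: web
        image: your-dockerhub-user/smart-frontend:latest
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        resources:
          limits:
            cpu: "250m"
            memory: "256Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;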

&lt;p&gt;&lt;strong&gt;2. Service File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/k8s/service.yaml"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/k8s/service.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What this file does:&lt;/p&gt;

&lt;p&gt;~ Creates a stable way to access my application&lt;br&gt;
~ Acts like a load balancer between multiple pods&lt;br&gt;
~ Exposes the app on NodePort 30080 so I can access it from outside&lt;/p&gt;

&lt;p&gt;Why I need this:&lt;/p&gt;

&lt;p&gt;Pods in Kubernetes can come and go, but the service ensures there's always a stable endpoint to reach my application.&lt;/p&gt;
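&lt;p&gt;A NodePort service doing exactly that can be sketched like this (the service name and label selector are placeholders; the NodePort 30080 matches the one used above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: smart-frontened-service
spec:
  type: NodePort
  selector:
    app: smart-frontened
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;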

&lt;p&gt;&lt;strong&gt;Pushing Files to GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After creating these files, I pushed them to my GitHub repo where my frontend application code lives. This way, Jenkins can access everything it needs from one place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add k8s/
git commit -m "Add Kubernetes deployment and service files"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Testing Manual Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before automating everything, I tested manual deployment on Jenkins-Agent Server to make sure everything works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a namespace for my application
kubectl create namespace smart-Frontened

# Apply the deployment
kubectl apply -f k8s/deployment.yaml -n smart-Frontened

# Apply the service
kubectl apply -f k8s/service.yaml -n smart-Frontened

# Check if pods are running
kubectl get pods -n smart-Frontened

# Check service status
kubectl get svc -n smart-Frontened

# Get detailed pod information
kubectl describe pods -n smart-Frontened

# Check application logs
kubectl logs -f deployment/smart-Frontened-web-app -n smart-Frontened

# Test if application is accessible
curl http://jenkins-agent-ip:30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything worked perfectly! Now it was time to automate this process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn87xnoi37yhd6hjyew95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn87xnoi37yhd6hjyew95.png" alt="This image describes the kubernetes deployment" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Jenkins CI/CD Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now for the most interesting part - automating everything with Jenkins Pipeline! I SSH'd into the Jenkins Master server to set up the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Connect to Jenkins Master&lt;br&gt;
ssh -i your-key.pem ubuntu@jenkins-master-ip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Pipeline Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/Dockerfile"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What this Dockerfile does:&lt;/p&gt;

&lt;p&gt;~ Packages my frontend application into a container&lt;br&gt;
~ Uses Nginx as the web server to serve static files&lt;br&gt;
~ Creates a lightweight, portable image that runs anywhere&lt;br&gt;
~ Exposes port 80 for web traffic&lt;/p&gt;

&lt;p&gt;Why containerization rocks:&lt;/p&gt;

&lt;p&gt;~ Consistency: Runs the same way everywhere (dev, test, production)&lt;br&gt;
~ Portability: Can move between different environments easily&lt;br&gt;
~ Isolation: App runs in its own environment without conflicts&lt;/p&gt;
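&lt;p&gt;A minimal Nginx-based Dockerfile for a static frontend generally looks like this (the source directory name is a placeholder - check the repo for the actual file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dockerfile (sketch)
FROM nginx:alpine

# Replace the default Nginx content with the frontend files
COPY ./dist /usr/share/nginx/html

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;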

&lt;p&gt;&lt;strong&gt;2. Jenkinsfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/Jenkinsfile"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/Jenkinsfile&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the core of my automation. The Jenkinsfile defines the entire pipeline that runs automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What this pipeline does:&lt;/p&gt;

&lt;p&gt;~ Checkout Stage: Gets the latest code from GitHub&lt;br&gt;
~ Build Stage: Creates a Docker image of my application&lt;br&gt;
~ Deploy Stage: Deploys the application to Kubernetes&lt;br&gt;
~ Verification Stage: Checks if deployment was successful&lt;br&gt;
~ Cleanup Stage: Removes old Docker images to save space&lt;/p&gt;

&lt;p&gt;The magic behind it:&lt;/p&gt;

&lt;p&gt;~ Automatic triggering: Runs whenever I push code to GitHub&lt;br&gt;
~ Complete automation: From code to running application without manual intervention&lt;br&gt;
~ Error handling: If something fails, it stops and tells me what went wrong&lt;br&gt;
~ Rollback capability: Can easily go back to previous versions&lt;/p&gt;
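&lt;p&gt;Here's a skeleton of a declarative Jenkinsfile with those five stages (the agent label, image name, deployment name, and namespace are placeholders; the actual file lives in the repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Jenkinsfile (sketch - names are placeholders)
pipeline {
  agent { label 'k8s-agent' }
  stages {
    stage('Checkout') {
      steps { checkout scm }
    }
    stage('Build') {
      steps { sh 'docker build -t smart-frontend:${BUILD_NUMBER} .' }
    }
    stage('Deploy') {
      steps { sh 'kubectl apply -f k8s/ -n smart-frontened' }
    }
    stage('Verify') {
      steps { sh 'kubectl rollout status deployment/smart-frontened-web-app -n smart-frontened' }
    }
    stage('Cleanup') {
      steps { sh 'docker image prune -f' }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;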

&lt;p&gt;&lt;strong&gt;Pushing Pipeline Files to GitHub&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add Dockerfile Jenkinsfile
git commit -m "Add Docker and Jenkins pipeline configuration file"  
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now Jenkins has access to everything it needs from my GitHub repo!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1psfw5gr9oca9kd0e7ix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1psfw5gr9oca9kd0e7ix.png" alt="This Pipeline stages showing how my script are running through various stages." width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Automated Pipeline Trigger&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final piece of the project - making everything automatic. I set up a GitHub webhook so that every time I push code, Jenkins automatically starts the deployment process.&lt;/p&gt;

&lt;p&gt;What is a GitHub Webhook?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Think of it as a notification system. When I push code to GitHub, it immediately tells Jenkins "Hey, there's new code here!" and Jenkins goes into action.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How to set it up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to GitHub repository → Settings → Webhooks&lt;/li&gt;
&lt;li&gt;Add Jenkins webhook URL: &lt;a href="http://jenkins-master-ip:8080/github-webhook/" rel="noopener noreferrer"&gt;http://jenkins-master-ip:8080/github-webhook/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Select "Push events" to trigger on code pushes&lt;/li&gt;
&lt;li&gt;Save the webhook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing the Automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now comes the moment of truth! Let me make a small change and push it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Make some changes to my frontend application directly from github or by terminal 

# Commit and push the changes
git add .
git commit -m "Update frontend with new content"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens automatically:&lt;/p&gt;

&lt;p&gt;~ GitHub receives my push and triggers the webhook&lt;br&gt;
~ The Jenkins pipeline starts automatically&lt;br&gt;
~ It builds a new Docker image with my changes and removes unused images&lt;br&gt;
~ It deploys to Kubernetes, replacing the old version&lt;br&gt;
~ The application is live with my new updates&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No manual work needed - it's all automatic, dude!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk95k8ahitnh5fezkiekt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk95k8ahitnh5fezkiekt.png" alt="This Image showing my Github Webhook Log." width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pipeline Execution Flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's exactly how everything flows together when I push code:&lt;/p&gt;

&lt;p&gt;Developer pushes code to GitHub&lt;br&gt;
                   ↓&lt;br&gt;
GitHub webhook notifies Jenkins&lt;br&gt;
                   ↓&lt;br&gt;
Jenkins Pipeline starts automatically&lt;br&gt;
                   ↓&lt;br&gt;
Stage 1: Checkout - Gets latest code from GitHub&lt;br&gt;
                   ↓&lt;br&gt;
Stage 2: Build - Creates Docker image from Dockerfile&lt;br&gt;
                   ↓&lt;br&gt;
Stage 3: Deploy - Uses kubectl to deploy to Kubernetes&lt;br&gt;
                   ↓&lt;br&gt;
Stage 4: Verify - Checks if deployment was successful&lt;br&gt;
                   ↓&lt;br&gt;
Stage 5: Cleanup - Removes old Docker images&lt;br&gt;
                   ↓&lt;br&gt;
Application is live with new changes!&lt;/p&gt;

&lt;p&gt;Total time: usually 3-5 minutes from code push to live deployment!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7eafsr9eaof0f87zsdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7eafsr9eaof0f87zsdo.png" alt="This image showing by Build Pipelines by github webhook trigger after commit." width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See My Web Page 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23m1l6sj7w9ndkb835yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23m1l6sj7w9ndkb835yq.png" alt="This Image shows My Web-App Landing Page after Successfull deployment." width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Achieved&lt;/strong&gt; 💫&lt;/p&gt;

&lt;p&gt;Let me break down what each tool brought to my project:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Terraform (Infrastructure Provisioning):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No more manual server creation - Everything is code-defined&lt;br&gt;
Consistent environments - Same setup every time&lt;br&gt;
Easy scaling - Can create more servers with one command&lt;br&gt;
Cost control - Can destroy everything when not needed&lt;br&gt;
Version control - Infrastructure changes are tracked in Git&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Ansible (Configuration Management):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No more manual software installation - All automated&lt;br&gt;
Consistent server configuration - Same setup across all servers&lt;br&gt;
Time saving - What took hours now takes minutes&lt;br&gt;
Error reduction - No more human mistakes in configuration&lt;br&gt;
Documentation - Configuration is self-documenting&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Kubernetes (Container Orchestration):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Auto-healing - If app crashes, it automatically restarts&lt;br&gt;
Load balancing - Traffic distributed across multiple app instances&lt;br&gt;
Zero-downtime deployments - Updates happen without service interruption&lt;br&gt;
Resource management - Efficient use of server resources&lt;br&gt;
Scalability - Can easily increase/decrease app instances&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Docker (Containerization):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consistency - App runs the same everywhere&lt;br&gt;
Portability - Easy to move between environments&lt;br&gt;
Isolation - App doesn't interfere with other applications&lt;br&gt;
Fast deployment - Quick to start and stop&lt;br&gt;
Resource efficiency - Lightweight compared to virtual machines&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Jenkins (CI/CD Pipeline):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Complete automation - From code to production without manual steps&lt;br&gt;
Fast feedback - Know immediately if something breaks&lt;br&gt;
Consistent deployments - Same process every time&lt;br&gt;
Rollback capability - Easy to go back to previous versions&lt;br&gt;
Time saving - What took 30 minutes now takes 5 minutes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges I Faced (And How I Solved Them)&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Let me share some real challenges I encountered and how I overcame them:&lt;/p&gt;

&lt;p&gt;Challenge 1: Terraform State File Issues&lt;/p&gt;

&lt;p&gt;Problem: Terraform state file got corrupted, couldn't make changes&lt;br&gt;
Solution: Learned to use terraform import to restore state and always backup state files&lt;/p&gt;

&lt;p&gt;Challenge 2: Ansible Connection Problems&lt;/p&gt;

&lt;p&gt;Problem: Ansible couldn't connect to servers due to SSH key issues&lt;br&gt;
Solution: Made sure SSH keys were properly configured and added -vvv flag for debugging&lt;/p&gt;

&lt;p&gt;Challenge 3: Docker Permission Denied (this one really tested me) 😤&lt;/p&gt;

&lt;p&gt;Problem: Jenkins couldn't build Docker images due to permission issues&lt;br&gt;
Solution: Added Jenkins user to docker group and restarted Jenkins service&lt;/p&gt;

&lt;p&gt;Challenge 4: Kubernetes Pod Crashes&lt;/p&gt;

&lt;p&gt;Problem: Pods kept crashing with "ImagePullBackOff" error&lt;br&gt;
Solution: Fixed Docker image naming and ensured images were properly built&lt;/p&gt;

&lt;p&gt;Challenge 5: Jenkins Pipeline Failures&lt;/p&gt;

&lt;p&gt;Problem: Pipeline failed at kubectl commands&lt;br&gt;
Solution: Configured proper kubeconfig file and installed kubectl on Jenkins agent&lt;/p&gt;

&lt;p&gt;Challenge 6: Application Not Accessible&lt;/p&gt;

&lt;p&gt;Problem: Could access pods but not the application from browser&lt;br&gt;
Solution: Configured NodePort service correctly and opened security group ports&lt;/p&gt;

&lt;p&gt;Challenge 7: GitHub Webhook Not Triggering&lt;/p&gt;

&lt;p&gt;Problem: Code pushes weren't triggering Jenkins pipeline&lt;br&gt;
Solution: Fixed webhook URL format and ensured Jenkins was accessible from internet&lt;/p&gt;

&lt;p&gt;Key Learning: I learned from every error (even if I'll probably forget these things again! 🥲). The more problems I solved, the better I understood how everything works together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building this DevOps pipeline was an incredible journey!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What started as manual deployments taking 30+ minutes with a high chance of errors...&lt;/p&gt;

&lt;p&gt;...became fully automated deployments taking 3-5 minutes with zero manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The transformation:&lt;/strong&gt; 😉&lt;/p&gt;

&lt;p&gt;Time saved: 90% reduction in deployment time&lt;br&gt;
Error reduction: Eliminated human (especially mine) errors in the deployment process&lt;br&gt;
Faster releases: Can deploy multiple times per day if needed&lt;br&gt;
Scalability: Easy to scale infrastructure and applications&lt;br&gt;
Reliability: Consistent, repeatable deployment process&lt;/p&gt;

&lt;p&gt;This project taught me more than I can put into words (and my Wi-Fi went down constantly, which really tested my patience). 🤧&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My GitHub Project Repo&lt;/strong&gt; 👉 &lt;a href="https://github.com/rajankit2295/smart-Frontened-DevOps-Project/tree/main"&gt;https://github.com/rajankit2295/smart-Frontened-DevOps-Project/tree/main&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#DevOps #AWS #Cloud #LearningByDoing #Code #Community #Challenge #Web #JustDoIt&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudnative</category>
      <category>devops</category>
      <category>learning</category>
    </item>
    <item>
      <title>🌀 AWS Load Balancers - Explained in My Style</title>
      <dc:creator>Ankit Raj</dc:creator>
      <pubDate>Sat, 02 Aug 2025 12:36:56 +0000</pubDate>
      <link>https://dev.to/rajankit2295/aws-load-balancers-explained-in-my-style-5epe</link>
      <guid>https://dev.to/rajankit2295/aws-load-balancers-explained-in-my-style-5epe</guid>
      <description>&lt;p&gt;Okay, so I’ve been diving deep into AWS these days and today I wanted to talk about something that confused me A LOT in the beginning - &lt;strong&gt;Load Balancers&lt;/strong&gt;. There are four types of Load Balancers in AWS, and at first, they all looked the same to me. But once I understood their layers and use cases, it all started to make sense.&lt;/p&gt;

&lt;p&gt;This blog is my attempt to explain all four AWS Load Balancers in the easiest way possible - with visuals, analogies, and beginner-friendly terms. If you're also exploring cloud and DevOps like me, this will help you get a grip.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Classic Load Balancer (CLB) -- The Oldie but Goldie&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one’s been around for a while. It’s like that old Nokia phone - reliable, simple, but not really built for today’s advanced apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Supports both Layer 4 (TCP) and Layer 7 (HTTP/HTTPS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routes requests to EC2 instances&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lacks support for advanced routing (like path-based or host-based routing)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No container support or modern monitoring tools&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Only when you have legacy systems running or are migrating old workloads. For anything modern, you’ll probably want to use ALB or NLB.&lt;/p&gt;

&lt;p&gt;My thoughts: It still works, but not my go-to choice anymore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ysdvmeg3id8r8vi982h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ysdvmeg3id8r8vi982h.png" alt="Shows a basic load balancer distributing traffic across EC2 instances in different Availability Zones." width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Application Load Balancer (ALB) -- The Smart One&lt;/strong&gt; 🧠&lt;/p&gt;

&lt;p&gt;This is the modern HTTP/HTTPS load balancer. It's clever, context-aware, and knows how to handle different types of web traffic. Works on Layer 7 of the OSI model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Smart routing: Supports path-based and host-based routing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Works great with containers and microservices (like ECS, Fargate)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports WebSockets and HTTP/2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Advanced monitoring with CloudWatch metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrated with AWS WAF for security&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Whenever you want flexibility in routing traffic. Perfect for apps with multiple services like /login, /dashboard, /cart, etc.&lt;/p&gt;

&lt;p&gt;My thoughts: Super helpful in modern architectures. I use this whenever I need custom rules or work with containers.&lt;/p&gt;
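&lt;p&gt;To make path-based routing concrete, here's a tiny Python sketch of the idea. Everything here (the rule list, the target-group names) is made up for illustration; it's a toy model, not an AWS API:&lt;/p&gt;

```python
# Toy model of ALB-style path-based routing: the first rule whose
# path prefix matches decides which target group gets the request.
# Rule prefixes and target-group names are illustrative only.

def route(path, rules, default):
    """Return the target group for the first matching path prefix."""
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return default  # no rule matched, fall back to the default target

rules = [
    ("/login", "auth-service"),
    ("/dashboard", "dashboard-service"),
    ("/cart", "cart-service"),
]

print(route("/cart/items", rules, "web-service"))  # cart-service
print(route("/about", rules, "web-service"))       # web-service
```

&lt;p&gt;Real ALB listener rules also support host-based matching, headers, and priorities, but the first-match idea is the same.&lt;/p&gt;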

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp9ihb1mzgzv02bc5jor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp9ihb1mzgzv02bc5jor.png" alt="Displays content-based routing where traffic is forwarded based on URLs to different target groups." width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Network Load Balancer (NLB) -- The Speed Demon&lt;/strong&gt; ⚡&lt;/p&gt;

&lt;p&gt;This one’s all about speed and performance. NLB operates on Layer 4 (Transport Layer) and is designed to handle millions of requests per second with ultra-low latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Extremely fast and highly scalable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports TCP, UDP, and TLS traffic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can handle volatile traffic spikes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can assign Elastic IPs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Preserves the source IP for backend services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Real-time apps like financial systems, multiplayer games, or IoT workloads that need fast, reliable connections.&lt;/p&gt;

&lt;p&gt;My thoughts: If speed is your top priority, this is your guy. But it’s not smart like ALB — it just forwards traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyabho5e9r0t0e5q6x9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyabho5e9r0t0e5q6x9p.png" alt="Illustrates TCP-level load balancing to high-performance EC2 targets using static IPs." width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Gateway Load Balancer (GWLB) -- The Security Guy&lt;/strong&gt; 🔐&lt;/p&gt;

&lt;p&gt;This one’s different. It’s not for routing user requests to your app - it's for routing traffic through security appliances like firewalls, intrusion prevention systems, etc.&lt;/p&gt;

&lt;p&gt;Works on Layer 3 (Network Layer).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrates with third-party security appliances&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simplifies insertion of security services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Used in inline inspection of traffic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy once, scale across multiple VPCs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you want to add deep security inspection into your network flow. Great for enterprise-level setups.&lt;/p&gt;

&lt;p&gt;My thoughts: Not for everyone, but if you're building something large-scale or security-heavy, it's a must-have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oju6v7s0thuzy1qj2rg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oju6v7s0thuzy1qj2rg.png" alt="Visualizes traffic flowing through virtual appliances (like firewalls) before reaching EC2 instances." width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrapping Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s my take on AWS Load Balancers! I tried to keep it as simple and as "me" as possible.&lt;/p&gt;

&lt;p&gt;If you're just starting in AWS, don't worry if this feels like too much. Save this, come back to it when you actually use them - trust me, it all starts to make sense once you do it hands-on.&lt;/p&gt;

&lt;p&gt;If you liked this breakdown or learned something new, drop a like or comment. I'm learning and sharing as I go - let's connect and grow together!&lt;/p&gt;

&lt;h1&gt;
  #aws #cloud #loadbalancer #devops #learningbydoing #community #challenge
&lt;/h1&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Understanding Networking Architecture (AWS Basics From My Learning)</title>
      <dc:creator>Ankit Raj</dc:creator>
      <pubDate>Wed, 30 Jul 2025 07:46:59 +0000</pubDate>
      <link>https://dev.to/rajankit2295/understanding-networking-architecture-aws-basics-from-my-learning-577j</link>
      <guid>https://dev.to/rajankit2295/understanding-networking-architecture-aws-basics-from-my-learning-577j</guid>
      <description>&lt;p&gt;Hey, again!&lt;/p&gt;

&lt;p&gt;In my AWS learning path, I recently studied this Networking structure that shows how EC2 instances are protected using firewalls and how traffic flows in and out of a VPC through route tables, routers, and gateways. It's a layered concept but once it clicks, it's super logical.&lt;/p&gt;

&lt;p&gt;So in this blog, I'm gonna break it down part-by-part just like how I understood it — simple and straight.&lt;/p&gt;

&lt;p&gt;Here’s the image I followed while learning:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkm0v92kymle2qzewojg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkm0v92kymle2qzewojg.png" alt="A high-level AWS VPC diagram showing EC2 instances within subnets, protected by Security Groups and NACLs, with traffic flow managed by Route Tables, a central Router, Internet Gateway, and Virtual Private Gateway." width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's understand everything step-by-step:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💻 &lt;strong&gt;EC2 Instance (Inside the Subnet):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we talk about firewalls, let's first understand what we're protecting.&lt;/p&gt;

&lt;p&gt;In AWS, your application usually runs on EC2 instances — virtual servers in the cloud. These instances live inside a subnet, which is part of your VPC.&lt;/p&gt;

&lt;p&gt;Each EC2 can host your websites, APIs, backend systems, etc. And because they are exposed to networks (maybe even the internet), they need security — which is where firewalls come in.&lt;/p&gt;

&lt;p&gt;So first comes your instance, then the Security Group (like armor around it), and the NACL (like the gate of the whole area).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧱 Security Group (Inside the Subnet)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s start from where the EC2 instance lives. Inside a subnet, you attach Security Groups to your instances. Think of them like the first line of defense or the bodyguard for each EC2 instance.&lt;/p&gt;

&lt;p&gt;What is a Security Group?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s basically a virtual firewall that controls inbound (incoming) and outbound (outgoing) traffic at the instance level. Unlike NACLs, Security Groups are stateful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stateful? What's that?&lt;/p&gt;

&lt;p&gt;It means if your security group allows inbound traffic on port 22 (SSH), then the response (outbound) is automatically allowed back. You don’t have to write outbound rules separately for it.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You create a security group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You add inbound rules (e.g., allow port 80 from anywhere).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You add outbound rules (e.g., allow all traffic to anywhere).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then attach this SG to your EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This controls exactly what can come in and what can go out from that specific instance.&lt;br&gt;
Example: Want to host a web app? Just allow inbound on port 80 (HTTP) and 443 (HTTPS).&lt;/p&gt;
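&lt;p&gt;Here's a toy Python model of what "stateful" means here. It's purely illustrative (AWS doesn't implement Security Groups this way); the IPs and ports are invented for the example:&lt;/p&gt;

```python
# Toy stateful filter: once an inbound flow is allowed, its return
# (outbound) traffic is permitted automatically, with no separate
# outbound rule. IPs/ports below are just example values.

allowed_inbound_ports = {22, 80, 443}
tracked_connections = set()  # (client, port) flows seen inbound

def inbound_allowed(client, port):
    if port in allowed_inbound_ports:
        tracked_connections.add((client, port))  # remember the flow
        return True
    return False

def outbound_allowed(client, port):
    # Return traffic for a tracked flow is allowed automatically.
    return (client, port) in tracked_connections

inbound_allowed("203.0.113.5", 22)                 # SSH comes in
print(outbound_allowed("203.0.113.5", 22))         # True - the reply is allowed
print(outbound_allowed("198.51.100.7", 22))        # False - no tracked flow
```

&lt;p&gt;Contrast this with the stateless NACL behaviour described in the next section, where the return path needs its own explicit rule.&lt;/p&gt;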

&lt;p&gt;&lt;strong&gt;🧱 Network ACL (Outside the Subnet)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Okay now jump one level above. Your subnet (which contains EC2s) is also protected — but this time, by Network ACL (NACL). You can imagine this as a neighborhood security gate, while Security Group is the security at your door.&lt;/p&gt;

&lt;p&gt;What is a NACL?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NACL stands for Network Access Control List. It’s a set of rules that allow or deny traffic at the subnet level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stateless? What does that mean?&lt;/p&gt;

&lt;p&gt;NACLs are stateless, meaning if you allow inbound traffic, you also have to allow the corresponding outbound traffic separately. Nothing automatic here.&lt;/p&gt;

&lt;p&gt;NACL Rule Types:&lt;/p&gt;

&lt;p&gt;Inbound Rules: Control what traffic can come into the subnet.&lt;/p&gt;

&lt;p&gt;Outbound Rules: Control what traffic can go out of the subnet.&lt;/p&gt;

&lt;p&gt;Rules are evaluated in order (from lowest to highest rule number).&lt;/p&gt;

&lt;p&gt;First match wins; the rest of the rules are ignored.&lt;/p&gt;

&lt;p&gt;Fun fact: If you don’t want a certain IP range hitting your subnet at all, block it in NACL.&lt;br&gt;
So if someone somehow bypasses SG (which they shouldn’t), NACL acts like an extra shield. It's mostly used for high-level access restrictions or blacklisting specific IPs.&lt;/p&gt;
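&lt;p&gt;The ordered, first-match-wins evaluation can be sketched in a few lines of Python. The rule numbers and CIDR ranges below are invented for the example, not a recommended configuration:&lt;/p&gt;

```python
# Toy NACL evaluator: rules are checked in ascending rule-number
# order, and the first matching rule decides (ALLOW or DENY).
import ipaddress

rules = [
    (100, "10.0.0.0/8", "DENY"),   # block this range entirely
    (200, "0.0.0.0/0", "ALLOW"),   # allow everything else
]

def evaluate(src_ip):
    addr = ipaddress.ip_address(src_ip)
    for _num, cidr, action in sorted(rules):  # lowest rule number first
        if addr in ipaddress.ip_network(cidr):
            return action  # first match wins; later rules are ignored
    return "DENY"  # implicit deny when nothing matches

print(evaluate("10.1.2.3"))     # DENY  (hits rule 100 first)
print(evaluate("203.0.113.9"))  # ALLOW (falls through to rule 200)
```

&lt;p&gt;Note that if the two rule numbers were swapped, the blocked range would match the ALLOW rule first - ordering really does change the outcome.&lt;/p&gt;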

&lt;p&gt;&lt;strong&gt;📦 Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every subnet in a VPC is associated with a Route Table. This defines where traffic should go once it enters the subnet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What’s inside the Route Table?&lt;/p&gt;

&lt;p&gt;Destination: IP range (like 0.0.0.0/0, or a VPC CIDR)&lt;/p&gt;

&lt;p&gt;Target: Where to send it (like IGW, NAT, local, etc.)&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;When an EC2 instance tries to send a request to the internet, the route table checks the destination IP.&lt;/p&gt;

&lt;p&gt;If the destination matches a rule, traffic is sent to the target.&lt;/p&gt;

&lt;p&gt;Example: 0.0.0.0/0 -&amp;gt; IGW means all internet-bound traffic is forwarded to Internet Gateway.&lt;br&gt;
Think of route table as the GPS inside your VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔁 Router (Between Internet Gateway / Virtual Private Gateway)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The router in the VPC isn’t a separate service — it’s automatically managed by AWS. It connects your subnets, gateways, and route tables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What it does:&lt;/p&gt;

&lt;p&gt;Connects different subnets within the VPC (east-west traffic)&lt;/p&gt;

&lt;p&gt;Handles traffic from subnets to outside VPC (north-south traffic)&lt;/p&gt;

&lt;p&gt;Works with route tables to know where to forward traffic&lt;/p&gt;

&lt;p&gt;Think of this router like a smart traffic controller. It doesn’t ask questions. It checks route tables and forwards traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🌐 Internet Gateway (IGW)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is what allows your instances to communicate with the public internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why is it attached to the VPC?&lt;/p&gt;

&lt;p&gt;Because it gives the entire VPC access to the internet, but only subnets that are associated with a route table pointing to IGW can use it.&lt;/p&gt;

&lt;p&gt;What it does:&lt;/p&gt;

&lt;p&gt;Accepts traffic from the internet&lt;/p&gt;

&lt;p&gt;Forwards it into your VPC based on the route table&lt;/p&gt;

&lt;p&gt;Also lets your EC2 instances send responses back to the internet&lt;/p&gt;

&lt;p&gt;Without IGW, your EC2 can’t even do a simple apt update if it’s in a public subnet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔒 Virtual Private Gateway (VGW)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is used when your AWS VPC needs to connect with your on-premise network — like from your office data center.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;p&gt;You set up a VPN connection from your on-prem setup to VGW&lt;/p&gt;

&lt;p&gt;VGW connects to your VPC router&lt;/p&gt;

&lt;p&gt;Route table entry like Destination: 10.0.0.0/16 -&amp;gt; Target: VGW&lt;/p&gt;

&lt;p&gt;This is more like a private door that connects your VPC to a known internal network. Not public internet.&lt;/p&gt;

&lt;p&gt;Helpful when companies have hybrid cloud setups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧠 Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At first, it all feels like too much — security group here, NACL there, route tables, gateways, blah blah. But when you draw it and walk through the flow, it all makes sense.&lt;/p&gt;

&lt;p&gt;Want to protect individual instances? Use Security Groups.&lt;/p&gt;

&lt;p&gt;Want to control subnet-level traffic? Use NACLs.&lt;/p&gt;

&lt;p&gt;Want to route traffic in/out? Route Table + Router.&lt;/p&gt;

&lt;p&gt;Internet access? Use IGW.&lt;/p&gt;

&lt;p&gt;Private corporate access? Use VGW.&lt;/p&gt;

&lt;p&gt;All these together make VPC powerful and flexible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hope this helped someone who’s just starting to make sense of all this.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/ankit-raj-b20a0a305/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/ankit-raj-b20a0a305/&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/rajankit2295" rel="noopener noreferrer"&gt;https://github.com/rajankit2295&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  #aws #cloud #ec2 #vpc #devops #architecture #networking #community #blog #learning
&lt;/h1&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>networking</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AWS EC2 High-Level Architecture Explained (From My Learning Journey)</title>
      <dc:creator>Ankit Raj</dc:creator>
      <pubDate>Fri, 25 Jul 2025 07:30:50 +0000</pubDate>
      <link>https://dev.to/rajankit2295/aws-ec2-high-level-architecture-explained-from-my-learning-journey-415h</link>
      <guid>https://dev.to/rajankit2295/aws-ec2-high-level-architecture-explained-from-my-learning-journey-415h</guid>
      <description>&lt;p&gt;Hey everyone!&lt;br&gt;
So this is my first blog and I'm kinda excited to post this. While going through my AWS Learning Journey, I came across this really cool EC2-based architecture. I decided to understand it properly and write about it in my own words so I can learn better — and maybe help some of you too!&lt;/p&gt;

&lt;p&gt;🌟 What this blog is about&lt;/p&gt;

&lt;p&gt;I'm gonna explain a 3-tier EC2-based AWS architecture I recently studied. It includes:&lt;/p&gt;

&lt;p&gt;Public and private subnets&lt;br&gt;
Two types of load balancers&lt;br&gt;
Auto Scaling&lt;br&gt;
Amazon Aurora DB (with read replica)&lt;/p&gt;

&lt;p&gt;I've added a diagram below (yes, it's the same one I followed):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff161vkamxqb3nira4fd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff161vkamxqb3nira4fd.png" alt="A 3-tier EC2-based AWS architecture with public and private subnets, load balancers, Auto Scaling, and Amazon Aurora DB." width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This architecture shows a public-facing Application Load Balancer that forwards client traffic to the web tier in the public subnet. The EC2 instances in the web tier sit within Auto Scaling groups to allow dynamic scaling. The web tier forwards all API calls to an internal-facing Application Load Balancer, which then routes the traffic to the application tier in the private subnet.&lt;/p&gt;

&lt;p&gt;🏠 Web Tier (Public Subnet)&lt;/p&gt;

&lt;p&gt;This is the top layer of the architecture. Here's what happens:&lt;/p&gt;

&lt;p&gt;An Application Load Balancer is exposed to the internet via an Internet Gateway.&lt;/p&gt;

&lt;p&gt;It receives all the incoming traffic (users accessing the app from browser/mobile).&lt;/p&gt;

&lt;p&gt;That traffic is forwarded to EC2 instances inside Auto Scaling Groups (ASG).&lt;/p&gt;

&lt;p&gt;These EC2s sit inside public subnets across two Availability Zones (AZs).&lt;/p&gt;

&lt;p&gt;Why use ASG? Because traffic might increase or decrease, so your infra should scale accordingly.&lt;/p&gt;

&lt;p&gt;🚀 Application Tier (Private Subnet)&lt;/p&gt;

&lt;p&gt;Now, the frontend EC2s (in public subnet) don't directly talk to the DB.&lt;br&gt;
They forward all API calls or backend logic processing to another Application Load Balancer.&lt;/p&gt;

&lt;p&gt;But this one is internal-facing only — not exposed to the internet. It's private.&lt;/p&gt;

&lt;p&gt;This ALB then routes traffic to another set of EC2 instances in private subnets.&lt;/p&gt;

&lt;p&gt;These EC2s handle all backend logic, business operations, etc.&lt;/p&gt;

&lt;p&gt;Again, distributed across multiple AZs for high availability.&lt;/p&gt;
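&lt;p&gt;The request path through the two load balancers can be traced with a tiny Python sketch. All the names here are illustrative labels I made up, not real AWS resources:&lt;/p&gt;

```python
# Toy trace of the request flow: internet -> public ALB -> web tier,
# and API calls continue on to the internal ALB and the app tier.

def handle_request(path):
    hops = ["internet-gateway", "public-alb", "web-tier-ec2"]
    if path.startswith("/api"):
        # the web tier forwards API calls to the internal-facing ALB,
        # which routes them to app-tier instances in the private subnet
        hops += ["internal-alb", "app-tier-ec2"]
    return hops

print(handle_request("/api/orders"))  # goes all the way to the app tier
print(handle_request("/index.html"))  # served from the web tier
```

&lt;p&gt;The key point the sketch captures: nothing from the internet ever talks to the app tier directly - it always goes through the web tier and the internal ALB.&lt;/p&gt;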

&lt;p&gt;📂 Database Tier (Private Subnet)&lt;/p&gt;

&lt;p&gt;Now comes the final part: storage &amp;amp; data.&lt;/p&gt;

&lt;p&gt;I'm using Amazon Aurora here as the primary database.&lt;/p&gt;

&lt;p&gt;It's in a private subnet (for obvious security reasons).&lt;/p&gt;

&lt;p&gt;And to reduce read load, there's an Aurora read replica in another AZ.&lt;/p&gt;

&lt;p&gt;The app tier EC2s talk to the primary DB for writes.&lt;/p&gt;

&lt;p&gt;And the read replica helps with all read-heavy operations (like fetching data for reports or dashboards).&lt;/p&gt;

&lt;p&gt;👥 Why this setup is cool&lt;/p&gt;

&lt;p&gt;Public and private subnet separation &lt;/p&gt;

&lt;p&gt;Load balancing on both ends (frontend &amp;amp; app layer)&lt;/p&gt;

&lt;p&gt;Auto Scaling for high traffic handling&lt;/p&gt;

&lt;p&gt;High availability with multi-AZ deployment&lt;/p&gt;

&lt;p&gt;Secure DB in private subnet&lt;/p&gt;

&lt;p&gt;📊 What could be added/improved?&lt;/p&gt;

&lt;p&gt;If I had to expand or apply this further, I’d probably add:&lt;/p&gt;

&lt;p&gt;WAF (Web Application Firewall) for extra protection&lt;/p&gt;

&lt;p&gt;CloudFront for CDN&lt;/p&gt;

&lt;p&gt;NAT Gateway if private subnet EC2s need internet&lt;/p&gt;

&lt;p&gt;CloudWatch for monitoring and alerts&lt;/p&gt;

&lt;p&gt;🙌 Thoughts&lt;/p&gt;

&lt;p&gt;I studied it during my AWS learning Journey and decided to break it down in my own words. Doing this helped me understand each part clearly.&lt;/p&gt;

&lt;p&gt;If you’re just getting into AWS or preparing for a certification, try to draw this architecture by yourself. Once you get the flow, everything else becomes easier.&lt;/p&gt;

&lt;p&gt;Thanks for reading! If you liked this or have any suggestions, feel free to comment ❤️....&lt;/p&gt;

&lt;p&gt;PS: I’m just a student, learning cloud step by step and building my portfolio. Let’s connect!!&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/ankit-raj-b20a0a305/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/ankit-raj-b20a0a305/&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/rajankit2295" rel="noopener noreferrer"&gt;https://github.com/rajankit2295&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  #aws #cloud #ec2 #aurora #devops #architecture #community #blog
&lt;/h1&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
