DEV Community

Ankit Raj
Building a Complete DevOps Pipeline: From Infrastructure to Deployment

Hi!

In this blog, I'll walk you through my journey of building a complete DevOps pipeline for a frontend application. This project covers everything from setting up Infrastructure on AWS to automatically deploying applications on Kubernetes using Jenkins.

What this project is about:

~ Infrastructure Provisioning using Terraform
~ Server Configuration with Ansible
~ Containerization with Docker
~ Kubernetes Deployment for orchestration
~ CI/CD Pipeline with Jenkins
~ Automated Deployment with GitHub webhooks

By the end of this project, every time I push code to GitHub, it automatically gets deployed to Kubernetes. Pretty cool, right? 😎

Architecture Overview

Here's how everything flows together in my project:

Code Push to GitHub
↓
GitHub Webhook Triggers Jenkins
↓
Jenkins Pipeline Starts
↓
Build Docker Image
↓
Deploy to Kubernetes
↓
Application Running!

The complete workflow:

1. Infrastructure Setup → Terraform provisions AWS EC2 instances

2. Configuration → Ansible installs and configures all required tools

3. Containerization → A Dockerfile packages the application into a container image

4. Kubernetes Deployment → K8s deployment and service files run the app

5. CI/CD Pipeline → A Jenkinsfile automates everything

6. Automation → A GitHub webhook triggers the pipeline on code changes

Let's dive into each step!

Step 1: Infrastructure Provisioning with Terraform

First things first - I needed servers to work with. Instead of manually creating EC2 instances every time, I used Terraform to automate this process.

Setting up Terraform

I started by launching a single EC2 instance from AWS console and installed Terraform on it. This became my "control center" for managing infrastructure.

# SSH into the EC2 instance and install Terraform

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform

Creating Terraform Configuration Files

Now came the interesting part - writing Terraform code to create infrastructure automatically!

1. Variables File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/variable.tf

  • This file defines all the variables I can customize - like region, instance types, and key pair names. Think of it as a settings file where I can change values without modifying the main code.

What it does:

~ Defines AWS region (I chose us-east-1)
~ Sets instance types for different servers
~ Specifies the SSH key pair for accessing servers
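The real file is linked above; as a rough idea of what a Terraform variables file looks like (the variable names and defaults here are illustrative, not copied from the repo):

```hcl
# Illustrative sketch of a variables file - not the repo's exact contents
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "instance_type" {
  description = "EC2 instance type for the servers"
  type        = string
  default     = "t2.medium"
}

variable "key_name" {
  description = "Name of an existing EC2 key pair for SSH access"
  type        = string
}
```

Values like these can then be overridden at plan time with `-var` flags or a `terraform.tfvars` file, instead of editing the main code.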

2. Main Configuration File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/main.tf

  • This is the heart of my infrastructure. It creates:

~ VPC with custom networking
~ Security Groups with proper firewall rules
~ Three EC2 instances: Jenkins Master, Jenkins Agent (with Kubernetes), and Ansible Server
~ All networking components like subnets, internet gateways, and route tables

How they work together:

Jenkins Master - Controls the CI/CD pipeline
Jenkins Agent - Runs Kubernetes and deploys applications
Ansible Server - Configures all other servers
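To give a taste of the HCL, here's an illustrative fragment for one of the instances (the AMI ID, resource names, and cross-references are placeholders; the real main.tf linked above also defines the VPC, subnet, internet gateway, route table, and security group that this fragment points at):

```hcl
# Illustrative fragment - not the repo's exact contents
resource "aws_instance" "jenkins_master" {
  ami                    = "ami-xxxxxxxx"            # an Ubuntu AMI for your region
  instance_type          = var.instance_type
  key_name               = var.key_name
  subnet_id              = aws_subnet.public.id      # defined elsewhere in main.tf
  vpc_security_group_ids = [aws_security_group.devops_sg.id]

  tags = {
    Name = "jenkins-master"
  }
}
```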

3. Outputs File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/terraform-files/outputs.tf

  • After Terraform creates everything, this file shows me the important information I need - like the public IP addresses of all servers. That's helpful for connecting to them later!

Deploying the Infrastructure

Time to make it all happen! Here are the commands I used:

# Initialize Terraform (downloads the required provider plugins)
terraform init

# See what Terraform plans to create
terraform plan

# Actually create the infrastructure
terraform apply

# Show the outputs (like public IPs) again at any time
terraform output

After running these commands, I got three new EC2 instances ready to use!

Above: Terraform output showing the creation of the EC2 instances.

Above: the three servers provisioned by Terraform, as shown in the AWS console.

Step 2: Configuration Management with Ansible

  • Now I had servers, but they were blank. I needed to install and configure all the required tools. Instead of manually SSH'ing into each server and running commands, I used Ansible to automate this and configure all of them.

# Connect to the Ansible server using its public IP
ssh -i your-key.pem ubuntu@ansible-server-ip

Creating Ansible Configuration Files

1. Inventory File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/ansible-files/inventory.ini

What is an inventory file?

  • Think of it as Ansible's address book! It tells Ansible which servers exist and how to connect to them. I listed all my servers here with their IP addresses and connection details.

What's inside:

~ Jenkins Master server details
~ Jenkins Agent server details
~ SSH connection information
~ Server groupings for easy management
~ The PEM key file used to reach all of them
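As a rough sketch, an inventory like this can look as follows (the group names, IPs, and key path are placeholders, not the repo's exact contents):

```ini
# inventory.ini - illustrative layout
[jenkins_master]
54.xx.xx.xx ansible_user=ubuntu ansible_ssh_private_key_file=~/my-key.pem

[jenkins_agent]
54.yy.yy.yy ansible_user=ubuntu ansible_ssh_private_key_file=~/my-key.pem
```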

2. Playbook File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/ansible-files/playbook.yaml

  • This is where the magic happens! The playbook is like a recipe that tells Ansible exactly what to install and configure on each server.

What this playbook does:

~ Jenkins Master: Installs Jenkins, Java, Docker, and sets up initial configuration
~ Jenkins Agent: Installs Docker, Kubernetes (k3s), kubectl, and connects to Jenkins Master
~ Creates users and permissions for everything to work smoothly
~ Configures security settings and networking

Why it's awesome:

  • Instead of manually running 50+ commands on each server, I just run one Ansible command and it configures everything perfectly!
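To show the shape of it, here's a trimmed, illustrative play (the host group, package list, and task names are placeholders; the real playbook linked above does much more):

```yaml
# Illustrative play - not the repo's exact contents
- name: Configure the Jenkins master
  hosts: jenkins_master
  become: true
  tasks:
    - name: Install Java and Docker
      apt:
        name: [openjdk-17-jre, docker.io]
        state: present
        update_cache: true

    - name: Ensure Docker is running and enabled on boot
      service:
        name: docker
        state: started
        enabled: true
```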

Running Ansible Configuration

# Test connection to all servers
ansible all -i inventory.ini -m ping

# Run the playbook to configure everything
ansible-playbook -i inventory.ini playbook.yaml

After this step, all my servers were fully configured and ready for action!

Above: the Ansible ping results showing whether the connection to each server succeeded or failed.

Above: the playbook configuring all the required tools on the Jenkins-Master server.

Above: the playbook configuring all the required tools on the Jenkins-Agent server.

Step 3: Kubernetes Deployment Files

# Connect to Jenkins Agent server
ssh -i your-key.pem ubuntu@jenkins-agent-ip

Creating Kubernetes Configuration Files

1. Deployment File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/k8s/deployment.yaml

What this file does:

~ Tells Kubernetes how to run my frontend application
~ Defines how many copies (replicas) to run
~ Sets up health checks to ensure the app is working
~ Configures resource limits (CPU and memory)
~ Exposes the application on port 80

Some Cool features:

~ Auto-healing: If a pod crashes, Kubernetes automatically creates a new one
~ Rolling updates: Updates happen without downtime
~ Health monitoring: Kubernetes checks if the app is healthy
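For illustration, a deployment with those features looks roughly like this (the names, image, and limits are placeholders, not the repo's exact values):

```yaml
# Illustrative sketch - not the repo's exact contents
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smart-frontened-web-app
spec:
  replicas: 2                      # run two copies of the app
  selector:
    matchLabels:
      app: smart-frontened
  template:
    metadata:
      labels:
        app: smart-frontened
    spec:
      containers:
        - name: web
          image: smart-frontened:latest
          ports:
            - containerPort: 80
          readinessProbe:          # health check before receiving traffic
            httpGet:
              path: /
              port: 80
          resources:
            limits:
              cpu: 250m
              memory: 256Mi
```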

2. Service File

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/k8s/service.yaml

What this file does:

~ Creates a stable way to access my application
~ Acts like a load balancer between multiple pods
~ Exposes the app on NodePort 30080 so I can access it from outside

Why I need this:

  • Pods in Kubernetes can come and go, but the Service ensures there's always a stable endpoint to reach my application.
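Roughly, such a Service looks like this (the names are placeholders; the NodePort matches the 30080 mentioned above, and the selector must match the Deployment's pod labels):

```yaml
# Illustrative sketch - not the repo's exact contents
apiVersion: v1
kind: Service
metadata:
  name: smart-frontened-service
spec:
  type: NodePort
  selector:
    app: smart-frontened        # must match the pod labels in the Deployment
  ports:
    - port: 80                  # port inside the cluster
      targetPort: 80            # container port
      nodePort: 30080           # port exposed on the node itself
```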

Pushing Files to GitHub

After creating these files, I pushed them to my GitHub repo where my frontend application code lives. This way, Jenkins can access everything it needs from one place.

git add k8s/
git commit -m "Add Kubernetes deployment and service files"
git push origin main

Testing Manual Deployment

Before automating everything, I tested manual deployment on Jenkins-Agent Server to make sure everything works:

# Create a namespace for my application (namespace names must be lowercase)
kubectl create namespace smart-frontened

# Apply the deployment
kubectl apply -f k8s/deployment.yaml -n smart-frontened

# Apply the service
kubectl apply -f k8s/service.yaml -n smart-frontened

# Check if pods are running
kubectl get pods -n smart-frontened

# Check service status
kubectl get svc -n smart-frontened

# Get detailed pod information
kubectl describe pods -n smart-frontened

# Check application logs
kubectl logs -f deployment/smart-frontened-web-app -n smart-frontened

# Test if the application is accessible
curl http://jenkins-agent-ip:30080

Everything worked perfectly! Now time to automate this process!

Above: the Kubernetes deployment in action.

Step 4: Jenkins CI/CD Pipeline

Now for the most interesting part - automating everything with Jenkins Pipeline! I SSH'd into the Jenkins Master server to set up the pipeline.

# Connect to Jenkins Master
ssh -i your-key.pem ubuntu@jenkins-master-ip

Creating Pipeline Files

1. Dockerfile

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/Dockerfile

What this Dockerfile does:

~ Packages my frontend application into a container
~ Uses Nginx as the web server to serve static files
~ Creates a lightweight, portable image that runs anywhere
~ Exposes port 80 for web traffic

Why containerization rocks:

~ Consistency: Runs the same way everywhere (dev, test, production)
~ Portability: Can move between different environments easily
~ Isolation: App runs in its own environment without conflicts
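A Dockerfile for a static frontend served by Nginx can be as short as this (illustrative, not the repo's exact file):

```dockerfile
# Illustrative sketch - a static frontend served by Nginx
FROM nginx:alpine

# Copy the frontend files into Nginx's default web root
COPY . /usr/share/nginx/html

# Nginx serves on port 80 by default
EXPOSE 80
```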

2. Jenkinsfile

https://github.com/rajankit2295/smart-Frontened-DevOps-Project/blob/main/Jenkinsfile

  • This is the core of my automation. The Jenkinsfile defines the entire pipeline that runs automatically.

What this pipeline does:

~ Checkout Stage: Gets the latest code from GitHub
~ Build Stage: Creates a Docker image of my application
~ Deploy Stage: Deploys the application to Kubernetes
~ Verification Stage: Checks if deployment was successful
~ Cleanup Stage: Removes old Docker images to save space

The magic behind it:

~ Automatic triggering: Runs whenever I push code to GitHub
~ Complete automation: From code to running application without manual intervention
~ Error handling: If something fails, it stops and tells me what went wrong
~ Rollback capability: Can easily go back to previous versions
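A declarative pipeline with those five stages looks roughly like this (the agent label, image name, namespace, and deployment name are placeholders, not the repo's exact Jenkinsfile):

```groovy
// Illustrative sketch - not the repo's exact Jenkinsfile
pipeline {
    agent { label 'k8s-agent' }   // the Jenkins agent that runs Kubernetes

    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh 'docker build -t smart-frontened:$BUILD_NUMBER .' }
        }
        stage('Deploy') {
            steps { sh 'kubectl apply -f k8s/ -n smart-frontened' }
        }
        stage('Verify') {
            steps { sh 'kubectl rollout status deployment/smart-frontened-web-app -n smart-frontened' }
        }
        stage('Cleanup') {
            steps { sh 'docker image prune -f' }
        }
    }
}
```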

Pushing Pipeline Files to GitHub

git add Dockerfile Jenkinsfile
git commit -m "Add Docker and Jenkins pipeline configuration file"  
git push origin main

Now Jenkins has access to everything it needs from my GitHub repo!

Above: the pipeline stages, showing how the script runs through each stage.

Step 5: Automated Pipeline Trigger

The final piece of the project - making everything automatic! I set up a GitHub webhook so that every time I push code, Jenkins automatically starts the deployment process.

What is a GitHub Webhook?

  • Think of it as a notification system. When I push code to GitHub, it immediately tells Jenkins "Hey, there's new code here!" and Jenkins goes into action.

How to set it up:

~ In the GitHub repo, go to Settings → Webhooks → Add webhook
~ Set the Payload URL to http://<jenkins-master-ip>:8080/github-webhook/ and the Content type to application/json
~ Select "Just the push event" and save
~ In the Jenkins job configuration, tick "GitHub hook trigger for GITScm polling" under Build Triggers

Testing the Automation

Now comes the moment of truth! Let me make a small change and push it:

# Make some changes to the frontend application (directly on GitHub or in the terminal)

# Commit and push the changes
git add .
git commit -m "Update frontend with new content"
git push origin main

What happens automatically:

~ GitHub receives my push and triggers the webhook
~ The Jenkins pipeline starts automatically
~ Jenkins builds a new Docker image with my changes and removes unused ones
~ The new version is deployed to Kubernetes, replacing the old one
~ The application is live with my updates

No manual work needed - it's all automatic, dude!

Above: my GitHub webhook delivery log.

Pipeline Execution Flow

Here's exactly how everything flows together when I push code:

Developer pushes code to GitHub
↓
GitHub webhook notifies Jenkins
↓
Jenkins pipeline starts automatically
↓
Stage 1: Checkout - Gets latest code from GitHub
↓
Stage 2: Build - Creates Docker image from Dockerfile
↓
Stage 3: Deploy - Uses kubectl to deploy to Kubernetes
↓
Stage 4: Verify - Checks if deployment was successful
↓
Stage 5: Cleanup - Removes old Docker images
↓
Application is live with new changes!

Total time: usually 3-5 minutes from code push to live deployment!

Above: the build pipeline triggered by the GitHub webhook after a commit.

See My Web Page 👇

Above: my web app's landing page after a successful deployment.

What I Achieved 💫

Let me break down what each tool brought to my project:

From Terraform (Infrastructure Provisioning):

No more manual server creation - Everything is code-defined
Consistent environments - Same setup every time
Easy scaling - Can create more servers with one command
Cost control - Can destroy everything when not needed
Version control - Infrastructure changes are tracked in Git

From Ansible (Configuration Management):

No more manual software installation - All automated
Consistent server configuration - Same setup across all servers
Time saving - What took hours now takes minutes
Error reduction - No more human mistakes in configuration
Documentation - Configuration is self-documenting

From Kubernetes (Container Orchestration):

Auto-healing - If app crashes, it automatically restarts
Load balancing - Traffic distributed across multiple app instances
Zero-downtime deployments - Updates happen without service interruption
Resource management - Efficient use of server resources
Scalability - Can easily increase/decrease app instances

From Docker (Containerization):

Consistency - App runs the same everywhere
Portability - Easy to move between environments
Isolation - App doesn't interfere with other applications
Fast deployment - Quick to start and stop
Resource efficiency - Lightweight compared to virtual machines

From Jenkins (CI/CD Pipeline):

Complete automation - From code to production without manual steps
Fast feedback - Know immediately if something breaks
Consistent deployments - Same process every time
Rollback capability - Easy to go back to previous versions
Time saving - What took 30 minutes now takes 5 minutes

Challenges I Faced (And How I Solved Them)

Let me share some real challenges I encountered and how I overcame them:

Challenge 1: Terraform State File Issues

Problem: My Terraform state file got corrupted and I couldn't make changes
Solution: Learned to use terraform import to restore state, and to always back up state files

Challenge 2: Ansible Connection Problems

Problem: Ansible couldn't connect to the servers due to SSH key issues
Solution: Made sure the SSH keys were properly configured, and used the -vvv flag for debugging

Challenge 3: Docker Permission Denied 😀

Problem: Jenkins couldn't build Docker images due to permission issues
Solution: Added the Jenkins user to the docker group (`sudo usermod -aG docker jenkins`) and restarted the Jenkins service (`sudo systemctl restart jenkins`)

Challenge 4: Kubernetes Pod Crashes

Problem: Pods kept crashing with an "ImagePullBackOff" error
Solution: Fixed the Docker image naming and ensured the images were properly built

Challenge 5: Jenkins Pipeline Failures

Problem: The pipeline failed at the kubectl commands
Solution: Configured a proper kubeconfig for the Jenkins user (on k3s, the cluster config lives at /etc/rancher/k3s/k3s.yaml) and installed kubectl on the Jenkins agent

Challenge 6: Application Not Accessible

Problem: Could access the pods but not the application from a browser
Solution: Configured the NodePort service correctly and opened the security group ports

Challenge 7: GitHub Webhook Not Triggering

Problem: Code pushes weren't triggering the Jenkins pipeline
Solution: Fixed the webhook URL format and ensured Jenkins was reachable from the internet

Key Learning: I learned from every error (even if I'll probably forget half of it again! 🥲). The more problems I solved, the better I understood how everything works together.

Conclusion

Building this DevOps pipeline was an incredible journey!

What started as:

~ Manual deployments taking 30+ minutes with a high chance of errors

Became:

~ Fully automated deployments taking 3-5 minutes with zero manual intervention

The transformation: 😉

Time saved: 90% reduction in deployment time
Error reduction: Eliminated human (especially mine) errors in the deployment process
Faster releases: Can deploy multiple times per day if needed
Scalability: Easy to scale infrastructure and applications
Reliability: Consistent, repeatable deployment process

This project taught me more than I can fit in one post (my Wi-Fi went down constantly and really tested my patience). 🤧

My GitHub Project Repo 👉 https://github.com/rajankit2295/smart-Frontened-DevOps-Project/tree/main

#DevOps #AWS #Cloud #LearningByDoing #Code #Community #Challenge #Web #JustDoIt
