SUNATH KHADIKAR

Building an End-to-End CI/CD Pipeline with Spring Boot, Jenkins, Kubernetes & Security Scans

Why I Built This Project

I’ve worked with CI/CD concepts and tools before, but there was always a gap between knowing the tools and building a complete system.

Most tutorials stop at:

“Pipeline executed successfully”

But real CI/CD systems involve:

  • Versioning strategies
  • Webhooks
  • Code quality gates
  • Security scanning
  • Kubernetes rollouts
  • Failures, restarts, and a lot of debugging

So I decided to build a true end-to-end CI/CD pipeline, starting from a git push and ending with a fully deployed, secure, observable Spring Boot application running on Kubernetes.

This post is an engineering story: what I built, what broke, how I fixed it (glad I did), and what I learned.


What Was Built

A production-like CI/CD pipeline with:

  • Git push → automatic Jenkins trigger
  • Maven build & tests
  • Application versioning with /release endpoint
  • Docker image build (immutable)
  • SonarQube code quality gates
  • Trivy image security scanning
  • Kubernetes deployment & rollout
  • Email notifications
  • MongoDB-backed application persistence

Everything runs locally — but behaves like production.


Technology Stack

Layer            Technology
-----            ----------
Application      Spring Boot
Versioning       Maven + /release API
SCM              GitHub
CI/CD            Jenkins (Pipeline as Code)
Code Quality     SonarQube
Exposure         Ngrok
Containers       Docker
Security         Trivy
Orchestration    Kubernetes
Database         MongoDB
Notifications    Jenkins Mailer

High-Level Architecture

GitHub (push)
  ↓
Webhook (Ngrok)
  ↓
Jenkins Pipeline
  ↓
Maven Build & Tests
  ↓
SonarQube Quality Gate
  ↓
Docker Image Build
  ↓
Trivy Image Scan
  ↓
Kubernetes Deployment
  ↓
Email Notification

Design and Architecture

The project was structured into distinct phases to incrementally build complexity, moving from a basic code setup to an event-driven, secured CI/CD pipeline.

Each phase intentionally introduced a new real-world concern such as versioning, security, observability, or deployment reliability.


The Core Pipeline Flow

1. Code Push

A developer pushes code to GitHub, which automatically triggers the Jenkins pipeline via a webhook exposed using Ngrok.

This eliminates manual pipeline execution and establishes a true event-driven CI/CD workflow.
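A minimal sketch of the exposure step, assuming Jenkins on port 8080 and the GitHub plugin's default push endpoint (the tunnel URL below is a placeholder, not a real tunnel):

```shell
# Sketch: expose local Jenkins (assumed on port 8080) with Ngrok, then point
# the GitHub webhook at the Jenkins GitHub plugin's push endpoint.
#   ngrok http 8080        # prints a public forwarding URL
TUNNEL="https://example.ngrok-free.app"   # placeholder for the real tunnel URL
WEBHOOK_URL="${TUNNEL}/github-webhook/"   # GitHub plugin's push endpoint
echo "$WEBHOOK_URL"
```

In the GitHub repository settings, the webhook uses this payload URL with content type `application/json` and the push event.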


2. Continuous Integration

Jenkins clones the repository and executes:

mvn clean test

This ensures:

  • The code compiles
  • Unit tests pass
  • The build is in a deployable state before proceeding further

3. Static Code Analysis

The codebase is analyzed using SonarQube.

  • Jenkins sends the analysis report to SonarQube
  • The pipeline waits for the Quality Gate result
  • The build fails automatically if quality gates are not met

This enforces engineering standards instead of relying on human judgment.
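Wired into a Jenkinsfile, this gate typically uses the SonarQube Scanner plugin's steps; a sketch, assuming a configured server named `sonar` (server name and timeout are assumptions):

```groovy
// Sketch: analysis plus a blocking quality gate
stage('Quality Gate') {
    steps {
        withSonarQubeEnv('sonar') {        // injects server URL and token
            sh 'mvn sonar:sonar'
        }
        timeout(time: 10, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true   // fail build on gate failure
        }
    }
}
```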


4. Multi-Stage Docker Build

A multi-stage Dockerfile is used to build the application image.

Why multi-stage?

  • The build stage contains Maven and build dependencies
  • The final runtime stage contains only the JRE and the JAR
  • Results in a smaller, more secure production image
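The Dockerfile for this looks roughly like the sketch below; base image tags and paths are assumptions, not the project's actual file:

```dockerfile
# Build stage: Maven and a full JDK, discarded after the build
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: only the JRE and the built JAR survive
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```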

5. Security Scanning

Before pushing the image to the registry, Trivy scans it for vulnerabilities.

  • Critical and High vulnerabilities are detected
  • The pipeline can be configured to fail based on severity
  • This ensures vulnerable images never reach deployment
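The scan step can be sketched as a single command; the tag format here is an assumption, and the real pipeline may tune the severity list:

```shell
# Sketch: gate the build on scan severity (image tag is an assumption).
# --exit-code 1 makes Trivy return non-zero when HIGH/CRITICAL findings
# exist, which fails the Jenkins stage before the image is ever pushed.
trivy image --severity HIGH,CRITICAL --exit-code 1 example/app:build-42
```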

6. Continuous Deployment

Once all checks pass:

  • Jenkins pushes the versioned image to Docker Hub
  • The Kubernetes deployment is updated using kubectl
  • Rollout progress is monitored using:
kubectl rollout status deployment/<deployment-name>
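Putting the stages together, a minimal declarative Jenkinsfile skeleton could look like the following; stage bodies, image names, and the notification address are assumptions, not the project's actual pipeline:

```groovy
pipeline {
    agent any
    triggers { githubPush() }   // fire on the GitHub webhook delivery
    environment {
        IMAGE = "example/app:build-${BUILD_NUMBER}"   // immutable per-build tag
    }
    stages {
        stage('Build & Test')  { steps { sh 'mvn clean test' } }
        stage('Quality Gate')  { steps { echo 'SonarQube analysis + waitForQualityGate' } }
        stage('Docker Build')  { steps { sh "docker build -t ${IMAGE} ." } }
        stage('Security Scan') { steps { sh "trivy image --exit-code 1 --severity HIGH,CRITICAL ${IMAGE}" } }
        stage('Push & Deploy') {
            steps {
                sh "docker push ${IMAGE}"
                sh "kubectl rollout status deployment/app"
            }
        }
    }
    post {
        failure { mail to: 'team@example.com', subject: "Build ${BUILD_NUMBER} failed", body: 'Check Jenkins logs.' }
    }
}
```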

Key Challenges and Technical Solutions

1. Docker-in-Docker Permission Hurdle

Problem:

Jenkins runs inside a Docker container but needs to build Docker images.

This caused permission issues with /var/run/docker.sock.

Solution:

  • Mounted the host Docker socket into the Jenkins container
  • Added the docker group inside the Jenkins image
  • Added the jenkins user to that group

This allowed Jenkins to build images without running as root, maintaining better security practices.
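The relevant lines of the custom Jenkins image can be sketched like this; the group id 999 is an assumption and must match the host's docker group (check with `stat -c '%g' /var/run/docker.sock`):

```dockerfile
FROM jenkins/jenkins:lts
USER root
# Create a docker group matching the host socket's gid, then grant access
RUN groupadd -g 999 docker && usermod -aG docker jenkins
USER jenkins
```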


2. Kubernetes Connectivity from Containers

Problem:

Jenkins, running inside a container, could not access the Kubernetes API server using 127.0.0.1 on a Windows host.

Solution:

Used the following flag while running the Jenkins container:

--add-host=localhost:host-gateway

This mapped the container’s localhost to the host gateway, allowing Jenkins to communicate with the KIND Kubernetes cluster.
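The full container launch, combining the socket mount from the previous section with the host-gateway mapping, might look like this sketch; the image name and ports are assumptions:

```shell
# Sketch of the Jenkins container launch. The socket mount enables Docker
# builds; --add-host lets the container reach the KIND API server that is
# advertised on the host's localhost.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --add-host=localhost:host-gateway \
  my-jenkins-image
```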


3. Handling Rollouts in Kubernetes

Problem:

Kubernetes does not trigger a new rollout if the deployment still references the :latest image tag — even if the image has changed in the registry.

Solution:

Implemented dynamic image tagging:

  • Jenkins replaces an IMAGE_PLACEHOLDER in deployment.yaml
  • The placeholder is replaced with a unique version using ${BUILD_NUMBER}

This guarantees:

  • Every deployment triggers a rollout
  • Rollbacks are deterministic
  • Image versions are traceable
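The substitution itself can be sketched with `sed`; the tag format and the manifest contents here are assumptions, not the project's actual files:

```shell
# Sketch of the tagging step; tag format and manifest are assumptions.
BUILD_NUMBER=42                              # injected by Jenkins in a real run
TAG="example/app:build-${BUILD_NUMBER}"      # immutable, per-build image tag

# Manifest template that carries a placeholder instead of :latest
cat > deployment.yaml <<'EOF'
        image: IMAGE_PLACEHOLDER
EOF

# Replace the placeholder before the manifest is applied
sed -i "s|IMAGE_PLACEHOLDER|${TAG}|" deployment.yaml
grep 'image:' deployment.yaml
```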

What Differentiates This Project

Operational Visibility

A /release endpoint was implemented in the Spring Boot application.

It exposes:

  • Application version
  • Jenkins build number
  • Runtime environment metadata

This allows anyone to verify exactly which build is running inside a Kubernetes Pod, directly from the application.
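For example, the endpoint can be checked directly against a running Pod; the deployment name below is a placeholder, and the response shape (version, build number, environment metadata) is described above rather than shown:

```shell
# Sketch (deployment name is a placeholder): ask the running app what it is
kubectl port-forward deployment/app 8080:8080 &
sleep 2   # give the port-forward a moment to establish
curl -s http://localhost:8080/release
```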


Infrastructure as Code

Instead of relying on manual Jenkins setup:

  • A custom Jenkins Docker image was built
  • Pre-installed with:
    • Maven
    • Docker CLI
    • Kubectl

This ensures:

  • Reproducibility
  • Consistency across environments
  • Faster onboarding
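A sketch of such an image; package choices and the kubectl download URL follow upstream conventions, but a real build should pin exact versions:

```dockerfile
FROM jenkins/jenkins:lts
USER root
# Build tooling for the pipeline (versions unpinned here for brevity)
RUN apt-get update && apt-get install -y maven curl \
 && curl -L -o /usr/local/bin/kubectl \
      "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
 && chmod +x /usr/local/bin/kubectl
# The Docker CLI would additionally come from Docker's apt repository
USER jenkins
```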

Key Takeaways

Security Integration

Security must be shifted left.

Using Trivy helped identify critical CVEs, including Tomcat RCE vulnerabilities, before deployment.


Network Orchestration

The hardest problems were not tools — they were network boundaries.

Managing communication between:

  • Jenkins
  • SonarQube
  • Kubernetes
  • Docker

was the most valuable learning experience.


Automation Reliability

A pipeline is only real CI/CD when:

  • It is event-driven
  • It requires no manual triggers
  • Humans are removed from the deployment path

GitHub webhooks completed that transformation.


Final Thought

This project was not about tools —

it was about understanding how systems talk to each other, how failures surface, and how automation earns trust.

That understanding is what turns DevOps from scripts into engineering.

Feedback appreciated


Some screenshots from the project

  • Pipeline running on Jenkins, triggered on git push
  • Post-build image pushed to the Docker Hub repo
  • Deployment status
  • Pipeline executed successfully
  • SonarQube analysis
  • Trivy vulnerability scan
