Yash Patil
Building a Robust CI/CD Pipeline using Jenkins for a MERN Stack Application

As modern software development emphasizes agility and reliability, CI/CD (Continuous Integration/Continuous Deployment) pipelines have become indispensable. In this blog, we’ll delve into the implementation of a CI/CD pipeline for a MERN stack application (React.js frontend, Node.js/Express backend, and MongoDB database) that we have already worked on (check out this blog to learn about the application), highlighting semantic versioning, why versioning is needed in modern applications, Docker image versioning, automated deployment, and more. Let’s walk through each stage of the pipeline with detailed explanations and examples.

First, let’s talk about what CI/CD pipelines are and how the pipeline we built fits into the picture.

What is a CI/CD Pipeline?

A CI/CD pipeline automates the integration and deployment of code, ensuring that changes are validated, built, tested, and deployed efficiently. It minimizes manual intervention, reduces errors, and accelerates delivery. In our project, the pipeline not only automates these processes but also increments the application version for each build.

This pipeline embodies a real-world DevOps best practice: Continuous Integration and Continuous Deployment (CI/CD). Let’s break it down in the context of industry-standard workflows and explain why each step is necessary.

Versioning in Real-World Practices

Semantic Versioning

Semantic Versioning is the most common standard for versioning applications in the industry. It follows the format:

MAJOR.MINOR.PATCH

  • PATCH: Incremented for bug fixes or minor improvements (e.g., 1.0.1 → 1.0.2).

  • MINOR: Incremented for new features that are backward-compatible (e.g., 1.0.0 → 1.1.0).

  • MAJOR: Incremented for changes that are backward-incompatible (e.g., 1.0.0 → 2.0.0).

Is It Necessary to Increment the Version on Every Commit or Pipeline Build?

  • No, in many cases, a version bump only happens when a release is being prepared for deployment to production or for public consumption.

  • Yes, if every commit directly impacts a deliverable (e.g., in a Continuous Deployment (CD) environment where every commit leads to production changes).

When to Increment Versions in the Real World

Version Bumps Are Typically Tied to Releases

  • Teams often work in branches (e.g., feature, development, or staging) and only merge into main when a feature or fix is complete.

  • A version bump occurs when a deployable release is ready, not for every commit.

Scenarios Where Version Increments Happen:

  1. Feature Releases: A feature is complete, tested, and merged into `main`. The version is incremented (e.g., `1.1.0`) before releasing it to production.

  2. Bug Fixes: A critical bug is fixed, and the version's **PATCH** number is incremented before deployment.

  3. Hotfixes: Emergency fixes often lead to quick **PATCH** bumps (e.g., `1.2.1` → `1.2.2`).

  4. Performance Improvements: Minor performance updates might warrant a **PATCH** bump, or a **MINOR** bump if they involve significant new optimizations.

Commits That Do Not Trigger Version Increments:

  • Work-in-progress changes.

  • Internal refactoring without user-facing impacts.

  • Development or experimental changes not yet merged to main.

In practice, version increments should be automated. Our pipeline simulates a continuous deployment workflow in which every build increments the patch version, signaling a minor update. When every build is directly deliverable to a production environment, a version increment is necessary each time.
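To make this concrete, here is a minimal, hypothetical sketch of how a declarative Jenkins pipeline stage could gate a version bump so it only runs for release builds. The `when { branch 'main' }` condition and the stage name are illustrative assumptions, not part of our actual pipeline (which, as noted, bumps on every build):

stage('Increment Version') {
    // Hypothetical gate: only bump the version when building the
    // main branch, i.e. when a deployable release is being prepared
    when { branch 'main' }
    steps {
        // Bump the PATCH segment without creating a Git tag
        sh 'npm version patch --no-git-tag-version'
    }
}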

For this pipeline, we are using Jenkins, a widely popular automation server known for its flexibility and extensive plugin ecosystem. Jenkins excels in orchestrating tasks like building, testing, and deploying applications. Its robust community support, ease of configuration, and compatibility with a wide range of tools make it a go-to choice for CI/CD workflows.

To set up Jenkins, I have created a Jenkins container using the following Docker command:

docker run -p 8080:8080 -p 50000:50000 -d \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    --group-add $(stat -c '%g' /var/run/docker.sock) \
    jenkins/jenkins:lts

Explanation of the Arguments:

  1. -p 8080:8080: Maps Jenkins’ web interface to port 8080 on the host machine.

  2. -p 50000:50000: Maps the Jenkins agent communication port to the host machine.

  3. -v jenkins_home:/var/jenkins_home: Persists Jenkins data on the host for durability.

  4. -v /var/run/docker.sock:/var/run/docker.sock: Grants Jenkins access to the Docker daemon, enabling it to manage Docker containers.

  5. -v $(which docker):/usr/bin/docker: Provides Jenkins access to the Docker CLI.

  6. --group-add $(stat -c '%g' /var/run/docker.sock): Adds the Jenkins user to the Docker group for permission to run Docker commands.

  7. jenkins/jenkins:lts: Specifies the Jenkins Long-Term Support (LTS) image.

This setup ensures Jenkins is fully equipped to handle Docker-based workflows, making it an integral part of our CI/CD process.

Also, make sure the Jenkins container has Node and NPM installed.


With that, our Jenkins container is up and running and accessible at localhost:8080.


I have opted for a simple pipeline job for our project and written the pipeline script in a Jenkinsfile.

Check out the project repository, which contains the application codebase, the pipeline script in the Jenkinsfile, our docker-compose file, and a bash script used for deploying the application (we’ll get to that later).

STAGE ONE : Increment application versions

Our frontend and backend services use NPM as their package manager and dependency handler. Every package manager keeps track of a version in its main build file; the package.json file we write for our application has a version field that denotes the current application version. This is also where build information, dependencies, and startup scripts are listed.


Build tools have commands to increment the versions of the applications. In this pipeline, every build automatically increments the patch version, simulating the scenario of minor updates.


npm version patch --no-git-tag-version: Updates the patch version in package.json without creating a Git tag.

I have used jq, a lightweight and flexible command-line utility, to parse, filter, transform, and process JSON data.

In our case, it parses the updated version from the package.json files of both the frontend and backend services and stores them in the environment variables env.FRONTEND_VERSION and env.BACKEND_VERSION respectively, each suffixed with BUILD_NUMBER, which Jenkins provides as an environment variable out of the box.
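As a rough sketch (the exact Jenkinsfile in the repository may differ slightly, and the stage name is mine), this stage looks something like:

stage('Increment Versions') {
    steps {
        script {
            dir('mern/frontend') {
                // Bump the patch version without creating a Git tag
                sh 'npm version patch --no-git-tag-version'
                // Parse the new version out of package.json with jq
                // and append the Jenkins BUILD_NUMBER
                def v = sh(script: 'jq -r .version package.json', returnStdout: true).trim()
                env.FRONTEND_VERSION = "${v}-${env.BUILD_NUMBER}"
            }
            dir('mern/backend') {
                sh 'npm version patch --no-git-tag-version'
                def v = sh(script: 'jq -r .version package.json', returnStdout: true).trim()
                env.BACKEND_VERSION = "${v}-${env.BUILD_NUMBER}"
            }
        }
    }
}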

STAGE TWO : Building Docker images and pushing them to Docker Hub

Building Docker Images for Every New Version

When changes are committed, rebuilding the Docker images ensures that the application is packaged with the latest code and dependencies.

Why is this important in industry?

  1. Immutable Deployments: Docker images are snapshots of your application and its environment. By creating a new image for every version, you guarantee the environment for that version is consistent, regardless of where or when it is deployed.

  2. Eliminates "Works on My Machine" Issues: All developers, testers, and production environments use the same image, avoiding discrepancies between local setups and production.

  3. Supports Rollbacks: If something breaks, you can redeploy an older image/version without rebuilding the application.

Pushing Images to a Centralized Repository

By pushing images to Docker Hub (or another registry):

  1. Centralized Access: Teams and systems can pull the latest images without needing access to the source code.

  2. Supports Distributed Teams: Developers across different locations can pull the same image, ensuring consistency.

  3. Versioned History: The registry acts as a timeline of all your application versions, making it easy to trace or roll back.

In industry, container registries like Docker Hub, AWS ECR, or Azure Container Registry are used to store and manage these images.

We will tag our image with ${env.FRONTEND_VERSION} and ${env.BACKEND_VERSION} for precise versioning.


We have already created Dockerfiles for the frontend and backend services at the paths ./mern/frontend and ./mern/backend respectively. We build the images and tag them using our Docker Hub username, the image repository for the individual service, and the updated versions stored in ${env.FRONTEND_VERSION} and ${env.BACKEND_VERSION}.


We have created an environment variable to store the Docker Hub username so as not to hardcode it.

After the images are built, we need to push them to Docker Hub, but to do so we must log in to our Docker Hub account from within Jenkins. I created usernamePassword-type credentials in Jenkins.


These credentials are accessed in the pipeline script using the withCredentials() function. The --password-stdin flag avoids exposing sensitive data in the logs during docker login.
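Putting the pieces together, the build-and-push stage looks roughly like the sketch below. The credentials ID 'dockerhub-credentials' and the image repository names frontend and backend are assumptions for illustration, and DOCKERHUB_USERNAME is the environment variable mentioned above:

stage('Build and Push Images') {
    steps {
        script {
            // 'dockerhub-credentials' is a placeholder for whatever ID
            // the usernamePassword credentials were given in Jenkins
            withCredentials([usernamePassword(credentialsId: 'dockerhub-credentials',
                                              usernameVariable: 'DOCKER_USER',
                                              passwordVariable: 'DOCKER_PASS')]) {
                // Single-quoted so the shell (not Groovy) expands the secrets;
                // --password-stdin keeps the password out of the build log
                sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                sh "docker build -t ${DOCKERHUB_USERNAME}/frontend:${env.FRONTEND_VERSION} ./mern/frontend"
                sh "docker build -t ${DOCKERHUB_USERNAME}/backend:${env.BACKEND_VERSION} ./mern/backend"
                sh "docker push ${DOCKERHUB_USERNAME}/frontend:${env.FRONTEND_VERSION}"
                sh "docker push ${DOCKERHUB_USERNAME}/backend:${env.BACKEND_VERSION}"
            }
        }
    }
}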

STAGE THREE : Committing Version Updates to Git Repository

Why Commit Version Changes?

Now that we have built and pushed our images with the correct and latest version tags, committing the version bump ensures the repository reflects the current state of the application, keeping the development history consistent and traceable. This is crucial because:

  1. Maintains an Accurate State in the Repository: The `package.json` or equivalent file reflects the actual version of the deployed application.

  2. Avoids Conflicts: Without this step, multiple contributors could unknowingly use the same old version number, leading to conflicts.

  3. Collaboration: Other contributors always pull the latest version with updated dependencies, reducing confusion about which version is in production.


Again, for this stage I have set up GitHub access credentials so that the Jenkins user can commit the version bump and changes to the main branch.

  1. Git Config: Sets up Jenkins as the Git user.

  2. Remote URL Update: Includes the GitHub PAT (Personal Access Token) for secure authentication.

  3. Commit and Push: Tracks version changes and pushes them to the main branch.
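A sketch of this stage, with 'github-credentials' as a placeholder credentials ID, a placeholder Git identity, and <user>/<repo> standing in for the actual repository path:

stage('Commit Version Update') {
    steps {
        script {
            withCredentials([usernamePassword(credentialsId: 'github-credentials',
                                              usernameVariable: 'GIT_USER',
                                              passwordVariable: 'GIT_PAT')]) {
                // Identify the committer as the Jenkins user (placeholder identity)
                sh 'git config user.email "jenkins@example.com"'
                sh 'git config user.name "jenkins"'
                // Embed the PAT in the remote URL for authenticated pushes
                sh 'git remote set-url origin https://$GIT_USER:$GIT_PAT@github.com/<user>/<repo>.git'
                // Commit the bumped package.json files and push to main
                sh 'git add .'
                sh 'git commit -m "ci: version bump"'
                sh 'git push origin HEAD:main'
            }
        }
    }
}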

STAGE FOUR : Deploying the Application with the Latest Version to the Server

For deploying the application, I used an AWS EC2 instance, which acts as the deployment server in our pipeline workflow.

There are a few prerequisites:

  1. Add an SSH rule for the Jenkins server in the EC2 instance's security group.

  2. Add an inbound rule for port 5173, which is where we access our application.

  3. Docker and Docker Compose must be installed on the instance before the pipeline runs.

I have used the sshagent plugin to SSH into the EC2 instance using “SSH Username with private key” type credentials.


I have also written a startup script that is executed once the Jenkins user SSHes into the EC2 instance.


I am passing the ${env.FRONTEND_VERSION} and ${env.BACKEND_VERSION} variables as arguments to the script, which exports them as environment variables on the EC2 instance; the docker-compose file references these variables for its image tags.

  1. First, we copy the script and the docker-compose file to the deployment server (the EC2 instance).

  2. Then the pipeline SSHes into the EC2 instance and executes the script, as shown in the sketch below.
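A sketch of the deployment stage under these assumptions: 'ec2-ssh-key' as the credentials ID, ec2-user as the login user, EC2_HOST as an environment variable holding the instance address, and startup-script.sh / docker-compose.yaml as the file names:

stage('Deploy to EC2') {
    steps {
        // 'ec2-ssh-key' is a placeholder for the "SSH Username with
        // private key" credentials ID configured in Jenkins
        sshagent(['ec2-ssh-key']) {
            // Copy the startup script and compose file to the server
            sh "scp -o StrictHostKeyChecking=no startup-script.sh docker-compose.yaml ec2-user@${EC2_HOST}:~"
            // Execute the script remotely, passing the new image tags;
            // the script exports them for docker-compose to reference
            sh "ssh -o StrictHostKeyChecking=no ec2-user@${EC2_HOST} 'bash ~/startup-script.sh ${env.FRONTEND_VERSION} ${env.BACKEND_VERSION}'"
        }
    }
}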

We have opted for a pipeline job, to which we provide the Git repository to use, the credentials to access it, and the path to the pipeline script written in the Jenkinsfile.


PIPELINE EXECUTION

Now let’s execute the pipeline and look at the console output for each of its stages.

  1. It checks out the latest codebase from the repository and branch configured for the build.


  2. It executes the first stage of our script and increments the versions of the services.


  3. It builds the images based on the latest version and code changes and pushes them to Docker Hub.


  4. It commits the version updates to the Git repository.


  5. For the final stage of the pipeline, it starts an SSH agent for the Jenkins user.


  6. It copies the startup script and the docker-compose file to the deployment server.


  7. It SSHes into the server and executes the startup script, which starts up our application using docker-compose.


Our application is up and running on the deployment server, and the pipeline has executed successfully.


Finally, let’s access our application using the server’s public IP at port 5173.


Well that’s it, this CI/CD pipeline demonstrates a practical approach to automating the development and deployment lifecycle of a MERN stack application. From incrementing application versions and building Docker images to committing changes and deploying on an EC2 instance, every stage reflects a real-world workflow followed in the industry. While this example focuses on a specific setup, such as using Docker for containerization and Jenkins for pipeline orchestration, the foundational concepts remain the same. Depending on project requirements, additional stages like automated testing, security scans, or performance monitoring might be included, and deployment environments may vary—ranging from Kubernetes clusters to other cloud platforms. This flexibility and modularity make pipelines like this a cornerstone of modern DevOps practices.
