As modern software development emphasizes agility and reliability, CI/CD (Continuous Integration/Continuous Deployment) pipelines have become indispensable. In this blog, we’ll delve into the implementation of a CI/CD pipeline for a MERN stack application (React.js frontend, Node.js/Express backend, and MongoDB database) which we have already worked on (check out this blog to know about the application), highlighting semantic versioning, why versioning is needed in modern-day applications, Docker image versioning, automated deployment, and more. Let’s walk through each stage of the pipeline with detailed explanations and examples.
First, let’s talk about what CI/CD pipelines are and how the pipeline I built fits into the picture.
What is a CI/CD Pipeline?
A CI/CD pipeline automates the integration and deployment of code, ensuring that changes are validated, built, tested, and deployed efficiently. It minimizes manual intervention, reduces errors, and accelerates delivery. In our project, the pipeline not only automates these processes but also increments the application version for each build.
This pipeline embodies a real-world DevOps best practice: Continuous Integration and Continuous Deployment (CI/CD). Let’s break this down in the context of industry-standard workflows and explain why each step is necessary.
Versioning in Real-World Practices
Semantic Versioning
Semantic Versioning is the most common standard for versioning applications in the industry. It follows the format:
MAJOR.MINOR.PATCH
- **PATCH**: Incremented for bug fixes or minor improvements (e.g., `1.0.1` → `1.0.2`).
- **MINOR**: Incremented for new features that are backward-compatible (e.g., `1.0.0` → `1.1.0`).
- **MAJOR**: Incremented for changes that are backward-incompatible (e.g., `1.0.0` → `2.0.0`).
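To make those rules concrete, here is a small shell sketch (illustrative only, not part of the project’s pipeline) that bumps each component of a `MAJOR.MINOR.PATCH` string:

```shell
#!/usr/bin/env bash
# bump_version <version> <major|minor|patch>
# Splits MAJOR.MINOR.PATCH with parameter expansion and increments the
# requested component, resetting lower-order components to zero as
# Semantic Versioning prescribes.
bump_version() {
  version=$1
  major=${version%%.*}    # text before the first dot
  rest=${version#*.}
  minor=${rest%%.*}       # text between the dots
  patch=${rest#*.}        # text after the second dot
  case "$2" in
    major) major=$((major + 1)); minor=0; patch=0 ;;
    minor) minor=$((minor + 1)); patch=0 ;;
    patch) patch=$((patch + 1)) ;;
  esac
  echo "${major}.${minor}.${patch}"
}

bump_version 1.0.1 patch   # 1.0.2
bump_version 1.0.0 minor   # 1.1.0
bump_version 1.0.0 major   # 2.0.0
```

In the actual pipeline, npm performs this arithmetic for us, as we’ll see in Stage One.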
Is It Necessary to Increment the Version for Every Commit or Pipeline Build?

- **No**, in many cases: a version bump only happens when a release is being prepared for deployment to production or for public consumption.
- **Yes**, if every commit directly impacts a deliverable (e.g., in a Continuous Deployment (CD) environment where every commit leads to production changes).
When to Increment Versions in the Real World
Version Bumps Are Typically Tied to Releases
Teams often work in branches (e.g., `feature`, `development`, or `staging`) and only merge into `main` when a feature or fix is complete. A version bump occurs when a deployable release is ready, not for every commit.
Scenarios Where Version Increments Happen:
- Feature Releases:
* A feature is complete, tested, and merged into `main`. The version is incremented (e.g., `1.1.0`) before releasing it to production.
- Bug Fixes:
* A critical bug is fixed, and the version's **PATCH** number is incremented before deployment.
- Hotfixes:
* Emergency fixes often lead to quick **PATCH** bumps (e.g., `1.2.1` → `1.2.2`).
- Performance Improvements:
* Minor performance updates might warrant a **PATCH** bump, or a **MINOR** bump if they involve significant new optimizations.
Commits That Do Not Trigger Version Increments:
- Work-in-progress changes.
- Internal refactoring without user-facing impacts.
- Development or experimental changes not yet merged to `main`.
In practice, incrementing versions should be done automatically. Our pipeline simulates a continuous deployment workflow: every build increments the patch version, signaling a minor update. When every build is directly deliverable to a production environment, a version increment is necessary each time.
For this pipeline, we are using Jenkins, a widely popular automation server known for its flexibility and extensive plugin ecosystem. Jenkins excels in orchestrating tasks like building, testing, and deploying applications. Its robust community support, ease of configuration, and compatibility with a wide range of tools make it a go-to choice for CI/CD workflows.
To set up Jenkins, I have created a Jenkins container using the following Docker command:
```shell
docker run -p 8080:8080 -p 50000:50000 -d \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  --group-add $(stat -c '%g' /var/run/docker.sock) \
  jenkins/jenkins:lts
```
Explanation of the Arguments:
- `-p 8080:8080`: Maps Jenkins’ web interface to port 8080 on the host machine.
- `-p 50000:50000`: Maps the Jenkins agent communication port to the host machine.
- `-v jenkins_home:/var/jenkins_home`: Persists Jenkins data on the host for durability.
- `-v /var/run/docker.sock:/var/run/docker.sock`: Grants Jenkins access to the Docker daemon, enabling it to manage Docker containers.
- `-v $(which docker):/usr/bin/docker`: Provides Jenkins access to the Docker CLI.
- `--group-add $(stat -c '%g' /var/run/docker.sock)`: Adds the Jenkins user to the Docker group for permission to run Docker commands.
- `jenkins/jenkins:lts`: Specifies the Jenkins Long-Term Support (LTS) image.
This setup ensures Jenkins is fully equipped to handle Docker-based workflows, making it an integral part of our CI/CD process.
Also make sure that the Jenkins container has Node.js and npm installed in it.
As you can see, our Jenkins container is up and running and accessible at `localhost:8080`:
I have opted to use a simple pipeline job for our project and written a pipeline script in the Jenkinsfile.
Check out the project repository, which contains the application codebase, the pipeline script in the Jenkinsfile, our docker-compose file, and a bash script used to deploy the application (we’ll get to that later).
STAGE ONE : Increment application versions
Our frontend and backend services use npm as their package manager and dependency handler. Every package manager keeps track of a version in its main build file: the `package.json` file we write for our application has a `version` field which denotes the current application version. This is also where build information, dependencies, and startup scripts are listed.
Build tools have commands to increment the versions of the applications. In this pipeline, every build automatically increments the patch version, simulating the scenario of minor updates.
`npm version patch --no-git-tag-version`: updates the patch version in `package.json` without creating a Git tag.
I have used jq, a lightweight and flexible command-line utility, to parse, filter, transform, and process JSON data. In our case it parses the updated version from the `package.json` files of both the frontend and backend services and stores them in the environment variables `env.FRONTEND_VERSION` and `env.BACKEND_VERSION` respectively, suffixed with `BUILD_NUMBER`, an environment variable the Jenkins ecosystem provides out of the box.
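As a rough sketch (the version and build number here are made-up values for illustration, not the project’s actual ones), the jq parsing and tag composition might look like this in shell:

```shell
# Work in a throwaway directory; in the real pipeline package.json
# already exists in the checked-out workspace.
cd "$(mktemp -d)"
printf '{ "name": "frontend", "version": "1.0.3" }\n' > package.json

# jq -r .version extracts the bare version string from package.json.
VERSION=$(jq -r .version package.json)

# Jenkins exposes BUILD_NUMBER out of the box; we fake it here.
BUILD_NUMBER=42

# The image tag combines the app version and the Jenkins build number.
FRONTEND_VERSION="${VERSION}-${BUILD_NUMBER}"
echo "$FRONTEND_VERSION"   # 1.0.3-42
```

The same two commands run once more against the backend’s `package.json` to populate `BACKEND_VERSION`.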
STAGE TWO : Building Docker Images and pushing them to Docker Hub.
Building Docker Images for Every New Version
When changes are committed, rebuilding the Docker images ensures that the application is packaged with the latest code and dependencies.
Why is this important in industry?
- Immutable Deployments:
* Docker images are snapshots of your application and its environment. By creating a new image for every version, you guarantee the environment for that version is consistent, regardless of where or when it is deployed.
- Eliminates "Works on My Machine" Issues:
* All developers, testers, and production environments use the same image, avoiding discrepancies between local setups and production.
- Supports Rollbacks:
* If something breaks, you can redeploy an older image/version without rebuilding the application.
Pushing Images to a Centralized Repository
By pushing images to Docker Hub (or another registry):
- Centralized Access:
* Teams and systems can pull the latest images without needing access to the source code.
- Supports Distributed Teams:
* Developers across different locations can pull the same image, ensuring consistency.
- Versioned History:
* The registry acts as a timeline of all your application versions, making it easy to trace or rollback.
In industry, container registries like Docker Hub, AWS ECR, or Azure Container Registry are used to store and manage these images.
We will tag our images with `${env.FRONTEND_VERSION}` and `${env.BACKEND_VERSION}` for precise versioning.
We have already created Dockerfiles for the frontend and backend services at `./mern/frontend` and `./mern/backend` respectively. We build the images and tag them using our Docker Hub username, the image repository for the individual service, and the updated versions stored in `${env.FRONTEND_VERSION}` and `${env.BACKEND_VERSION}`. We have also created an environment variable to store the Docker Hub username so as not to hardcode it.
After the images are built we need to push them to Docker Hub, but to do so we need to log in to our Docker Hub account from the Jenkins environment. I created `usernamePassword`-type credentials in Jenkins and accessed them in the pipeline script using the `withCredentials()` function. The `--password-stdin` flag avoids exposing sensitive data in the logs during `docker login`.
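A hedged sketch of the build-and-push commands, assuming a Docker Hub username of `mydockerhubuser` and a repository named `mern-frontend` (both placeholders, not the project’s actual names); the Docker commands themselves are commented out since they require a running Docker daemon:

```shell
# Placeholder values standing in for the pipeline's environment variables.
DOCKER_USER="mydockerhubuser"
FRONTEND_VERSION="1.0.3-42"

# Fully qualified image reference: <user>/<repository>:<version-build>
FRONTEND_IMAGE="${DOCKER_USER}/mern-frontend:${FRONTEND_VERSION}"
echo "$FRONTEND_IMAGE"

# The actual build/login/push sequence (shape only; needs a Docker daemon):
# docker build -t "$FRONTEND_IMAGE" ./mern/frontend
# echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
# docker push "$FRONTEND_IMAGE"
```

The backend image follows the same pattern with its own repository and `BACKEND_VERSION` tag.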
STAGE THREE : Committing Version Updates to Git Repository
Why Commit Version Changes?
Now that we have built and pushed our images with the correct and latest version tags, committing the version bump ensures the repository reflects the current state of the application, keeping the development history consistent and traceable. This is crucial because:
- Maintains Accurate State in the Repository:
* The `package.json` or equivalent file reflects the actual version of the deployed application.
- Avoids Conflicts:
* Without this step, multiple contributors could unknowingly use the same old version number, leading to conflicts.
- Collaboration:
* Other contributors always pull the latest version with updated dependencies, reducing confusion about which version is in production.
Again, for this stage I have set up GitHub access credentials so that the Jenkins user can commit the version bump and changes to the main branch.
- **Git config**: sets up Jenkins as the Git user.
- **Remote URL update**: includes the GitHub PAT (Personal Access Token) for secure authentication.
- **Commit and push**: tracks version changes and pushes them to the main branch.
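A minimal sketch of what the version-commit stage’s shell might look like, demonstrated in a throwaway repository; the remote URL is a placeholder, and the `[ci skip]` marker is a common convention (honored by many CI systems, via a plugin in Jenkins) to keep the bot commit from retriggering the build:

```shell
# Demonstrate in a throwaway repo so the snippet is self-contained;
# in the real pipeline these commands run in the checked-out workspace.
repo=$(mktemp -d) && cd "$repo" && git init -q .

printf '{ "version": "1.0.4" }\n' > package.json

# Identify Jenkins as the author of the version-bump commit.
git config user.name  "jenkins"
git config user.email "jenkins@example.com"

git add package.json
git commit -q -m "ci: bump version to 1.0.4 [ci skip]"
git log --oneline -1

# Authenticated push via a PAT embedded in the remote URL
# (placeholder URL; commented out so the snippet runs offline):
# git remote set-url origin "https://${GITHUB_TOKEN}@github.com/<user>/<repo>.git"
# git push origin HEAD:main
```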
STAGE FOUR : Deploying the Application with the Latest Version to the Server
As for deploying the application, I used an EC2 instance from AWS which acts as the deployment server in our pipeline workflow.

There are a few prerequisites:

- Add an SSH rule for the Jenkins server to the EC2 instance’s security group.
- Add an inbound rule for port 5173, which is where we will access our application.
- Docker and Docker Compose must be installed on the instance before the pipeline executes.
I have used the `sshagent` plugin to SSH into the EC2 instance using “SSH Username with private key”-type credentials. I have also used a startup script which is executed once the Jenkins user SSHes into the EC2 instance.
I am passing the `${env.FRONTEND_VERSION}` and `${env.BACKEND_VERSION}` variables as arguments to the script, which exports them as environment variables on the EC2 instance; the docker-compose file references these variables for its image tags.
First we copy the script and the docker-compose file to the deployment server (the EC2 instance), and we have instructed the pipeline to execute the script once it SSHes into the instance.
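The startup script can be sketched roughly as follows; the function name, image reference, and version values are assumptions for illustration, not the repository’s actual contents. It takes the two versions as arguments, exports them so docker-compose can substitute them into the image tags, and brings the stack up:

```shell
#!/usr/bin/env bash
# Sketch of the deployment startup script (names are assumptions).
# Usage: deploy <frontend-version> <backend-version>
deploy() {
  # Export the tags so docker-compose can substitute them into image
  # references declared in docker-compose.yaml, e.g.:
  #   image: mydockerhubuser/mern-frontend:${FRONTEND_VERSION}
  export FRONTEND_VERSION="$1"
  export BACKEND_VERSION="$2"
  echo "deploying frontend=${FRONTEND_VERSION} backend=${BACKEND_VERSION}"

  # Only attempt the real deploy where compose and its file are present:
  if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yaml ]; then
    docker-compose up -d
  fi
}

deploy "1.0.3-42" "2.1.0-42"
```

In the pipeline, this script is invoked over SSH on the EC2 instance with the two version variables passed as its arguments.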
We have opted for a Pipeline Job to which we provide the Git Repository to use, the credentials to access it and provide the path to the Pipeline Script written in the JenkinsFile.
PIPELINE EXECUTION
Now let’s execute the pipeline and observe the console output for each stage.

1. It checks out the latest codebase from the configured repository and branch.
2. It executes the first stage of our script and increments the versions of the services.
3. It builds the images from the latest version and code changes and pushes them to Docker Hub.
4. It commits the version updates to the Git repository.
5. For the final stage of the pipeline, it starts an SSH agent for the Jenkins user.
6. It copies the startup script and the docker-compose file to the deployment server.
7. It SSHes into the server and executes the startup script, which starts up our application using docker-compose.
As you can see, our application is up and running on the deployment server, and our pipeline has executed successfully.
And finally, let’s access our application using the server’s public IP at port 5173.
Well that’s it, this CI/CD pipeline demonstrates a practical approach to automating the development and deployment lifecycle of a MERN stack application. From incrementing application versions and building Docker images to committing changes and deploying on an EC2 instance, every stage reflects a real-world workflow followed in the industry. While this example focuses on a specific setup, such as using Docker for containerization and Jenkins for pipeline orchestration, the foundational concepts remain the same. Depending on project requirements, additional stages like automated testing, security scans, or performance monitoring might be included, and deployment environments may vary—ranging from Kubernetes clusters to other cloud platforms. This flexibility and modularity make pipelines like this a cornerstone of modern DevOps practices.