Modern teams rely on CI/CD to keep development fast, safe, and consistent. In this guide, we’ll walk through a real GitLab CI/CD pipeline built for a Node.js application—the Solar System project. You’ll see how unit tests, coverage analysis, container builds, service containers, and Docker registry pushes all fit into one reliable workflow.
This is essentially the story of how the team moved from “just trying CI/CD” to running a polished, production-ready pipeline.
🌌 Project Overview: The Solar System App
The application is a simple Node.js + Express service backed by MongoDB.
Key components include:
app.js — Express server, MongoDB connection, and endpoints
app-test.js — Mocha test suite
Dockerfile — Build instructions
deployment.yaml & service.yaml — Kubernetes manifests
Scripts for tests & coverage:
"scripts": {
"start": "node app.js",
"test": "mocha app-test.js --timeout 10000 --reporter mocha-junit-reporter --exit",
"coverage": "nyc --reporter cobertura --reporter lcov --reporter text --reporter json-summary mocha app-test.js"
}
Running locally is straightforward:
npm install
npm test
npm run coverage
npm start
When brought into GitLab CI, things get interesting.
🧪 Project Status Meeting 1
Understanding Pipeline Requirements
The team outlined a plan of nine tasks, ranging from analyzing the project structure to preparing CI jobs for testing, coverage, and security scanning.
Key CI Goals
Run unit tests on PRs and branch pushes
Generate coverage reports
Build & scan Docker containers
Here’s the initial GitHub Actions-style flow (later translated into GitLab):
jobs:
  test:
    steps:
      - npm ci
      - npm test
  coverage:
    needs: test
    steps:
      - npm run coverage
  scan:
    needs: coverage
    steps:
      - docker build…
      - trivy scan…
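In GitLab terms the same flow becomes stages plus needs: for ordering. A bare skeleton of that translation (job bodies omitted; the scan job name is a placeholder, the other names match the jobs shown later):

stages:
  - test
  - containerization

unit_testing:
  stage: test

code_coverage:
  stage: test
  needs: ["unit_testing"]

container_scanning:
  stage: containerization
  needs: ["code_coverage"]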
Everything looked perfect… until the first CI run.
🧯 Project Status Meeting 2
⚠️ The Problem: CI Jobs Were Hitting the Production Database
The first CI run triggered alerts from the production MongoDB cluster.
Why?
Because the pipeline was using real production credentials:
variables:
  MONGO_URI: "mongodb+srv://prod.example.net/superData"
  MONGO_USERNAME: superuser
Running tests against your production DB is the quickest way to ruin your day.
What went wrong?
Test jobs ran multiple Mongo connections per pipeline
Coverage jobs also connected
Feature branches + merge requests = many parallel connections
Result: production DB slowdown and intermittent failures
What the team learned
Always isolate CI dependencies.
Your pipeline should treat production like a locked vault.
🛠️ Fixing It: Using GitLab CI Services for Test Databases
GitLab lets you spin up Docker containers as services next to your job container.
Perfect for databases.
Here’s the improved pattern:
services:
  - name: mongo:4.4
    alias: mongo
Then the job connects to:
mongodb://mongo:27017/testdb
Full reusable template:
.default_test_template: &test_template
  stage: test
  image: node:17-alpine3.14
  services:
    - name: mongo:4.4
      alias: mongo
  before_script:
    - npm install
    # wait until the service container accepts connections
    # (assumes a mongo shell client is available in the job image)
    - |
      until mongo --host mongo \
        -u testuser -p testpass \
        --eval "db.adminCommand('ping')" > /dev/null 2>&1; do
        echo "Waiting for MongoDB…"
        sleep 2
      done
  variables:
    MONGO_URI: "mongodb://testuser:testpass@mongo:27017/testdb"
Now both jobs reuse the template:
unit_testing:
  <<: *test_template
  script:
    - npm test

code_coverage:
  <<: *test_template
  script:
    - npm run coverage
Result?
Production DB traffic dropped by 40%
CI jobs became repeatable, isolated, and stable
📊 Adding Coverage Reports to Merge Requests
Developers want fast feedback inside their MR.
GitLab supports this through artifacts:reports.
Example:
unit_testing:
  artifacts:
    reports:
      junit: test-results.xml

code_coverage:
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
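The Cobertura artifact powers the line-level highlighting in the diff; the percentage shown in the MR widget is parsed from the job log with the coverage keyword. A sketch assuming nyc prints a "Statements : NN%" summary line (adjust the regex to your actual reporter output):

code_coverage:
  coverage: '/Statements\s*:\s*(\d+\.?\d*)%/'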
GitLab now shows:
How many tests passed
Which tests failed
Coverage percentage
Highlighted line-level coverage
⚡ Improving Speed: Caching node_modules
Pipeline speed matters. Installing 300–400 npm packages per job isn’t fun.
The team added this cache:
.default_cache: &default_cache
  key:
    files:
      - package-lock.json
    prefix: node_modules
  paths:
    - node_modules
  policy: pull-push
With a cold cache, dependency installation takes ~7 seconds.
With a warm cache, it finishes in ~1 second.
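Defining the anchor does nothing by itself; each job still needs to reference it. A sketch of wiring it into the test template:

.default_test_template: &test_template
  stage: test
  image: node:17-alpine3.14
  cache: *default_cache
  # …services, before_script, variables as shown earlier…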
🏗️ Building Docker Images in CI
Next step: build containers directly inside GitLab.
docker_build:
  stage: containerization
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    - docker build -t $DOCKER_USERNAME/solar-system:$IMAGE_VERSION .
    - docker save … -o image/solar-system-image.tar
  artifacts:
    paths:
      - image
This produces a reusable solar-system-image.tar.
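One runner-dependent detail: with docker:dind as a service, the Docker CLI in the job still has to be pointed at the daemon. If the build fails with a "Cannot connect to the Docker daemon" error, the usual fix is the TLS variables from GitLab's Docker-in-Docker documentation:

docker_build:
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"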
🧪 Testing the Docker Image
Before pushing, make sure the container actually works.
docker_test:
  stage: containerization
  needs: [docker_build]
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    - docker load -i image/solar-system-image.tar
    - docker run -d -p 3000:3000 --name app $DOCKER_USERNAME/solar-system:$IMAGE_VERSION
    - IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' app)
    - docker run --rm alpine wget -qO- http://$IP:3000/live | grep -q "live"
If /live responds correctly, the image is healthy.
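One caveat with the script above: docker run -d returns before Express is listening, so the wget probe can race the server startup. A small retry loop (a sketch, not part of the original pipeline) in place of the single check makes the smoke test less flaky:

    # replaces the final wget line of the script above
    - |
      ok=0
      for i in 1 2 3 4 5; do
        if docker run --rm alpine wget -qO- "http://$IP:3000/live" | grep -q "live"; then
          ok=1; break
        fi
        echo "App not ready yet, retrying…"
        sleep 3
      done
      test "$ok" -eq 1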
☁️ Publishing to Docker Hub & GitLab Container Registry
Docker Hub push:
docker_push:
  stage: containerization
  needs: [docker_build, docker_test]
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    - docker load -i image/solar-system-image.tar
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
    - docker push $DOCKER_USERNAME/solar-system:$IMAGE_VERSION
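Docker itself warns that passing the password with -p is insecure; the same login can read it from stdin instead:

    - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin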
GitLab Container Registry push:
publish_gitlab_container_registry:
  stage: containerization
  needs: [docker_build, docker_test]
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  script:
    - docker load -i image/solar-system-image.tar
    - docker login $CI_REGISTRY -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
    - docker tag $DOCKER_USERNAME/solar-system:$IMAGE_VERSION $CI_REGISTRY_IMAGE/ss-image:$IMAGE_VERSION
    - docker push $CI_REGISTRY_IMAGE/ss-image:$IMAGE_VERSION
GitLab registry now stores your versioned images securely.
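In most teams, not every feature branch should publish images. A rules: block on the push jobs (the exact condition depends on your branching model) restricts them to the default branch:

docker_push:
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'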
📈 Project Status Meeting 3
Impact of the Refactored Pipeline
After isolating DB dependencies and optimizing the pipeline:
Build & test jobs are 25% faster
DB load reduced by 40%
CI failures drastically reduced
Pipeline is now ready for full CD into Kubernetes
The team is now planning:
Kubernetes deployment automation (a first sketch follows below)
Helm chart templating
Canary & blue/green rollouts
Automated rollbacks
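Since deployment.yaml and service.yaml already live in the repo, the first deployment job could be as small as a kubectl apply. This is only a sketch of the planned work: the job name, the deploy stage, and the bitnami/kubectl image are assumptions, and cluster access still has to come from a GitLab agent or a KUBECONFIG variable.

k8s_deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]   # override the kubectl entrypoint so the runner can start a shell
  script:
    # assumes cluster credentials are already configured for this job
    - kubectl apply -f deployment.yaml -f service.yaml

Helm templating and canary or blue/green rollouts would then replace the raw manifests.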
🏁 Conclusion
This journey highlights what real-world CI/CD adoption looks like:
Start simple
Fix the bottlenecks
Introduce isolation
Add observability
Build containers
Test them
Publish them
Automate deployments
You now have all the building blocks to create your own production-grade GitLab CI/CD pipeline for Node.js applications.