Srinivasaraju Tangella

How to Build a Real-World CI/CD Pipeline for Microservices with Jenkins and Kubernetes

High-level flow (single microservice)

1. Developer pushes code to Git (feature branch → PR → merge to main).

2. Jenkins Multibranch Pipeline triggers on PR and main.

3. Pipeline runs: checkout → unit tests → static analysis → build artifact → build Docker image → container image scan → push image to registry (tagged) → generate Helm/K8s manifests → deploy to Kubernetes staging → run integration/e2e tests → promote to production (manual or automated) → run smoke checks → monitor and alert.


Assumptions & prerequisites

Git: GitHub / GitLab / Bitbucket (any).

Jenkins: controller + agents with the required plugins (Git, Kubernetes plugin, Docker Pipeline, Credentials; Blue Ocean optional).

Container registry: Docker Hub, private registry, or Harbor/Artifactory.

Kubernetes cluster(s): staging and production (can be same cluster with namespaces).

Helm (recommended) or Kustomize for templating.

A build agent that can build Docker images (DinD), or Kaniko/Buildah running in-cluster for better CI security.

Secrets manager readiness (K8s secrets, sealed-secrets, Vault).

Monitoring & logging: Prometheus + Grafana + EFK/ELK or Loki.


Repo layout (recommended monorepo for many microservices OR one repo per service)

For one microservice repo:

/repo-root
  /src          # app source
  /test         # unit / integration tests
  Dockerfile
  Jenkinsfile
  /helm         # Helm chart for this service
  /k8s          # raw K8s manifests (optional)
  pom.xml / build.gradle (or package.json)
  .dockerignore
  README.md


Step-by-step practical pipeline (detailed bullets)

1) Git & PR strategy

Use feature branches + PRs; protect main branch (require PR approvals and passing checks).

Configure Jenkins multibranch pipeline to discover branches and PRs automatically.

2) Jenkins setup (high level)

Install plugins: Git, Pipeline, Multibranch Pipeline, Credentials, Docker Pipeline, Kubernetes, Blue Ocean, GitHub/GitLab webhooks plugin.

Configure credentials in Jenkins Credentials Store:

git-ssh-key or git-token

docker-registry-credentials (username/password or token)

kubeconfig-staging, kubeconfig-prod or use Jenkins Kubernetes plugin + serviceaccount

helm-repo-credentials if private

Set up Jenkins agents:

Option A: Docker agents with docker CLI (DinD) — simpler but security risk.

Option B (recommended): Use kaniko or buildah inside Kubernetes pod agents to build images without docker socket.
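The Option B agent can be declared as a pod template for the Jenkins Kubernetes plugin. A minimal sketch — the executor image tag and the secret name `regcred-docker-config` are assumptions to adapt:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug   # debug tag ships a shell, needed for Jenkins sh steps
      command: ["/busybox/cat"]
      tty: true
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker                # Kaniko reads registry auth from here
  volumes:
    - name: docker-config
      secret:
        secretName: regcred-docker-config           # assumed: registry auth stored as a dockerconfigjson secret
```

In the Jenkinsfile this can be referenced with `agent { kubernetes { yamlFile 'kaniko-pod.yaml' } }` instead of a static label.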

3) CI pipeline stages (Jenkinsfile, declarative)

checkout — fetch code and submodules.

setup — install dependencies (maven/npm/pip).

unitTest — run unit tests; archive reports (JUnit).

staticAnalysis — run linters (SonarQube, ESLint, SpotBugs).

build — compile/package artifact (jar/war/npm build).

containerBuild — build Docker image (with Kaniko or docker build), tag with git commit sha and semantic version.

imageScan — run Trivy or Clair for vulnerabilities. Fail or warn based on policy.

pushImage — push to registry.

deploy:staging — deploy using helm/kubectl apply.

integrationTests — run API/contract tests against staging.

promote — manual approval gate (for production).

deploy:prod — deploy to production with helm upgrade (canary/blue-green/rolling).

post-deploy — smoke tests and health checks; trigger monitoring dashboards.

cleanup — remove old images, prune.

I'll provide a Jenkinsfile example below.


Security & build best-practices

Use immutable image tags: myapp:${GIT_COMMIT}; also push myapp:1.2.3 semantic tag.

Do not store secrets in repo. Use Jenkins credentials + K8s secrets + sealed-secrets / Vault.

Run image scanning (Trivy) and fail pipeline on critical vulnerabilities.

Use non-root user inside container (modify Dockerfile).

Use minimal base images (distroless / alpine / slim).

Use reproducible builds (lock dependency versions).

Use RBAC for Jenkins and K8s service accounts.

Enable image signing (cosign) for production deploys.


Example files & commands

Example Dockerfile (Java Spring Boot, minimal)


FROM eclipse-temurin:17-jdk-jammy AS build
WORKDIR /app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
RUN chmod +x mvnw && ./mvnw -q -B dependency:go-offline

COPY src ./src
RUN ./mvnw -DskipTests package

FROM eclipse-temurin:17-jre-jammy
WORKDIR /app
ARG JAR_FILE=target/*.jar
COPY --from=build /app/${JAR_FILE} app.jar

# run as non-root
RUN useradd -m appuser
USER appuser
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app/app.jar"]

Example Jenkinsfile (declarative) — Kaniko build + Helm deploy

pipeline {
  agent { label 'jenkins-k8s-agent' } // agent configured to run Kaniko/Helm
  environment {
    REGISTRY = 'registry.example.com/myteam'
    IMAGE = "${REGISTRY}/myapp"
    GIT_COMMIT = "${env.GIT_COMMIT ?: sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()}"
    VERSION = "${env.BUILD_NUMBER}"
    IMAGE_TAG = "${IMAGE}:${GIT_COMMIT}"
    DOCKER_CRED_ID = 'docker-registry-credentials'
  }
  options { timeout(time: 60, unit: 'MINUTES') }
  stages {
    stage('Checkout') {
      steps { checkout scm }
    }
    stage('Unit tests') {
      steps {
        sh './mvnw -DskipTests=false test'
        junit 'target/surefire-reports/*.xml'
      }
    }
    stage('Build artifact') {
      steps { sh './mvnw -DskipTests package -DskipITs' }
      post { success { archiveArtifacts artifacts: 'target/*.jar', fingerprint: true } }
    }
    stage('Build & Push Image (Kaniko)') {
      steps {
        withCredentials([usernamePassword(credentialsId: env.DOCKER_CRED_ID, usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PSW')]) {
          // single-quoted so the shell (not Groovy) expands the credential variables
          sh '''
            mkdir -p /kaniko/.docker
            cat > /kaniko/.docker/config.json <<EOF
{"auths":{"registry.example.com":{"username":"$DOCKER_USER","password":"$DOCKER_PSW"}}}
EOF
            /kaniko/executor --dockerfile=Dockerfile --context=$WORKSPACE --destination=$IMAGE_TAG --destination=$IMAGE:latest
          '''
        }
      }
    }
    stage('Image scan') {
      steps {
        // --exit-code 1 fails the build on CRITICAL findings; append '|| true' to warn only
        sh "trivy image --exit-code 1 --severity CRITICAL ${IMAGE_TAG}"
      }
    }
    stage('Deploy to Staging') {
      steps {
        withCredentials([file(credentialsId: 'kubeconfig-staging', variable: 'KUBECONFIG')]) {
          sh """
            helm upgrade --install myapp ./helm \
              --namespace staging \
              --set image.repository=${IMAGE} \
              --set image.tag=${GIT_COMMIT} \
              --wait --timeout 5m
          """
        }
      }
    }
    stage('Integration tests') {
      steps {
        sh './scripts/run-integration-tests.sh'
      }
    }
    stage('Manual approval to Prod') {
      when { branch 'main' }
      steps { input message: 'Approve deployment to production?' }
    }
    stage('Deploy to Prod') {
      when { branch 'main' }
      steps {
        withCredentials([file(credentialsId: 'kubeconfig-prod', variable: 'KUBECONFIG')]) {
          sh "helm upgrade --install myapp ./helm --namespace production --set image.repository=${IMAGE} --set image.tag=${GIT_COMMIT} --wait --timeout 5m"
        }
      }
    }
  }
  post {
    success { echo 'Pipeline succeeded' }
    failure { echo 'Pipeline failed' }
  }
}
Note: Replace Kaniko path and executor according to your agent image. If you use Docker-in-Docker, change to docker build and docker push. If you use cloud build (GCP/Azure/AWS), use their build service.


Example Helm values.yaml (snippet)

image:
  repository: registry.example.com/myteam/myapp
  tag: latest
replicaCount: 2
service:
  type: ClusterIP
  port: 8080
resources:
  limits:
    cpu: 500m
    memory: 512Mi
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080


Example Kubernetes deployment (raw manifest snippet)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myteam/myapp:<tag>
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10


Build agents & image build strategies (practical choices)

Kaniko (recommended) — runs in Kubernetes pod; secure; no docker socket.

BuildKit — faster builds and caching, use with Docker Buildx.

Docker-in-Docker (DinD) — easier but risky (exposes daemon); use only in trusted env.

Cloud Build — GCP Cloud Build / AWS CodeBuild if using cloud.

Use a caching layer (registry or cache) for faster builds.


Deployment strategies (choose one)

Rolling update (default in K8s) — zero downtime if readiness probes are configured correctly.

Blue/Green — two sets of deployments and swap traffic via service or ingress.

Canary — incremental percentage rollout (use Argo Rollouts or Istio/Linkerd for traffic splitting).

A/B testing — feature flags + traffic splitting.

For production-critical services, use canary with automatic rollback if error rates spike.
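With Argo Rollouts, a canary means replacing the Deployment with a Rollout resource that carries a step plan. A minimal sketch of the strategy section — the weights and pause durations are assumptions to tune per service:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myteam/myapp:<tag>
          ports:
            - containerPort: 8080
  strategy:
    canary:
      steps:                      # shift traffic gradually, pausing between steps
        - setWeight: 20
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}
        - setWeight: 100
```

Pair the steps with an AnalysisTemplate on Prometheus error-rate metrics to get the automatic rollback mentioned above.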


Verification & observability (must-have)

Health checks: readiness & liveness endpoints.

Smoke tests: post-deploy HTTP checks; roll back on failure.

Metrics: instrument app (Prometheus metrics), export to Grafana dashboards.

Tracing: OpenTelemetry + Jaeger.

Logs: centralized logging (ELK/EFK or Loki).

Alerting: configure alerts for error rates, latency, SLO breaches.

Add a post-deploy stage in Jenkins that runs smoke tests and feeds results to Slack or MS Teams via webhook.
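The post-deploy smoke test can be a small shell function that polls the health endpoint and fails the stage if the service never comes up. A sketch — the `/actuator/health` path and retry count are assumptions:

```shell
# smoke_check <url> [retries] — poll a health endpoint; exit non-zero if it never returns 200
smoke_check() {
  url="$1"; retries="${2:-5}"
  i=1
  while [ "$i" -le "$retries" ]; do
    # capture only the HTTP status code; 000 on connection failure
    status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || echo 000)
    if [ "$status" = "200" ]; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "unhealthy"
  return 1
}

# usage after a staging deploy (hypothetical in-cluster service URL):
# smoke_check "http://myapp.staging.svc.cluster.local:8080/actuator/health" 10
```

A non-zero exit from this script fails the Jenkins stage, which is the trigger point for the rollback and Slack/Teams notification.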


Rollback & troubleshooting

kubectl rollout status deployment/myapp -n production — watch the rollout.

kubectl rollout undo deployment/myapp -n production — roll back to the previous ReplicaSet.

Because deploys go through Helm, helm rollback myapp <revision> -n production (list revisions with helm history myapp -n production) keeps the release history consistent.

Use kubectl describe pod and kubectl logs to debug.

Keep previous image tags and artifacts in the registry to re-deploy old versions.


CI for many microservices (monorepo vs multi-repo)

Multi-repo: each service has its own pipeline — simpler isolation.

Monorepo: use path-based Jenkins multibranch or scripted logic to detect changed services and run pipelines only for affected services. Use matrix builds for parallelism.

Example monorepo technique:

git diff --name-only $GIT_COMMIT^ $GIT_COMMIT | grep '^services/service-A/' to decide what to build.
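The git diff technique can be generalized into a helper that maps changed paths to service names, assuming services live under services/&lt;name&gt;/. A sketch (in CI the input would come from the git diff above; here it is demonstrated with a fixed file list):

```shell
# changed_services: read file paths on stdin, print each affected service name once
changed_services() {
  awk -F/ '/^services\//{print $2}' | sort -u
}

# demo with a fixed list of changed paths
printf '%s\n' \
  'services/service-A/src/Main.java' \
  'services/service-B/pom.xml' \
  'docs/README.md' | changed_services
# prints:
# service-A
# service-B
```

The pipeline can then loop over the output and trigger one build per affected service, or feed it into a Jenkins matrix axis.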


Testing strategy (practical)

Unit tests (fast) — run on every commit/PR.

Component tests — run on PR merge to main or nightly.

Integration tests — run in staging environment with dependent services or mocks.

Contract testing (Pact) — ensure backward-compatible API changes.

E2E tests — scheduled and on-demand; longer runtime.

Performance tests — run in a separate pipeline; do not block quick deploys.


Example commands (CI agent)

Build + push (if using Docker CLI):

echo "$DOCKER_PSW" | docker login -u "$DOCKER_USER" --password-stdin registry.example.com
docker build -t registry.example.com/myteam/myapp:${GIT_COMMIT} .
docker push registry.example.com/myteam/myapp:${GIT_COMMIT}

Helm deploy:

helm upgrade --install myapp ./helm \
--namespace production \
--set image.repository=registry.example.com/myteam/myapp \
--set image.tag=${GIT_COMMIT} \
--wait --timeout 5m

Rollback:

kubectl rollout undo deployment/myapp -n production

Image scan (Trivy):

trivy image --exit-code 1 --severity CRITICAL registry.example.com/myteam/myapp:${GIT_COMMIT}


Monitoring pipeline quality & governance

Keep policy checks in pipeline (vuln threshold).

Track MTTR, deployment frequency, change failure rate.

Store pipeline logs & artifacts (Jenkins archive/artifactory).

Periodic image garbage collection and retention policy.


Quick checklist to implement now

Create Git repo & commit Jenkinsfile, Dockerfile, helm/.

Configure Jenkins multibranch and webhooks.

Add Jenkins credentials (docker, kubeconfigs).

Prepare Jenkins agent image with Kaniko + Helm + Trivy installed.

Create staging namespace and RBAC serviceaccount for Jenkins.

Implement the Jenkinsfile above; adapt Kaniko paths.

Add health endpoints to app and liveness/readiness probes.

Add Trivy stage and decide vulnerability policy (fail/warn).

Implement smoke tests and integration tests.

Configure Helm values per environment.

Add Alerts + Dashboards (Prometheus + Grafana).


Suggested enhancements

Use Argo CD or Flux for GitOps: push deployment manifests to Git and let GitOps do the deployment — simpler ops.

Use Argo Rollouts for advanced canary and blue-green.

Add HashiCorp Vault for secret management.

Add cosign for image signing & verification.

Implement policy-as-code (OPA/Gatekeeper) for K8s admission control.
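The Argo CD suggestion amounts to one Application resource pointing at the Git repo that holds the deployment manifests. A minimal sketch — the repo URL, path, and project name are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/myteam/deploy-configs.git   # assumed GitOps repo
    targetRevision: main
    path: myapp/helm
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

With this in place, the Jenkins pipeline only builds, scans, and pushes the image and bumps the tag in the GitOps repo; Argo CD handles the actual deployment.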
