Naveen Kumar

How to Build a Complete DevOps Pipeline from Scratch — Hands-On Guide for Developers in Bangalore

A practical, code-first walkthrough of building a production-grade CI/CD pipeline using Jenkins, Docker, Kubernetes, and Terraform — the exact stack taught in DevOps Training in Electronic City Bangalore

If you are a developer in Bangalore who has been writing application code for a while and now wants to understand how that code actually gets built, tested, containerized, deployed, and monitored in a real production environment — this guide is for you.

We are going to build a complete DevOps pipeline from scratch. Not a toy example. Not a Hello World container. A real, production-pattern pipeline that covers the full journey from code commit to monitored Kubernetes deployment — using the exact toolchain covered in the DevOps Certification Course in Electronic City at eMexo Technologies.

By the end of this guide you will have built:
✅ A Dockerized application with an optimized multi-stage Dockerfile
✅ A Jenkins CI/CD pipeline triggered by a GitHub webhook
✅ Automated testing integrated into the pipeline
✅ A Docker image pushed to a container registry
✅ A Kubernetes deployment with rolling update strategy
✅ A Helm chart for repeatable, environment-specific deployments
✅ Terraform configuration provisioning the underlying AWS infrastructure
✅ Prometheus and Grafana monitoring the deployed application
Let us build. 🔧

Prerequisites
Before starting, make sure you have:

A GitHub account
Docker installed locally
kubectl installed locally
An AWS account (free tier works for most of this)
Terraform installed locally
Basic familiarity with any programming language (we will use a simple Node.js app as our example)

Step 1 — The Application
We need something to deploy. Here is a minimal Node.js Express application. The simplicity is intentional — this guide is about the pipeline, not the application.

// app.js
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({
    message: 'DevOps Pipeline Demo',
    version: process.env.APP_VERSION || '1.0.0',
    environment: process.env.NODE_ENV || 'development'
  });
});

app.get('/health', (req, res) => {
  res.status(200).json({ status: 'healthy' });
});

app.listen(PORT, () => {
  console.log(`App running on port ${PORT}`);
});

module.exports = app;
// app.test.js
const request = require('supertest');
const app = require('./app');

describe('GET /', () => {
  it('should return 200 and pipeline demo message', async () => {
    const res = await request(app).get('/');
    expect(res.statusCode).toBe(200);
    expect(res.body.message).toBe('DevOps Pipeline Demo');
  });
});

describe('GET /health', () => {
  it('should return healthy status', async () => {
    const res = await request(app).get('/health');
    expect(res.statusCode).toBe(200);
    expect(res.body.status).toBe('healthy');
  });
});
// package.json
{
  "name": "devops-pipeline-demo",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js",
    "test": "jest --coverage"
  },
  "dependencies": {
    "express": "^4.18.2"
  },
  "devDependencies": {
    "jest": "^29.0.0",
    "supertest": "^6.3.0"
  }
}

Step 2 — The Dockerfile (Multi-Stage Build)
A single-stage Dockerfile works but ships everything — including build tools, test dependencies, and development packages — into your production image. A multi-stage build separates build-time dependencies from runtime dependencies, producing a significantly smaller and more secure production image.

# Stage 1 — Build and Test
FROM node:18-alpine AS builder

WORKDIR /app

# Copy dependency files first (layer caching optimization)
# Changing app.js won't invalidate this layer if package.json is unchanged
COPY package*.json ./
# Install ALL dependencies here — the test step below needs jest and supertest
RUN npm ci

# Copy source code
COPY . .

# Run tests in build stage
# If tests fail, the build fails — nothing proceeds
RUN npm test

# Strip dev dependencies so only runtime packages reach the final image
RUN npm prune --omit=dev

# Stage 2 — Production Image
FROM node:18-alpine AS production

# Security: run as non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodeuser -u 1001 -G nodejs

WORKDIR /app

# Copy only production dependencies from builder stage
COPY --from=builder --chown=nodeuser:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodeuser:nodejs /app/app.js ./app.js
COPY --from=builder --chown=nodeuser:nodejs /app/package.json ./package.json

USER nodeuser

EXPOSE 3000

# Health check built into the image
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -q --spider http://localhost:3000/health || exit 1

CMD ["node", "app.js"]

Why this matters in a real DevOps environment:
The multi-stage build pattern is a production standard taught in every serious DevOps Course in Electronic City. The final image contains only what is needed to run the application — no build tools, no test frameworks, no development dependencies. Smaller image = faster pull times = lower attack surface.
Build and test locally first:

docker build -t devops-pipeline-demo:local .
docker run -p 3000:3000 devops-pipeline-demo:local
curl http://localhost:3000/health
# {"status":"healthy"}

Step 3 — The Jenkins Pipeline (Jenkinsfile)
Jenkins reads pipeline configuration from a Jenkinsfile in the root of your repository. This makes your pipeline version-controlled code — not a fragile web UI configuration that can be accidentally deleted.

// Jenkinsfile
pipeline {
    agent any

    environment {
        // Registry configuration — use your actual registry
        DOCKER_REGISTRY = 'your-registry-url'
        IMAGE_NAME = 'devops-pipeline-demo'
        IMAGE_TAG = "${BUILD_NUMBER}-${GIT_COMMIT[0..7]}"
        FULL_IMAGE = "${DOCKER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"

        // Kubernetes namespace targets
        STAGING_NAMESPACE = 'staging'
        PRODUCTION_NAMESPACE = 'production'
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
                script {
                    echo "Building commit: ${GIT_COMMIT}"
                    echo "Branch: ${GIT_BRANCH}"
                }
            }
        }

        stage('Code Quality Check') {
            steps {
                sh 'npm install'
                // ESLint for code quality (add .eslintrc.js to your repo)
                sh 'npx eslint . --ext .js || true'
            }
        }

        stage('Security Scan — Dependencies') {
            steps {
                // Audit npm dependencies for known vulnerabilities
                sh 'npm audit --audit-level=high'
            }
        }

        stage('Build Docker Image') {
            steps {
                script {
                    // Tests run inside the Dockerfile builder stage
                    // If tests fail, docker build fails — pipeline stops here
                    docker.build(FULL_IMAGE, '--target production .')
                    echo "Built image: ${FULL_IMAGE}"
                }
            }
        }

        stage('Security Scan — Container Image') {
            steps {
                // Trivy scans the built image for OS and library vulnerabilities
                sh """
                    trivy image \
                        --exit-code 1 \
                        --severity HIGH,CRITICAL \
                        --no-progress \
                        ${FULL_IMAGE}
                """
            }
        }

        stage('Push to Registry') {
            steps {
                script {
                    docker.withRegistry("https://${DOCKER_REGISTRY}",
                                       'docker-registry-credentials') {
                        docker.image(FULL_IMAGE).push()
                        // Also tag as latest for the branch
                        docker.image(FULL_IMAGE).push('latest')
                    }
                    echo "Pushed: ${FULL_IMAGE}"
                }
            }
        }

        stage('Deploy to Staging') {
            steps {
                script {
                    sh """
                        helm upgrade --install \
                            devops-demo-staging \
                            ./helm/devops-demo \
                            --namespace ${STAGING_NAMESPACE} \
                            --create-namespace \
                            --set image.repository=${DOCKER_REGISTRY}/${IMAGE_NAME} \
                            --set image.tag=${IMAGE_TAG} \
                            --set environment=staging \
                            --wait \
                            --timeout 5m
                    """
                }
            }
        }

        stage('Smoke Tests — Staging') {
            steps {
                script {
                    // Wait for deployment to be ready
                    sh "kubectl rollout status deployment/devops-demo-staging \
                        -n ${STAGING_NAMESPACE} --timeout=120s"

                    // Run smoke tests against staging
                    // Note: on AWS, a LoadBalancer Service exposes a DNS
                    // hostname, not an IP — read .hostname in the jsonpath
                    sh """
                        STAGING_URL=\$(kubectl get svc devops-demo-staging \
                            -n ${STAGING_NAMESPACE} \
                            -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

                        curl -f http://\${STAGING_URL}:3000/health || exit 1
                        echo "Staging smoke tests passed"
                    """
                }
            }
        }

        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            steps {
                // Manual approval gate for production deployments
                input message: 'Deploy to production?',
                      ok: 'Deploy',
                      submitter: 'admin,devops-lead'

                script {
                    sh """
                        helm upgrade --install \
                            devops-demo-prod \
                            ./helm/devops-demo \
                            --namespace ${PRODUCTION_NAMESPACE} \
                            --create-namespace \
                            --set image.repository=${DOCKER_REGISTRY}/${IMAGE_NAME} \
                            --set image.tag=${IMAGE_TAG} \
                            --set environment=production \
                            --set replicaCount=3 \
                            --wait \
                            --timeout 10m
                    """
                }
            }
        }
    }

    post {
        success {
            echo "Pipeline succeeded — ${FULL_IMAGE} deployed"
            // Add Slack/email notification here
        }
        failure {
            echo "Pipeline failed — build result: ${currentBuild.result}"
            // Add failure notification here
        }
        always {
            // Clean up local Docker images to free disk space
            sh "docker rmi ${FULL_IMAGE} || true"
            cleanWs()
        }
    }
}

Key pipeline design decisions explained:

🔒 Security scanning at two stages — dependency audit before build, container image scan after build. This is the DevSecOps pattern taught in the DevOps Training in Electronic City curriculum at eMexo Technologies.

🏷️ Image tagging strategy — BUILD_NUMBER-GIT_COMMIT_SHORT gives you a tag that is both sequential (for ordering) and traceable (for debugging). Never rely on latest as your only production tag.
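As a quick illustration of how that tag is assembled (the values below are hypothetical stand-ins for what Jenkins injects at build time):

```shell
# Hypothetical values — in a real run these come from Jenkins's
# BUILD_NUMBER and GIT_COMMIT environment variables
BUILD_NUMBER=42
GIT_COMMIT="a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"

# First 8 characters of the commit, matching ${GIT_COMMIT[0..7]} in the Jenkinsfile
SHORT_SHA=$(echo "$GIT_COMMIT" | cut -c1-8)
IMAGE_TAG="${BUILD_NUMBER}-${SHORT_SHA}"
echo "$IMAGE_TAG"   # prints 42-a1b2c3d4
```

Sorting by the numeric prefix gives you deployment order; the hash suffix lets you jump straight from a running image to the exact commit that produced it.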

✋ Manual approval gate — production deployments require human approval. This is a non-negotiable pattern for production pipelines at Electronic City's enterprise companies.


Step 4 — The Helm Chart

Helm is the package manager for Kubernetes. A Helm chart defines your application's Kubernetes resources as templates — with values that can be overridden per environment. This is how you deploy the same application to staging and production with different configurations without duplicating YAML files.

helm/devops-demo/
├── Chart.yaml
├── values.yaml
├── values-staging.yaml
├── values-production.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── hpa.yaml
    ├── configmap.yaml
    └── _helpers.tpl
# Chart.yaml
apiVersion: v2
name: devops-demo
description: DevOps Pipeline Demo Application
type: application
version: 0.1.0
appVersion: "1.0.0"
# values.yaml — default values
replicaCount: 2

image:
  repository: your-registry/devops-pipeline-demo
  pullPolicy: IfNotPresent
  tag: "latest"

service:
  type: LoadBalancer
  port: 3000

resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

environment: development

# Health check configuration
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
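The chart tree lists values-staging.yaml and values-production.yaml but the guide does not show them. As a sketch, a staging override only needs the keys that differ from values.yaml — something like:

```yaml
# values-staging.yaml — overrides layered on top of values.yaml
replicaCount: 1

environment: staging

# Keep staging cheap: no autoscaling, smaller resource envelope
autoscaling:
  enabled: false

resources:
  limits:
    cpu: 250m
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 64Mi
```

You would apply it with `helm upgrade --install ... -f values-staging.yaml`. The Jenkinsfile in this guide passes --set flags instead; the two approaches can be combined, with --set taking precedence over values files.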
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "devops-demo.fullname" . }}
  labels:
    {{- include "devops-demo.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "devops-demo.selectorLabels" . | nindent 6 }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Never have zero pods during deployment
      maxUnavailable: 0
      # Allow one extra pod during rollout
      maxSurge: 1
  template:
    metadata:
      labels:
        {{- include "devops-demo.selectorLabels" . | nindent 8 }}
      annotations:
        # Force pod restart when configmap changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      # Security context — run as non-root
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 3000
              protocol: TCP
          env:
            - name: NODE_ENV
              value: {{ .Values.environment }}
            - name: APP_VERSION
              value: {{ .Values.image.tag }}
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
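The deployment template calls helpers such as devops-demo.fullname, devops-demo.labels, and devops-demo.selectorLabels, which live in templates/_helpers.tpl — listed in the chart tree but not shown above. A minimal sketch following the usual `helm create` conventions (the exact label keys are my assumption) would be:

```yaml
{{/* templates/_helpers.tpl — minimal helpers used by the templates above */}}

{{- define "devops-demo.fullname" -}}
{{- if contains .Chart.Name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}

{{- define "devops-demo.labels" -}}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{ include "devops-demo.selectorLabels" . }}
{{- end -}}

{{- define "devops-demo.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
```

The fullname helper is why a release named devops-demo-staging produces a Deployment of the same name — the release name already contains the chart name, so it is used as-is.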

Step 5 — Terraform for AWS Infrastructure
Before Kubernetes can run your application, it needs infrastructure to run on. Terraform provisions that infrastructure as code — repeatably, versionably, and without clicking through the AWS console.

# main.tf — EKS Cluster Infrastructure

terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  # Remote state — critical for team environments
  backend "s3" {
    bucket         = "your-terraform-state-bucket"
    key            = "devops-demo/terraform.tfstate"
    region         = "ap-south-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

provider "aws" {
  region = var.aws_region
}

# VPC
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${var.project_name}-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # Required tags for EKS to discover subnets
  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}

# EKS Cluster
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "${var.project_name}-cluster"
  cluster_version = "1.29"

  vpc_id                         = module.vpc.vpc_id
  subnet_ids                     = module.vpc.private_subnets
  cluster_endpoint_public_access = true

  # Managed node groups
  eks_managed_node_groups = {
    general = {
      instance_types = ["t3.medium"]
      min_size       = 2
      max_size       = 5
      desired_size   = 2

      labels = {
        role = "general"
      }
    }
  }
}

# Variables
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "ap-south-1"
}

variable "project_name" {
  description = "Project name prefix for all resources"
  type        = string
  default     = "devops-demo"
}

# Outputs
output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}

output "cluster_name" {
  value = module.eks.cluster_name
}

Apply the infrastructure:

terraform init
terraform plan -out=tfplan
terraform apply tfplan

# Configure kubectl to use the new cluster
aws eks update-kubeconfig \
  --region ap-south-1 \
  --name devops-demo-cluster

Step 6 — Prometheus and Grafana Monitoring

A pipeline that deploys without monitoring is half a pipeline. Prometheus and Grafana give you visibility into what happens after deployment.
Install the kube-prometheus-stack — the standard production monitoring stack — via Helm:

# Add the prometheus-community Helm repository
helm repo add prometheus-community \
  https://prometheus-community.github.io/helm-charts
helm repo update

# Install the complete monitoring stack
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword='your-secure-password' \
  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false

Add a ServiceMonitor to tell Prometheus to scrape your application's metrics:

# templates/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "devops-demo.fullname" . }}
  labels:
    {{- include "devops-demo.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "devops-demo.selectorLabels" . | nindent 6 }}
  endpoints:
    # "http" must match a named port declared on the Service
    - port: http
      path: /metrics
      interval: 15s
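Because the ServiceMonitor scrapes a port by name, the Service must declare a named port. templates/service.yaml appears in the chart tree but is not shown; a sketch consistent with the values file above:

```yaml
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "devops-demo.fullname" . }}
  labels:
    {{- include "devops-demo.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: http          # the ServiceMonitor references this port by name
      port: {{ .Values.service.port }}
      targetPort: 3000
      protocol: TCP
  selector:
    {{- include "devops-demo.selectorLabels" . | nindent 4 }}
```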

Add basic Prometheus metrics to your Node.js application:

// Add to app.js
const client = require('prom-client');

// Collect default Node.js metrics
client.collectDefaultMetrics();

// Custom HTTP request counter
const httpRequestsTotal = new client.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status_code']
});

// Request duration histogram
const httpRequestDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request duration in seconds',
  labelNames: ['method', 'route'],
  buckets: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5]
});

// Metrics endpoint for Prometheus scraping
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

// Middleware to record metrics for all routes
// Important: register this app.use() BEFORE your route handlers —
// Express only applies middleware to routes defined after it
app.use((req, res, next) => {
  const end = httpRequestDuration.startTimer({
    method: req.method,
    route: req.path
  });
  res.on('finish', () => {
    httpRequestsTotal.inc({
      method: req.method,
      route: req.path,
      status_code: res.statusCode
    });
    end();
  });
  next();
});

Access Grafana to see your dashboards:

kubectl port-forward svc/monitoring-grafana 3001:80 -n monitoring
# Open http://localhost:3001
# Default credentials: admin / your-secure-password

Import dashboard IDs 1860 (Node Exporter Full) and 6417 (Kubernetes Cluster Monitoring) from grafana.com for instant production-grade dashboards.
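With the custom metrics above flowing into Prometheus, a few PromQL queries worth charting in a Grafana panel (these reference the metric and label names defined in the app.js snippet):

```promql
# Request rate per route over the last 5 minutes
sum(rate(http_requests_total[5m])) by (route)

# Error rate: share of responses with a 5xx status code
sum(rate(http_requests_total{status_code=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))

# 95th-percentile request latency per route
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le, route))
```

These three — traffic, errors, latency — cover most of what you need for a first production dashboard and alerting rules.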

Step 7 — The GitHub Webhook
Connect GitHub to Jenkins so every push triggers your pipeline automatically:

# In Jenkins:
# 1. Install the GitHub plugin (Manage Jenkins → Plugins)
# 2. Create a GitHub Personal Access Token with repo and admin:repo_hook scopes
# 3. Add the token to Jenkins credentials (ID: github-token)
# 4. In your Pipeline job: check "GitHub hook trigger for GITScm polling"

# In GitHub repository settings:
# Settings → Webhooks → Add webhook
# Payload URL: http://your-jenkins-url/github-webhook/
# Content type: application/json
# Events: Just the push event
# Active: checked

Now push any commit to your repository and watch the complete pipeline execute automatically — from code push to monitored Kubernetes deployment.

What You Have Built
Let us review what this pipeline does end-to-end:
On every push to any branch:

📥 GitHub webhook triggers Jenkins
🔍 Code quality check runs ESLint
🔒 npm audit scans for vulnerable dependencies
🐳 Multi-stage Docker build runs tests internally — build fails if tests fail
🔒 Trivy scans the built image for HIGH and CRITICAL CVEs
📤 Image pushed to registry with build-number + commit-hash tag
☸️ Helm deploys to staging namespace with environment-specific values
🔥 Smoke tests validate the staging deployment is healthy
📊 Prometheus scrapes metrics, Grafana dashboards update in real time

On push to main branch only:

  1. ✋ Manual approval gate — a human confirms production deployment
  2. ☸️ Helm deploys to production with 3 replicas and a zero-downtime rolling update

This is a real production-pattern pipeline. This is what DevOps Training in Electronic City at eMexo Technologies builds with students in hands-on lab sessions — not a simplified demo, but the actual architecture that Electronic City's top engineering teams run.

Taking This Further — Structured DevOps Training in Electronic City
Building this pipeline from scratch as a self-guided exercise teaches you what the pieces are. What it cannot easily give you is the experience of debugging when things break — and in real DevOps environments, things always break in ways that documentation does not cover.

The Best DevOps Training in Electronic City at eMexo Technologies gives you:
🔧 Dedicated lab infrastructure — real AWS environments, real Kubernetes clusters, real Jenkins instances. Not a local Docker Desktop simulation — actual cloud infrastructure.
👨‍🏫 Trainer with 8+ years enterprise DevOps experience — someone who has debugged Jenkins webhook failures at 2am in production, not someone who learned Jenkins from a tutorial last year.
🏅 Certification preparation — AWS Certified DevOps Engineer – Professional, Certified Kubernetes Administrator (CKA), Docker Certified Associate (DCA), and HashiCorp Terraform Associate.
💼 DevOps Training and Placement in Electronic City — resume positioning for developers making a DevOps transition, mock technical interviews using real question banks from Electronic City hiring managers, and direct recruiter referrals to companies actively hiring.
📅 Flexible batch options — weekday evening, weekend, fast-track, and fully live online — designed for working developers who are upskilling without leaving their current role.

What To Do Next
If you have followed this guide and built the pipeline locally — you have already demonstrated to yourself that you can do this. The next step is doing it with real enterprise-scale infrastructure and the mentorship that accelerates the learning curve significantly.

📌 Explore the full curriculum and register for a free demo class:
https://www.emexotechnologies.com/courses/devops-training-in-electronic-city-bangalore/

📞 Call / WhatsApp: +91-9513216462
The free demo covers live Docker, Jenkins, and Kubernetes demonstrations in the actual lab environment. Attend before you commit to anything.

Found this useful? Drop a ❤️ and share it with a developer who has been meaning to get into DevOps.
Questions about any specific step — Jenkins webhook configuration, Helm chart structure, Terraform state management — drop them in the comments. Happy to go deeper on any section.

eMexo Technologies is a leading DevOps Training Institute in Electronic City, Bangalore — offering hands-on certification training with 100% placement support for developers, freshers, and career-gap candidates.
