Isaiah Izibili
CloudForge CI/CD Lab

Building a Production-Grade CI/CD Pipeline with GitHub Actions, Docker & Kubernetes

Introduction

Modern DevOps is not about tools alone — it’s about repeatable, automated, and reliable workflows that take code from a developer’s laptop all the way to production.

In this article, I’ll walk you through CloudForge CI/CD Lab, a fully hands-on DevOps project that demonstrates:

  • How to build a real Node.js application
  • How to test it properly
  • How to containerize it securely with Docker
  • How to automate everything using GitHub Actions
  • How to deploy cleanly to Kubernetes using branch-based workflows

This is not a toy project. Every step mirrors what real teams do in production.

Prerequisites – Download These First!

Estimated time: 15–30 minutes

Before writing a single line of code, we need the right tools. DevOps workflows fail early if the environment isn’t consistent.

✅ Required Tools
1️⃣ Node.js (v18 or higher — LTS recommended)

Download Node.js

Verify installation:

node --version
npm --version

2️⃣ Git (Latest Stable)

git --version

3️⃣ Docker Desktop

Verify:

docker --version
docker compose version   # Compose v2; on older installs: docker-compose --version


4️⃣ GitHub Account

  1. Sign up at: https://github.com
  2. This will host:
  • Your source code
  • Your CI/CD pipeline
  • Your container images (GHCR)

5️⃣ Code Editor (Recommended)

Any editor works; VS Code is a popular choice for this kind of project.

🔍 Verify Everything Is Installed

node --version     # v18.x+ or v20.x+
npm --version      # 9.x+
git --version      # 2.34+
docker --version   # 24.x+

If any command fails, fix it before proceeding.

Step 1: Set Up Git for Version Control
Why this matters

Git needs to know who you are so commits are traceable. This is critical in team environments and CI pipelines.

One-Time Git Configuration

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --global init.defaultBranch main


Create and Initialize the Project

mkdir cloudforge-ci-cd-lab
cd cloudforge-ci-cd-lab
git init

You now have a clean Git repository ready for automation.

Step 2: Build a Node.js Web Application

Estimated time: 10–15 minutes

This application is intentionally simple, but production-aware.

Initialize the Node.js Project

npm init -y

This creates a package.json file that manages dependencies and scripts.

Update package.json

{
  "name": "cloudforge-ci-cd-lab",
  "version": "1.0.0",
  "description": "Production-grade DevOps CI/CD lab",
  "main": "app.js",
  "scripts": {
    "start": "node app.js",
    "test": "jest",
    "dev": "node app.js",
    "lint": "eslint ."
  },
  "keywords": ["devops", "nodejs", "docker", "kubernetes"],
  "author": "Your Name",
  "license": "MIT",
  "engines": {
    "node": ">=18.0.0"
  },
  "devDependencies": {
    "jest": "^29.7.0",
    "eslint": "^8.57.0",
    "supertest": "^7.1.4"
  }
}


Create app.js

This server:

  • Creates an HTTP server that listens on port 3000
  • Serves several endpoints (/, /health, /info, /metrics)
  • Emits Prometheus-style metrics and exposes health endpoints
  • Includes security headers and proper error handling
  • Handles graceful shutdown, making it Kubernetes-ready
  • Exports the server for testing

Add the following to app.js:

// core modules
const http = require("http");
const url = require("url");

// environment configuration
const PORT = process.env.PORT || 3000;
const ENVIRONMENT = process.env.NODE_ENV || "development";

let requestCount = 0;

// helper: send JSON responses
function sendJSON(res, statusCode, data) {
  res.statusCode = statusCode;
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify(data, null, 2));
}

// helper: send HTML responses
function sendHTML(res, statusCode, content) {
  res.statusCode = statusCode;
  res.setHeader("Content-Type", "text/html");
  res.end(content);
}

// helper: send Prometheus metrics
function sendMetrics(res) {
  const mem = process.memoryUsage();
  const metrics = `
# HELP http_requests_total Total HTTP requests
# TYPE http_requests_total counter
http_requests_total ${requestCount}

# HELP app_uptime_seconds Application uptime in seconds
# TYPE app_uptime_seconds gauge
app_uptime_seconds ${process.uptime()}

# HELP nodejs_memory_usage_bytes Node.js memory usage
# TYPE nodejs_memory_usage_bytes gauge
nodejs_memory_usage_bytes{type="rss"} ${mem.rss}
nodejs_memory_usage_bytes{type="heapUsed"} ${mem.heapUsed}
nodejs_memory_usage_bytes{type="heapTotal"} ${mem.heapTotal}
nodejs_memory_usage_bytes{type="external"} ${mem.external}
`;
  res.statusCode = 200;
  res.setHeader("Content-Type", "text/plain");
  res.end(metrics);
}

// main server
const server = http.createServer((req, res) => {
  requestCount++;
  const timestamp = new Date().toISOString();
  const { pathname } = url.parse(req.url, true);

  // logging
  console.log(
    `${timestamp} - ${req.method} ${pathname} - ${
      req.headers["user-agent"] || "Unknown"
    }`
  );

  // CORS headers
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");

  // security headers
  res.setHeader("X-Content-Type-Options", "nosniff");
  res.setHeader("X-Frame-Options", "DENY");
  res.setHeader("X-XSS-Protection", "1; mode=block");

  // route handling
  switch (pathname) {
    case "/":
      sendHTML(
        res,
        200,
        `
<!DOCTYPE html>
<html>
<head>
  <title>DevOps Lab 2025</title>
  <style>
    body { font-family: Arial, sans-serif; max-width: 800px; margin: 50px auto; padding: 20px; }
    .header { background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; padding: 20px; border-radius: 8px; }
    .endpoint { background: #f8f9fa; padding: 15px; margin: 10px 0; border-radius: 5px; border-left: 4px solid #007bff; }
  </style>
</head>
<body>
  <div class="header">
    <h1>DevOps Lab 2025</h1>
    <p>Modern Node.js application with CI/CD pipeline</p>
  </div>
  <h2>Available Endpoints:</h2>
  <div class="endpoint"><strong>GET /</strong> - This welcome page</div>
  <div class="endpoint"><strong>GET /health</strong> - Health check (JSON)</div>
  <div class="endpoint"><strong>GET /info</strong> - System information</div>
  <div class="endpoint"><strong>GET /metrics</strong> - Prometheus metrics</div>
  <p>Environment: <strong>${ENVIRONMENT}</strong></p>
  <p>Server time: <strong>${timestamp}</strong></p>
  <p>Requests served: <strong>${requestCount}</strong></p>
</body>
</html>`
      );
      break;

    case "/health":
      sendJSON(res, 200, {
        status: "healthy",
        timestamp,
        uptime: process.uptime(),
        environment: ENVIRONMENT,
        version: "1.0.0",
        node_version: process.version,
        requests_served: requestCount,
      });
      break;

    case "/info":
      sendJSON(res, 200, {
        platform: process.platform,
        architecture: process.arch,
        node_version: process.version,
        memory_usage: process.memoryUsage(),
        environment: ENVIRONMENT,
        pid: process.pid,
        uptime: process.uptime(),
      });
      break;

    case "/metrics":
      sendMetrics(res);
      break;

    default:
      sendJSON(res, 404, {
        error: "Not Found",
        message: `Route ${pathname} not found`,
        timestamp,
      });
  }
});

// graceful shutdown
function shutdown(signal) {
  console.log(`\nReceived ${signal}, shutting down gracefully...`);
  server.close(() => {
    console.log("Server closed");
    process.exit(0);
  });
}
process.on("SIGTERM", () => shutdown("SIGTERM"));
process.on("SIGINT", () => shutdown("SIGINT"));

// start server
server.listen(PORT, () => {
  console.log(`🚀 Server running at http://localhost:${PORT}/`);
  console.log(`Environment: ${ENVIRONMENT}`);
  console.log(`Node.js version: ${process.version}`);
});

// export for testing
module.exports = server;

Install Dependencies

npm install --save-dev jest eslint supertest
npm install

You’ll now see:

  • node_modules/
  • package-lock.json


Step 3: Create Proper Automated Tests

Estimated time: 10 minutes
Testing is mandatory in CI/CD pipelines.

Create Test Directory

mkdir tests
touch tests/app.test.js

Add Tests
Copy this code into tests/app.test.js:

const request = require('supertest');
const server = require('../app');

describe('App Endpoints', () => {
  afterAll(() => {
    server.close();
  });

  test('GET / should return welcome page', async () => {
    const response = await request(server).get('/');
    expect(response.status).toBe(200);
    expect(response.text).toContain('DevOps Lab 2025');
  });

  test('GET /health should return health status', async () => {
    const response = await request(server).get('/health');
    expect(response.status).toBe(200);
    expect(response.body.status).toBe('healthy');
    expect(response.body.timestamp).toBeDefined();
    expect(typeof response.body.uptime).toBe('number');
  });

  test('GET /info should return system info', async () => {
    const response = await request(server).get('/info');
    expect(response.status).toBe(200);
    expect(response.body.platform).toBeDefined();
    expect(response.body.node_version).toBeDefined();
  });

  test('GET /metrics should return prometheus metrics', async () => {
    const response = await request(server).get('/metrics');
    expect(response.status).toBe(200);
    expect(response.text).toContain('http_requests_total');
    expect(response.text).toContain('app_uptime_seconds');
  });

  test('GET /nonexistent should return 404', async () => {
    const response = await request(server).get('/nonexistent');
    expect(response.status).toBe(404);
    expect(response.body.error).toBe('Not Found');
  });
});

Create Jest configuration
Create jest.config.js:

module.exports = {
  testEnvironment: 'node',
  collectCoverage: true,
  coverageDirectory: 'coverage',
  testMatch: ['**/tests/**/*.test.js'],
  verbose: true
};

Step 4: GitHub Actions CI/CD Pipeline

Estimated time: 15 minutes

This pipeline:

  • Tests on multiple Node versions
  • Builds multi-arch Docker images
  • Scans for vulnerabilities
  • Pushes to GitHub Container Registry
  • Supports staging & production deployments

Create Workflow Directory

mkdir -p .github/workflows

Create CI/CD pipeline file

.github/workflows/ci.yml
Then add the following workflow:

name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
    tags: [ 'v*' ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    name: Test & Lint
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [20, 22]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linting
        run: npm run lint

      - name: Run tests
        run: npm test

      - name: Security audit
        run: npm audit --audit-level=critical || echo "Audit completed with warnings"

  build:
    name: Build & Push Image
    runs-on: ubuntu-latest
    needs: test
    if: github.event_name == 'push'

    permissions:
      contents: read
      packages: write
      security-events: write

    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
      image-digest: ${{ steps.build.outputs.digest }}

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          platforms: linux/amd64,linux/arm64

      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,prefix={{branch}}-
            type=raw,value=${{ github.run_id }}
            type=raw,value=latest,enable={{is_default_branch}}
          labels: |
            org.opencontainers.image.title=DevOps Lab 2025
            org.opencontainers.image.description=Modern Node.js DevOps application

      - name: Build and push Docker image
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          target: production

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: ${{ steps.meta.outputs.tags }}
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
        continue-on-error: true

      - name: Upload Trivy scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always() && hashFiles('trivy-results.sarif') != ''
        with:
          sarif_file: 'trivy-results.sarif'

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/develop'
    environment: staging

    steps:
      - name: Deploy to Staging
        run: |
          echo "🚀 Deploying to staging environment..."
          echo "Image: ${{ needs.build.outputs.image-tag }}"
          echo "Digest: ${{ needs.build.outputs.image-digest }}"
          # Add your staging deployment commands here (kubectl, helm, etc.)

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    environment: production

    steps:
      - name: Deploy to Production
        run: |
          echo "🎯 Deploying to production environment..."
          echo "Image: ${{ needs.build.outputs.image-tag }}"
          echo "Digest: ${{ needs.build.outputs.image-digest }}"
          # Add your production deployment commands here

This pipeline demonstrates real-world CI/CD maturity, not demo-level automation.


Step 5: Dockerfile (Production-Grade)

Estimated time: 5 minutes

This Dockerfile:

  • Uses multi-stage builds for smaller image size
  • Installs curl for health checks
  • Creates a non-root user for security
  • Sets up proper file permissions
  • Configures health checks


Create Dockerfile:

# Multi-stage build for optimized image
FROM node:20-alpine AS dependencies

# Update packages for security
RUN apk update && apk upgrade --no-cache

WORKDIR /app

# Copy package files first for better caching
COPY package*.json ./

# Install only production dependencies (--omit=dev replaces the deprecated --only=production in npm 9+)
RUN npm ci --omit=dev && npm cache clean --force

# Production stage  
FROM node:20-alpine AS production

# Update packages and install necessary tools
RUN apk update && apk upgrade --no-cache && \
    apk add --no-cache curl dumb-init && \
    rm -rf /var/cache/apk/*

# Create non-root user with proper permissions
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodeuser -u 1001 -G nodejs

WORKDIR /app

# Copy dependencies from previous stage with proper ownership
COPY --from=dependencies --chown=nodeuser:nodejs /app/node_modules ./node_modules

# Copy application code with proper ownership
COPY --chown=nodeuser:nodejs package*.json ./
COPY --chown=nodeuser:nodejs app.js ./

# Switch to non-root user
USER nodeuser

# Expose port
EXPOSE 3000

# Health check with proper timing for Node.js startup
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

# Use dumb-init for proper signal handling in containers
ENTRYPOINT ["dumb-init", "--"]

# Start application
CMD ["npm", "start"]


Step 6: Essential Configuration Files

Estimated time: 5 minutes
What this step does: Creates configuration files that tell various tools what to ignore, how to behave, and what settings to use.

These files keep your repo clean and secure.

  • .dockerignore
  • .gitignore
  • .env.example
  • .eslintrc.js


Create .dockerignore

touch .dockerignore

Then add the following entries:
# Dependencies
node_modules
npm-debug.log*

# Git & GitHub
.git
.github

# Environment files
.env
.env.local
.env.*.local

# Logs
logs
*.log

# Coverage & test output
coverage
.nyc_output

# Editor/IDE configs
.vscode
.idea
*.swp
*.swo

# OS-specific files
.DS_Store
Thumbs.db

# Project files you don’t want in the image
README.md
tests/
jest.config.js
.eslintrc.js


Why this arrangement works

  • Entries are grouped logically (dependencies, VCS, env files, logs, coverage, editor configs, OS junk, project files).
  • Each entry sits on its own line, so Docker correctly excludes it when building images.
  • Comments (# ...) are optional, but they make the file easier to read and maintain.

Create .gitignore
Create .gitignore:

# ===============================
# Dependencies
# ===============================
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# ===============================
# Runtime Data
# ===============================
pids
*.pid
*.seed
*.pid.lock

# ===============================
# Coverage & Test Reports
# ===============================
coverage/
.nyc_output/

# ===============================
# Environment Variables
# ===============================
.env
.env.local
.env.*.local

# ===============================
# Logs
# ===============================
logs/
*.log

# ===============================
# IDE / Editor
# ===============================
.vscode/
.idea/
*.swp
*.swo

# ===============================
# OS Files
# ===============================
.DS_Store
Thumbs.db

Create environment template
Create .env.example:

# ===============================
# Server Configuration
# ===============================
PORT=3000
NODE_ENV=production

# ===============================
# Logging Configuration
# ===============================
LOG_LEVEL=info

Create ESLint configuration
Create .eslintrc.js:

module.exports = {
  env: {
    node: true,
    es2021: true,
    jest: true
  },
  extends: ['eslint:recommended'],
  parserOptions: {
    ecmaVersion: 12,
    sourceType: 'module'
  },
  rules: {
    'no-console': 'off',
    'no-unused-vars': ['error', { 'argsIgnorePattern': '^_' }]
  }
};


Step 7: Docker Compose for Development
Time Required: 5 minutes

What this step does: Creates a Docker Compose file that makes it easy to run your application and any supporting services with a single command.

Create docker-compose.yml:

version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - PORT=3000
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

Step 8: Test Everything Locally
Time Required: 10 minutes

What this step does: Shows you how to actually run and test your application locally before deploying it.

Install and Test Locally

# Install all dependencies from package.json
npm install

# Run your test suite to make sure everything works
npm test

# Start the application server
npm start

What you'll see:

  • Tests pass with green checkmarks: ✓ GET / should return welcome page
  • The server starts and prints: 🚀 Server running at http://localhost:3000/


Test Endpoints

curl http://localhost:3000/
curl http://localhost:3000/health
curl http://localhost:3000/info
curl http://localhost:3000/metrics

Homepage

Purpose: Tests that your application is serving its main route (/).

Why it matters:

  • Confirms the server is running and listening on port 3000.
  • Ensures your app can respond to basic HTTP requests.
  • This is often the first check to verify deployment success.


Health check JSON

Purpose: Calls the /health endpoint, which returns a small JSON payload (status, uptime, environment, version).
Why it matters:

  • Health checks are critical for monitoring and orchestration tools (Docker, Kubernetes, CI/CD pipelines).
  • They provide a lightweight way to confirm the app is alive without loading the full homepage.
  • Used by load balancers and cloud platforms to decide if traffic should be routed to this instance. A minimal scripted check follows below.
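
To make this check automated rather than manual, here is a minimal bash gate that asserts the status code, assuming the app is already running locally on port 3000:

# Fail with a non-zero exit unless /health returns HTTP 200
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:3000/health)
if [ "$STATUS" -eq 200 ]; then
  echo "Health check passed"
else
  echo "Health check failed with HTTP $STATUS" >&2
  exit 1
fi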


System info JSON
Purpose: Retrieves metadata about the system or application (version, environment, uptime, etc.).

Why it matters:

  • Helps developers and operators quickly see runtime details.
  • Useful for debugging and confirming the app is running with the expected configuration.
  • Can expose build info (commit hash, build date) for CI/CD traceability, as sketched below.
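
As a sketch of that last point, build metadata can be injected as environment variables when the container starts. Note this is hypothetical for this lab: the app does not read GIT_SHA or BUILD_DATE yet; you would extend the /info handler in app.js to expose them.

# Hypothetical: pass build metadata into the container at run time
docker run -d \
  --name my-devops-container \
  -p 3000:3000 \
  -e GIT_SHA="$(git rev-parse --short HEAD)" \
  -e BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  my-devops-app:latest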


Prometheus metrics

Purpose: Exposes application metrics in a format Prometheus can scrape (e.g., request counts, latency, memory usage).

Why it matters:

  • Enables observability: monitoring performance, resource usage, and reliability.
  • Critical for production systems where you need dashboards (Grafana) and alerts.
  • Lets you track trends over time and detect anomalies (e.g., rising error rates). A quick scrape check follows.
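
To see the request counter at work, assuming the server is running locally:

# Scrape, generate one more request, then scrape again.
# http_requests_total should increase between the two reads.
curl -s http://localhost:3000/metrics | grep http_requests_total
curl -s http://localhost:3000/ > /dev/null
curl -s http://localhost:3000/metrics | grep http_requests_total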


Why These Steps Together Are Important

Together, these four checks form a complete validation suite:

  • / → confirms the app is serving content.
  • /health → confirms the app is alive.
  • /info → confirms the app is running with the right config.
  • /metrics → confirms the app is observable and ready for monitoring.

This combination is standard in DevOps and CI/CD pipelines to ensure deployments are healthy before traffic is routed.

Docker Testing

These commands form a simple, reliable workflow to build, run, verify, and clean up your application as a container. In CI/CD, they help ensure every change can be packaged consistently, tested predictably, and deployed with confidence.

# Build image
docker build -t my-devops-app:latest .

# Run container
docker run -d \
  --name my-devops-container \
  -p 3000:3000 \
  --restart unless-stopped \
  my-devops-app:latest

# Check container status
docker ps
docker logs my-devops-container

# Test health check
curl http://localhost:3000/health

# Stop container
docker stop my-devops-container
docker rm my-devops-container
  1. Build Image
docker build -t my-devops-app:latest .



What it does:

  • Reads the Dockerfile in the current directory (.).
  • Builds a container image containing your app and its dependencies.
  • Tags the image as my-devops-app:latest for easy reference.

Why it’s important in CI/CD:

  • Consistency: Every build produces the same image, eliminating “works on my machine” issues.
  • Portability: The image can be deployed across environments (dev, staging, prod).
  • Versioning: Tags (latest, commit SHA, release numbers) allow tracking and rollback, as sketched below.
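
A local sketch of that tagging idea; the pipeline's docker/metadata-action does the equivalent automatically:

# Tag one build with both a moving tag and an immutable commit SHA
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t my-devops-app:latest -t "my-devops-app:${GIT_SHA}" .

# Rolling back is then just running the older, immutable tag:
# docker run -d -p 3000:3000 my-devops-app:<old-sha>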

2. Run container

docker run -d \
  --name my-devops-container \
  -p 3000:3000 \
  --restart unless-stopped \
  my-devops-app:latest


What it does:

  • -d: Runs the container in detached mode so the process continues in the background.
  • --name my-devops-container: Assigns a stable name for easier management and log retrieval.
  • -p 3000:3000: Publishes container port 3000 to host port 3000, making the app reachable at localhost:3000.
  • --restart unless-stopped: Restarts the container automatically after failures or host reboots unless you explicitly stop it.

Why it matters in CI/CD:

  • Integration testing: Spins up a production‑like instance to run tests against real endpoints.
  • Environment parity: Mirrors how the app will run in production, catching config and runtime issues early.
  • Resilience during validation: Restart policies reduce flakiness in ephemeral CI environments.

3. Check container status

docker ps
docker logs my-devops-container

What they do:

  • docker ps: Lists running containers, giving quick confirmation that your app started successfully.
  • docker logs my-devops-container: Streams stdout/stderr from the container for diagnostics.

Why it matters in CI/CD:

  • Fast feedback: Detects startup failures (crashes, port conflicts, missing env vars) before tests run.
  • Actionable debugging: Logs reveal misconfigurations, dependency errors, or health probe failures.
  • Automated gates: Pipelines can parse logs or check docker ps to decide whether to proceed or fail early, as in the sketch below.
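
One possible shape of such a gate in bash; the container name matches the run command above:

# Fail fast if the container is not running before tests start
if ! docker ps --filter "name=my-devops-container" --filter "status=running" --format '{{.Names}}' | grep -q my-devops-container; then
  echo "Container failed to start; dumping logs:" >&2
  docker logs my-devops-container >&2
  exit 1
fi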

4. Test health check

curl http://localhost:3000/health

What it does:

  • Calls the app's liveness/readiness endpoint, typically returning 200 OK with a simple JSON payload.

Why it matters in CI/CD:

  • Readiness assurance: Confirms the service is not only running but ready to serve traffic.
  • Deployment safety: Prevents promoting builds that start but aren't healthy (e.g., DB not connected, migrations pending).
  • Automated validation: Pipelines can assert status codes and response bodies to enforce health standards; see the sketch below.
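
A minimal readiness gate, assuming the container was started as shown in step 2:

# Wait up to ~30 seconds for the app to become healthy
for i in $(seq 1 10); do
  if curl -fsS http://localhost:3000/health > /dev/null; then
    echo "App is healthy"
    exit 0
  fi
  echo "Attempt $i: not ready yet, retrying in 3s..."
  sleep 3
done
echo "App never became healthy" >&2
exit 1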

5. Stop and remove container

docker stop my-devops-container
docker rm my-devops-container


What they do:

  • docker stop: Sends a graceful shutdown signal (SIGTERM) to the containerized process.
  • docker rm: Deletes the stopped container's resources and metadata (the image remains intact).

Why it matters in CI/CD:

  • Clean slate: Avoids port conflicts, stale state, and resource leaks between pipeline runs.
  • Resource hygiene: Frees CPU/memory on runners, keeping builds stable and predictable.
  • Determinism: Ensures each run starts from a known baseline, improving the reliability of test results.

Docker Compose Commands

These three docker-compose commands represent the full lifecycle of multi‑container applications in a DevOps CI/CD pipeline. Let’s break them down carefully:

1. Start all services

docker-compose up -d
What it does:

  • Reads the docker-compose.yml file in your project.
  • Starts all services defined there (e.g., app, database, cache, message broker).
  • -d runs them in detached mode, so they run in the background.

Why it's important in CI/CD:

  • Multi-service orchestration: Many apps depend on multiple components (e.g., Node.js + MongoDB + Redis). This command spins them all up together.
  • Consistency: Ensures every developer, tester, and pipeline runner uses the same service definitions.
  • Automation: Pipelines can bring up the full stack before running integration tests, mimicking production (see the note below).
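
If you are on Compose v2 (the docker compose plugin rather than the standalone docker-compose binary), the --wait flag blocks until containers report healthy, which removes the need for manual sleep-and-retry logic in CI scripts:

# Compose v2 only: return once all healthchecks pass
docker compose up -d --wait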

2. View real‑time logs

docker-compose logs -f

What it does:

  • Streams logs from all running services defined in docker-compose.yml.
  • -f means follow mode, so you see logs in real time as they're generated.

Why it's important in CI/CD:

  • Observability: Lets you monitor startup progress and catch errors (e.g., DB connection failures, missing env vars).
  • Debugging: If a pipeline fails, logs show exactly which service broke.
  • Validation: Confirms services are running correctly before tests proceed.

3. Stop and clean up

docker-compose down

What it does:

  • Stops all running services defined in docker-compose.yml.
  • Removes containers, networks, and temporary volumes created by docker-compose up.
  • Leaves images intact (so you don't have to rebuild next time).

Why it's important in CI/CD:

  • Clean environments: Prevents leftover containers or networks from interfering with future runs.
  • Resource hygiene: Frees CPU, memory, and disk space on build agents.
  • Repeatability: Ensures every pipeline run starts fresh, avoiding state leakage between builds. A full-lifecycle sketch follows.
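
Putting the three commands together, one possible CI smoke-test sequence looks like this (a sketch; the sleep is a crude stand-in for a proper retry loop):

docker-compose up -d                      # start the stack in the background
sleep 10                                  # give the app time to boot
curl -fsS http://localhost:3000/health    # fail the job if unhealthy
docker-compose logs                       # capture output in the build log
docker-compose down                       # leave the runner clean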

Step 9: Push to GitHub
Estimated time: 10 minutes

git add .

This stages all the changes in your project directory (the . means “everything here”). It tells Git which files you want to include in the next commit.
Why it’s important: Without staging, Git won’t know which files to save. This step ensures your project files (code, configs, Dockerfiles, workflows) are prepared to be recorded in history.

git commit -m "Initial commit: Complete CloudForge CI/CD Lab"

This creates a snapshot of your staged files with a descriptive message. The commit becomes a permanent record in your repository’s history.
Why it’s important: Commits are the foundation of version control. They let you track changes, roll back if needed, and provide context for what was done. The message “Initial commit” marks the starting point of your project.

git branch -M main

This renames your current branch to main. GitHub uses main as the default branch for new repositories.
Why it’s important: Aligning with GitHub’s default branch ensures consistency. Your CI/CD workflows are often configured to run on main, so this step guarantees your pipeline will trigger correctly.

git remote add origin https://github.com/YOUR_USERNAME/cloudforge-ci-cd-lab.git

This links your local repository to a remote repository hosted on GitHub. The name origin is a shorthand reference to that remote.
Why it’s important: Without a remote, your code only exists locally. Adding GitHub as the remote allows you to push your project online, collaborate with others, and integrate with GitHub Actions for CI/CD.

git push -u origin main

This pushes your local main branch commits to the remote repository on GitHub. The -u flag sets origin/main as the default upstream branch, so future pushes and pulls are simpler.
Why it’s important: This is the moment your code leaves your machine and enters GitHub. Once it’s there, GitHub Actions detects the workflow file (.github/workflows/ci.yml) and automatically starts your CI/CD pipeline.


The CI/CD pipeline now runs automatically

Because you’ve pushed the workflow file to GitHub, every new commit or pull request to main (or other configured branches like develop) will trigger the pipeline. GitHub Actions will build, test, and validate your project without manual intervention.
Why it’s important: This automation ensures that every change is tested and verified before deployment, reducing errors and speeding up delivery. It’s the essence of DevOps — continuous integration and continuous delivery.
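
You can also watch runs from the terminal with the GitHub CLI (gh), if you have it installed and authenticated via gh auth login:

gh run list --branch main --limit 5    # recent pipeline runs on main
gh run watch                           # follow a run live (prompts you to pick one)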

Step 10: Kubernetes Deployment Configurations

These steps define how your app should run in two environments—staging and production—using Kubernetes manifests. These files describe the desired state (replicas, images, ports, health checks, resources), and Kubernetes continuously enforces that state. In a CI/CD pipeline, your builds produce container images, and these manifests tell Kubernetes which image to run, how many copies, and how to expose and monitor them—so deployments become repeatable, observable, and safe.

Directory structure and environment separation

  • Purpose:

    You create k8s/staging and k8s/production to keep environment-specific configs isolated. This prevents accidental cross-environment changes and makes promotion clear: staging uses one set of manifests, production another.

  • Why it matters in CI/CD:

    Clear promotion paths: CI builds and tags images (e.g., the develop branch tag for staging, latest for production), then CD applies the corresponding manifests.

    Risk control: Staging can have fewer replicas, looser resource limits, or different probes, while production is stricter and scaled.

mkdir -p k8s/staging k8s/production

Staging deployment: intent and key fields

Create k8s/staging/deployment.yml

Paste the following manifest into the file you just created:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-app-staging
  namespace: staging
spec:
  replicas: 2
  selector:
    matchLabels:
      app: devops-app
      environment: staging
  template:
    metadata:
      labels:
        app: devops-app
        environment: staging
    spec:
      containers:
      - name: app
        image: ghcr.io/YOUR_GITHUB_USERNAME/cloudforge-ci-cd-lab:develop
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: "staging"
        - name: PORT
          value: "3000"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: devops-app-service-staging
  namespace: staging
spec:
  selector:
    app: devops-app
    environment: staging
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer

  • Why it matters in CI/CD:

    Safe validation: Staging runs the latest develop image with health checks, so your pipeline can deploy, verify readiness, run tests, and catch issues before production.

    Observability: Probes provide automated signals to fail fast if the app isn't healthy.

Production deployment: scaling, resources, and stricter controls
Create k8s/production/deployment.yml:

touch k8s/production/deployment.yml

Paste the following manifest into the file you just created:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-app-production
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: devops-app
      environment: production
  template:
    metadata:
      labels:
        app: devops-app
        environment: production
    spec:
      containers:
      - name: app
        image: ghcr.io/YOUR_GITHUB_USERNAME/cloudforge-ci-cd-lab:latest
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: "production"
        - name: PORT
          value: "3000"
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: devops-app-service-production
  namespace: production
spec:
  selector:
    app: devops-app
    environment: production
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer

  • Why it matters in CI/CD:

    Reliability and cost control: Resource requests/limits prevent overconsumption and stabilize workloads.

    Zero-downtime readiness: Readiness probes ensure new pods receive traffic only when ready, enabling safe rolling updates.

Image tags and CI/CD flow

  • Develop vs. latest:

    Staging uses the develop branch tag, built from the develop branch on each push. Production uses latest, built from main or tagged releases after passing tests.

  • Pipeline fit:

    CI: Builds and pushes images to GHCR with branch-specific tags.

    CD (staging): Applies k8s/staging/deployment.yml after CI succeeds, then validates via probes and tests.

    CD (production): Applies k8s/production/deployment.yml upon approval or automated promotion, leveraging readiness probes for a safe rollout. A command sketch follows.
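
A sketch of the commands such a CD job could run, assuming kubectl is configured against the target cluster and the staging and production namespaces already exist:

# Staging rollout, gated on readiness
kubectl apply -f k8s/staging/deployment.yml
kubectl rollout status deployment/devops-app-staging -n staging --timeout=120s

# Production rollout, after approval
kubectl apply -f k8s/production/deployment.yml
kubectl rollout status deployment/devops-app-production -n production --timeout=120s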

Step 11: Complete Deployment Workflow

This final step ties together everything you’ve built in your CI/CD pipeline by introducing a branch‑based deployment strategy. The idea is simple but powerful: each branch in your Git repository corresponds to a specific environment. The develop branch is used for staging, so whenever you push changes there, GitHub Actions automatically runs tests and deploys the latest build to the staging cluster. This allows you to validate new features in a safe, production‑like environment without affecting real users.

The main branch is reserved for production, meaning only code that has been tested and approved gets merged into it. Once you push to main, GitHub Actions executes the full pipeline again, but this time it deploys to your production environment, ensuring that only stable builds reach end users. Pull requests are handled differently: they trigger tests but do not deploy, which provides a safeguard by catching issues before merging into staging or production.

Deploy to staging:

# Create and switch to develop branch
git checkout -b develop

# Make your changes, then commit and push
git add .
git commit -m "Add new feature"
git push origin develop

Deploy to production:

# Switch to main branch
git checkout main

# Merge changes from develop
git merge develop

# Push to trigger production deployment
git push origin main

Monitor Deployments

After deployment, monitoring is critical. GitHub Actions provides real‑time visibility into pipeline runs, while the GitHub Container Registry stores your built images for traceability. Finally, health checks against your staging and production URLs confirm that the applications are live and responding correctly. This workflow enforces discipline, reduces risk, and ensures that every change follows a predictable path from development to production.
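
A simple post-deploy smoke check can codify those health checks; the URLs below are placeholders for the LoadBalancer addresses of your staging and production Services:

for url in https://staging.example.com https://app.example.com; do
  if curl -fsS "$url/health" > /dev/null; then
    echo "OK:   $url"
  else
    echo "FAIL: $url" >&2
  fi
done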


Conclusion: What CloudForge CI/CD Lab Really Teaches

CloudForge CI/CD Lab is not about learning individual tools — it is about understanding systems thinking in modern software delivery.

By building this pipeline from first principles, we moved deliberately through every layer of a real production workflow: version control discipline, application design for operability, automated testing, containerization, continuous integration, and declarative deployment. Each step was introduced not because it is fashionable, but because it solves a concrete problem that teams face in real-world environments.

More importantly, this project demonstrates a shift-left mindset. Problems are detected early through editor annotations, linting, and automated tests — long before they reach CI pipelines or production clusters. This is how high-performing teams reduce failures, shorten feedback loops, and ship confidently.

CloudForge CI/CD Lab also highlights that production readiness is not a single feature, but the accumulation of many small, intentional decisions: running containers as non-root, handling graceful shutdowns, enforcing resource limits, and treating infrastructure as code. None of these choices are optional in serious systems, and together they form the foundation of reliable delivery.

If you can build, explain, and reason about a pipeline like this, you are no longer just “using DevOps tools.” You are thinking like a DevOps engineer — one who understands why automation exists, how systems fail, and how to design workflows that scale with both code and teams.

This lab is not the end of the journey, but it is a solid, production-aligned starting point. From here, the same principles extend naturally into cloud platforms, advanced observability, progressive delivery, and platform engineering — without changing the fundamentals.
