Abigail Mojume

Building an Automated Monitoring Pipeline with Prometheus, Grafana, and cAdvisor

How I automated infrastructure monitoring deployment using GitHub Actions, Docker Compose, and container metrics


The Problem

In today's world, monitoring is essential. We need to track CPU, memory, disk, and network metrics to understand system performance. Additionally, monitoring request volumes, response times, and error rates helps identify issues before they impact users.

But here's the challenge: manually setting up monitoring infrastructure every time your code is successfully deployed is time-consuming and error-prone. What if you could automatically deploy your entire monitoring stack whenever new code gets deployed?

That's exactly what I built—an automated monitoring pipeline that deploys Prometheus, Grafana, and cAdvisor as the final step in our deployment process.


The Solution: A Three-Part Pipeline

Our complete CI/CD pipeline consists of:

  1. Code Quality Check (SonarQube) - Ensures code meets quality standards
  2. Application Deployment - Deploys the application to production servers
  3. Monitoring Setup (this article) - Automatically deploys monitoring infrastructure

When code passes quality checks and deploys successfully, the monitoring pipeline triggers automatically, ensuring our newly deployed application is immediately observable.


Directory Structure

Here's what we're building:

dev-prometheus-monitoring
    ├── prometheus.yml          # Monitoring configuration
    ├── docker-compose.yml      # Container orchestration
    ├── monitor.env            # Environment variables
    └── .github/workflows/
        └── deploy-monitoring.yml   # Automation workflow

The components:

  • Prometheus: Collects and stores metrics
  • Grafana: Visualizes metrics in dashboards
  • cAdvisor: Exposes per-container resource and performance metrics
  • Redis: Stores historical data for cAdvisor
  • GitHub Actions: Automates deployment
  • Docker Compose: Orchestrates all containers

Why cAdvisor for Container Monitoring?

For containerized environments, cAdvisor provides critical insights that traditional system monitors can't:

  • Per-container resource usage: See exactly which container is consuming resources
  • Container-specific metrics: CPU, memory, network, and filesystem usage for each container
  • Real-time visibility: Track container performance as workloads scale
  • Docker integration: Native understanding of Docker containers and their lifecycles

This makes it ideal for microservices architectures where multiple containers run on the same host.
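
Once the stack is running (Step 3 onward), a couple of example PromQL queries, typed into the Prometheus expression browser, show what this per-container visibility looks like. The metric and label names below are the standard ones cAdvisor exports:

# CPU usage per container (in cores), averaged over the last 5 minutes
sum by (name) (rate(container_cpu_usage_seconds_total{name!=""}[5m]))

# Current memory working set per container, in bytes
container_memory_working_set_bytes{name!=""}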


Step 1: Setting Up Your Workspace

I created a dedicated directory for this project on the VM:

mkdir dev-prometheus-monitoring
cd dev-prometheus-monitoring

Step 2: Configuring Prometheus

Create prometheus.yml - this tells Prometheus what to monitor and how often:

global:
  scrape_interval: 15s      # Collect metrics every 15 seconds
  evaluation_interval: 15s   # Evaluate alerting rules every 15 seconds

scrape_configs:
  # Monitor Prometheus itself
  - job_name: 'nvs-prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Monitor Docker containers via cAdvisor
  - job_name: 'nvs-vm'
    static_configs:
      - targets: ['cadvisor:8080']

Understanding the config:

  • scrape_interval: How frequently Prometheus pulls metrics. 15 seconds balances real-time visibility with resource usage.
  • job_name: Logical grouping of similar targets
  • targets: Specific endpoints exposing metrics in the format host:port

The nvs-vm job scrapes cAdvisor, which exposes metrics for all Docker containers running on the host.
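
Once the containers are up (Step 3 onward), you can peek at the raw data Prometheus will scrape by hitting cAdvisor's metrics endpoint directly on the host; adjust the host and port if yours differ:

curl -s http://localhost:8080/metrics | grep container_cpu_usage_seconds_total | head -n 5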


Step 3: Creating the Docker Compose File

Docker Compose allows us to define and run multiple containers with a single command. Create docker-compose.yml:

version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    depends_on:
      - cadvisor
    networks:
      - monitoring
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    env_file:
      - /home/administrator/Desktop/workspace/dev-prometheus-monitoring/monitor.env
    networks:
      - monitoring
    restart: unless-stopped

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
      - redis
    networks:
      - monitoring
    restart: unless-stopped

  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    networks:
      - monitoring
    restart: unless-stopped

volumes:
  prometheus-data:
  grafana-data:

networks:
  monitoring:
    driver: bridge
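
Before wiring this into the automation, it's worth bringing the stack up by hand once to confirm everything starts cleanly (this assumes Docker and the Compose plugin are already installed on the VM):

docker compose up -d                  # start all four containers in the background
docker compose ps                     # confirm prometheus, grafana, cadvisor, and redis are running
docker compose logs -f prometheus     # tail Prometheus logs if a target isn't coming up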

Step 4: Environment Configuration

Create monitor.env to store Grafana configuration:

GF_SECURITY_ADMIN_PASSWORD=your_secure_password
GF_SERVER_ROOT_URL=http://your-server-ip:3000

This keeps sensitive configuration separate from the docker-compose file and makes it easy to change settings without modifying the main configuration. Note that monitor.env contains a password, so keep the repository private, or leave the file out of version control and place it on the server by hand.
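
Grafana reads any GF_* environment variable, so the same file can carry other settings too. A couple of common ones you might add (the values here are just placeholders):

GF_SECURITY_ADMIN_USER=admin          # change the default admin username
GF_USERS_ALLOW_SIGN_UP=false          # disable self-service sign-up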


Step 5: Version Control Setup

Initialize Git and push to GitHub:

git init
git add prometheus.yml docker-compose.yml monitor.env
git commit -m "Initial monitoring setup"
git remote add origin https://github.com/yourusername/dev-prometheus-monitoring.git
git push -u origin main

Create a .gitignore file:

_actions/
*.log
.DS_Store

Why ignore _actions/? This directory contains the self-hosted runner files (explained next), which are large and system-specific.


Step 6: Setting Up Self-Hosted GitHub Actions Runner

Since we're working with on-premise servers, we need a self-hosted runner that can access our internal network.

Steps:

  1. Go to your GitHub repository
  2. Navigate to Settings → Actions → Runners
  3. Click "New self-hosted runner"
  4. Follow the provided commands on your Linux VM:
# Download the runner
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.311.0.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.311.0.tar.gz

# Configure the runner
./config.sh --url https://github.com/yourusername/dev-prometheus-monitoring \
  --token YOUR_TOKEN

# Start the runner
./run.sh

For production, install it as a service:

sudo ./svc.sh install
sudo ./svc.sh start

Why self-hosted? GitHub-hosted runners can't access on-premise servers. Our self-hosted runner bridges the gap between GitHub (cloud) and our infrastructure (on-premise).
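
Once installed as a service, you can confirm the runner is online (it should also appear as "Idle" under Settings → Actions → Runners):

sudo ./svc.sh status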


Step 7: Adding GitHub Secrets

Store sensitive credentials securely:

  1. Go to repository Settings → Secrets and variables → Actions
  2. Add these secrets:
    • GH_USERNAME: Your GitHub username
    • GH_TOKEN: Personal access token for GitHub authentication

These will be used to authenticate when cloning the repository during deployment.


Step 8: Creating the Automation Workflow

Create .github/workflows/deploy-monitoring.yml:

name: deploy-monitoring-solution

on:
  repository_dispatch:
    types: [deploy-monitoring]

permissions:
  contents: read
  packages: read

jobs:
  deploy-prometheus:
    name: Deploy Prometheus Stack
    runs-on: [self-hosted, "server-76"]

    steps:
      - name: Checkout orchestrator repo
        uses: actions/checkout@v4

      - name: Clone Prometheus repo
        working-directory: /home/administrator/Desktop/workspace
        run: |
          rm -rf dev-prometheus-monitoring
          git clone https://${{ secrets.GH_USERNAME }}:${{ secrets.GH_TOKEN }}@github.com/yourusername/dev-prometheus-monitoring.git -b main

      - name: Start monitoring stack
        working-directory: /home/administrator/Desktop/workspace/dev-prometheus-monitoring
        run: |
          docker compose up -d

      - name: Cleanup old images
        run: |
          docker image prune -af

Pro tip I learned: Use mkdir -p .github/workflows to create nested directories in one command. The -p flag creates parent directories automatically.

Understanding the workflow:

Trigger

on:
  repository_dispatch:
    types: [deploy-monitoring]

This workflow is triggered by the deployment pipeline. After your application successfully deploys, the deployment workflow sends a repository_dispatch event to trigger this monitoring setup.
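
You can also fire the same event by hand to test the monitoring workflow in isolation, without waiting for a full application deployment (swap in your own username and a personal access token):

curl -X POST \
  -H "Accept: application/vnd.github.v3+json" \
  -H "Authorization: token YOUR_PERSONAL_ACCESS_TOKEN" \
  https://api.github.com/repos/yourusername/dev-prometheus-monitoring/dispatches \
  -d '{"event_type":"deploy-monitoring"}'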

Job Steps

  1. Checkout orchestrator repo: Downloads the workflow repository

  2. Clone Prometheus repo:
    • Navigates to the workspace directory
    • Removes the old copy if it exists
    • Clones fresh configuration from GitHub
    • Uses secrets for authentication

  3. Start monitoring stack:
    • Changes into the freshly cloned repository
    • Runs `docker compose up -d` to start all containers
    • The `-d` flag runs them in detached mode (background)

  4. Cleanup old images:
    • Removes unused Docker images to save disk space
    • Keeps the system clean after updates

Step 9: Integrating with the Deployment Pipeline

In your application deployment workflow, add this step at the end:

- name: Trigger monitoring deployment
  if: success()
  run: |
    curl -X POST \
      -H "Accept: application/vnd.github.v3+json" \
      -H "Authorization: token ${{ secrets.GH_PAT }}" \
      https://api.github.com/repos/yourusername/dev-prometheus-monitoring/dispatches \
      -d '{"event_type":"deploy-monitoring"}'

Here, GH_PAT is a personal access token stored as a secret in the application repository, with permission to send repository_dispatch events to the monitoring repo.

The flow:

  1. SonarQube checks pass ✓
  2. Application deploys successfully ✓
  3. Deployment workflow sends repository_dispatch event
  4. Monitoring workflow triggers automatically
  5. Prometheus, Grafana, cAdvisor, and Redis deploy
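
If you have the GitHub CLI installed, you can confirm the dispatch actually kicked off a run:

gh run list --repo yourusername/dev-prometheus-monitoring --limit 5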

Step 10: Verifying the Setup

Once deployed, access your monitoring stack:

Prometheus UI: http://your-server-ip:9090

  • Navigate to Status → Targets
  • Both targets should show "UP" status:
    • nvs-prometheus (localhost:9090)
    • nvs-vm (cadvisor:8080)

Grafana UI: http://your-server-ip:3000

  1. Login with credentials from monitor.env
  2. Add Prometheus as a data source (see the provisioning sketch below for an automated alternative):
    • Go to Configuration → Data Sources
    • Add Prometheus
    • URL: http://prometheus:9090
    • Click "Save & Test"

cAdvisor UI: http://your-server-ip:8080

  • Browse container metrics directly
  • View real-time CPU, memory, and network usage per container
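
You can also sanity-check all three components from the server's command line; each exposes a simple health endpoint:

curl -s http://localhost:9090/-/healthy    # Prometheus: should report that the server is healthy
curl -s http://localhost:3000/api/health   # Grafana: returns a small JSON payload with "database": "ok"
curl -s http://localhost:8080/healthz      # cAdvisor: should print "ok"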

Figure 1: Prometheus dashboard showing target metrics and health status

Import Dashboards

For quick visualization, import pre-built Grafana dashboards:

  • Go to Create → Import
  • For Docker monitoring, use dashboard ID: 193
  • This shows container metrics collected by cAdvisor

Figure 2: Grafana dashboard displaying real-time CPU usage metrics for Docker containers over a 3-hour period


The Complete Flow

Here's what happens end-to-end:

1. Developer commits code
   ↓
2. SonarQube validates code quality
   ↓
3. Application deploys to production
   ↓
4. Deployment workflow sends repository_dispatch
   ↓
5. Monitoring workflow triggers on self-hosted runner
   ↓
6. Runner clones latest monitoring configuration
   ↓
7. Docker Compose starts containers:
   - Redis (for data persistence)
   - cAdvisor (collects container metrics)
   - Prometheus (scrapes and stores metrics)
   - Grafana (visualizes data)
   ↓
8. Prometheus begins scraping cAdvisor every 15s
   ↓
9. Container metrics now visible in Grafana dashboards
   ↓
10. Old Docker images cleaned up automatically

Conclusion

This automated monitoring setup provides:

  1. Immediate Visibility: New deployments are monitored from the moment they start
  2. Resource Optimization: Identify containers consuming excessive resources
  3. Performance Tracking: Historical data shows performance trends over time
  4. Incident Response: Quickly diagnose issues by examining container metrics
  5. Zero Manual Work: Monitoring deploys automatically with each application deployment

The complete pipeline ensures that every time code is deployed, monitoring infrastructure is automatically configured to track it. This eliminates manual setup, reduces errors, and guarantees observability from the moment containers start running.



Questions? Drop them in the comments below. I'm happy to help troubleshoot or explain any part of this setup in more detail.


If you found this helpful, please consider following me for more DevOps automation guides and infrastructure tutorials.
