Xavier Fok

Real-Time Proxy Monitoring: Build a Dashboard with Python and Grafana

Flying blind with proxies is expensive. Without monitoring, you do not know which proxies are healthy, which are burned, or how much bandwidth you are wasting on failed requests. Here is how to build a real-time monitoring dashboard.

What to Monitor

Core Metrics

  1. Success rate — Percentage of requests returning HTTP 200
  2. Response time — Average and P95 latency per proxy
  3. Bandwidth usage — Data consumed per proxy and total
  4. Error distribution — Types of errors (timeout, 403, 429, CAPTCHA)
  5. IP uniqueness — How many unique IPs you are actually using

Operational Metrics

  1. Pool health — Percentage of active vs failed proxies
  2. Rotation frequency — How often IPs change
  3. Geographic distribution — Where your exit IPs are located
  4. Cost per successful request — Real cost accounting
  5. Blacklist rate — How many IPs are currently blocked
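Cost per successful request is simple arithmetic, but it is the number that makes proxy comparisons honest. A minimal sketch (the prices and counts below are made-up numbers):

```python
def cost_per_success(successful_requests, gb_used, price_per_gb):
    """Spend divided by the requests that actually succeeded."""
    spend = gb_used * price_per_gb
    if successful_requests == 0:
        return float("inf")  # all spend, nothing to show for it
    return spend / successful_requests

# Example: 8,200 successful requests on 3.5 GB of traffic at $8/GB
print(round(cost_per_success(8_200, 3.5, 8.0), 4))  # ~ $0.0034 per successful request
```

Failed requests still consume bandwidth, so this number is always worse than the raw per-GB price suggests.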

Architecture

Your Application
    |
    v
Proxy Middleware (collects metrics)
    |
    v
Prometheus (stores time-series data)
    |
    v
Grafana (visualizes dashboards)
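The stack in the diagram can be stood up locally with Docker Compose; this is a minimal sketch (image tags, ports, and file paths are assumptions, adjust them for your environment):

```yaml
# docker-compose.yml (minimal sketch)
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```

Note that from inside the Prometheus container, `localhost:8000` refers to the container itself; on Docker Desktop you would scrape the host via `host.docker.internal:8000` instead.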

Step 1: Metrics Collection

Create a proxy wrapper that records Prometheus metrics for every request. Call `start_http_server(8000)` once at startup so there is an endpoint for Prometheus to scrape:

import time
from urllib.parse import urlparse

import requests
from prometheus_client import Counter, Histogram, Gauge, start_http_server

# Define metrics
REQUEST_COUNT = Counter(
    "proxy_requests_total",
    "Total proxy requests",
    ["proxy", "status", "target_domain"]
)

RESPONSE_TIME = Histogram(
    "proxy_response_seconds",
    "Response time in seconds",
    ["proxy"],
    buckets=[0.1, 0.5, 1, 2, 5, 10, 30]
)

ACTIVE_PROXIES = Gauge(
    "proxy_pool_active",
    "Number of active proxies in pool"
)

BANDWIDTH = Counter(
    "proxy_bandwidth_bytes",
    "Bandwidth consumed in bytes",
    ["proxy", "direction"]
)

class MonitoredProxy:
    def __init__(self, proxy_url):
        self.proxy_url = proxy_url
        self.proxy_dict = {"http": proxy_url, "https": proxy_url}

    def request(self, url, **kwargs):
        start = time.time()
        domain = urlparse(url).netloc
        # Pop timeout so it is not passed twice via **kwargs below
        timeout = kwargs.pop("timeout", 15)

        try:
            response = requests.get(
                url,
                proxies=self.proxy_dict,
                timeout=timeout,
                **kwargs
            )
            duration = time.time() - start

            # Record metrics
            REQUEST_COUNT.labels(
                proxy=self.proxy_url,
                status=str(response.status_code),
                target_domain=domain
            ).inc()

            RESPONSE_TIME.labels(proxy=self.proxy_url).observe(duration)

            BANDWIDTH.labels(
                proxy=self.proxy_url, direction="response"
            ).inc(len(response.content))

            return response

        except requests.RequestException:
            # Network-level failure: count it, then re-raise for the caller
            REQUEST_COUNT.labels(
                proxy=self.proxy_url,
                status="error",
                target_domain=domain
            ).inc()
            raise
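The `except` branch above records every failure under a single `status="error"` label. To get the error-distribution breakdown from the metrics list (timeout, 403, 429, CAPTCHA), a small classifier can map responses and exceptions to label values. This is a sketch, and the bucket names are my own invention:

```python
def classify_outcome(status_code=None, exc_name=None, body=""):
    """Map a response (or exception class name) to an error-distribution bucket."""
    if exc_name is not None:
        # e.g. requests.exceptions.ReadTimeout -> "timeout"
        return "timeout" if "Timeout" in exc_name else "connection_error"
    if status_code == 403:
        return "blocked_403"
    if status_code == 429:
        return "rate_limited_429"
    if status_code == 200 and "captcha" in body.lower():
        return "captcha"  # soft block: HTTP 200 wrapping a CAPTCHA page
    return "ok" if status_code == 200 else "other"

print(classify_outcome(status_code=429))         # rate_limited_429
print(classify_outcome(exc_name="ReadTimeout"))  # timeout
```

Use the returned bucket as the `status` label value instead of the flat `"error"` string, and the error-breakdown panel below gets much more useful.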

Step 2: Prometheus Configuration

# prometheus.yml
scrape_configs:
  - job_name: "proxy_monitor"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8000"]

Step 3: Grafana Dashboard Panels

Key panels for your dashboard:

Success Rate Over Time

sum(rate(proxy_requests_total{status="200"}[5m])) /
sum(rate(proxy_requests_total[5m])) * 100

Average Response Time

rate(proxy_response_seconds_sum[5m]) /
rate(proxy_response_seconds_count[5m])
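P95 Latency

The metrics list above promises P95 as well as averages. Because proxy_response_seconds is a Histogram, P95 can be computed from its buckets (a sketch, assuming the bucket boundaries defined in Step 1):

```
histogram_quantile(
  0.95,
  sum by (le, proxy) (rate(proxy_response_seconds_bucket[5m]))
)
```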

Error Breakdown

sum by (status) (rate(proxy_requests_total{status!="200"}[5m]))

Bandwidth Usage

Note that prometheus_client appends _total to counter names, so the bandwidth counter is exposed as proxy_bandwidth_bytes_total. This gives bytes per hour:

sum(rate(proxy_bandwidth_bytes_total[1h])) * 3600
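Pool Health

The ACTIVE_PROXIES gauge defined in Step 1 can back a pool-health stat panel, assuming your rotation code calls `ACTIVE_PROXIES.set(...)` as proxies pass or fail health checks:

```
proxy_pool_active
```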

Alert Rules

Set up alerts for critical conditions:

# alert_rules.yml
groups:
  - name: proxy_alerts
    rules:
      - alert: LowSuccessRate
        expr: |
          sum(rate(proxy_requests_total{status="200"}[5m])) /
          sum(rate(proxy_requests_total[5m])) < 0.8
        for: 5m
        annotations:
          summary: Proxy success rate below 80%

      - alert: HighLatency
        expr: |
          rate(proxy_response_seconds_sum[5m]) /
          rate(proxy_response_seconds_count[5m]) > 5
        for: 5m
        annotations:
          summary: Average proxy latency above 5 seconds

Quick Alternative: Simple File-Based Logging

If Prometheus and Grafana are overkill for your setup, a simple CSV logger works:

import csv
from datetime import datetime

def log_request(proxy, url, status, latency, bytes_received):
    with open("proxy_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.now().isoformat(),
            proxy, url, status,
            round(latency, 3),
            bytes_received
        ])

Analyze with pandas later to identify trends and problem proxies.
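A sketch of that pandas analysis, with column names matching the log_request fields above and a tiny in-memory sample standing in for proxy_log.csv:

```python
import io

import pandas as pd

COLUMNS = ["timestamp", "proxy", "url", "status", "latency", "bytes"]

def summarize(csv_source):
    """Per-proxy request count, success rate, and P95 latency from the CSV log."""
    df = pd.read_csv(csv_source, names=COLUMNS)
    return df.groupby("proxy").agg(
        requests=("status", "size"),
        success_rate=("status", lambda s: (s == 200).mean()),
        p95_latency=("latency", lambda s: s.quantile(0.95)),
    )

sample = io.StringIO(
    "2024-01-01T00:00:00,p1,http://example.com,200,0.512,1000\n"
    "2024-01-01T00:00:01,p1,http://example.com,429,0.204,120\n"
    "2024-01-01T00:00:02,p2,http://example.com,200,1.003,2048\n"
)
print(summarize(sample))
```

Sorting the result by success_rate surfaces the problem proxies worth rotating out first.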

For proxy monitoring setups and infrastructure guides, visit DataResearchTools.
