DEV Community

manja316
I Built a Full API Monitoring Dashboard in 20 Minutes Using Claude Code

Last week I needed a quick dashboard to monitor three different APIs my trading bot depends on. I didn't want to set up Grafana. I didn't want to write a React app. I wanted something running in 20 minutes.

Here's how I did it with a Claude Code skill — and why I think skills are the most underrated feature in the AI tooling space right now.

The Problem

My Polymarket trading bot calls three APIs:

  • Polymarket CLOB for order placement
  • CoinGecko for price feeds
  • A custom FastAPI service for signal generation

When any of these go down, I lose money. I needed:

  1. Health checks every 60 seconds
  2. Response time tracking
  3. Error rate alerting
  4. A terminal-based view I can glance at

The Approach: Claude Code Skills as Micro-Tools

If you haven't used Claude Code skills yet, here's the pitch: they're reusable prompt+tool bundles that extend what Claude can do in your terminal. Think of them like plugins, but you build them in plain markdown.
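For context, a skill is just a folder containing a SKILL.md file: YAML frontmatter that tells Claude when to use it, followed by instructions in plain markdown. A minimal sketch of the shape (the names and wording here are illustrative, not the actual Dashboard Builder source):

```markdown
---
name: dashboard-builder
description: Generates API monitoring dashboards from a YAML config.
---

# Dashboard Builder

When the user provides a dashboard.yaml, generate a Python monitoring
script with health checks, response-time tracking, and terminal rendering.
```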

I used my Dashboard Builder skill to generate the monitoring setup. Here's what the skill does under the hood.

Step 1: Define Your Data Sources

The skill takes a simple YAML config:

# dashboard.yaml
sources:
  polymarket_clob:
    url: "https://clob.polymarket.com/health"
    method: GET
    interval: 60
    timeout: 5000
    alerts:
      response_time_ms: 2000
      error_rate_pct: 5

  coingecko:
    url: "https://api.coingecko.com/api/v3/ping"
    method: GET
    interval: 60
    timeout: 3000

  signal_service:
    url: "http://localhost:8080/health"
    method: GET
    interval: 30
    timeout: 2000
    alerts:
      response_time_ms: 500
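Once parsed (PyYAML's yaml.safe_load turns this into a plain dict), each source needs defaults for the optional keys. A sketch of how the generated code might normalize the config — the dict below is hand-written to mirror the YAML above and keep the example self-contained:

```python
# Parsed form of (part of) the YAML above, hand-written to stay self-contained.
RAW_CONFIG = {
    "sources": {
        "coingecko": {
            "url": "https://api.coingecko.com/api/v3/ping",
            "method": "GET",
            "interval": 60,
            "timeout": 3000,
        },
        "signal_service": {
            "url": "http://localhost:8080/health",
            "interval": 30,
            "timeout": 2000,
            "alerts": {"response_time_ms": 500},
        },
    }
}

# Fallbacks for keys a source may omit.
DEFAULTS = {"method": "GET", "interval": 60, "timeout": 5000, "alerts": {}}

def normalize(raw: dict) -> dict:
    """Fill in defaults so downstream code never needs .get() chains."""
    return {name: {**DEFAULTS, **cfg} for name, cfg in raw["sources"].items()}
```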

Step 2: The Monitoring Loop

The skill generates a Python script that does the actual monitoring. Here's the core logic:

import asyncio
import aiohttp
import time
from dataclasses import dataclass, field
from collections import deque

@dataclass
class EndpointStats:
    name: str
    url: str
    response_times: deque = field(
        default_factory=lambda: deque(maxlen=60)
    )
    errors: int = 0
    total_checks: int = 0
    last_status: int = 0
    last_checked: float = 0

    @property
    def avg_response_ms(self) -> float:
        if not self.response_times:
            return 0
        return sum(self.response_times) / len(self.response_times)

    @property
    def error_rate(self) -> float:
        if self.total_checks == 0:
            return 0
        return (self.errors / self.total_checks) * 100

async def check_endpoint(
    session: aiohttp.ClientSession,
    stats: EndpointStats,
    timeout: int = 5000
) -> None:
    start = time.monotonic()
    try:
        async with session.get(
            stats.url,
            timeout=aiohttp.ClientTimeout(
                total=timeout / 1000
            )
        ) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            stats.response_times.append(elapsed_ms)
            stats.last_status = resp.status
            stats.total_checks += 1
            if resp.status >= 400:
                stats.errors += 1
    except Exception:
        elapsed_ms = (time.monotonic() - start) * 1000
        stats.response_times.append(elapsed_ms)
        stats.errors += 1
        stats.total_checks += 1
        stats.last_status = 0
    stats.last_checked = time.time()
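The post doesn't show the loop that ties these checks together. One way to honor each source's interval is to run one task per endpoint with asyncio.gather. A self-contained sketch with a stubbed-out check — in practice you'd swap in check_endpoint, real intervals, and an endless loop:

```python
import asyncio

async def poll_forever(name: str, interval: float, check, results: list,
                       cycles: int = 3) -> None:
    # In production this loops forever; `cycles` bounds it for demonstration.
    for _ in range(cycles):
        results.append((name, await check(name)))
        await asyncio.sleep(interval)

async def demo() -> list:
    results: list = []

    async def fake_check(name: str) -> int:
        # Stand-in for check_endpoint(); always reports HTTP 200.
        return 200

    # One task per endpoint, each on its own interval (shortened here).
    await asyncio.gather(
        poll_forever("signal_service", 0.01, fake_check, results),
        poll_forever("coingecko", 0.02, fake_check, results),
    )
    return results
```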

Step 3: Terminal Rendering

The part I actually care about — making it readable at a glance:

def render_dashboard(endpoints: list[EndpointStats]) -> str:
    width = 44  # inner width; matches the data rows built below
    lines = []
    lines.append("╔" + "═" * width + "╗")
    lines.append("║" + "API MONITORING DASHBOARD".center(width) + "║")
    lines.append("╠" + "═" * width + "╣")

    for ep in endpoints:
        # Green dot for healthy (HTTP 200), red for anything else.
        color = "\033[92m" if ep.last_status == 200 else "\033[91m"
        reset = "\033[0m"

        lines.append(
            f"║ {color}●{reset} {ep.name:<20} "
            f"{ep.avg_response_ms:>6.0f}ms  "
            f"err:{ep.error_rate:>4.1f}% ║"
        )

    lines.append("╚" + "═" * width + "╝")
    return "\n".join(lines)

Running it gives you:

╔════════════════════════════════════════════╗
║          API MONITORING DASHBOARD          ║
╠════════════════════════════════════════════╣
║ ● polymarket_clob         142ms  err: 0.0% ║
║ ● coingecko               289ms  err: 0.2% ║
║ ● signal_service           12ms  err: 0.0% ║
╚════════════════════════════════════════════╝
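To keep this on screen, the script redraws in place rather than scrolling. A sketch of the framing helper — the escape codes are standard VT100 (clear screen, cursor home), not anything specific to the skill:

```python
CLEAR = "\033[2J\033[H"  # clear screen, then move cursor to top-left

def frame(dashboard: str) -> str:
    # Prepend the clear sequence so each print() replaces the last frame.
    return CLEAR + dashboard

# In the main loop, something like:
#     print(frame(render_dashboard(endpoints)), end="", flush=True)
```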

Adding Alerts

The useful part: getting notified before things break. I pipe alerts to a simple webhook:

async def check_alerts(
    stats: EndpointStats,
    config: dict
) -> list[str]:
    alerts = []
    thresholds = config.get("alerts", {})

    max_response = thresholds.get("response_time_ms", 5000)
    if stats.avg_response_ms > max_response:
        alerts.append(
            f"{stats.name}: response time "
            f"{stats.avg_response_ms:.0f}ms "
            f"> {max_response}ms threshold"
        )

    max_error = thresholds.get("error_rate_pct", 10)
    if stats.error_rate > max_error:
        alerts.append(
            f"{stats.name}: error rate "
            f"{stats.error_rate:.1f}% "
            f"> {max_error}% threshold"
        )

    return alerts
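The webhook delivery itself isn't shown above. A minimal stdlib-only sketch — the payload shape here is a guess (Slack-style {"text": ...}); adjust it to whatever your webhook endpoint actually expects:

```python
import json
import urllib.request

def build_payload(alerts: list[str]) -> bytes:
    # Assumed Slack-style shape; adapt to your webhook's schema.
    return json.dumps({"text": "\n".join(alerts)}).encode()

def send_alerts(alerts: list[str], webhook_url: str) -> int:
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(alerts),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```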

Why Skills Beat One-Off Scripts

I could have written this as a standalone Python script. But making it a Claude Code skill means:

  1. Reusability — I pointed it at a different set of APIs for another project in 2 minutes. Just changed the YAML.
  2. Composability — I combined it with my API Connector skill to auto-generate the health check configs from OpenAPI specs.
  3. Iteration speed — When I wanted to add the alerting layer, I described what I wanted and the skill scaffolded it correctly because it already understood the monitoring context.

The Full Stack I Actually Use

For anyone building trading infrastructure or running services that need to stay up:

  • Dashboard Builder ($7) — generates monitoring dashboards from YAML configs. Terminal-based or HTML output.
  • API Connector ($7) — builds typed API clients from OpenAPI/Swagger specs. I use this for every new API integration.
  • Security Scanner ($10) — scans your codebase for OWASP top 10 vulnerabilities. I run this before every deploy.

What I'd Do Differently

The terminal rendering is fine for me, but if you're sharing with a team, you probably want the HTML output mode. The Dashboard Builder skill supports both — pass --format html and it generates a self-contained HTML file with auto-refresh.

Also: don't poll APIs faster than you need to. My 30-second interval for the local service is fine, but I've seen people set 5-second intervals on third-party APIs and get rate-limited. Match the interval to how fast you actually need to detect problems.
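One way to enforce "no faster than you need" is to back off when an endpoint keeps failing, so a dead third-party API doesn't eat your rate limit. A sketch of exponential backoff with a cap (the parameters are illustrative):

```python
def next_interval(base: float, consecutive_failures: int,
                  cap: float = 300.0) -> float:
    # Double the wait for each consecutive failure, but never exceed `cap`.
    return min(base * (2 ** consecutive_failures), cap)
```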

Try It

  1. Install Claude Code if you haven't: npm install -g @anthropic-ai/claude-code
  2. Grab the Dashboard Builder skill
  3. Point it at your APIs
  4. Have a working dashboard before your coffee gets cold

The code examples above are from my actual monitoring setup. The full monitoring script with systemd service config is about 200 lines of Python — the skill generates all of it from that YAML config.


Building tools that make developers faster. Check out my other Claude Code skills on Gumroad.
