dohelper

How I Built a Docker Monitoring Dashboard in a Single Container

The Problem

I run about 10 Docker containers on a home server. Nothing crazy — nginx, postgres, redis, a few web apps. I wanted a simple way to check if everything was healthy.

The obvious answer was Prometheus + Grafana. But for my use case, it felt like bringing a firetruck to blow out a candle:

  • Prometheus: separate container, config files, scrape targets, retention policies
  • Grafana: another container, dashboards to build, data sources to configure
  • Node Exporter: yet another container for host metrics
  • cAdvisor: and another one for container metrics

That's 4 extra containers just to monitor 10 containers. Something felt wrong.

The Idea

What if monitoring was just... one container? Mount the Docker socket, open a browser, done.

That's what I built. It's called DockProbe.

*(Screenshot: the DockProbe dashboard)*

Architecture Decisions

Why FastAPI?

I needed async because Docker stats streaming is inherently async. FastAPI with uvicorn gave me:

  • Async Docker API calls via aiodocker
  • Built-in OpenAPI docs (useful during development)
  • Single-process, low memory footprint
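One wrinkle with the raw stats payload: the Docker API doesn't hand you a ready-made CPU percentage — you derive it from the current and previous CPU counters in each snapshot. A minimal sketch of that calculation (the field names follow the Docker Engine stats format; `cpu_percent` is an illustrative helper, not necessarily DockProbe's exact code):

```python
def cpu_percent(stats: dict) -> float:
    """Derive CPU % from a single Docker stats snapshot.

    Uses the delta between the current reading (cpu_stats) and the
    previous one (precpu_stats), scaled by the number of online CPUs.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu.get("system_cpu_usage", 0)
    if system_delta <= 0:
        return 0.0  # first reading after start has no usable precpu baseline
    return cpu_delta / system_delta * cpu.get("online_cpus", 1) * 100.0
```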

Why SQLite?

For a monitoring tool with 7-day retention, SQLite in WAL mode is perfect:

  • No separate database container
  • Handles the write pattern well (one insert every 10 seconds)
  • Named volume for persistence across container restarts
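A sketch of how such a store can look: enable WAL once at startup, insert one row per collection cycle, and prune anything older than the retention window. Table and column names here are illustrative, not DockProbe's actual schema:

```python
import sqlite3
import time

RETENTION_SECONDS = 7 * 24 * 3600  # 7-day retention

def open_store(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    conn.execute(
        "CREATE TABLE IF NOT EXISTS metrics ("
        "  ts REAL NOT NULL,"
        "  container TEXT NOT NULL,"
        "  cpu_pct REAL,"
        "  mem_bytes INTEGER)"
    )
    return conn

def record(conn: sqlite3.Connection, container: str, cpu_pct: float, mem_bytes: int) -> None:
    # One insert per collection cycle -- a write pattern SQLite handles easily
    conn.execute(
        "INSERT INTO metrics VALUES (?, ?, ?, ?)",
        (time.time(), container, cpu_pct, mem_bytes),
    )
    conn.commit()

def prune(conn: sqlite3.Connection) -> int:
    # Delete rows older than the retention window; returns rows removed
    cur = conn.execute(
        "DELETE FROM metrics WHERE ts < ?", (time.time() - RETENTION_SECONDS,)
    )
    conn.commit()
    return cur.rowcount
```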

Why a Single HTML File?

Zero build step. No npm, no webpack, no node_modules. The entire dashboard is one HTML file with inline CSS and JavaScript. Chart.js is loaded from CDN. This means:

  • docker compose up -d and you're done
  • No frontend build pipeline to maintain
  • Easy to customize — it's just one file

Anomaly Detection

Instead of just showing numbers, DockProbe watches for problems. Six rules run on every collection cycle (10 seconds):

# CPU: must stay high for 3 consecutive checks (30 seconds)
# to avoid false alarms from brief spikes.
from collections import defaultdict

CPU_THRESHOLD = 80.0           # percent; example value, configurable
cpu_counts = defaultdict(int)  # consecutive high readings per container

if cpu_pct > CPU_THRESHOLD:
    cpu_counts[container] += 1
    if cpu_counts[container] >= 3:
        trigger_alert("cpu_high", container, cpu_pct)
else:
    cpu_counts[container] = 0

Each alert type has a 30-minute cooldown per target, so you won't get spammed if a container is consistently misbehaving. Alerts go to Telegram via a simple httpx POST — no email server, no Slack webhook setup.
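The cooldown can be as simple as remembering when each (alert type, target) pair last fired. A sketch, assuming a 30-minute window (names are illustrative, not DockProbe's exact internals):

```python
import time

COOLDOWN_SECONDS = 30 * 60  # 30-minute cooldown per (alert type, target)
_last_fired = {}            # (alert_type, target) -> timestamp of last alert

def should_alert(alert_type, target, now=None):
    """Return True (and start the cooldown) if this alert may fire now."""
    now = time.time() if now is None else now
    key = (alert_type, target)
    last = _last_fired.get(key)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False  # still cooling down -- swallow the repeat alert
    _last_fired[key] = now
    return True

# When an alert passes the cooldown check, a Telegram notification is a
# single POST to the Bot API's sendMessage endpoint, e.g. with httpx:
#   httpx.post(f"https://api.telegram.org/bot{TOKEN}/sendMessage",
#              data={"chat_id": CHAT_ID, "text": message})
```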

The Result

  • 4 Python packages: fastapi, uvicorn, aiodocker, httpx
  • ~50MB memory usage
  • 10-second collection interval
  • 7-day data retention
  • One command to install
git clone https://github.com/deep-on/dockprobe.git && cd dockprobe && bash install.sh

What I Learned

  1. aiodocker's stats API returns different formats depending on stream=True vs stream=False. Cost me hours of debugging.
  2. Self-signed HTTPS is important even for local tools — browsers increasingly block HTTP features.
  3. Single-file dashboards are underrated. No build step means no build breakage.
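On point 1: the one-shot and streaming stats calls didn't hand back the same shape, so a small normalizer at the boundary saves grief. A purely illustrative sketch — check what your aiodocker version actually returns before relying on this:

```python
def normalize_stats(raw):
    """Coerce a Docker stats reading into a single dict.

    Depending on how stats were requested (one-shot vs. streaming),
    a reading may arrive as a bare dict or wrapped in a list.
    """
    if isinstance(raw, list):
        return raw[0] if raw else {}
    return dict(raw)
```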

What's Next

I'm considering:

  • Container log viewer
  • Custom alert thresholds per container
  • Multi-host support
  • Webhook integrations beyond Telegram

GitHub: https://github.com/deep-on/dockprobe

If you're running Docker containers and want simple monitoring without the Prometheus/Grafana overhead, give it a try. Feedback and stars are very welcome!
