This is a submission for the Agentic Postgres Challenge with Tiger Data
What I Built
Laptop Anomaly Analyzer is an autonomous, AI-powered monitoring system built on Agentic Postgres + Tiger MCP + Gemini.
It continuously collects local system metrics (CPU, RAM, Disk, Network I/O), detects performance anomalies in real time, and automatically explains why they happened — all from within Postgres itself.
No external ML service, no Python inference engine — just Agentic Postgres acting as its own AI brain.
The project began as a simple TimescaleDB-based logger for my laptop's performance, but evolved into an agentic database experiment where the DB doesn't just store telemetry; it understands, reasons, and reacts.
Demo
🔗 GitHub Repository: github.com/StephCurry07/laptop-anomaly-analyzer
(Repo includes collector script, MCP config, and dashboard setup)
Outputs:
Dashboard Preview: (Grafana dashboard screenshot; see the repository)

How I Used Agentic Postgres
This project combines several of Agentic Postgres’ most advanced features:
⚙️ Tiger MCP (Model Context Protocol)
- Runs three autonomous database agents:
  - `anomaly_detector` → runs every 10 minutes to detect CPU/RAM anomalies
  - `root_cause_agent` → triggers on new anomalies, uses vector search to find similar incidents
  - `daily_summary` → summarizes system performance once a day using Gemini reasoning
- All logic executes inside the database, orchestrated through MCP, with no external scripts required.
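The core idea behind an agent like `anomaly_detector` can be illustrated outside the database. Here is a minimal sketch of rolling z-score detection in Python; the window size, threshold, and sample data are illustrative assumptions, not the actual agent logic:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=12, threshold=3.0):
    """Flag readings whose z-score against the trailing window exceeds threshold."""
    anomalies = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append((i, samples[i]))
    return anomalies

# Steady ~20% CPU with one spike at index 15
cpu = [20, 21, 19, 20, 22, 20, 19, 21, 20, 22, 21, 20, 19, 21, 20, 95, 21, 20]
print(detect_anomalies(cpu))  # → [(15, 95)]
```

In the actual project this comparison runs as SQL inside Postgres on the TimescaleDB hypertable, scheduled by MCP rather than by an external script.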
💬 Tiger CLI
- Provides a natural-language interface:
"Summarize anomalies in the last 24 hours"
"Find similar CPU spikes from past week"
Overall Experience
This was my first time using Postgres for a project. Building with Agentic Postgres completely changed how I think about data systems. Instead of pushing data to an external model or pipeline, the database itself became the reasoning layer, thanks to MCP and TigerData.
Stack Used
Postgres + TimescaleDB
Tiger MCP
TigerData
Gemini CLI
Grafana (for visualization)
Python (for metric collection)
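The Python collector side can be sketched with the standard library alone. The repo's actual collector and its table schema may differ (a real collector would likely use `psutil` for proper CPU/RAM metrics); the field names below are assumptions:

```python
import os
import shutil
import time

def sample_metrics():
    """Collect a minimal system snapshot using only the standard library.
    The 1-minute load average stands in for CPU here (Unix-only)."""
    load1, _, _ = os.getloadavg()        # CPU pressure proxy
    disk = shutil.disk_usage("/")        # total/used/free bytes for root
    return {
        "ts": time.time(),
        "load_1m": load1,
        "disk_used_pct": 100 * disk.used / disk.total,
    }

row = sample_metrics()
print(row)
# Each sample would then be INSERTed into a TimescaleDB hypertable
# (e.g. a hypothetical system_metrics table) for the agents to query.
```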



