My submission for the Notion MCP Challenge
## What I Built
NEXUS Ultra is an autonomous AI agent swarm that runs 24/7 on my local machine, hunting for high-intent developer conversations on Reddit, drafting targeted outreach copy, and scoring its own output — all without cloud APIs or human prompting.
Every cycle the swarm:
- Scrapes Reddit for threads matching specific pain signals (agent debugging, LLM observability)
- Runs 8+ specialized agents (SCOUT → COMMANDER → COPYWRITER → VALIDATOR → REWARD) in sequence on a shared blackboard
- Produces scored, paste-ready Reddit replies targeting real conversations
- Saves deployable copy to a local JSON queue for human review
The problem: I had no visibility into what the swarm was actually doing overnight. Logs are overwhelming. The JSON file is opaque. I couldn't quickly see which cycles scored highest, which agents were winning, or which outreach was ready to post.
Notion MCP fixed this. Now every completed swarm cycle automatically appears in a Notion database — score, MVP agent, outreach copy, target thread context, and posted status. Notion became the swarm's mission control dashboard.
## How Notion MCP Is Used
I built `nexus_notion_reporter.py`, a side-car bridge script that:

- Reads `nexus_deployable_copy.json` (the swarm's output file) every 60 seconds
- Compares against a local state file to find new cycles not yet logged
- Creates a Notion database row for each new cycle via the Notion API with:
  - **Cycle ID**: unique swarm cycle identifier
  - **Score**: REWARD agent's 0–1 quality score
  - **MVP Agent**: which agent produced the best output
  - **Type**: outreach type (REDDIT_REPLY, DM, etc.)
  - **Posted**: checkbox, updated when a human posts the copy
  - **Timestamp**: exact cycle completion time
  - **Scout Context**: what thread/signal was targeted
- Embeds the full outreach copy as a block inside each Notion page
The Notion database schema is auto-created on first run — no manual setup needed.
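The auto-provisioning step maps directly onto Notion's `POST /v1/databases` endpoint. Here is a minimal sketch of what that first-run call looks like; the function names, environment handling, and `Type` select options are illustrative assumptions, not the actual script:

```python
import json
import urllib.request

NOTION_VERSION = "2022-06-28"  # API version the reporter pins


def build_schema() -> dict:
    """Property definitions for the Swarm Cycle Log database."""
    return {
        "Cycle ID":      {"title": {}},                     # unique cycle identifier
        "Score":         {"number": {"format": "number"}},  # REWARD agent's 0-1 score
        "MVP Agent":     {"rich_text": {}},
        "Type":          {"select": {"options": [{"name": "REDDIT_REPLY"},
                                                 {"name": "DM"}]}},
        "Posted":        {"checkbox": {}},                  # flipped by hand after deploy
        "Timestamp":     {"date": {}},
        "Scout Context": {"rich_text": {}},
    }


def create_database(token: str, parent_page_id: str) -> dict:
    """One-time call on first run: create the Swarm Cycle Log database."""
    payload = {
        "parent": {"type": "page_id", "page_id": parent_page_id},
        "title": [{"type": "text", "text": {"content": "Swarm Cycle Log"}}],
        "properties": build_schema(),
    }
    req = urllib.request.Request(
        "https://api.notion.com/v1/databases",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": NOTION_VERSION,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

After this call succeeds once, the bridge only needs the returned database ID; every later run skips straight to row creation.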
## Architecture
```
[nexus_swarm_loop.py]         ← 8 AI agents running continuously
          ↓
[nexus_deployable_copy.json]  ← scored outreach output queue
          ↓
[nexus_notion_reporter.py]    ← side-car bridge (60 s poll)
          ↓
[Notion: Swarm Cycle Log DB]  ← mission control dashboard
```
The bridge is intentionally decoupled — it reads the swarm's output file only, never touching the running swarm process. Zero risk of disrupting the autonomous loop.
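The decoupling boils down to a read-only poll: load the JSON queue, subtract the cycle IDs already recorded in the state file, and build a Notion page payload only for the delta. A sketch of that core loop, with field names (`cycle_id`, `score`, `copy`) assumed from the schema described above rather than taken from the real output file:

```python
import json
from pathlib import Path


def load_logged_ids(state_path: Path) -> set:
    """The local state file holds cycle IDs already mirrored to Notion."""
    if not state_path.exists():
        return set()
    return set(json.loads(state_path.read_text()))


def find_new_cycles(queue: list, logged: set) -> list:
    """Cycles present in the output queue but not yet logged."""
    return [c for c in queue if c["cycle_id"] not in logged]


def build_page_payload(database_id: str, cycle: dict) -> dict:
    """Row properties plus the outreach copy embedded as a page block."""
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Cycle ID": {"title": [{"text": {"content": cycle["cycle_id"]}}]},
            "Score": {"number": cycle["score"]},
            "Posted": {"checkbox": False},  # flipped by hand after deploying
        },
        "children": [{
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"text": {"content": cycle["copy"]}}]},
        }],
    }


# Example: two cycles in the queue, one already logged
queue = [
    {"cycle_id": "c-001", "score": 0.82, "copy": "Saw your thread about agent debugging..."},
    {"cycle_id": "c-002", "score": 0.91, "copy": "This looks like an observability gap..."},
]
new = find_new_cycles(queue, {"c-001"})  # only c-002 is new
```

Because the bridge only ever reads the queue file and writes its own state file, a crash or restart of the reporter cannot corrupt the swarm's data.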
## Why This Matters
Before Notion, reviewing swarm output meant tailing multi-thousand-line log files, manually parsing JSON, and having no way to track which copy had been deployed.
After Notion:
- Every cycle is a database row — filterable, sortable, searchable
- Sort by score to instantly find the best copy to post
- Filter `Posted = false` → an immediate deploy queue
- Real-time: a new cycle completes and its Notion row appears within 60 seconds
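The same deploy-queue view is also reachable programmatically through Notion's standard `POST /v1/databases/{id}/query` endpoint. A small sketch, assuming the `Posted` and `Score` properties from the schema above (the helper names are mine, not part of the project):

```python
import json
import urllib.request


def deploy_queue_filter() -> dict:
    """Posted = false, sorted by Score descending: best un-posted copy first."""
    return {
        "filter": {"property": "Posted", "checkbox": {"equals": False}},
        "sorts": [{"property": "Score", "direction": "descending"}],
    }


def query_deploy_queue(token: str, database_id: str) -> list:
    """Fetch the un-posted rows from the Swarm Cycle Log database."""
    req = urllib.request.Request(
        f"https://api.notion.com/v1/databases/{database_id}/query",
        data=json.dumps(deploy_queue_filter()).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": "2022-06-28",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]
```

This means the deploy queue you build by hand in the Notion UI could later be consumed by another script, closing the loop from swarm output to posted reply.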
## Results
- 8 historical swarm cycles backfilled to Notion on first run
- New cycles auto-log within 60 seconds of completion
- Zero impact on the running swarm (pure side-car architecture)
- DB schema auto-provisioned; one command to start:

```shell
python nexus_notion_reporter.py        # continuous mode
python nexus_notion_reporter.py --once # sync once and exit
```
## Code
- **Code:** `nexus_notion_reporter.py` (Gist)
- **Key file:** `nexus_notion_reporter.py`
- **Built with:** Python, Notion API (v2022-06-28), Ollama (local LLMs), Redis, SQLite
- **Swarm product:** VeilPiercer, AI agent monitoring for local LLM developers