This is a submission for the Notion MCP Challenge
What I Built
Every developer knows this pain:
It’s 11 PM. A production bug hits.
You jump between GitHub → Jira → Slack → Notion → Terminal…
Five tools. Zero clarity.
This constant context switching is a hidden tax on every engineering team.
So I built NotionOps AI — an AI-powered DevOps brain that lives inside Notion.
👉 It automatically connects:
- GitHub activity
- AI analysis (Claude)
- Notion workspace
And turns them into a single source of truth.
What It Does
- Every commit, PR, and issue → automatically becomes a task
- Critical issues → instantly become incidents
- Deployments → logged without manual work
- Daily standups → generated automatically
Notion becomes your team’s central nervous system. AI does the rest.
Live Demo Flow
- Developer pushes: `hotfix: null pointer in payment gateway`
- GitHub webhook triggers
- AI detects CRITICAL severity
- A P0 incident page is created in Notion
- A high-priority task is generated
- AI standup updates automatically next morning
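To make the flow above concrete, here is a minimal sketch of the severity-detection step. The real system defers to Claude for the final call; the function name, keyword list, and fallback rules below are illustrative, not taken from the repo.

```python
# Hypothetical keyword fallback for severity detection.
# In the real pipeline, Claude makes the final classification.

CRITICAL_KEYWORDS = ("hotfix", "null pointer", "payment", "outage", "data loss")

def classify_severity(commit_message: str) -> str:
    """Map a commit message to a severity bucket."""
    msg = commit_message.lower()
    if any(keyword in msg for keyword in CRITICAL_KEYWORDS):
        return "P0"          # critical path: create an incident page
    if msg.startswith(("fix:", "bug:")):
        return "P1"          # bug fix, but not incident-worthy
    return "P2"              # routine work

print(classify_severity("hotfix: null pointer in payment gateway"))  # P0
```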
💡 Result:
No manual updates. No missed incidents. No confusion.
Show us the code
👉 GitHub Repository:
https://github.com/yashsonawane25/NotionOps-AI
Repo includes:
- MCP server (10 tools)
- FastAPI webhook system
- AI analysis engine
- Notion integration layer
- Local simulation script
How I Used Notion MCP
Notion MCP is the core backbone of this system — not just an add-on.
I designed a 4-database architecture inside Notion:
1. Tasks Database
- Auto-created from GitHub events
- Includes priority, category, and status
- No manual input required
2. Deployments Database
- Tracks every deployment
- Stores environment, version, and status
3. Incidents Database
- Auto-created for critical issues
- Includes severity (P0/P1), description, and logs
4. AI Digest Database
- Daily standups generated automatically
- AI reads tasks → writes summary
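The four databases above can be expressed as Notion API property schemas. This is a sketch under my own assumptions about property names; each schema would be passed to the Notion `databases.create` endpoint (`parent=..., properties=...`).

```python
# Illustrative property schemas for the 4-database architecture.
# Property names and select options are assumptions, not from the repo.

DATABASES = {
    "Tasks": {
        "Name": {"title": {}},
        "Priority": {"select": {"options": [{"name": p} for p in ("High", "Medium", "Low")]}},
        "Category": {"select": {"options": [{"name": c} for c in ("Bug", "Feature", "Chore")]}},
        "Status": {"select": {"options": [{"name": s} for s in ("To Do", "In Progress", "Done")]}},
    },
    "Deployments": {
        "Name": {"title": {}},
        "Environment": {"select": {"options": [{"name": e} for e in ("prod", "staging")]}},
        "Version": {"rich_text": {}},
        "Status": {"select": {"options": [{"name": s} for s in ("success", "failed")]}},
    },
    "Incidents": {
        "Name": {"title": {}},
        "Severity": {"select": {"options": [{"name": s} for s in ("P0", "P1")]}},
        "Description": {"rich_text": {}},
    },
    "AI Digest": {
        "Name": {"title": {}},
        "Date": {"date": {}},
        "Summary": {"rich_text": {}},
    },
}
```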
MCP Tools (Core Innovation)
I exposed 10 tools via MCP, including:
- analyze_commit
- analyze_pr
- create_task
- log_incident
- log_deployment
- generate_standup
- query_project
- process_github_push
🔥 Killer Feature: process_github_push
One function does everything:
➡️ Analyze commit
➡️ Create task
➡️ Detect severity
➡️ Trigger incident (if critical)
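The chaining above could look roughly like this. The function names mirror the tool list, but the bodies are stand-ins I wrote for illustration; the real tools call Claude and the Notion API.

```python
# Sketch of process_github_push chaining the other tools.
# Stub implementations stand in for the real Claude/Notion calls.

def analyze_commit(msg: str) -> dict:
    severity = "P0" if "hotfix" in msg.lower() else "P2"
    return {"severity": severity, "summary": msg}

def create_task(msg: str, analysis: dict) -> dict:
    priority = "High" if analysis["severity"] == "P0" else "Normal"
    return {"title": msg, "priority": priority}

def log_incident(msg: str, analysis: dict) -> dict:
    return {"title": f"[{analysis['severity']}] {msg}", "status": "open"}

def process_github_push(commit_message: str) -> dict:
    """One entry point: analyze, create a task, escalate if critical."""
    analysis = analyze_commit(commit_message)
    task = create_task(commit_message, analysis)
    incident = log_incident(commit_message, analysis) if analysis["severity"] == "P0" else None
    return {"analysis": analysis, "task": task, "incident": incident}
```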
🧠 AI Query Feature (Game Changer)
You can literally ask:
“What are our highest priority tasks?”
And get a real-time answer from Notion.
No dashboards. No filters. Just ask.
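Under the hood, a query tool like this has to turn the question into a Notion database filter. Here is a sketch where a keyword map stands in for the LLM step; the resulting filter would be passed to the Notion `databases.query` endpoint.

```python
# Hypothetical question -> Notion filter translation.
# In the real query_project tool, Claude would produce the filter.

def build_filter(question: str) -> dict:
    q = question.lower()
    if "priority" in q:
        return {"property": "Priority", "select": {"equals": "High"}}
    if "incident" in q:
        return {"property": "Severity", "select": {"equals": "P0"}}
    return {}  # empty filter -> return everything

print(build_filter("What are our highest priority tasks?"))
```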
Architecture
GitHub Webhook
│
▼
FastAPI Server
│
▼
Claude AI Analysis
│
▼
MCP Server
│
▼
Notion API
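One detail worth calling out at the GitHub Webhook → FastAPI step: a public webhook endpoint should verify that payloads really came from GitHub. I can't say whether the repo does this, but here is a stdlib sketch of checking the `X-Hub-Signature-256` header.

```python
# Verifying a GitHub webhook signature (X-Hub-Signature-256) with the stdlib.
# The shared secret is whatever you configured in the GitHub webhook settings.
import hashlib
import hmac

def verify_signature(payload: bytes, secret: str, signature_header: str) -> bool:
    """Recompute the HMAC of the raw body and compare it to GitHub's header."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```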
Tech Stack
- Python
- FastAPI
- MCP SDK
- Claude API (Anthropic)
- Notion API
- Pydantic
Running It Yourself
```shell
git clone https://github.com/yashsonawane25/NotionOps-AI
cd NotionOps-AI
pip install -r requirements.txt
cp .env.example .env
# Add your API keys
uvicorn webhook_server:app --reload --port 8000

# Run simulation
python test_simulate.py
```
What I Learned
- MCP forces clean, modular design
- Async processing is critical for webhooks
- Notion works surprisingly well as a DevOps backend
- Simulation scripts save massive dev time
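The "async is critical" lesson boils down to one pattern: acknowledge the webhook immediately and do the heavy work (AI analysis, Notion writes) afterwards. A minimal sketch with an in-process queue, where all names are illustrative:

```python
# "Ack fast, process later": the handler enqueues and returns,
# a worker drains the queue afterwards.
import asyncio

async def handle_webhook(queue: asyncio.Queue, event: dict) -> dict:
    await queue.put(event)  # GitHub just needs a quick 2xx response
    return {"status": "accepted"}

async def worker(queue: asyncio.Queue, results: list) -> None:
    while not queue.empty():
        event = await queue.get()
        results.append(event["commit"])  # stand-in for AI analysis + Notion writes
        queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    await handle_webhook(queue, {"commit": "hotfix: null pointer"})
    results: list = []
    await worker(queue, results)
    return results

print(asyncio.run(main()))  # ['hotfix: null pointer']
```

In production you would use FastAPI's background tasks or a real job queue instead of an in-process `asyncio.Queue`, but the shape is the same.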
What’s Next
- Bidirectional sync (Notion → GitHub)
- AI-based velocity tracking
- Weekly executive reports
- On-call automation
Final Thought
I didn’t want to build a demo.
I built something I would actually use in a real DevOps team.
NotionOps AI = GitHub + AI + Notion → Fully automated DevOps workflow
The best DevOps system is the one that works… even when you don’t.
Credits
Built by Yash Sonawane
B.Tech — GH Raisoni College of Engineering & Management, Pune
GitHub: https://github.com/yashsonawane25
Tags
notion mcp devops ai python hackathon automation github claude
Top comments (3)
Smart move using Claude to classify commit severity — how do you handle false positives where a routine refactor triggers a P1 incident?
Great question, honestly!
Yeah, false positives are a real problem. My current fix is pretty simple: if the commit message has tags like [refactor] or [chore], the system skips the critical path entirely. Claude also returns a confidence score along with the severity, so anything below a certain threshold doesn't auto-create an incident; it goes into a "needs review" state instead. There's also a small delay before the incident page actually gets created, so if something looks wrong you can catch it in time.
But honestly, it's not perfect yet. The next version will add a feedback loop where devs can flag false positives, which will help improve the AI prompts over time.
What would you do differently? Curious to hear your approach!
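The guards described in that reply (skip-tags plus a confidence threshold) can be sketched like this. The tag list, threshold value, and function name are illustrative, not from the repo.

```python
# Hypothetical false-positive guards for incident creation.

SKIP_TAGS = ("[refactor]", "[chore]", "[docs]")
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff

def incident_decision(commit_message: str, severity: str, confidence: float) -> str:
    msg = commit_message.lower()
    if any(tag in msg for tag in SKIP_TAGS):
        return "skip"             # routine work never opens an incident
    if severity == "P0" and confidence >= CONFIDENCE_THRESHOLD:
        return "create_incident"
    if severity == "P0":
        return "needs_review"     # low-confidence criticals wait for a human
    return "skip"
```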