This is a submission for the Notion MCP Challenge
What I Built
APIVault — a local FastAPI app that turns raw route code or plain-English API descriptions into full API documentation, then stores everything in Notion through the Notion MCP.
The idea is simple: you point APIVault at a router file (or just describe an endpoint in plain English), and it generates structured documentation and pushes it directly into a Notion workspace it sets up for you — no copy-pasting, no manual page creation.
The frontend is a vanilla HTML/CSS/JS dashboard with:
- Live search across your documented endpoints
- Documentation preview pane
- Source vs. generated toggle so you can diff your raw code against what was documented
There's also a CLI tool (`vault.py`) for batch-documenting entire files without opening the UI.
Endpoints
| Route | What it does |
|---|---|
| `POST /api/setup` | Creates (or reuses) the APIVault Notion workspace — an 📖 API Reference database, a 🏷️ Services database, and an 📚 API Docs hub page |
| `POST /api/document-endpoint` | Documents a single endpoint, writes it to Notion, and returns the generated content plus the Notion page URL |
| `POST /api/document-collection` | Splits a router/controller file into individual endpoints and documents them in parallel |
| `POST /api/generate-readme` | Generates a 📄 [Service] — README Notion page from endpoints already stored |
| `GET /api/search?q=` | Powers the dashboard's live search |
| `GET /api/sidebar` | Feeds the left-side navigation tree |
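For a feel of the API surface, here is a sketch in Python of how a request body for `POST /api/document-endpoint` might be built. The field names (`code`, `service`, `base_url`) are assumptions for illustration only; the repo's request models define the real schema.

```python
import json

def build_document_request(code: str, service: str, base_url: str) -> str:
    """Build a JSON body for POST /api/document-endpoint.

    The field names here are illustrative assumptions, not the
    project's confirmed schema.
    """
    payload = {
        "code": code,          # raw route source, or a plain-English description
        "service": service,    # groups the endpoint under a 🏷️ Services entry
        "base_url": base_url,  # used when rendering example requests
    }
    return json.dumps(payload)

body = build_document_request(
    '@app.get("/users/{id}")\ndef get_user(id: int): ...',
    "UserService",
    "https://api.example.com",
)
```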
Stack
- Backend: FastAPI + Python
- AI: HuggingFace Inference API (documentation generation)
- MCP: Notion MCP via https://mcp.notion.com/sse (SSE transport)
- Frontend: Vanilla HTML/CSS/JS
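Under the hood, MCP traffic is JSON-RPC 2.0; a tool invocation over the SSE transport is framed roughly like this (a minimal sketch; the tool name below is a placeholder, not necessarily what the hosted Notion MCP actually exposes):

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC request ids must be unique per session

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a JSON-RPC 2.0 message.

    MCP servers receive messages in this shape; the tool name passed
    in is a placeholder for illustration.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call("create-page", {"parent_id": "..."})
```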
Video Demo
No video for this submission.
Show us the code
🔗 github.com/himanshu748/dev-challenge-5
Getting started locally
```bash
# 1. Clone and set up
git clone https://github.com/himanshu748/dev-challenge-5
cd dev-challenge-5
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt

# 2. Configure environment
cp .env.example .env
# Set NOTION_TOKEN (MCP OAuth token) and HF_TOKEN

# 3. Run
uvicorn app.main:app --reload
# Open http://127.0.0.1:8000
```
CLI usage for bulk documentation:
```bash
python vault.py \
  --file routes/users.py \
  --service UserService \
  --base-url https://api.example.com
```
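The CLI surface above maps naturally onto `argparse`; a minimal sketch of how `vault.py` might declare those flags (the real script may accept more options):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Declare the vault.py flags shown above; a sketch, not the
    repo's actual parser."""
    p = argparse.ArgumentParser(
        prog="vault.py",
        description="Batch-document a router file into Notion",
    )
    p.add_argument("--file", required=True,
                   help="router/controller file to document")
    p.add_argument("--service", required=True,
                   help="service name for grouping in Notion")
    p.add_argument("--base-url", default=None,
                   help="base URL used in generated example requests")
    return p

args = build_parser().parse_args(
    ["--file", "routes/users.py", "--service", "UserService"]
)
```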
How I Used Notion MCP
APIVault uses the Notion MCP exclusively — there are zero calls to the direct Notion REST API anywhere in the codebase. `NOTION_TOKEN` must be a Notion MCP OAuth access token, not a Notion integration secret. This matches how the hosted MCP is meant to be used: access is granted through the MCP OAuth flow, not the legacy secret model.
On startup, POST /api/setup calls the Notion MCP to provision the entire workspace structure:
- An 📖 API Reference database where each endpoint lives as a page with properties like method, path, service, parameters, and response schema
- A 🏷️ Services database for grouping endpoints by service name
- An 📚 API Docs hub page that ties everything together
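The property list above implies a database schema along these lines. The shapes follow Notion's standard database property types (`title`, `select`, `rich_text`); the exact property names and types APIVault provisions are an assumption here:

```python
# A sketch of the 📖 API Reference database schema implied above.
# Property names/types are illustrative assumptions, not the repo's
# confirmed schema.
API_REFERENCE_PROPERTIES = {
    "Endpoint": {"title": {}},  # page title, e.g. "GET /users/{id}"
    "Method": {"select": {"options": [
        {"name": m} for m in ("GET", "POST", "PUT", "PATCH", "DELETE")
    ]}},
    "Path": {"rich_text": {}},
    "Service": {"rich_text": {}},
    "Parameters": {"rich_text": {}},
    "Response Schema": {"rich_text": {}},
}
```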
When an endpoint is documented (either individually or from a batch file), the generated content is written to a new Notion page via the MCP and the returned Notion URL is surfaced in the dashboard so you can jump straight to it.
The generate-readme flow goes the other way: it queries Notion through the MCP to pull all endpoints for a given service, then synthesizes them into a formatted README page — so Notion stays the source of truth for the whole workflow.
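The synthesis step can be pictured as a pure function over the endpoint records queried back through the MCP. A minimal sketch, assuming illustrative field names (`method`, `path`, `summary`) rather than the project's actual record shape:

```python
def synthesize_readme(service: str, endpoints: list[dict]) -> str:
    """Fold endpoint records (as queried from Notion via the MCP)
    into one markdown README. Field names are assumptions."""
    lines = [f"# {service} — README", "", "## Endpoints", ""]
    for ep in sorted(endpoints, key=lambda e: (e["path"], e["method"])):
        lines.append(f"### {ep['method']} {ep['path']}")
        lines.append(ep.get("summary", ""))
        lines.append("")
    return "\n".join(lines)

readme = synthesize_readme("UserService", [
    {"method": "GET", "path": "/users/{id}", "summary": "Fetch one user."},
    {"method": "POST", "path": "/users", "summary": "Create a user."},
])
```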
A local JSON cache (`data/apivault_state.json`) exists purely for UI responsiveness (fast sidebar loads, instant search), but every write and authoritative read goes through the MCP.
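A cache like that reduces to two small helpers; this is a minimal sketch of how it might look (the repo's actual implementation may differ), with an atomic write so a crash mid-save can't corrupt the file:

```python
import json
import os
import tempfile

CACHE_PATH = os.path.join("data", "apivault_state.json")

def load_cache(path: str = CACHE_PATH) -> dict:
    """Fast local read for sidebar/search; Notion (via the MCP)
    remains the authoritative store."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}  # missing/corrupt cache: fall back and re-fetch via MCP

def save_cache(state: dict, path: str = CACHE_PATH) -> None:
    """Write to a temp file, then atomically replace the cache."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)
```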
What the Notion MCP unlocks here that wouldn't be practical otherwise:
- Structured storage without schema management — Notion databases give you typed properties (selects, rich text, URLs) without standing up a real database
- Human-readable side effect — every documented endpoint is immediately browsable in Notion by the whole team, not just queryable via API
- README generation from live data — because the MCP lets you query what's already stored, the README reflects the actual current state of your docs, not a stale snapshot