This is a submission for the Notion MCP Challenge
What I Built
Notion Life Review OS is a WhatsApp assistant that captures your day and organizes everything in your own Notion workspace — from a single message.
You send something like:
"Worked on the API integration today. Need to present to the client next Thursday. Also figured out why our Redis connection was dropping."
It extracts a task, a project, a learning, and your mood. Asks you to confirm. Saves everything to the right Notion database. No forms. No clicking. No friction.
The core idea is simple: your day lives in WhatsApp already. You're already typing there. So why open another tool?
It also works the other way. Ask it anything:
"What tasks are due this week?"
"What did I learn this week?"
And you can manage your Notion schema directly from WhatsApp — even via voice:
"Add a column called Who, select type, to the Tasks table"
The new field is available on the very next message.
One thing I really liked about how this came together: the project and task structure is completely generic. You can use it for work — a project called "API Backend" with tasks like "Deploy to production". But it works just as well for a grocery list — project "Supermarket", tasks "milk, eggs, bread". Or a personal to-do list. The system doesn't care. It just captures what you tell it and puts it in the right place.
Video Demo
Show me the code
github.com/vicente-r-junior/notion-life-review-os
Full setup instructions in the README.
How I Used Notion MCP
Notion MCP is the backbone of the entire system. Every single interaction with Notion goes through it — no direct API calls anywhere.
Reading schema at startup
When the app boots, it calls API-retrieve-a-database and API-retrieve-a-data-source for each of the 5 databases. The schemas get cached in Redis and injected directly into the GPT-4o system prompt — so the agent knows what fields exist, what types they are, and which ones are required, without any extra calls per message.
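To make the "schema in the prompt" idea concrete, here's a minimal sketch of rendering a cached schema into a system-prompt section. The property shape loosely follows Notion's `properties` object; the `required` flag is an app-level convention stored alongside each property, not something Notion returns.

```python
def schema_to_prompt(db_name: str, schema: dict) -> str:
    """Render a cached database schema as a system-prompt section.

    `schema` loosely mirrors the properties object from the
    retrieve-a-data-source call; `required` is an app-level flag
    (hypothetical), not part of Notion's response.
    """
    lines = [f"Database: {db_name}"]
    for name, prop in schema.items():
        required = " (required)" if prop.get("required") else ""
        options = ""
        if prop["type"] in ("select", "multi_select"):
            opts = [o["name"] for o in prop[prop["type"]].get("options", [])]
            options = f" options: {', '.join(opts)}"
        lines.append(f"- {name}: {prop['type']}{required}{options}")
    return "\n".join(lines)

tasks_schema = {
    "Name": {"type": "title", "required": True},
    "Status": {"type": "select",
               "select": {"options": [{"name": "To Do"}, {"name": "Done"}]}},
    "Due": {"type": "date"},
}
prompt_section = schema_to_prompt("Tasks", tasks_schema)
```

The real app caches this string in Redis and rebuilds it only when the schema changes, so per-message cost stays flat.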
Writing data
When the user confirms, the app calls API-post-page for each item — daily log, tasks, projects, learnings. This part is pure, deterministic Python, not an LLM. The write step is too important to leave to non-deterministic output.
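A sketch of what that deterministic mapping looks like: each extracted item becomes an explicit Notion properties payload. The property names here (Name, Status, Due) are illustrative; the real writer reads them from the cached schema.

```python
from datetime import date

def task_to_properties(task: dict) -> dict:
    """Map an extracted task to a Notion page properties payload.

    Every property type is built explicitly — no LLM in the loop.
    Property names are illustrative, not the project's exact schema.
    """
    props = {
        "Name": {"title": [{"text": {"content": task["title"]}}]},
    }
    if task.get("status"):
        props["Status"] = {"select": {"name": task["status"]}}
    if task.get("due"):
        props["Due"] = {"date": {"start": task["due"].isoformat()}}
    return props

props = task_to_properties(
    {"title": "Present to client", "status": "To Do", "due": date(2025, 3, 6)}
)
```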
Querying data
For questions like "what tasks are due this week?", the agent uses API-query-data-source with structured filters built from natural language. It resolves dates, applies status filters, and formats the answer for WhatsApp.
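As an example of those structured filters, here's a sketch of turning "due this week" into a Notion compound date filter, assuming a Monday-to-Sunday week and a date property named Due (both assumptions).

```python
from datetime import date, timedelta

def due_this_week_filter(today: date) -> dict:
    """Build a data-source filter for tasks due in the current week.

    Assumes a Monday-to-Sunday week and a date property called 'Due';
    the filter shape follows Notion's compound filter format.
    """
    monday = today - timedelta(days=today.weekday())
    sunday = monday + timedelta(days=6)
    return {
        "and": [
            {"property": "Due", "date": {"on_or_after": monday.isoformat()}},
            {"property": "Due", "date": {"on_or_before": sunday.isoformat()}},
        ]
    }

f = due_this_week_filter(date(2025, 3, 5))  # a Wednesday
```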
Updating schema dynamically
When the user asks to add a column — even via voice — the app calls API-update-a-data-source. The Redis cache refreshes immediately and the system prompt is rebuilt. The new field is available on the next message.
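The payload for that "Add a column called Who, select type" request is small — roughly a properties patch like this sketch, following Notion's database property schema (option names are illustrative):

```python
def add_select_column_payload(column: str, options: list[str]) -> dict:
    """Properties patch for adding a new select column via
    API-update-a-data-source. Option names here are illustrative."""
    return {
        "properties": {
            column: {"select": {"options": [{"name": o} for o in options]}}
        }
    }

payload = add_select_column_payload("Who", ["Vicente", "Ana"])
```

After the call succeeds, the app deletes the Redis schema key, so the next message triggers a fresh schema fetch and a rebuilt system prompt.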
Bulk updates
For things like "set all tasks Who to Vicente", the app queries first, shows a confirmation with the affected records, then calls API-patch-page for each one.
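The query-confirm-patch loop can be sketched like this. The `client` interface here is hypothetical (a thin wrapper over the MCP calls), shown with a fake so the flow is visible without a network:

```python
def bulk_set_select(client, data_source_id: str, prop: str, value: str, confirm) -> int:
    """Query all pages, show the affected records for confirmation,
    then patch each one. `client` is a hypothetical wrapper over the
    MCP query/patch calls, not the real client API."""
    pages = client.query_data_source(data_source_id)
    if not confirm([p["id"] for p in pages]):
        return 0  # user declined — nothing is written
    for page in pages:
        client.patch_page(page["id"], {prop: {"select": {"name": value}}})
    return len(pages)

class FakeClient:
    def __init__(self, pages):
        self.pages = pages
        self.patched = []
    def query_data_source(self, _id):
        return self.pages
    def patch_page(self, page_id, props):
        self.patched.append((page_id, props))

client = FakeClient([{"id": "p1"}, {"id": "p2"}])
n = bulk_set_select(client, "tasks-ds", "Who", "Vicente", confirm=lambda ids: True)
```

Querying first means the confirmation message can list exactly which records will change before anything is written.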
Architecture
WhatsApp → Evolution API → FastAPI webhook
                    ↓
        Intent classifier (GPT-4o-mini)
                    ↓
    ┌───────────────┼────────────────┐
    ↓               ↓                ↓
Conversational   Query agent   Add column flow
agent (GPT-4o)   (GPT-4o)      (GPT-4o-mini)
    ↓               ↓                ↓
SAVE_PAYLOAD     Notion MCP    Notion MCP
    ↓            (query)       (update schema)
User confirms
    ↓
Notion Writer (pure Python)
    ↓
Notion MCP (write pages)
One conversational agent instead of a pipeline.
I started with separate extractor, matcher, and confirmation agents. It was complex and fragile. A single GPT-4o call with Redis conversation history turned out to be simpler, faster, and much easier to debug. The agent holds the full context of the conversation and knows when it has enough information to produce a SAVE_PAYLOAD.
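Detecting when the agent has produced a SAVE_PAYLOAD is a small deterministic step. The exact format is an assumption here (a marker followed by a JSON object); the sketch just pulls it out of the reply:

```python
import json
import re

def extract_save_payload(reply: str):
    """Pull a SAVE_PAYLOAD JSON object out of the agent's reply.

    The marker-plus-JSON format is an assumption about this project's
    convention. Returns None when there's nothing to save."""
    match = re.search(r"SAVE_PAYLOAD\s*(\{.*\})", reply, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        return None  # malformed payload — treat as conversational text

reply = 'Got it!\nSAVE_PAYLOAD {"tasks": [{"title": "Present to client"}]}'
payload = extract_save_payload(reply)
```

If extraction fails, the message is treated as plain conversation and nothing is written, which keeps the boundary between chat and writes clean.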
The write step is never an LLM.
The notion_writer is pure Python calling Notion MCP directly. Every property format handled explicitly. Giving an LLM direct write access to your Notion is asking for trouble.
Schema-aware prompts.
The agent knows your exact Notion schema at all times. Custom fields like Who, Priority, or Energy are injected into the system prompt dynamically. If a field is marked required, the agent asks for it before saving — no partial records.
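The "no partial records" guarantee reduces to a simple check against the cached schema. As above, `required` is an app-level flag on the cached schema, not a Notion concept:

```python
def missing_required(payload: dict, schema: dict) -> list[str]:
    """Return the required fields the agent still needs to ask for.

    `required` is an app-level flag on the cached schema (assumption);
    an empty list means the payload is safe to write."""
    return [
        name for name, prop in schema.items()
        if prop.get("required") and not payload.get(name)
    ]

schema = {
    "Name": {"type": "title", "required": True},
    "Who": {"type": "select"},
}
gaps = missing_required({"Who": "Vicente"}, schema)
```

If `gaps` is non-empty, the agent asks a follow-up question instead of emitting a SAVE_PAYLOAD.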
Stack
| Layer | Technology |
|---|---|
| Backend | Python 3.12 + FastAPI |
| AI | OpenAI GPT-4o + Whisper |
| Notion interface | Notion MCP (mcp/notion Docker image) |
| WhatsApp bridge | Evolution API |
| Session + cache | Redis 7 |
| Infrastructure | Docker Compose on Hostinger VPS |
Setup
1. Clone and configure
git clone https://github.com/vicente-r-junior/notion-life-review-os.git
cd notion-life-review-os
cp .env.example .env
2. Create 5 Notion databases
Inside a parent page called Life Review OS, create:
- Daily Logs
- Tasks
- Projects
- Learnings
- Weekly Reports
Copy each database ID into .env. Connect your Notion integration to the parent page — it propagates to all children automatically.
3. Configure .env
OPENAI_API_KEY=sk-...
NOTION_API_KEY=secret_...
NOTION_DB_DAILY_LOGS=...
NOTION_DB_TASKS=...
NOTION_DB_PROJECTS=...
NOTION_DB_LEARNINGS=...
NOTION_DB_WEEKLY_REPORTS=...
MCP_AUTH_TOKEN=any-random-string
EVOLUTION_API_URL=http://your-evolution-api:8080
EVOLUTION_API_KEY=...
EVOLUTION_INSTANCE=your-instance-name
WHATSAPP_NUMBER=5511999999999
REDIS_URL=redis://app-redis:6379
TIMEZONE=America/Sao_Paulo
4. Start
docker compose up -d
Point your Evolution API webhook to http://your-server:8000/webhook and you're live.
What I Learned
MCP response parsing trips you up the first time. Every Notion MCP response is SSE-wrapped JSON inside a content array. Once you have the unwrapping pattern it's trivial — but it's not obvious when you first hit it.
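The unwrapping pattern itself is a few lines once you see it. This sketch follows the MCP tool-result shape (a `content` array of typed items, with the Notion payload as a JSON string in the first text item):

```python
import json

def unwrap_mcp_result(result: dict) -> dict:
    """Unwrap a Notion MCP tool result: the actual payload is a JSON
    string inside the `content` array's first text item. The shape
    follows the MCP tool-result format; treat this as a sketch."""
    for item in result.get("content", []):
        if item.get("type") == "text":
            return json.loads(item["text"])
    raise ValueError("no text content in MCP result")

raw = {"content": [{"type": "text",
                    "text": '{"results": [], "has_more": false}'}]}
data = unwrap_mcp_result(raw)
```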
One agent beats a pipeline. I built the multi-agent version first. Extractor, matcher, confirmation, writer — each doing one thing. It looked clean on paper and was a nightmare in practice. Replacing it with a single conversational GPT-4o call and Redis history was the best decision I made on this project.
The write step should never be an LLM. Flexible conversation on the way in, deterministic code on the way out. That's the pattern that worked.
Prompt design is the real work. Getting the agent to always include SAVE_PAYLOAD when there's actionable content, never say "done" without confirming, correctly handle corrections mid-conversation — that's where most of the iteration went. The code was the easy part.
Redis for everything. Session state, schema cache, idempotency keys, conversation history — all in Redis with TTLs. No separate database needed. Cleanup is automatic.
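The idempotency keys are the simplest of those: mark each WhatsApp message ID on first sight, skip it on webhook retries. The real app uses Redis SET with NX and a TTL; a plain dict stands in here so the sketch runs without Redis:

```python
def seen_before(cache: dict, message_id: str) -> bool:
    """Idempotency check for webhook deliveries.

    The real app uses Redis SET NX with a TTL so keys expire on their
    own; a dict stands in here so the sketch runs standalone."""
    if message_id in cache:
        return True
    cache[message_id] = True
    return False

cache = {}
first = seen_before(cache, "wamid.123")   # first delivery — process it
retry = seen_before(cache, "wamid.123")   # webhook retry — skip it
```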