
RoTSL

Posted on • Originally published at Medium on

Health AI on Notion with Tribe V2

Local-first Notion health tracker with TRIBEv2 brain analysis, AI health insights, symptom logging, goals, medications, appointments, and a browser UI

Notion MCP Challenge


This was supposed to be a Notion challenge submission.

I built most of it close to the deadline, got something working, and then missed the window. No big failure story. Just underestimated how long the messy parts would take.

After that, keeping it private felt pointless. So I pushed it to GitHub.

Around the same time, I came across Tribe v2. That changed how I looked at this project. Instead of treating it like a failed submission, I started treating it like something that could keep evolving in public.

That is what this is now. Not finished. Still useful.

The actual problem I was trying to solve

I sometimes already track things in Notion:

• Sleep

• Workouts

• Random notes about how I feel

The problem is not tracking. It is what happens after.

Nothing.

No aggregation. No patterns. No feedback loop. Just logs sitting there.

Every week I would tell myself I should look at it properly. I never did.

So this project is basically me outsourcing that thinking step.

System design

The architecture is simple on paper and annoying in practice.

Pipeline

• Fetch data from Notion databases

• Normalize it into a consistent structure

• Send it to an LLM

• Write the output back into Notion

That is it. No fancy orchestration.

The difficulty is everything in between.
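The four steps above can be sketched as a single function. This is a minimal illustration of the shape of the pipeline, not the repo's actual API; the stage names are hypothetical stand-ins.

```python
# Minimal sketch of the fetch -> normalize -> LLM -> write-back pipeline.
# Each stage is passed in as a callable so the stubs below can stand in
# for the real Notion and LLM calls.
def run_pipeline(fetch, normalize, analyze, write_back):
    """Run the four stages in order; no orchestration beyond that."""
    raw = fetch()                                  # pull pages from Notion
    records = [normalize(item) for item in raw]    # clean into internal schema
    insight = analyze(records)                     # one LLM call over the window
    write_back(insight)                            # store the result in Notion
    return insight

# Usage with stub stages standing in for the real integrations:
result = run_pipeline(
    fetch=lambda: [{"Sleep": "6 hrs"}],
    normalize=lambda item: {"sleep_hours": 6.0},
    analyze=lambda records: f"{len(records)} record(s) analyzed",
    write_back=lambda insight: None,
)
print(result)  # 1 record(s) analyzed
```

The point of the callable-per-stage shape is that "everything in between" (retries, schema drift, bad values) lives inside each stage, not in the pipeline itself.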

Notion is not a real database

At first glance, Notion feels structured. It is not.

Things that break over time:

• Property names change

• Data types shift

• Fields get added or removed

If you build with fixed schemas, your system breaks quietly.

What I did instead

I treated Notion as semi-structured data:

• Map fields dynamically instead of hardcoding

• Use fallback parsing when fields do not match

• Normalize everything into an internal schema

Example internal format:

```json
{
  "date": "2026-03-20",
  "sleep_hours": 6.5,
  "workout": "strength",
  "mood": "low"
}
```

No matter how messy the source is, the model only sees this cleaned version.
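A sketch of the dynamic field mapping described above: each internal field resolves through a list of aliases, and anything missing is simply skipped. The property names here ("Hours slept", "Exercise", and so on) are illustrative assumptions, not the exact names in the repo.

```python
# Map arbitrary Notion property names onto the internal schema via
# per-field alias lists, instead of hardcoding one expected name.
FIELD_ALIASES = {
    "sleep_hours": ["sleep_hours", "Sleep", "Hours slept"],
    "workout": ["workout", "Workout", "Exercise"],
    "mood": ["mood", "Mood", "Feeling"],
}

def map_fields(page_properties: dict) -> dict:
    """Resolve each internal field via its aliases; skip missing fields."""
    record = {}
    for field, aliases in FIELD_ALIASES.items():
        for alias in aliases:
            if alias in page_properties:
                record[field] = page_properties[alias]
                break  # first matching alias wins
    return record

print(map_fields({"Hours slept": 6.5, "Mood": "low"}))
# {'sleep_hours': 6.5, 'mood': 'low'}
```

When someone renames a column in Notion, the fix is one new alias rather than a code change in every consumer.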

Data normalization is the real system

Most of the work went here.

Steps

  1. Extract raw values from Notion API

  2. Convert them into usable types

  3. Handle missing or inconsistent fields

  4. Align everything by time

Examples:

• "6 hrs" becomes 6.0

• Empty fields get dropped from inference

• Mixed labels get standardized

If this layer is weak, everything downstream gets worse.

LLM layer

The model is not used as a general assistant.

It has a narrow job:

• Summarize recent data

• Spot simple patterns

• Suggest small adjustments

Input structure

Each run includes:

• Recent data window

• Aggregated values

• Instructions that limit scope

Example:

```text
Sleep: [6, 5.5, 7, 6]
Workout: [yes, no, yes, yes]
Mood: [low, medium, medium, high]
```

Task:

• Identify patterns

• Avoid assumptions without enough data

• State uncertainty clearly
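Assembling that input could look like the sketch below. The exact wording of the scope-limiting instructions is illustrative, not the repo's actual prompt.

```python
def build_prompt(window: dict) -> str:
    """Render the recent data window plus scope-limiting instructions."""
    lines = [f"{name}: {values}" for name, values in window.items()]
    lines += [
        "Task:",
        "Identify patterns.",
        "Avoid assumptions without enough data.",
        "State uncertainty clearly.",
    ]
    return "\n".join(lines)

prompt = build_prompt({
    "Sleep": [6, 5.5, 7, 6],
    "Workout": ["yes", "no", "yes", "yes"],
    "Mood": ["low", "medium", "medium", "high"],
})
print(prompt)
```

Keeping the instructions inside every prompt, rather than relying on a system message alone, makes each run self-describing when you read the logs later.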

The main issue: the model guesses

Even with weak data, it tries to sound confident.

That is a problem, especially for anything health related.

What I added

• Minimum data thresholds before running inference

• Prompts that force uncertainty

• Restrictions on long term claims

• Filtering outputs that sound too certain

It still makes mistakes. It just makes fewer confident ones.

Writing results back to Notion

Outputs are stored as:

• Daily summaries

• Weekly insights

• Separate logs for traceability

Each output includes:

• Timestamp

• Data window used

• Generated insight

This makes it easier to debug and iterate.
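The traceability record could be shaped like this sketch. The key names are assumptions; the real property names depend on the target Notion database.

```python
from datetime import datetime, timezone

def build_output(insight: str, window_start: str, window_end: str) -> dict:
    """Bundle an insight with its timestamp and the data window it used."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_window": {"start": window_start, "end": window_end},
        "insight": insight,
    }

entry = build_output(
    "Mood trends higher on workout days; small sample.",
    window_start="2026-03-17",
    window_end="2026-03-20",
)
print(sorted(entry))  # ['data_window', 'insight', 'timestamp']
```

Storing the data window alongside each insight is what makes debugging possible: when an insight looks wrong, you can re-run the exact same window and compare.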

Why I stayed inside Notion

I considered building a separate app.

That would solve a lot of problems:

• Cleaner schema

• Better validation

• Fewer edge cases

But nobody wants another health app.

Notion already has the data. So I built on top of it instead.

The tradeoff is dealing with inconsistency.

Influence from Tribe v2

This project shifted direction after I came across Tribe v2.

The main idea that stuck:

You do not wait until something feels ready.

You ship it. Then improve it in the open.

That is exactly what this repo reflects. Some parts are solid. Some are clearly not. That is fine.

What is still broken

A few things are still rough:

• Sparse data leads to weak outputs

• The model confuses correlation with causation

• Some insights sound better than they are

• No feedback loop yet to measure usefulness

The system works. It just does not always matter.

What I would change

If I rebuilt this:

• Define a stricter schema earlier

• Separate ingestion and AI layers properly

• Add better logging from day one

• Focus more on actionable insights, not just observations

Where this could go

A few directions that feel real:

• Long term memory instead of short windows

• Feedback loops to track if suggestions help

• Wearable integrations

• Confidence scoring for outputs

Or it might just stay like this. A small layer that makes Notion slightly smarter.

Closing

Missing the deadline changed the trajectory of this project.

If I had submitted it, I probably would have moved on.

Instead, it is now something I can keep improving without pretending it is finished.

Right now, it is useful enough to keep using.

That is enough.

Repo: https://github.com/rotsl/notion-Health-AI
