Markus Johannes Baier
How I built an AI health coach with Next.js, Supabase & GPT-5.2 — from wearable APIs to recovery predictions

I built ViQO — a web app that connects health data from multiple sources and uses AI to find personal patterns. Here's the technical deep dive.

The Architecture

┌─────────────┐     ┌──────────────┐     ┌─────────────┐
│   Whoop API │────▶│              │     │  GPT-5.2    │
│ Withings API│────▶│   Next.js    │────▶│  Analysis   │
│ Manual Logs │────▶│   App Router │     │  Engine     │
└─────────────┘     │              │     └─────────────┘
                    │   Supabase   │
                    │   (EU, RLS)  │
                    └──────────────┘

Stack:

  • Next.js 14 (App Router, Server Components, Route Handlers)
  • Supabase (PostgreSQL, Auth, Row Level Security, Realtime)
  • GPT-5.2 (pattern analysis, coaching, predictions)
  • Tailwind + shadcn/ui (UI layer)
  • Vercel (Frankfurt edge, Cron Jobs)
  • PWA (Service Worker, Push Notifications)

Challenge 1: Wearable API Integration

Whoop and Withings have completely different data models: Whoop gives you "cycles" and "recoveries", while Withings gives you "measures" identified by numeric type codes.

I built a Unified Data Layer — source-agnostic tables that normalize everything:

// Adapter pattern — each source implements this
interface WearableAdapter {
  syncRecovery(userId: string): Promise<UnifiedMetric[]>
  syncSleep(userId: string): Promise<UnifiedSleep[]>
  syncBody(userId: string): Promise<UnifiedBody[]>
}

// Dual-write: legacy tables + unified tables
// Intelligence engine reads from unified, falls back to legacy
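To make the adapter idea concrete, here's a sketch of what the Whoop side of the normalization could look like. The field names (`UnifiedMetric`, `WhoopRecovery`, `recovery_score`) are illustrative assumptions, not ViQO's actual schema or Whoop's exact payload:

```typescript
// Illustrative unified shape — NOT the actual ViQO schema
interface UnifiedMetric {
  userId: string
  source: 'whoop' | 'withings' | 'manual'
  metric: 'recovery'
  value: number      // 0–100
  recordedAt: string // ISO timestamp
}

// Simplified, hypothetical Whoop recovery record
interface WhoopRecovery {
  cycle_id: number
  score: { recovery_score: number }
  created_at: string
}

// Pure normalization step — the adapter's syncRecovery() would fetch
// raw records from the Whoop API and map them through this
function normalizeWhoopRecovery(userId: string, raw: WhoopRecovery): UnifiedMetric {
  return {
    userId,
    source: 'whoop',
    metric: 'recovery',
    value: raw.score.recovery_score,
    recordedAt: raw.created_at,
  }
}
```

Keeping the mapping a pure function makes each source's quirks testable in isolation, independent of HTTP and auth.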

The trickiest part? Timestamps. Whoop sleep records are timestamped at sleep START, but you want them aligned to the WAKE-UP day. Withings stores everything in UTC but the user thinks in local time. I ended up building lib/date-utils.ts with centralized timezone handling (Europe/Berlin).
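The wake-day alignment can be done with nothing but `Intl` (no date library needed); this is a minimal sketch of the idea, not the actual `lib/date-utils.ts`:

```typescript
// Key a sleep record to the day the user WOKE UP, in their timezone.
// The 'en-CA' locale's default date format is YYYY-MM-DD, which makes
// it a convenient day-bucket key.
function wakeDay(wakeUtc: string, timeZone = 'Europe/Berlin'): string {
  return new Intl.DateTimeFormat('en-CA', { timeZone }).format(new Date(wakeUtc))
}

// A sleep starting 23:00 Berlin time on Jan 9 and ending 06:30 on
// Jan 10 gets keyed to Jan 10 — the wake-up day, not the start day.
```

Deriving the day from the wake timestamp (rather than the sleep start) is what keeps Whoop's start-timestamped records consistent with how users actually think about "last night's sleep".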

Challenge 2: Statistical Correlation Engine

Not "AI magic" — actual Pearson correlation coefficients:

function pearsonCorrelation(x: number[], y: number[]): number {
  const n = x.length
  if (n < 5) return 0 // minimum data points

  const sumX = x.reduce((a, b) => a + b, 0)
  const sumY = y.reduce((a, b) => a + b, 0)
  const sumXY = x.reduce((s, xi, i) => s + xi * y[i], 0)
  const sumX2 = x.reduce((s, xi) => s + xi * xi, 0)
  const sumY2 = y.reduce((s, yi) => s + yi * yi, 0)

  const numerator = n * sumXY - sumX * sumY
  const denominator = Math.sqrt(
    (n * sumX2 - sumX ** 2) * (n * sumY2 - sumY ** 2)
  )

  return denominator === 0 ? 0 : numerator / denominator
}

Key decisions:

  • Minimum 5 data points before showing any correlation
  • |r| ≥ 0.25 threshold to filter noise
  • Confidence badges visible to users ("based on 31 data points — high confidence")
  • Correlations re-calculated weekly via Vercel Cron
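Here's how those thresholds combine in practice. The n ≥ 5 minimum and |r| ≥ 0.25 cutoff mirror the numbers above; the confidence tiers (14 and 30 data points) are illustrative assumptions:

```typescript
function pearson(x: number[], y: number[]): number {
  const n = x.length
  const sx = x.reduce((a, b) => a + b, 0)
  const sy = y.reduce((a, b) => a + b, 0)
  const sxy = x.reduce((s, xi, i) => s + xi * y[i], 0)
  const sx2 = x.reduce((s, xi) => s + xi * xi, 0)
  const sy2 = y.reduce((s, yi) => s + yi * yi, 0)
  const den = Math.sqrt((n * sx2 - sx ** 2) * (n * sy2 - sy ** 2))
  return den === 0 ? 0 : (n * sxy - sx * sy) / den
}

// Only surface a correlation that clears both gates; attach the
// confidence tier shown to users as a badge
function significantCorrelation(x: number[], y: number[]) {
  if (x.length < 5) return null       // too little data to say anything
  const r = pearson(x, y)
  if (Math.abs(r) < 0.25) return null // below the noise threshold
  const confidence = x.length >= 30 ? 'high' : x.length >= 14 ? 'medium' : 'low'
  return { r, n: x.length, confidence }
}
```

Returning `null` rather than a weak number means the UI never has to decide whether a correlation is worth showing; the engine already did.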

Challenge 3: Prediction Engine

Predicting tomorrow's recovery based on today's inputs:

// Simplified prediction flow (pseudocode; helper functions elided)
function predictRecovery(userId: string, today: DailyInputs) {
  // 1. Get 7-day baseline (weighted, recent days matter more)
  const baseline = getWeightedBaseline(userId, { days: 7, decay: 0.8 })

  // 2. Apply personal impact factors (learned from correlations)
  let predicted = baseline.recovery
  if (today.alcohol > 0) predicted += personalImpact('alcohol', today.alcohol)
  if (today.strain > 16) predicted += personalImpact('overtraining')
  if (today.meditation) predicted += personalImpact('meditation')

  // 3. Mean reversion (only upward — don't punish good streaks)
  if (predicted < baseline.mean) {
    predicted += (baseline.mean - predicted) * 0.15
  }

  // 4. Self-calibration from past predictions
  predicted *= calibrationFactor(userId) // learned from prediction_log

  return { predicted, confidence: calculateConfidence(baseline.dataPoints) }
}

The self-calibration loop is key: every prediction is logged, and when actual data comes in, the accuracy is calculated. The engine adjusts its bias over time. Currently at ~70% accuracy after 30 days.
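One way that calibration step might work is as a clamped ratio of actual to predicted recovery over the recent prediction log. The 14-day window and ±20% clamp here are assumptions for illustration, not ViQO's actual values:

```typescript
interface PredictionLogEntry {
  predicted: number // what the engine forecast
  actual: number    // what the wearable later reported
}

// Mean actual/predicted ratio over the recent window, clamped so a
// handful of bad predictions can't swing the model wildly
function calibrationFactor(log: PredictionLogEntry[], window = 14): number {
  const recent = log.slice(-window).filter(e => e.predicted > 0)
  if (recent.length === 0) return 1 // no history yet: no adjustment
  const meanRatio =
    recent.reduce((s, e) => s + e.actual / e.predicted, 0) / recent.length
  return Math.min(1.2, Math.max(0.8, meanRatio)) // clamp to ±20%
}
```

A factor above 1 means the engine has been systematically pessimistic and should nudge future predictions up; below 1, the reverse.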

Challenge 4: GDPR by Design

Health data is sensitive. I chose Supabase's EU region (Frankfurt) and built privacy in:

  • Row Level Security on every table (user_id = auth.uid())
  • Article 17 (Right to Erasure): /api/user/data-delete with audit trail
  • Article 20 (Data Portability): /api/user/data-export — full JSON export
  • No third-party analytics on health data
  • Deletion audit log for compliance
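For Article 20, the export route essentially assembles one JSON bundle from every user-scoped table. A sketch of that payload builder, with table names purely illustrative (the real route would read each table through the user's RLS-scoped Supabase client):

```typescript
// Assemble a full-account export — everything RLS lets this user see,
// wrapped with metadata so the file is self-describing
function buildDataExport(
  userId: string,
  tables: Record<string, unknown[]> // e.g. { recoveries: [...], sleeps: [...] }
): { user_id: string; exported_at: string; data: Record<string, unknown[]> } {
  return {
    user_id: userId,
    exported_at: new Date().toISOString(),
    data: tables,
  }
}
```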

Challenge 5: AI That Doesn't Hallucinate

GPT-5.2 is powerful but can make up health advice. My approach:

  1. Always ground in data — the AI prompt includes actual numbers, never asks for opinions
  2. Structured output — JSON schemas, not free text
  3. Safety disclaimers — health profile (allergies, medications) is injected with explicit warnings
  4. Temperature 0.3-0.4 — reduce creativity, increase consistency
// healthProfilePrompt injects allergies + medications with warnings;
// langPrompt(lang) adds bilingual support
const systemPrompt = `You are a health coach.
IMPORTANT: Base ALL recommendations on the provided data.
Do NOT invent correlations not present in the data.
${healthProfilePrompt}
${langPrompt(lang)}`
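The "structured output" point deserves a concrete shape: the model's response is parsed and shape-checked at the boundary, and anything that fails validation is discarded rather than shown. The schema below is an illustrative example, not ViQO's actual one:

```typescript
// Illustrative insight schema — NOT the actual ViQO shape
interface CoachingInsight {
  metric: string
  recommendation: string
  basedOnDataPoints: number
}

// Parse + validate the model's JSON; return null on any failure so
// the caller can retry or skip instead of rendering garbage
function parseInsight(raw: string): CoachingInsight | null {
  let obj: unknown
  try {
    obj = JSON.parse(raw)
  } catch {
    return null // model returned non-JSON: treat as a failed generation
  }
  const o = obj as Record<string, unknown>
  if (
    typeof o?.metric !== 'string' ||
    typeof o?.recommendation !== 'string' ||
    typeof o?.basedOnDataPoints !== 'number'
  ) {
    return null // wrong shape: same treatment
  }
  return o as unknown as CoachingInsight
}
```

Validating at the boundary means a hallucinated or malformed response costs one retry, never a broken UI or a fabricated number in front of a user.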

Results After 30 Days

  • 12 personal patterns detected automatically
  • ~70% prediction accuracy (self-calibrating)
  • PWA installs working on iOS + Android
  • $15/month total infrastructure cost
  • 147 visitors in first week post-launch

What I'd Do Differently

  1. Start with fewer modules. I built 7 health modules. 3 would have been enough for launch.
  2. Mobile-first from day 1. I built desktop-first, then adapted. Should've been the other way around.
  3. Don't over-engineer the AI. Simple correlations impressed users more than fancy AI chat.

The app is live at viqolabs.com — free tier + Pro with 7-day free trial. Open to feedback.

What's your experience integrating wearable APIs? Any tips on health data normalization?
