Nrk Raju Guthikonda

Your Health Data Deserves Better: Building Privacy-First Wellness AI with Local LLMs

Would you hand a stranger your therapy journal, your eating habits, and your workout log — all at once? Probably not. Yet that's essentially what happens every time a wellness app sends your data to a cloud API.

I've been thinking about this problem a lot. As a Senior Software Engineer at Microsoft on the Copilot Search Infrastructure team, I work with large-scale AI systems daily. But when it comes to my own health and wellness data, I wanted something different — something that never leaves my machine.

So I built five open-source wellness and lifestyle AI tools, all powered by local LLMs through Ollama. No cloud APIs. No data exfiltration. No subscription fees. Just your hardware, your data, and an AI that genuinely helps you live better.

In this post, I'll walk you through each project, share the architecture decisions, and show you how to build privacy-first wellness AI yourself.

Why Health Data Should Never Leave Your Machine

Before diving into code, let's talk about why this matters.

Health data is uniquely sensitive. Your mood patterns, eating habits, fitness levels, and daily routines paint an intimate portrait of your life. In the wrong hands — or even in the right hands with the wrong incentives — this data becomes a liability.

Consider the risks:

  • HIPAA-adjacent concerns: While personal wellness apps may not technically fall under HIPAA, the type of data they collect (mood, mental health patterns, physical conditions) is exactly the kind regulators are watching. Data breaches involving health information carry severe reputational and legal consequences.
  • Third-party data sharing: Many wellness apps monetize your data by selling aggregated (or not-so-aggregated) insights to insurers, advertisers, and data brokers.
  • Model training on your data: When you send prompts to cloud LLMs, your conversations may be used to improve their models. Do you want your anxiety patterns training someone else's AI?
  • Permanence: Once data hits a server, you lose control. Deletion requests are promises, not guarantees.

The local LLM alternative changes everything. With Ollama running Gemma 3 or Llama 3.2 on your own hardware, your wellness data never touches a network. It's processed, analyzed, and discarded — all within your machine's memory.

The Tech Stack

All five projects share a common foundation:

  • Python 3.11+ — the backbone
  • Ollama — local LLM inference server
  • Gemma 3 / Llama 3.2 — the language models
  • Streamlit — web UI for interactive dashboards
  • FastAPI — REST API layer (some projects)
  • SQLite / JSON — local data persistence

Here's the core pattern every project uses to talk to Ollama:

import requests
import json

def query_local_llm(prompt: str, model: str = "gemma3:4b") -> str:
    """Send a prompt to the local Ollama instance. The request never leaves localhost."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {
                "temperature": 0.7,
                "top_p": 0.9,
            },
        },
        timeout=120,  # local inference can be slow on modest hardware
    )
    response.raise_for_status()
    return response.json()["response"]

That localhost:11434 is the key. Your data goes from your app to your local Ollama server and back. No external DNS resolution, no TLS handshake with a remote server, no data in transit over the public internet.
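If you want to make that guarantee explicit in code, one option — an illustrative sketch of mine, not part of the original projects — is a small guard that refuses any non-loopback endpoint before a request goes out:

```python
from urllib.parse import urlparse

# Hosts we consider "local": loopback only. `ALLOWED_HOSTS` and
# `assert_local_only` are hypothetical names for this sketch.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "::1"}

def assert_local_only(url: str) -> str:
    """Raise if the URL would send data off-machine; return it unchanged otherwise."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"Refusing non-local endpoint: {url}")
    return url

# Every outbound call can then be wrapped, e.g.:
# requests.post(assert_local_only("http://localhost:11434/api/generate"), ...)
```

It is a belt-and-suspenders check, but it turns "we promise the data stays local" into something a unit test can verify.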

Project 1: Fitness Coach Bot 🏋️

Repo: fitness-coach-bot

An AI-powered personal fitness coach that generates custom workout plans, tracks your progress, and provides exercise guidance — all running 100% locally.

The core idea is simple: you tell the bot your fitness goals, current level, and available equipment, and it designs a personalized program. But unlike cloud-based fitness apps, your body measurements, injury history, and performance data never leave your machine.

def generate_workout_plan(user_profile: dict) -> str:
    """Generate a personalized workout plan based on user profile."""
    prompt = f"""You are an expert fitness coach. Create a detailed weekly 
workout plan for this person:

- Goal: {user_profile['goal']}
- Fitness level: {user_profile['level']}
- Available equipment: {', '.join(user_profile['equipment'])}
- Time per session: {user_profile['session_minutes']} minutes
- Injuries/limitations: {user_profile.get('limitations', 'None')}

Provide exercises with sets, reps, rest periods, and form cues.
Include warm-up and cool-down routines."""

    return query_local_llm(prompt)

# Example usage
plan = generate_workout_plan({
    "goal": "build lean muscle",
    "level": "intermediate",
    "equipment": ["dumbbells", "pull-up bar", "resistance bands"],
    "session_minutes": 45,
    "limitations": "mild lower back sensitivity"
})

The fitness coach also tracks workout history locally and adjusts recommendations based on your progress — progressive overload suggestions, deload week reminders, and plateau-breaking strategies.
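The repo handles persistence its own way; as a hedged sketch of what a local workout log can look like, here are a few sqlite3 helpers (the table and function names are mine, not the project's):

```python
import sqlite3

def init_db(path: str = "workouts.db") -> sqlite3.Connection:
    """Open (or create) the local workout log. Nothing ever leaves this file."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS workouts (
               date TEXT, exercise TEXT, sets INTEGER, reps INTEGER, weight REAL
           )"""
    )
    return conn

def log_set(conn, date, exercise, sets, reps, weight):
    """Record one exercise entry for a session."""
    conn.execute("INSERT INTO workouts VALUES (?, ?, ?, ?, ?)",
                 (date, exercise, sets, reps, weight))
    conn.commit()

def recent_history(conn, exercise, limit=5):
    """Last few sessions for one lift -- handy context to feed back into the LLM prompt."""
    return conn.execute(
        "SELECT date, sets, reps, weight FROM workouts "
        "WHERE exercise = ? ORDER BY date DESC LIMIT ?",
        (exercise, limit),
    ).fetchall()
```

Feeding `recent_history` back into the prompt is what makes progressive-overload suggestions possible without any server-side state.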

Project 2: Meal Planner Bot 🥗

Repo: meal-planner-bot

An AI-powered meal planning assistant that generates personalized meal plans, detailed recipes, and consolidated shopping lists — all running 100% locally.

Dietary data is deeply personal. Allergies, intolerances, religious dietary restrictions, medical conditions like diabetes or celiac disease — this information has no business on someone else's server.

def create_meal_plan(preferences: dict, days: int = 7) -> dict:
    """Generate a weekly meal plan with recipes and shopping list."""
    prompt = f"""You are a professional nutritionist. Create a {days}-day 
meal plan with these requirements:

- Dietary style: {preferences['diet_type']}
- Daily calorie target: {preferences['calories']} kcal
- Allergies: {', '.join(preferences.get('allergies', ['None']))}
- Cuisine preferences: {', '.join(preferences.get('cuisines', ['Any']))}
- Budget: {preferences.get('budget', 'moderate')}

For each day, provide:
1. Breakfast, lunch, dinner, and one snack
2. Estimated calories and macros per meal
3. Brief recipe instructions

End with a consolidated shopping list for the entire period.
Respond in valid JSON format."""

    response = query_local_llm(prompt)
    return json.loads(response)

meal_plan = create_meal_plan({
    "diet_type": "high-protein vegetarian",
    "calories": 2200,
    "allergies": ["tree nuts"],
    "cuisines": ["Mediterranean", "Indian"],
    "budget": "moderate"
})

The meal planner remembers your past preferences and learns what you actually cook versus what you skip, refining future suggestions — all stored in a local SQLite database.
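One caveat with the `json.loads` call above: local models sometimes wrap their JSON in markdown fences or add a sentence of preamble. A defensive parser — my illustrative version, not the repo's exact code — grabs the outermost braces and parses those:

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Extract the first top-level JSON object from an LLM response.

    Tolerates markdown code fences and prose before/after the object.
    """
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("No JSON object found in model response")
    return json.loads(raw[start : end + 1])
```

For stubborn models, retrying the prompt with "Respond with ONLY the JSON object" appended usually fixes the rest.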

Project 3: Mood Journal Bot 🧠

Repo: mood-journal-bot

A conversational AI journal that understands your emotions, tracks mood patterns, and provides personalized insights — powered by local Gemma 3 via Ollama. 100% private. Your data never leaves your machine.

This is the project where privacy matters most. Mental health data is arguably the most sensitive category of personal information. Mood patterns, anxiety triggers, depressive episodes — this is the kind of data that could affect insurance rates, employment decisions, and personal relationships if exposed.

from datetime import datetime

def analyze_journal_entry(entry: str, mood_history: list) -> dict:
    """Analyze a journal entry for mood, sentiment, and patterns."""
    recent_moods = ", ".join(
        [f"{m['date']}: {m['mood']}" for m in mood_history[-7:]]
    )

    prompt = f"""You are a compassionate AI wellness companion. Analyze 
this journal entry:

Entry: "{entry}"
Date: {datetime.now().strftime('%Y-%m-%d %H:%M')}
Recent mood history: {recent_moods}

Provide:
1. Detected mood (one word)
2. Sentiment score (-1.0 to 1.0)
3. Key emotional themes
4. A supportive, empathetic response
5. One gentle suggestion for self-care

Important: You are NOT a therapist. If the entry suggests crisis, 
recommend professional help resources."""

    response = query_local_llm(prompt)
    return {"analysis": response, "timestamp": datetime.now().isoformat()}

The Streamlit UI provides beautiful mood visualizations — heatmaps, trend lines, and pattern detection — so you can see your emotional landscape over time. The architecture uses a three-layer stack: Streamlit web interface → journal processing engine (sentiment analysis, mood classification, pattern detection) → Ollama with Gemma 3. Every layer runs on localhost.
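As a sketch of the pattern-detection layer (a simplified version of mine, not the repo's code), a trailing average over the stored sentiment scores is enough to drive a trend line in the UI:

```python
def rolling_sentiment(scores: list[float], window: int = 7) -> list[float]:
    """Trailing mean of sentiment scores (-1.0 to 1.0) for a trend line.

    Early entries average over however many days exist so far.
    """
    out = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1) : i + 1]
        out.append(round(sum(chunk) / len(chunk), 3))
    return out
```

The smoothed series is what gets plotted; the raw per-entry scores stay in local storage.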

Project 4: Habit Tracker Analyzer 📊

Repo: habit-tracker-analyzer

A comprehensive habit tracking system with streak computation, completion rate analytics, habit correlation discovery, gamified achievements, calendar heatmaps, weekly/monthly reports, and AI-powered behavioral analysis — running 100% locally.

What makes this different from yet another habit tracker is the AI-powered correlation discovery. The LLM analyzes your habit data to find connections you might miss: "You tend to skip your morning run on days when you stayed up past midnight" or "Your meditation streak correlates with higher productivity scores."

def analyze_habit_correlations(habit_data: list) -> str:
    """Discover hidden correlations between habits using AI analysis."""
    # Format habit completion data for analysis
    formatted_data = "\n".join([
        f"Date: {d['date']} | Completed: {', '.join(d['completed'])} | "
        f"Missed: {', '.join(d['missed'])}"
        for d in habit_data[-30:]  # Last 30 days
    ])

    prompt = f"""You are a behavioral analytics expert. Analyze these 
30 days of habit tracking data and identify patterns:

{formatted_data}

Provide:
1. Habit completion rates (percentage for each habit)
2. Streak analysis (current and longest streaks)
3. Correlation insights (which habits support or conflict with each other)
4. Day-of-week patterns
5. Three actionable recommendations to improve consistency

Be specific and data-driven in your analysis."""

    return query_local_llm(prompt)

The gamification layer adds six achievement types — streak milestones, consistency badges, and more — to keep you motivated. The FastAPI backend exposes a clean REST API, while the Streamlit dashboard renders calendar heatmaps and trend charts.
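Streak computation itself needs no AI. A minimal version — mine, assuming completions are stored as ISO date strings — walks the sorted dates once:

```python
from datetime import date, timedelta

def compute_streaks(completed_dates: set[str]) -> tuple[int, int]:
    """Return (current_streak, longest_streak) from ISO-format completion dates.

    "Current" is counted back from the most recent completed day.
    """
    days = sorted(date.fromisoformat(d) for d in completed_dates)
    if not days:
        return 0, 0
    day_set = set(days)
    longest = run = 1
    for prev, cur in zip(days, days[1:]):
        run = run + 1 if (cur - prev).days == 1 else 1
        longest = max(longest, run)
    current, d = 1, days[-1]
    while d - timedelta(days=1) in day_set:
        current += 1
        d -= timedelta(days=1)
    return current, longest
```

The LLM's job is then interpretation ("your streak breaks cluster on weekends"), not arithmetic.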

Project 5: Time Management Coach ⏱️

Repo: time-management-coach

A comprehensive time management system with productivity scoring, Pomodoro planning, time-block scheduling, deep work analytics, weekly reviews, and AI-powered coaching — your personal productivity consultant running 100% locally.

Time data reveals your work patterns, focus capacity, peak hours, and how you actually spend your days. That's valuable intelligence — for you, not for advertisers.

def generate_daily_schedule(tasks: list, preferences: dict) -> str:
    """Create an AI-optimized daily schedule with time blocks."""
    task_list = "\n".join([
        f"- {t['name']} (priority: {t['priority']}, "
        f"estimated: {t['duration_min']}min, type: {t['category']})"
        for t in tasks
    ])

    prompt = f"""You are an expert time management coach specializing in 
deep work and the Pomodoro technique. Create an optimized daily schedule:

Tasks for today:
{task_list}

Preferences:
- Work start: {preferences['start_time']}
- Work end: {preferences['end_time']}
- Peak focus hours: {preferences['peak_hours']}
- Pomodoro length: {preferences.get('pomodoro_min', 25)} minutes
- Break preference: {preferences.get('break_style', 'standard')}

Create a time-blocked schedule that:
1. Places deep work tasks during peak focus hours
2. Groups similar tasks to minimize context switching
3. Includes Pomodoro breaks and a lunch break
4. Adds buffer time between blocks
5. Assigns a productivity score prediction (0-100)"""

    return query_local_llm(prompt)

schedule = generate_daily_schedule(
    tasks=[
        {"name": "Write design doc", "priority": "high",
         "duration_min": 90, "category": "deep_work"},
        {"name": "Code review PRs", "priority": "medium",
         "duration_min": 45, "category": "collaborative"},
        {"name": "Team standup", "priority": "high",
         "duration_min": 15, "category": "meeting"},
    ],
    preferences={
        "start_time": "8:00 AM",
        "end_time": "5:00 PM",
        "peak_hours": "9:00 AM - 12:00 PM"
    }
)

The weekly review feature is particularly powerful — the AI analyzes your time logs to identify patterns like "You spend 3x more time on meetings on Wednesdays" and "Your deep work efficiency drops after 3 PM on Fridays."
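Those weekday patterns don't require the LLM either — plain aggregation produces the numbers, and the model only phrases the insight. A sketch, assuming time logs are stored as (ISO date, category, minutes) tuples:

```python
from collections import defaultdict
from datetime import date

def minutes_by_weekday(logs: list[tuple[str, str, int]], category: str) -> dict[str, int]:
    """Total minutes spent on one category, keyed by weekday name."""
    totals: dict[str, int] = defaultdict(int)
    for iso_date, cat, minutes in logs:
        if cat == category:
            totals[date.fromisoformat(iso_date).strftime("%A")] += minutes
    return dict(totals)
```

Handing the resulting dictionary to the LLM alongside the raw logs keeps the narrative grounded in real numbers instead of the model's guesses.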

Getting Started

Every project follows the same setup pattern:

# 1. Install Ollama (one-time setup)
# Download from https://ollama.com

# 2. Pull a model
ollama pull gemma3:4b

# 3. Clone any project
git clone https://github.com/kennedyraju55/fitness-coach-bot
cd fitness-coach-bot

# 4. Install dependencies
pip install -r requirements.txt

# 5. Run
streamlit run app.py

That's it. No API keys. No account creation. No terms of service to accept. Just ollama pull and streamlit run.

The Bigger Picture

These five projects are part of a larger portfolio of 116+ open-source repositories I've built, many exploring what becomes possible when you combine local LLMs with domain-specific applications. In my experience, the "local-first" approach isn't just about privacy — it's about ownership. When your AI tools run on your hardware, you control the model, the data, and the upgrade cycle.

The wellness and lifestyle category is where this philosophy matters most. Your health data is not training data. Your mood patterns are not product metrics. Your fitness journey is not a data point in someone else's aggregate.

Build local. Stay private. Live better.


Nrk Raju Guthikonda is a Senior Software Engineer at Microsoft on the Copilot Search Infrastructure team (Semantic Indexing, RAG). He builds open-source AI tools powered by local LLMs.
