We’ve all been there: your wearable buzzes, telling you your Heart Rate Variability (HRV) is tanking, or your resting heart rate is spiking. Usually, we just ignore the notification until we're actually sick. But what if your data could take action?
In this tutorial, we are building a Health Loop Agent—a sophisticated AI system that doesn't just monitor data but acts on it. Using LangGraph for orchestration and Browser-use (powered by Playwright) for web automation, we'll create an agent that detects health anomalies via the Oura Cloud API and automatically navigates a medical portal to find an available doctor.
By the end of this post, you'll understand how to bridge the gap between "Passive Monitoring" and "Active Intervention" using state-of-the-art AI Agents and automated web navigation.
## The Architecture: From Pulse to Appointment
Building an autonomous agent requires a robust state machine. We use LangGraph to manage the logic flow, ensuring the agent only proceeds to "Booking" if the "Analysis" phase confirms a sustained health risk.
```mermaid
graph TD
    A[Start: Daily Sync] --> B{Fetch Oura Data}
    B --> C[Analyze HRV & Sleep Trends]
    C -->|Normal| D[Log & Sleep]
    C -->|Anomaly Detected| E[Search Doctor Schedules]
    E --> F[Browser-use: Navigate Portal]
    F --> G{Slot Available?}
    G -->|Yes| H[Book Appointment & Notify]
    G -->|No| I[Alert User: Manual Check Required]
    H --> J[End]
    D --> J
```
## Prerequisites
To follow along, you’ll need:
- LangGraph: For the agent's cognitive architecture.
- Browser-use & Playwright: For interacting with the "real world" web.
- Oura Cloud API: To fetch real-time biometric data.
- OpenAI GPT-4o: To serve as the brain for decision-making.
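Assuming a standard Python environment, the stack above installs with pip (the package names are the ones published on PyPI at the time of writing; Browser-use also needs a Playwright browser binary):

```shell
pip install langgraph browser-use playwright langchain-openai
playwright install chromium  # download the browser binary Browser-use will drive
```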
## Step 1: Defining the Agent State
In LangGraph, everything revolves around the State. Our agent needs to keep track of biometric readings, the detected health status, and any appointment details found.
```python
from typing import TypedDict, List, Optional

class HealthAgentState(TypedDict):
    hrv_readings: List[float]
    health_status: str  # "optimal", "warning", "critical"
    appointment_needed: bool
    doctor_specialty: str
    available_slots: List[str]
    final_report: str
```
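One thing worth knowing: `TypedDict` is a typing construct only, so LangGraph passes the state around as a plain dict at runtime. A quick sanity check (the values below are made-up sample data, not real readings):

```python
from typing import TypedDict, List

class HealthAgentState(TypedDict):
    hrv_readings: List[float]
    health_status: str
    appointment_needed: bool
    doctor_specialty: str
    available_slots: List[str]
    final_report: str

# A sample state, shaped the way the analysis node might produce it
state: HealthAgentState = {
    "hrv_readings": [45, 42, 30, 28],
    "health_status": "critical",
    "appointment_needed": True,
    "doctor_specialty": "Cardiologist",
    "available_slots": [],
    "final_report": "",
}

print(isinstance(state, dict))     # TypedDict instances are plain dicts at runtime
print(state["hrv_readings"][-1])   # the latest reading
```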
## Step 2: The Analysis Node
We use the Oura Cloud API to pull recent HRV data. A "critical" status is triggered when the current HRV drops more than 20% below the user's 7-day rolling average (the mock below stands in a simple two-reading average as the baseline for brevity).
```python
import requests  # used by the real Oura API call (see comment below)
from datetime import datetime, timedelta  # handy for building the date-range query params

def analyze_biometrics(state: HealthAgentState):
    # Mocking the Oura API call for brevity
    # In production, use: requests.get(OURA_API_URL, headers=headers)
    recent_hrv = [45, 42, 30, 28]  # Sustained drop

    # Baseline: average of the earliest readings, standing in for a 7-day rolling average
    avg_hrv = sum(recent_hrv[:2]) / 2
    current_hrv = recent_hrv[-1]

    status = "optimal"
    if current_hrv < avg_hrv * 0.8:  # more than 20% below baseline
        status = "critical"

    return {
        "hrv_readings": recent_hrv,
        "health_status": status,
        "appointment_needed": status == "critical",
        "doctor_specialty": "Cardiologist" if status == "critical" else "None",
    }
```
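The mock compares against a two-reading average; the real check we described needs a 7-day rolling baseline, which is straightforward to compute from a list of daily HRV values. Here's a minimal sketch (the `hrv_status` helper and its parameters are ours, matching the "20% below" rule):

```python
from typing import List

def hrv_status(daily_hrv: List[float], window: int = 7, drop: float = 0.8) -> str:
    """Flag 'critical' when the latest HRV falls more than 20% below
    the rolling average of the previous `window` days."""
    if len(daily_hrv) < 2:
        return "optimal"  # not enough history to judge
    history = daily_hrv[-window - 1:-1]      # up to `window` days before today
    baseline = sum(history) / len(history)
    return "critical" if daily_hrv[-1] < baseline * drop else "optimal"

print(hrv_status([52, 50, 49, 51, 48, 50, 47, 30]))  # → critical (30 < ~49.6 * 0.8)
print(hrv_status([52, 50, 49, 51, 48, 50, 47, 46]))  # → optimal
```

In production you'd feed this with the daily HRV series returned by the Oura API instead of hard-coded numbers.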
## Step 3: Web Automation with Browser-use
This is where the magic happens. If the agent decides an appointment is needed, it triggers Browser-use. This library allows LLMs to "see" and "click" elements on a webpage just like a human.
```python
from browser_use import Agent
from langchain_openai import ChatOpenAI

async def search_and_book_appointment(state: HealthAgentState):
    if not state["appointment_needed"]:
        return {"final_report": "All healthy. No action taken."}

    # Initialize the Browser Agent
    browser_agent = Agent(
        task=(
            f"Navigate to health-portal.example.com, search for a "
            f"{state['doctor_specialty']}, and find the earliest available slot next Monday."
        ),
        llm=ChatOpenAI(model="gpt-4o"),
    )

    result = await browser_agent.run()
    # final_result() returns the agent's closing summary of what it did
    return {
        "final_report": f"Detected HRV drop to {state['hrv_readings'][-1]}. {result.final_result()}"
    }
```
The "Official" Way: Advanced Patterns 🥑
While this demo provides a functional loop, production-grade health agents require stricter privacy controls (HIPAA compliance), human-in-the-loop (HITL) verification, and robust error handling for web UI changes.
For more production-ready examples and advanced agentic patterns, I recommend the technical deep-dives on the WellAlly Tech Blog, which cover scaling these "closed-loop" systems in enterprise environments and handling LLM observability.
## Step 4: Connecting the Graph
Finally, we link our nodes together into a cohesive workflow.
```python
from langgraph.graph import StateGraph, END

workflow = StateGraph(HealthAgentState)

# Add Nodes
workflow.add_node("analyze", analyze_biometrics)
workflow.add_node("automate_booking", search_and_book_appointment)

# Define Edges
workflow.set_entry_point("analyze")
workflow.add_conditional_edges(
    "analyze",
    lambda state: "automate_booking" if state["appointment_needed"] else END,
)
workflow.add_edge("automate_booking", END)

# Compile. Note: because automate_booking is async, run the graph with
# `await app.ainvoke(initial_state)` rather than `app.invoke(...)`.
app = workflow.compile()
```
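A nice property of this design: the conditional edge is just a function of the state, so you can unit-test the routing decision without spinning up the graph or a browser. The `route` helper below mirrors the lambda passed to `add_conditional_edges`, with `END` stubbed as its string value so the snippet stays self-contained:

```python
END = "__end__"  # stand-in for langgraph.graph.END, so this test needs no imports

def route(state: dict) -> str:
    # The same decision the lambda in add_conditional_edges makes
    return "automate_booking" if state["appointment_needed"] else END

print(route({"appointment_needed": True}))   # → automate_booking
print(route({"appointment_needed": False}))  # → __end__
```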
## Conclusion: The Future of Proactive Health
We've just built a system that moves from Data -> Insight -> Action. By combining the reasoning capabilities of LangGraph with the "hands" of Browser-use, we've turned a simple wearable into a proactive health concierge.
What’s next?
- Human-in-the-loop: Add a Slack notification node to ask for user approval before clicking "Confirm Booking."
- Multi-Modal Analysis: Use GPT-4o to analyze photos of symptoms alongside HRV data.
The era of the "Passive Dashboard" is over. The era of the Autonomous Health Agent has begun. 🚀
What do you think? Would you trust an AI to book your doctor's appointments? Let’s chat in the comments! 👇
If you enjoyed this build, don't forget to ❤️ and follow for more "Learning in Public" tutorials!