<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fede C</title>
    <description>The latest articles on DEV Community by Fede C (@fede_ag).</description>
    <link>https://dev.to/fede_ag</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842203%2F43280792-2725-4ac4-bd2b-5d1ed50e1326.jpg</url>
      <title>DEV Community: Fede C</title>
      <link>https://dev.to/fede_ag</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fede_ag"/>
    <language>en</language>
    <item>
      <title>From 60% to 84%: Building an AI Agent for Public Health Data</title>
      <dc:creator>Fede C</dc:creator>
      <pubDate>Tue, 24 Mar 2026 21:18:04 +0000</pubDate>
      <link>https://dev.to/fede_ag/from-60-to-84-building-an-ai-agent-for-public-health-data-38jd</link>
      <guid>https://dev.to/fede_ag/from-60-to-84-building-an-ai-agent-for-public-health-data-38jd</guid>
      <description>&lt;p&gt;&lt;em&gt;The fixes that actually worked weren't about prompts.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We built &lt;a href="https://github.com/saludai-labs/saludai" rel="noopener noreferrer"&gt;SaludAI&lt;/a&gt;, an open-source AI agent that takes clinical questions in natural language and queries a FHIR R4 server. Think: &lt;em&gt;"How many patients with type 2 diabetes over 60 are in Buenos Aires?"&lt;/em&gt; — the agent resolves the terminology, builds the FHIR queries, follows references across resource types, and returns a traceable answer.&lt;/p&gt;

&lt;p&gt;No LangChain. The agent loop is ~300 lines of Python, every step logged to &lt;a href="https://langfuse.com" rel="noopener noreferrer"&gt;Langfuse&lt;/a&gt;. When something breaks, we can read exactly why.&lt;/p&gt;

&lt;p&gt;We benchmarked it: 100 questions, 200 synthetic Argentine patients, 10 FHIR resource types, 4 terminology systems. Inspired by &lt;a href="https://arxiv.org/abs/2509.19319" rel="noopener noreferrer"&gt;FHIR-AgentBench&lt;/a&gt; (Verily/KAIST/MIT) — but on synthetic data, so the scores aren't directly comparable to their clinical dataset.&lt;/p&gt;

&lt;p&gt;Here's how accuracy evolved, and what each fix taught us.&lt;/p&gt;

&lt;h2&gt;60% → 82%: The agent wasn't seeing the data&lt;/h2&gt;

&lt;p&gt;Our FHIR client returned only the first page. With 687 immunizations and 437 encounters across 200 patients, most counts came back wrong. The fix was auto-pagination (&lt;code&gt;search_all()&lt;/code&gt;). Boring. Fixed 11 questions overnight.&lt;/p&gt;

&lt;p&gt;Before optimizing the AI, make sure it sees all the data.&lt;/p&gt;

&lt;h2&gt;82% → 94%: Give LLMs a calculator&lt;/h2&gt;

&lt;p&gt;Questions like "average age of patients with hypertension" forced the LLM to do arithmetic in its head. It's bad at this. We added &lt;code&gt;execute_code&lt;/code&gt; — a sandboxed Python environment. The agent writes &lt;code&gt;len([e for e in entries if ...])&lt;/code&gt; instead of trying to count mentally.&lt;/p&gt;

&lt;p&gt;+12 points from one tool.&lt;/p&gt;
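&lt;p&gt;To make this concrete, here's the kind of snippet the agent writes inside the sandbox for an "average age" question. The &lt;code&gt;entries&lt;/code&gt; variable and the fixed reference date are hypothetical stand-ins for what the sandbox would actually receive:&lt;/p&gt;

```python
from datetime import date

# Hypothetical: the sandbox exposes the fetched FHIR resources as `entries`.
entries = [
    {"resourceType": "Patient", "birthDate": "1958-04-02"},
    {"resourceType": "Patient", "birthDate": "1945-11-20"},
    {"resourceType": "Patient", "birthDate": "1962-07-15"},
]

def age(birth_date, today=date(2026, 3, 24)):
    """Whole years between a FHIR birthDate and a fixed reference date."""
    born = date.fromisoformat(birth_date)
    had_birthday = (today.month, today.day) >= (born.month, born.day)
    return today.year - born.year - (0 if had_birthday else 1)

ages = [age(p["birthDate"]) for p in entries if "birthDate" in p]
average_age = sum(ages) / len(ages)
```

&lt;p&gt;The model only has to emit this code correctly; the arithmetic itself runs deterministically.&lt;/p&gt;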

&lt;h2&gt;94% → 79% → 89%: Scaling broke everything&lt;/h2&gt;

&lt;p&gt;We went from 50 questions to 100, from 55 patients to 200. Accuracy dropped 15 points. The 94% was fragile.&lt;/p&gt;

&lt;p&gt;The problem: a pure ReAct loop with no plan. Complex questions needing 3-4 resource traversals left the agent wandering. We added a query planner — lightweight FHIR knowledge graph, 11 query patterns, and action space reduction: instead of telling the agent "don't use search_fhir for counting," we remove the tool entirely. The agent can't misuse what it can't see.&lt;/p&gt;
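&lt;p&gt;Action space reduction can be as simple as a lookup from the planner's classified pattern to an allowed tool subset. This is a hedged sketch with made-up pattern and tool names, not SaludAI's real planner:&lt;/p&gt;

```python
# Hypothetical tool registry and pattern table; the real planner uses
# 11 query patterns over a lightweight FHIR knowledge graph.
ALL_TOOLS = {"search_fhir", "search_all", "execute_code", "resolve_code"}

PATTERN_TOOLS = {
    # counting/aggregation: force full pagination plus code, hide raw search
    "count":     {"search_all", "execute_code", "resolve_code"},
    # single-resource lookup: raw search is fine, no code needed
    "lookup":    {"search_fhir", "resolve_code"},
    # multi-hop traversal: reference-following search plus code
    "traversal": {"search_fhir", "execute_code", "resolve_code"},
}

def tools_for(pattern):
    """Return only the tool names the agent is allowed to see this turn."""
    return sorted(PATTERN_TOOLS.get(pattern, ALL_TOOLS))
```

&lt;p&gt;Whatever &lt;code&gt;tools_for()&lt;/code&gt; returns is the entire tool list sent to the model; everything else simply doesn't exist for that question.&lt;/p&gt;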

&lt;h2&gt;5 LLMs, same infrastructure&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Accuracy&lt;/th&gt;
&lt;th&gt;Simple&lt;/th&gt;
&lt;th&gt;Medium&lt;/th&gt;
&lt;th&gt;Complex&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Sonnet 4.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;84%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;94%&lt;/td&gt;
&lt;td&gt;93%&lt;/td&gt;
&lt;td&gt;72%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Haiku 4.5&lt;/td&gt;
&lt;td&gt;77%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;80%&lt;/td&gt;
&lt;td&gt;65%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4o&lt;/td&gt;
&lt;td&gt;63%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;73%&lt;/td&gt;
&lt;td&gt;40%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 3.3 70B&lt;/td&gt;
&lt;td&gt;48%&lt;/td&gt;
&lt;td&gt;94%&lt;/td&gt;
&lt;td&gt;63%&lt;/td&gt;
&lt;td&gt;16%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Qwen 3.5 9B&lt;/td&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;td&gt;50%&lt;/td&gt;
&lt;td&gt;29%&lt;/td&gt;
&lt;td&gt;12%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Simple questions are a commodity — every model above 9B gets 94%+. The gap is in complex multi-hop reasoning. And the planner + tool design is what makes Sonnet hit 84% instead of the ~60% you'd get with a naive loop.&lt;/p&gt;

&lt;p&gt;One surprise: schema flattening fixed GPT-4o (Simple: 53% → 100%) but broke Qwen (29% → 13%). Every change needs validation across models.&lt;/p&gt;
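&lt;p&gt;By schema flattening we mean collapsing nested FHIR JSON into dot-path keys before showing it to the model. A rough sketch of the idea (details like field pruning are omitted):&lt;/p&gt;

```python
def flatten(resource, prefix=""):
    """Flatten nested FHIR JSON into a single dict of dot-path keys."""
    flat = {}
    if isinstance(resource, dict):
        for key, value in resource.items():
            flat.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(resource, list):
        for i, value in enumerate(resource):
            flat.update(flatten(value, f"{prefix}{i}."))
    else:
        flat[prefix[:-1]] = resource  # drop the trailing dot
    return flat
```

&lt;p&gt;A &lt;code&gt;Condition&lt;/code&gt; fragment like &lt;code&gt;{"code": {"coding": [{"code": "44054006"}]}}&lt;/code&gt; becomes &lt;code&gt;{"code.coding.0.code": "44054006"}&lt;/code&gt; — trivially scannable for one model, apparently disorienting for another.&lt;/p&gt;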

&lt;h2&gt;What stuck&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Benchmark everything.&lt;/strong&gt; Our first "88%" was on 25 easy questions. The honest baseline was 60%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analyze per question, not averages.&lt;/strong&gt; "82% accuracy" tells you nothing. "11 failures are truncated data, 3 are arithmetic errors" tells you what to fix.&lt;/p&gt;
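&lt;p&gt;Concretely, that breakdown is a one-liner once each benchmark run records a failure category per question. The field names here are hypothetical; in practice the data comes out of the Langfuse traces:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical per-question results from one benchmark run.
results = [
    {"id": "q12", "passed": False, "failure": "truncated_data"},
    {"id": "q31", "passed": True,  "failure": None},
    {"id": "q44", "passed": False, "failure": "arithmetic"},
    {"id": "q58", "passed": False, "failure": "truncated_data"},
]

# Tally only the failures, grouped by category.
failure_counts = Counter(r["failure"] for r in results if not r["passed"])
```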

&lt;p&gt;&lt;strong&gt;Tool design &amp;gt; prompt engineering.&lt;/strong&gt; &lt;code&gt;execute_code&lt;/code&gt; was worth +12pp. No prompt tweak comes close.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/saludai-labs/saludai.git
&lt;span class="nb"&gt;cd &lt;/span&gt;saludai &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; uv &lt;span class="nb"&gt;sync
&lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
uv run saludai query &lt;span class="s2"&gt;"¿Cuántos pacientes tienen diabetes tipo 2?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Early-stage, Apache 2.0. Built for Argentina's health system, designed for LATAM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/saludai-labs/saludai" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; · &lt;a href="https://github.com/saludai-labs/saludai/issues" rel="noopener noreferrer"&gt;Issues&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>healthcare</category>
      <category>python</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
