<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Breno dos Santos Alves</title>
    <description>The latest articles on DEV Community by Breno dos Santos Alves (@brenosalves).</description>
    <link>https://dev.to/brenosalves</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3925949%2F03a27b8f-5280-4ae3-9f46-441a2bad30ac.png</url>
      <title>DEV Community: Breno dos Santos Alves</title>
      <link>https://dev.to/brenosalves</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/brenosalves"/>
    <language>en</language>
    <item>
      <title>Healthcare AI that runs where there's no internet — Gemma 4 on a $150 phone</title>
      <dc:creator>Breno dos Santos Alves</dc:creator>
      <pubDate>Wed, 13 May 2026 23:08:25 +0000</pubDate>
      <link>https://dev.to/brenosalves/healthcare-ai-that-runs-where-theres-no-internet-gemma-4-on-a-150-phone-53p5</link>
      <guid>https://dev.to/brenosalves/healthcare-ai-that-runs-where-theres-no-internet-gemma-4-on-a-150-phone-53p5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Track:&lt;/strong&gt; Build With Gemma 4 · &lt;strong&gt;Variant used:&lt;/strong&gt; &lt;code&gt;gemma4:e2b&lt;/code&gt; (default) and &lt;code&gt;gemma4:e4b&lt;/code&gt; (compared) · &lt;strong&gt;Submission:&lt;/strong&gt; individual&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Not a medical device.&lt;/strong&gt; This is a research proof-of-concept and does not replace clinical evaluation or laboratory confirmation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The scenario that motivated this
&lt;/h2&gt;

&lt;p&gt;A community health worker is 80 km from the nearest health center. A child has a fever. The visit kit is a stack of single-T lateral-flow rapid diagnostic tests — COVID-19 antigen, HIV TR1, syphilis DPP, dengue NS1, hepatitis B/C, leishmaniasis rK39, and pregnancy hCG. In Brazil, these are public-health-system mandated tests, available in every primary care unit (UBS). Misreading them is consequential: a false-positive HIV result is a life-altering moment for the patient, and a false-negative dengue NS1 in the first week of infection misses the only window where the antigen test works at all.&lt;/p&gt;

&lt;p&gt;There is no mobile signal. Cloud-based "AI triage" tools do not run here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The proof of concept I am submitting:&lt;/strong&gt; Gemma 4 2B, running entirely on the phone, reads a photo of the cassette and returns a structured JSON verdict. No network call. No telemetry. No remote logging. The health worker keeps the decision — the model is just a second pair of eyes.&lt;/p&gt;

&lt;p&gt;This post is what I learned building it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Repository
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/brenosalves/gemma-poct" rel="noopener noreferrer"&gt;github.com/brenosalves/gemma-poct&lt;/a&gt;&lt;/strong&gt; — MIT licensed. The system prompt, the analysis script, the synthetic benchmark generator, the Streamlit UI, and the Playwright recorder are all there. The demo video is in &lt;code&gt;poc/demo_video/demo.webm&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the model returns
&lt;/h2&gt;

&lt;p&gt;A single Ollama call with the cassette photo and a system prompt returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"left_half_observation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"vertical red/pink pigment band"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"right_half_observation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mostly white, no pigment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"visual_description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The window has a red vertical band on its left half and is blank on its right half."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"c_line_present"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"t_line_present"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"image_quality"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"good"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"valid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"non_reactive"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"high"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"notes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The control line is present, but the test line is absent."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things to notice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The model describes before it concludes.&lt;/strong&gt; The first three fields are pure observation. The remaining fields are derived. This ordering matters; I will come back to it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two coupled gates: &lt;code&gt;status&lt;/code&gt; and &lt;code&gt;result&lt;/code&gt;.&lt;/strong&gt; A test with no control line is &lt;code&gt;invalid&lt;/code&gt; no matter what the T position looks like. The contract enforces it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No clinical recommendation in &lt;code&gt;notes&lt;/code&gt;.&lt;/strong&gt; The UI layer maps the verdict to actions (repeat the test, refer to a specialist, take a confirmatory test) using a per-disease protocol — the model does not.&lt;/li&gt;
&lt;/ol&gt;
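&lt;p&gt;For concreteness, here is a sketch of that UI-side mapping. The function and the example actions are illustrative placeholders of mine, not the contents of &lt;code&gt;poc/protocols.json&lt;/code&gt;:&lt;/p&gt;

```python
# Sketch of the UI-side protocol lookup described above. The actions
# below are illustrative placeholders, not the actual Ministry of
# Health protocols captured in poc/protocols.json.
PROTOCOLS = {
    ("hiv", "reactive"): "Refer for confirmatory testing per protocol.",
    ("hiv", "non_reactive"): "No action unless recent exposure is suspected.",
    ("dengue_ns1", "non_reactive"): "Consider serology follow-up after day 5 of fever.",
}

def action_for(disease, verdict):
    """Map a model verdict to a protocol action. The model never does this."""
    if verdict["status"] != "valid":
        return "Repeat the test with a new cassette."
    if verdict["result"] == "indeterminate":
        return "Repeat the test or refer."
    return PROTOCOLS.get((disease, verdict["result"]), "Follow local protocol.")
```

&lt;p&gt;Keeping this table outside the model means the clinical logic stays auditable in plain JSON, independent of prompt iterations.&lt;/p&gt;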

&lt;h2&gt;
  
  
  Why Gemma 4, and why &lt;code&gt;gemma4:e2b&lt;/code&gt; specifically
&lt;/h2&gt;

&lt;p&gt;The challenge's "Build With Gemma 4" track lets submissions pick any variant. I picked &lt;code&gt;gemma4:e2b&lt;/code&gt; (~1.5 GB int4 quantized) because &lt;strong&gt;the constraint that forced the choice is the audience, not the leaderboard&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The intended user is a community health worker in the field. The intended device is the phone they already own. In the Brazilian context that means entry-level Android devices with 4 GB RAM — Moto G14, Galaxy A05, Redmi 12C. Those phones can host &lt;code&gt;e2b&lt;/code&gt; comfortably. &lt;code&gt;e4b&lt;/code&gt; (~3 GB int4) needs 6 GB+ RAM and excludes that audience. The 31B-dense and 26B-MoE variants are eligible for the challenge but disqualified by the on-device constraint of the project.&lt;/p&gt;

&lt;p&gt;So the variant choice rolls up to a single sentence: &lt;strong&gt;the model has to fit in the phone the health worker is already carrying, otherwise the project does not exist.&lt;/strong&gt; I benchmarked &lt;code&gt;e4b&lt;/code&gt; here for comparison because that comparison is informative — see below — but the default everywhere in the codebase is &lt;code&gt;e2b&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Two other Gemma 4 properties that matter for this use case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Native multimodality.&lt;/strong&gt; No separate OCR pipeline, no image-to-text intermediate. One call, one JSON.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;128k context.&lt;/strong&gt; It opens a door I have not walked through yet: loading the full clinical interpretation manual for each test as part of the system prompt. That is on the roadmap below.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The architecture is boring (and that is the point)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[phone camera]
     ↓
[optional crop]
     ↓
[Gemma 4 e2b on-device, via MediaPipe LLM Inference (mobile) / Ollama (POC)]
     ↓
[structured JSON]
     ↓
[UI: verdict + per-disease protocol action]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything happens on the device. There is no backend. There is no analytics. The capture-to-result loop never leaves the phone.&lt;/p&gt;
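&lt;p&gt;In the desktop POC, the capture-to-result loop is a single Ollama chat call. A sketch, assuming the &lt;code&gt;ollama&lt;/code&gt; Python client; the helper names are mine, while the model tag, prompt file and JSON contract come from the repo:&lt;/p&gt;

```python
# Minimal sketch of the POC's single inference call through the Ollama
# Python client ("pip install ollama"). Helper names are mine.
import json

def build_request(image_path, system_prompt, model="gemma4:e2b"):
    """Assemble the one-shot chat payload: system contract plus photo."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Analyze this test cassette.",
             "images": [image_path]},
        ],
        "format": "json",               # constrain output to valid JSON
        "options": {"temperature": 0},  # deterministic reads
    }

def analyze(image_path, system_prompt):
    """Run one capture-to-verdict loop. Requires a local Ollama server."""
    import ollama
    resp = ollama.chat(**build_request(image_path, system_prompt))
    return json.loads(resp["message"]["content"])
```

&lt;p&gt;Keeping the request construction pure makes the payload testable without a running model.&lt;/p&gt;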

&lt;h2&gt;
  
  
  No fine-tuning. Just prompting.
&lt;/h2&gt;

&lt;p&gt;The whole behavior of the model is steered by a single system prompt (&lt;a href="https://github.com/brenosalves/gemma-poct/blob/main/poc/prompts/system.md" rel="noopener noreferrer"&gt;&lt;code&gt;poc/prompts/system.md&lt;/code&gt;&lt;/a&gt;). Four decisions in that prompt matter more than the rest:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. A validity gate before any result
&lt;/h3&gt;

&lt;p&gt;The prompt enforces a logical order: image quality first, control line second, test line third. The consistency rules at the bottom of the contract make this impossible to break:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- c_line_present: false → status: "invalid" and result: "indeterminate"
                          (no exceptions, regardless of T).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A test with no control line is biologically meaningless — the reagent did not flow, the strip is dead. The model treats it that way, even if it thinks it sees a T.&lt;/p&gt;
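&lt;p&gt;The same rule is cheap to re-enforce in code after the model answers, so an inconsistent verdict never reaches the UI. A sketch of such a post-check; the function is illustrative, not the repo's:&lt;/p&gt;

```python
def enforce_gates(v):
    """Re-apply the contract's consistency rules to a model verdict:
    image quality first, control line second, test line third."""
    if v.get("image_quality") == "poor":
        # Step zero: a poor image locks the verdict regardless of lines.
        v.update(status="invalid", result="indeterminate", confidence="low")
    elif not v.get("c_line_present", False):
        # No control line: the strip is dead, no exceptions, regardless of T.
        v.update(status="invalid", result="indeterminate")
    else:
        v["status"] = "valid"
        v["result"] = "reactive" if v.get("t_line_present") else "non_reactive"
    return v
```

&lt;p&gt;Belt-and-suspenders: the prompt states the rule, and the app refuses to render a verdict that violates it.&lt;/p&gt;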

&lt;h3&gt;
  
  
  2. Image quality as step zero
&lt;/h3&gt;

&lt;p&gt;Before reading lines, the model must assess whether the window is in focus, whether glare is covering the lines, whether the cassette is even in frame. Blurred or framed-out images return &lt;code&gt;image_quality: "poor"&lt;/code&gt; and that locks the verdict to &lt;code&gt;invalid/indeterminate/low&lt;/code&gt;. Without this, the model hallucinates lines in fuzzy images.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Spatial decomposition before naming
&lt;/h3&gt;

&lt;p&gt;This is the decision that earned its keep. Read on.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Doubt is not evidence
&lt;/h3&gt;

&lt;p&gt;The contract explicitly states: "To mark &lt;code&gt;t_line_present: true&lt;/code&gt; you need concrete visual evidence (a red/pink horizontal pigment mark at the T position). Shadow, texture noise or glare do NOT count. When in doubt, mark &lt;code&gt;false&lt;/code&gt;." This sounds obvious. It was not obvious to the model.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bug we could not write our way out of (and the fix)
&lt;/h2&gt;

&lt;p&gt;Here is the failure mode I spent the most time on. With an earlier version of the prompt, &lt;code&gt;gemma4:e2b&lt;/code&gt; would look at a clean, well-lit negative test — a single red control line in the left half of the window, pristine white in the right half — and write:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Both the control and test lines are clearly visible."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Word for word, in case after case. This is not a model that is uncertain; it is a model that is &lt;strong&gt;affirmatively hallucinating a line that is not there&lt;/strong&gt;. The prompt already contained a paragraph telling the model that the letters "C" and "T" printed above the window are just plastic, not lines. The model would repeat the warning back to me and then hallucinate the line anyway.&lt;/p&gt;

&lt;p&gt;Adding stronger anti-hallucination warnings did not help. Adding base-rate priors ("most field tests are negative") helped a little but not enough. Rewriting the warning in a more imperative tone moved nothing.&lt;/p&gt;

&lt;p&gt;What worked was changing &lt;strong&gt;how&lt;/strong&gt; the model is asked to look. Instead of "is there a C line? is there a T line?", the prompt now asks the model to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find the white horizontal window.&lt;/li&gt;
&lt;li&gt;Mentally split it into a LEFT HALF and a RIGHT HALF. &lt;strong&gt;Do not yet name these halves "C" and "T".&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Describe what is in each half. Pigment band? White? Unclear?&lt;/li&gt;
&lt;li&gt;Only then map left → control line, right → test line.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the JSON contract, this shows up as two extra fields &lt;strong&gt;before&lt;/strong&gt; the conclusion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"left_half_observation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s2"&gt;"vertical red/pink pigment band"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
                          &lt;/span&gt;&lt;span class="s2"&gt;"mostly white, no pigment"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"unclear"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"right_half_observation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;same&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reasoning order matters. When the model has to commit to a description of the right half &lt;strong&gt;before&lt;/strong&gt; seeing the words "T" or "test", the label-driven hallucination disappears. The synthetic benchmark went from 3/6 to 5/6 immediately. The narrative &lt;code&gt;notes&lt;/code&gt; field still occasionally drifts ("both lines visible") but the structured fields — which are what the app actually consumes — are now reliable in the horizontal-layout case.&lt;/p&gt;

&lt;p&gt;The takeaway, for anyone using small multimodal models: the order of fields in your JSON contract is part of your prompt. The model commits to whatever it writes first.&lt;/p&gt;
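&lt;p&gt;The observation-first ordering also enables a code-side cross-check: derive the booleans from the model's own half-descriptions and flag any verdict that contradicts them. A sketch, with function names of my own:&lt;/p&gt;

```python
# Cross-check sketch: the conclusion fields must agree with the
# observation fields the model committed to first.
def band_seen(observation):
    """True only on concrete pigment evidence; 'unclear' counts as absent."""
    return "pigment band" in observation

def consistent(v):
    """Flag verdicts whose booleans contradict their own observations."""
    return (v["c_line_present"] == band_seen(v["left_half_observation"])
            and v["t_line_present"] == band_seen(v["right_half_observation"]))
```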

&lt;h2&gt;
  
  
  The capture convention (and why it is enough)
&lt;/h2&gt;

&lt;p&gt;Real cassettes ship in two physical layouts: &lt;strong&gt;horizontal&lt;/strong&gt; (e.g. typical pregnancy hCG strips — window is wide, C and T labels sit above it, lines are vertical inside the window) and &lt;strong&gt;vertical&lt;/strong&gt; (e.g. many COVID-19 antigen kits — window is tall and narrow, C and T labels are stacked on one side, lines are horizontal). My synthetic benchmark images are all horizontal. The model is excellent on horizontal. On vertical it gets confused: "left half" and "right half" of a vertical window mean nothing useful.&lt;/p&gt;

&lt;p&gt;I tried generalizing the prompt with an orientation-detection step ("if C is above and T is below, use top/bottom halves"). It regressed performance on both layouts: the model occasionally misidentified the orientation, and the extra branching diluted the spatial decomposition that had been working.&lt;/p&gt;

&lt;p&gt;The solution is on the capture side, not the inference side. &lt;strong&gt;The app instructs the user to rotate the cassette so the C label is on the left and T label is on the right before taking the photo.&lt;/strong&gt; A vertical cassette becomes a horizontal one with a 90° turn. The model sees the layout it knows. The Streamlit app reminds the user with a sticky tip; on mobile this becomes a camera-overlay guide.&lt;/p&gt;

&lt;p&gt;This is the kind of compromise I have come to appreciate working with small models: do not bend the model toward the messiness of the world if you can bend the capture step toward the model.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it actually performs
&lt;/h2&gt;

&lt;p&gt;Two benchmarks. The synthetic one drives prompt iteration. The real one keeps me honest.&lt;/p&gt;

&lt;h3&gt;
  
  
  Synthetic benchmark (6 cases, deterministic generator)
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;generate_synthetic.py&lt;/code&gt; produces six PIL-drawn cassettes with ground truth encoded in the filename: positive (clear), positive (faint T), negative (clear), negative (clean), invalid (no lines), invalid (only T present, no C). Each isolates one decision point.&lt;/p&gt;
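&lt;p&gt;In the same spirit, a minimal PIL cassette generator looks roughly like this (an illustrative sketch, not the repo's &lt;code&gt;generate_synthetic.py&lt;/code&gt;; dimensions and colors are arbitrary):&lt;/p&gt;

```python
from PIL import Image, ImageDraw

def draw_cassette(c_line, t_line, faint=False):
    """Draw a horizontal single-T cassette: white reading window, red
    vertical bands at the C (left) and T (right) positions."""
    img = Image.new("RGB", (400, 160), "lightgray")    # cassette body
    d = ImageDraw.Draw(img)
    d.rectangle([60, 55, 340, 105], fill="white")      # reading window
    red = (230, 120, 140) if faint else (200, 30, 60)  # faint vs bold dye
    if c_line:
        d.rectangle([130, 55, 140, 105], fill=red)     # control band, left half
    if t_line:
        d.rectangle([260, 55, 270, 105], fill=red)     # test band, right half
    return img

# e.g. draw_cassette(True, False) is a clear-negative case; ground truth
# is encoded in the saved filename, as in the real generator.
```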

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Prompt iteration&lt;/th&gt;
&lt;th&gt;&lt;code&gt;gemma4:e2b&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Baseline (protocol only, anti-hallucination warning, base-rate prior)&lt;/td&gt;
&lt;td&gt;3/6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;+ spatial decomposition (left half / right half before C/T)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5/6&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The remaining miss is &lt;code&gt;invalid_only_t_01&lt;/code&gt; — a synthetic case where the control line is absent but the T line is present. This is biologically rare (the reagent failed but the antigen capture worked anyway) and the model now classifies it as &lt;code&gt;valid/non_reactive&lt;/code&gt; instead of &lt;code&gt;invalid/indeterminate&lt;/code&gt;. The error is on the "fail safe" side for most real-world workflows, but it remains a known limitation.&lt;/p&gt;
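&lt;p&gt;Scoring is then a filename-to-verdict comparison. A sketch of the convention, inferred from the case names quoted in this post rather than copied from the repo:&lt;/p&gt;

```python
# Ground truth is encoded in the first token of the filename, e.g.
# "invalid_only_t_01.png". Mapping inferred from the case names above.
EXPECTED = {
    "positive": ("valid", "reactive"),
    "negative": ("valid", "non_reactive"),
    "invalid":  ("invalid", "indeterminate"),
}

def ground_truth(filename):
    """'invalid_only_t_01.png' yields ('invalid', 'indeterminate')."""
    return EXPECTED[filename.split("_")[0]]

def score(case_results):
    """case_results maps filename to the model's (status, result) pair."""
    return sum(1 for name, got in case_results.items()
               if got == ground_truth(name))
```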

&lt;h3&gt;
  
  
  Real benchmark (4 photos, public-domain or owned)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Photo&lt;/th&gt;
&lt;th&gt;Layout&lt;/th&gt;
&lt;th&gt;&lt;code&gt;gemma4:e2b&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;gemma4:e4b&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;negative_covid_01&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;vertical → rotated&lt;/td&gt;
&lt;td&gt;✓ non_reactive&lt;/td&gt;
&lt;td&gt;✓ non_reactive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;positive_covid_01&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;vertical, faint T&lt;/td&gt;
&lt;td&gt;✗ non_reactive&lt;/td&gt;
&lt;td&gt;✗ invalid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;positive_covid_02&lt;/code&gt; (Wikimedia, &lt;a href="https://commons.wikimedia.org/wiki/File:Positive_Covid-19_Rapid_Antigen_Test.jpg" rel="noopener noreferrer"&gt;CC BY-SA 4.0&lt;/a&gt;)&lt;/td&gt;
&lt;td&gt;horizontal, faint T&lt;/td&gt;
&lt;td&gt;✗ non_reactive&lt;/td&gt;
&lt;td&gt;✓ reactive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;positive_pregnancy_01&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;horizontal, bold lines&lt;/td&gt;
&lt;td&gt;✓ reactive&lt;/td&gt;
&lt;td&gt;✓ reactive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Score&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2/4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3/4&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What this tells me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;e2b&lt;/code&gt; reads bold, high-contrast lines (the pregnancy hCG cassette is the easy case for any model). The pattern of misses is &lt;strong&gt;faint T lines&lt;/strong&gt; on real-world cassettes — the COVID-19 antigen tests in particular ship with subtle test lines that even people miss.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;e4b&lt;/code&gt; catches one of those faint-T cases (&lt;code&gt;positive_covid_02&lt;/code&gt;). This is the only case where &lt;code&gt;e4b&lt;/code&gt; improves over &lt;code&gt;e2b&lt;/code&gt;. If you have the RAM, you get one fewer false-negative. If you do not, you have the same blind spot as the average human reader, which is at least an honest blind spot.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;positive_covid_01&lt;/code&gt; photo defeats both models. The lines are extremely subtle and the lighting is mediocre. This is a model-capability ceiling, not a prompt issue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So: prompt engineering moved the synthetic score from 3/6 to 5/6, and the real-photo benchmark shows where capacity, not steering, is the bottleneck.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/hq-I-2HY1Cw"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Four cases in order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;positive_pregnancy_01&lt;/code&gt; on &lt;code&gt;e2b&lt;/code&gt; → Reactive ✓&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;positive_covid_02&lt;/code&gt; on &lt;code&gt;e2b&lt;/code&gt; → Non-reactive ✗ (faint T missed)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;positive_covid_02&lt;/code&gt; on &lt;code&gt;e4b&lt;/code&gt; → Reactive ✓ (same photo, bigger model)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;negative_covid_01&lt;/code&gt; on &lt;code&gt;e2b&lt;/code&gt; → Non-reactive ✓&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Streamlit UI shows the JSON, the verdict, the per-disease action, and a debug expander with the spatial observations. Latency per analysis is 15-30 seconds on an Apple M-series laptop running &lt;code&gt;e2b&lt;/code&gt;; on a real entry-level phone this will be slower (rough target: under a minute), which is fine because the workflow is "snap photo, wait while you write the patient's name down".&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations (the part to read before quoting any number)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Not a medical device.&lt;/strong&gt; Research and educational POC. Any deployment would require clinical validation and regulatory clearance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single-T scope.&lt;/strong&gt; Malaria Pf/Pv and dengue IgM/IgG are explicitly out — they need a multi-T schema and per-disease combination rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image conditions matter.&lt;/strong&gt; Severe glare, motion blur, partial framing → invalid/low-confidence verdicts. That is the right failure mode: the alternative is a confidently wrong call.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vertical layouts depend on capture rotation.&lt;/strong&gt; The model is optimized for horizontal-window layouts. The app guides the user, but a user who ignores the guide will hit accuracy degradation on vertical cassettes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brand variability.&lt;/strong&gt; Different manufacturers use different dyes and window geometries. The four photos I evaluated are not a substitute for a per-manufacturer study.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Package as a Flutter app&lt;/strong&gt; using MediaPipe LLM Inference — that is the path to actually getting it onto the target phones. The desktop POC and the mobile app share the same system prompt and JSON contract.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-T support.&lt;/strong&gt; Malaria Pf/Pv and dengue IgM/IgG need a &lt;code&gt;t_lines: [{position, present}]&lt;/code&gt; array and per-disease combination rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the 128k context.&lt;/strong&gt; Load the clinical interpretation manual for each test as part of the prompt; the model could then explain &lt;em&gt;why&lt;/em&gt; a faint T is biologically plausible for the disease in question.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Larger real benchmark.&lt;/strong&gt; Public-domain photos of HIV, dengue NS1 and syphilis rapid tests are scarce; a structured collection effort (informed-consent, anonymized) is needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Acknowledgements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cover photo by &lt;a href="https://unsplash.com/photos/pAYpAGDUv80" rel="noopener noreferrer"&gt;John Cameron&lt;/a&gt; on Unsplash.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;positive_covid_02&lt;/code&gt; photo used in the demo video and benchmark table is cropped from &lt;a href="https://commons.wikimedia.org/wiki/File:Positive_Covid-19_Rapid_Antigen_Test.jpg" rel="noopener noreferrer"&gt;"Positive Covid-19 Rapid Antigen Test"&lt;/a&gt; on Wikimedia Commons, licensed under &lt;a href="https://creativecommons.org/licenses/by-sa/4.0/" rel="noopener noreferrer"&gt;CC BY-SA 4.0&lt;/a&gt;. Cropped to focus on the cassette; no other modifications. The benchmark photo files themselves are not committed to the repository — only the demo video that uses them is.&lt;/li&gt;
&lt;li&gt;The clinical action mappings in &lt;code&gt;poc/protocols.json&lt;/code&gt; are a simplified summary of the Brazilian Ministry of Health protocols for each test. They are illustrative, not authoritative.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;I started this project thinking the interesting question would be &lt;strong&gt;which model&lt;/strong&gt;. The interesting question turned out to be &lt;strong&gt;how do you structure the JSON so the model commits to observation before conclusion&lt;/strong&gt;. The model is plenty capable for the easy case. Steering it is most of the work, and steering it is free — no fine-tuning, no GPU rental, no data labeling, no cloud infrastructure.&lt;/p&gt;

&lt;p&gt;That is the part that I think generalizes beyond this project: for small on-device multimodal models, the prompt is the product. Iterate on the prompt the way you iterate on UI — measure, change one thing, measure again — and your whole system becomes a single file someone can read, audit, and improve.&lt;/p&gt;

&lt;p&gt;— Breno Alves&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built for the &lt;a href="https://dev.to/challenges/google-gemma-2026-05-06"&gt;Google Gemma 4 Challenge&lt;/a&gt;. Code: &lt;a href="https://github.com/brenosalves/gemma-poct" rel="noopener noreferrer"&gt;github.com/brenosalves/gemma-poct&lt;/a&gt;. Connect on &lt;a href="https://www.linkedin.com/in/breno-alves-075a31148" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Feedback welcome in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>gemmachallenge</category>
      <category>gemma</category>
      <category>healthcare</category>
    </item>
  </channel>
</rss>
