A lot of the hesitation around AI medical scribes is not just about whether the summary is good. It is also about privacy, trust, auditability, and whether the output can actually go somewhere useful afterwards.
That made me curious about a different approach: what if the scribe lived in the browser, kept data local by default, and still tried to offer structured review and handoff?
So I built AI Medical Scribe, a browser-based prototype that combines live consultation transcription, on-device summarisation, document drafting, structured extraction, review tooling, local audit logging, and client-side FHIR export.
Everything runs in the browser.
No project backend.
No API keys.
No server-side processing.
No data leaving the device by default.
Repo here:
https://github.com/hutchpd/AI-Medical-Scribe
Why I built it
This was not an attempt to make a production-ready clinical tool.
It was an experiment.
I wanted to explore whether modern browser capabilities are now good enough that you could build the front end of a medical scribe as a local-first app, and whether that changes the conversation around privacy, trust, and workflow.
As I said at the top, a lot of the friction around AI scribes is not just about summarisation quality. It is also about where the data goes, who processes it, what gets retained, how audit works, and whether the output actually fits the systems clinicians already use.
So instead of asking, “can I build a cloud AI scribe?” I asked, “can something like this live in the browser and still be useful?”
What the prototype does
The current version supports:
- live consultation transcription using Chrome's built-in speech recognition (the Web Speech API)
- manual notes alongside the live transcript
- timeline markers for important moments
- on-device summary generation after the session ends
- rich text document drafting from transcript content
- structured extraction into buckets like problems, medications, allergies, investigations, follow-up actions, diagnoses, safety netting, and admin tasks
- review mode with confidence highlighting and provenance cues
- local append-only audit logging
- client-side FHIR R4 document Bundle export
- optional direct browser-side FHIR POST to a configured endpoint
- optional encrypted session storage, app lock, inactivity lock, retention controls, and ephemeral consultation mode
All of that happens in-browser.
It is still a prototype and still very dependent on recent Chrome builds and built-in AI features that are rolling out unevenly, but it is now much more than a simple “transcribe and summarise” demo.
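To make the live transcription part concrete, here is a minimal sketch of how Chrome's `webkitSpeechRecognition` can feed a running transcript. The buffering helper is kept pure so it works anywhere; only the wiring at the bottom assumes a browser. The function names and transcript shape are illustrative, not the app's actual code.

```javascript
// Pure helper: commit final segments, let interim text replace the tail.
function appendResult(transcript, text, isFinal) {
  if (isFinal) {
    return { committed: transcript.committed + text + ' ', interim: '' };
  }
  return { committed: transcript.committed, interim: text };
}

function currentText(transcript) {
  return (transcript.committed + transcript.interim).trim();
}

// Browser-only wiring (no-op outside the browser):
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  let transcript = { committed: '', interim: '' };
  const rec = new webkitSpeechRecognition();
  rec.continuous = true;      // keep listening across pauses
  rec.interimResults = true;  // stream partial hypotheses as they form
  rec.onresult = (event) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const res = event.results[i];
      transcript = appendResult(transcript, res[0].transcript, res.isFinal);
    }
    console.log(currentText(transcript));
  };
  rec.start();
}
```

The split between a pure accumulator and thin browser wiring is also what makes timeline markers and manual notes easy to interleave with the transcript.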
The original idea was smaller
The first version was much simpler.
It handled:
- transcription
- notes
- summary generation
- document drafting
- session history
That was enough to test the basic idea, but once I shared it and started getting feedback, the same concerns kept coming up.
Not “can the browser summarise a consultation?”
More like:
- how do you review and trust what it generated?
- how do you audit what happened?
- how do you secure local data properly?
- how do you get anything useful back into the EHR?
- if it stays in the browser, how do you prove that is safe and governable?
Those are much more interesting questions.
What the feedback made clear
One of the most useful things about sharing an experiment publicly is that people immediately point at the real blockers.
The responses were thoughtful, and honestly pretty consistent.
1. Local-first is appealing, but not enough on its own
Several people liked the idea of keeping data on the device, especially in settings where privacy, trust, procurement, or data residency matter.
That matches my own instinct. Even if cloud systems can be compliant, the simple fact that consultation content is being sent elsewhere changes how some clinicians and patients feel about the interaction.
But local-first is not magic. It does not remove the need for security, retention rules, auditability, review workflows, or governance.
What it does do is shift the trust boundary. Instead of “trust my cloud pipeline”, it becomes “trust the endpoint, deployment environment, and browser capabilities”.
That feels like a different and, in some cases, more workable conversation.
2. The real blocker is not transcription, it is handoff
This came up again and again.
Even if you can transcribe locally, summarise locally, and draft locally, that does not automatically make the tool useful.
If the output cannot get back into Epic, Cerner, Galeon, or another EHR in a structured and visible way, you risk ending up with something that is just a fancy sidecar.
That feedback pushed me to think beyond “generate a nice document” and toward structured interoperability, even in a prototype.
3. Review burden matters more than raw generation
Another fair criticism: if clinicians have to review everything anyway, are you actually saving time?
I think that is the right challenge.
The value is not “AI writes the note and you trust it blindly.”
The value, if there is any, is in reducing the blank-page problem, surfacing likely useful structure, and making review faster and more focused.
That is part of why I added confidence-aware review tooling rather than treating all transcript and generated output as equally reliable.
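The triage idea behind confidence-aware review can be sketched in a few lines. The threshold, field names, and the rule that items without a transcript source span always need review are all illustrative assumptions, not the app's actual schema.

```javascript
// Hypothetical triage: flag extracted items that fall below a confidence
// threshold, or that cannot be traced back to a transcript span.
const REVIEW_THRESHOLD = 0.85; // illustrative value

function triageExtractedItems(items) {
  return items.map((item) => ({
    ...item,
    needsReview: item.confidence < REVIEW_THRESHOLD || !item.sourceSpan,
  }));
}
```

The point is not the threshold itself but that review effort gets focused on the uncertain items instead of being spread evenly across everything generated.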
4. Auditability matters even for local tools
A browser app that says “everything stays local” still has to answer awkward questions:
- what happened during the session?
- what was generated and when?
- what was edited?
- what was exported?
- can the workflow be traced at all?
That made it clear that a serious local-first experiment needs more than browser storage. It needs some concept of local traceability as well.
How I tried to address those concerns in the newer version
I do not think the current version “solves” these problems.
But I do think it starts to show what direction a browser-based local-first scribe could go if you wanted to take those concerns seriously.
Review and trust
I added:
- structured extraction
- confidence indicators
- review mode
- provenance cues
- stale and needs-review badges
- quick actions to re-run generation or jump to relevant transcript sections
The goal was to make the output easier to validate, not just prettier to look at.
Auditability
I added a local append-only audit log that records key actions such as:
- session lifecycle changes
- edits
- generation actions
- extraction runs
- FHIR downloads and sends
- archive, restore, and delete actions
- lock and unlock activity
The logs stay local, and they can be viewed or exported as text or JSON.
Would browser-local audit logs satisfy a real hospital security team? Probably not on their own. But they are a better answer than having no traceability at all, and they help demonstrate what a local-first audit model might look like.
Security and privacy controls
I also added:
- optional encrypted storage at rest using Web Crypto
- passphrase unlock mode
- session-only key mode
- app-level lock and inactivity auto-lock
- ephemeral consultations
- retention-based cleanup and destructive deletion workflows
Again, this does not magically make it enterprise-ready. But it moves the experiment closer to the actual questions people ask once local data is involved.
Structured handoff
The biggest change in response to integration feedback was adding client-side FHIR export and optional browser-side delivery to a configured endpoint.
That does not solve the full EHR integration problem. Not even close.
But it does move the handoff from “here is some transcript text” to “here is a structured document Bundle with a Composition and related resources.”
That felt like an important step, because one of the takeaways from the feedback was that a local-first tool only starts to become interesting when it can hand something structured to the next system.
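To show what "structured" means here, this is a minimal sketch of an R4 document Bundle: per the FHIR spec, the Composition comes first and references the other resources. The resource contents, codes, ids, and endpoint handling are placeholders, not the prototype's actual export.

```javascript
// Build a minimal FHIR R4 document Bundle with placeholder fullUrls.
function buildDocumentBundle(patientName, noteText) {
  const patientId = 'urn:uuid:patient-1';
  const compositionId = 'urn:uuid:composition-1';
  return {
    resourceType: 'Bundle',
    type: 'document',
    timestamp: new Date().toISOString(),
    entry: [
      {
        fullUrl: compositionId,
        resource: {
          resourceType: 'Composition',
          status: 'preliminary',
          type: { text: 'Consultation note' },
          date: new Date().toISOString(),
          title: 'Consultation summary',
          subject: { reference: patientId },
          section: [{
            title: 'Summary',
            text: {
              status: 'generated',
              div: `<div xmlns="http://www.w3.org/1999/xhtml">${noteText}</div>`,
            },
          }],
        },
      },
      {
        fullUrl: patientId,
        resource: { resourceType: 'Patient', name: [{ text: patientName }] },
      },
    ],
  };
}

// Optional browser-side delivery to a configured endpoint.
async function postBundle(bundle, endpoint) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/fhir+json' },
    body: JSON.stringify(bundle),
  });
  return res.ok;
}
```

A real export would need authors, identifiers, proper codings, and the extracted resources (problems, medications, and so on) as further entries, but even this shape is something an integration engine can route, which plain transcript text is not.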
What I think this experiment shows
I do not think this project proves that browser-based medical scribes are ready for real deployment.
But I do think it shows something interesting:
you can now build a surprisingly capable local-first scribe workflow in the browser
Not a complete one.
Not a clinically safe one.
Not a production-ready one.
But enough to make the idea concrete.
And once the idea is concrete, the discussion gets better.
Instead of vaguely asking whether local-first healthcare AI is possible, you can point at something working and ask better questions:
- what would make this auditable enough?
- what would make this secure enough?
- what kind of structured handoff would actually fit workflow?
- where should trust sit?
- which parts belong in the browser, and which parts do not?
That is much more useful than vendor-style hype, and much more useful than treating the whole thing as impossible.
What I learned
The biggest lesson for me is that the interesting part is not the AI summary itself.
It is the combination of:
- capture
- review
- trust
- audit
- privacy
- structured handoff
That is the real product surface.
A tool like this is not just “speech recognition plus LLM output.” It lives or dies on workflow fit and confidence.
The other lesson is that browser capability is getting close enough that these experiments are worth doing now. The Chrome built-in AI ecosystem is still uneven and fiddly, but it is no longer science fiction to imagine local-first tools that do meaningful work without a project backend.
What it definitely is not
This is not a medical device.
It is not suitable for clinical use.
It is not production-ready.
It does not solve governance, compliance, or EHR integration.
It is an experiment.
But I think it is an experiment that points in a useful direction.
Not “look, I built the perfect AI scribe.”
More:
look, you could build these tools to live in the browser, keep data local by default, add review and auditability, and start handing off structured outputs.
That feels like a direction worth exploring.
If you want to look at the code
The repo is here:
https://github.com/hutchpd/AI-Medical-Scribe
If you work in health IT, clinical systems, browser AI, or local-first tooling, I’d be genuinely interested in whether this direction feels promising to you, and where you think the hard boundary really is.
Is it browser capability?
Is it audit and governance?
Is it review burden?
Or is it still the last mile into the EHR?