
Michael Nikitin

Posted on • Originally published at itirra.com

The Architecture Behind FHIR-Based Member Appeals Automation

A lot of member appeals products are solving the wrong problem.

They make it faster to submit an appeal – better forms, cleaner portals, fewer clicks. But submission was never the bottleneck. It's the 30–90 minutes a nurse spends hunting through an EHR to assemble clinical evidence that already exists in structured form, just not anywhere the appeals workflow can reach it.

If you're building in the prior auth or claims space, this distinction matters. The companies that treat appeals as a data retrieval problem build fundamentally different and better products than the ones treating it as a submission problem. This post is about the architecture, decisions, and sequencing behind the first approach.

The member appeal lifecycle, briefly

A member appeal is a formal request to overturn a health plan's denial of coverage. The member or their provider submits clinical evidence arguing medical necessity. Most appeals start with prior authorization denials, not billing disputes.

The denial reasons are predictable: incomplete clinical documentation (data was in the EHR, just not attached), coding mismatches, payer-specific rule failures, or missed submission windows. In nearly every case, the data to prevent the denial or win the appeal was already captured somewhere in the system. It wasn't surfaced, structured, or transmitted in time.

The implication for builders: if your product only handles the appeal after the denial, you're entering the workflow at the most expensive, lowest-leverage point. The earlier you can pull structured clinical data into the process, the more valuable the product becomes, and the architecture choices you make in month one determine whether that upstream expansion is even possible later.

What changes when you have live EHR access

EHR integration doesn't just speed up appeals – it makes entirely different workflows possible.

Automated evidence assembly is the starting point. Instead of a person copying chart data into a submission, your system queries the EHR for the specific FHIR resources a payer requires for that denial reason (Condition, Observation, MedicationRequest, DocumentReference) and packages them programmatically. The appeal gets built in seconds from data that already exists.
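A minimal sketch of what "packages them programmatically" looks like: map a denial reason to the FHIR resources it needs, then emit the search URLs scoped to the member. The denial-reason keys and resource mapping here are illustrative assumptions, not any payer's actual criteria.

```python
from typing import Dict, List

# Hypothetical mapping from denial reason to the FHIR R4 resources
# that typically carry the supporting clinical evidence.
EVIDENCE_RESOURCES: Dict[str, List[str]] = {
    "missing_clinical_docs": ["Condition", "Observation", "DocumentReference"],
    "step_therapy_not_shown": ["MedicationRequest", "Condition"],
}

def build_evidence_queries(patient_id: str, denial_reason: str) -> List[str]:
    """Return relative FHIR search URLs for the resources this denial needs."""
    queries = []
    for resource in EVIDENCE_RESOURCES.get(denial_reason, []):
        # Every search is scoped to the member; a real query would also
        # carry date ranges and category filters from the payer rules.
        queries.append(f"{resource}?patient={patient_id}&_sort=-date")
    return queries

print(build_evidence_queries("pat-123", "step_therapy_not_shown"))
```

Each URL would be executed against the EHR's FHIR base endpoint, and the returned bundles become the raw material for the evidence package.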

Gap detection and triage is where it gets interesting. With live clinical data, you can score appeal probability before submission. Flag the ones likely to win, route the ones that need more clinical input, and stop wasting time on appeals that were dead on arrival. This alone changes how teams allocate their skilled staff.
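The triage idea can be sketched as a simple scoring pass over evidence-completeness signals. The weights, signal names, and thresholds below are invented for illustration; a production system would learn them from appeal outcomes.

```python
def triage_score(evidence: dict) -> str:
    """Bucket an appeal before submission based on evidence completeness.

    The boolean signals and point weights are hypothetical placeholders.
    """
    score = 0
    score += 40 if evidence.get("diagnosis_documented") else 0
    score += 30 if evidence.get("labs_support_necessity") else 0
    score += 20 if evidence.get("prior_treatments_on_file") else 0
    score += 10 if evidence.get("within_filing_window") else 0

    if score >= 70:
        return "auto-assemble"       # likely winnable: build the package
    if score >= 40:
        return "clinical-review"     # route to a nurse for more input
    return "flag-dead-on-arrival"    # don't spend skilled time here
```

Even a crude bucket like this changes staffing: nurses stop triaging by hand and only see the appeals the system routes to them.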

Faster payer-provider exchange compresses review cycles. When both sides exchange structured FHIR data rather than faxed PDFs, the payer gets machine-readable clinical information, and the provider gets structured status updates instead of portal notifications.

Upstream denial prevention is the end game. If you're integrated deeply enough to analyze prior auth submissions before they go out, you can catch documentation gaps that would trigger a denial. No denial, no appeal. This is the highest-value capability, but it requires a foundation that most teams don't lay early enough, which is why sequencing matters.

Architecture decisions that matter more than the API calls

The FHIR queries themselves are straightforward. You'll pull from a small set of resources – Patient, Condition, Observation, MedicationRequest, DocumentReference, Procedure, and Claim/ClaimResponse on the payer side. That's well-documented. What's less documented is where the real architectural decisions live.

Payer rules as configuration, not code

Every payer has different medical necessity criteria, documentation requirements, and submission formats. The temptation in Phase 1 is to hard-code logic for your first payer and move fast.

Resist that temptation.

Build a rules engine from the start – even a simple one. Map denial reason codes to required FHIR resources and payer-specific evidence criteria. Define lookback periods, required observation categories, and whether step therapy evidence or a letter of medical necessity is needed – all as configuration, not conditionals.

When you onboard payer number two, you want to be adding configuration rows instead of rewriting query logic. The alternative of a growing chain of payer-specific if blocks is one of the most common architectural dead ends in appeals products. It works for one payer, then breaks at three. By five, you're looking at a rewrite.
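Here is what "configuration rows instead of query logic" can look like in miniature. The payer names, denial codes, and criteria are illustrative; in practice these rows would live in a database or config store, not in source.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class PayerRule:
    payer: str
    denial_code: str
    required_resources: Tuple[str, ...]  # FHIR resource types to pull
    lookback_days: int                   # how far back to query
    needs_lmn: bool                      # letter of medical necessity required?

# Onboarding a new payer means adding rows here, not new conditionals.
RULES = [
    PayerRule("acme_health", "CO-50", ("Condition", "Observation"), 365, True),
    PayerRule("acme_health", "CO-197", ("DocumentReference",), 180, False),
    PayerRule("beta_plan", "CO-50", ("Condition", "Observation", "Procedure"), 730, True),
]

def rule_for(payer: str, denial_code: str) -> Optional[PayerRule]:
    """Look up the evidence criteria for one payer/denial combination."""
    return next(
        (r for r in RULES if r.payer == payer and r.denial_code == denial_code),
        None,
    )
```

The query builder consumes a `PayerRule` and never branches on payer identity, which is exactly the property that keeps payer five from forcing a rewrite.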

Handling the "structured data isn't actually structured" problem

FHIR gives you structured access. It does not guarantee structured content. This is the gap that catches most teams off guard.

Clinical notes come back as free text inside DocumentReference. Lab results use local codes instead of LOINC. Diagnosis codes are outdated or too general to support medical necessity arguments. You need a data quality layer between the FHIR response and your evidence package.

That layer doesn't need to be sophisticated on day one. Start by checking whether lab results have standard LOINC coding, whether clinical notes are structured or free-text blobs, and whether diagnosis codes are specific enough for the payer's criteria. When quality checks fail, route to a human for review rather than submitting a weak package that will get bounced, which restarts the entire cycle and costs more than the delay.

The goal is to build this quality layer into the architecture from the beginning, even if the checks are basic. Bolting it on later means retrofitting every pipeline that touches clinical data.
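A day-one version of that quality layer can be a handful of checks like the ones below. The LOINC system URI is standard FHIR; the specificity heuristic for ICD-10 codes is a deliberately crude illustrative assumption.

```python
LOINC_SYSTEM = "http://loinc.org"

def observation_has_loinc(obs: dict) -> bool:
    """True if any coding on the Observation uses the LOINC system."""
    codings = obs.get("code", {}).get("coding", [])
    return any(c.get("system") == LOINC_SYSTEM for c in codings)

def icd10_is_specific(code: str) -> bool:
    """Crude heuristic: codes with characters after the decimal point are
    more specific than bare 3-character category codes."""
    return "." in code and len(code.split(".", 1)[1]) >= 1

def quality_gate(observations: list, diagnosis_codes: list) -> list:
    """Return issues to route for human review; empty means proceed."""
    issues = []
    if not all(observation_has_loinc(o) for o in observations):
        issues.append("non-LOINC lab coding")
    if not all(icd10_is_specific(c) for c in diagnosis_codes):
        issues.append("diagnosis code too general")
    return issues
```

The shape matters more than the checks: every pipeline that touches clinical data passes through one gate, so tightening the checks later doesn't mean retrofitting pipelines.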

Auth: the timeline you should actually plan for

Implementing SMART on FHIR for OAuth2 authorization is well-documented. What's less obvious is the non-technical timeline around it.

Vendor registration with the EHR (Epic, Oracle Health, etc.) can take weeks on its own. Then comes the health system's security review – they'll want to know exactly what data you're accessing, how you're storing it, and your BAA status. You may request access to Observation and DocumentReference, but get pushback on the breadth of clinical data you're pulling. Scope negotiation is real.

The technical auth flow is standard OAuth2. Budget your timeline for the administrative process, not the code. A realistic range for a first production integration is 3–6 months, and admin approvals (not engineering) drive most of that.
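For a server-to-server integration, the standard flow is the SMART Backend Services profile: `client_credentials` with a signed JWT client assertion. The sketch below builds the token request form; the JWT is a placeholder string, since in production it would be signed with the key you registered with the EHR vendor.

```python
from typing import List

# Assertion type defined by the SMART Backend Services / RFC 7523 flow.
CLIENT_ASSERTION_TYPE = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"

def token_request_form(signed_jwt: str, scopes: List[str]) -> dict:
    """Build the form body POSTed to the EHR's OAuth2 token endpoint.

    `signed_jwt` is assumed to be a JWT already signed with your
    registered private key; scope strings use SMART system/ scopes.
    """
    return {
        "grant_type": "client_credentials",
        "scope": " ".join(scopes),
        "client_assertion_type": CLIENT_ASSERTION_TYPE,
        "client_assertion": signed_jwt,
    }

form = token_request_form(
    "<signed-jwt-placeholder>",
    ["system/Observation.read", "system/DocumentReference.read"],
)
```

Note that the scopes you request here are exactly what the health system's security review will scrutinize, which is why narrow, well-justified scopes shorten that negotiation.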

On the payer side, the CMS Interoperability and Prior Authorization Final Rule is pushing health plans toward FHIR-based APIs. This regulatory tailwind is real and worth building toward, but implementation timelines vary significantly by payer.

Failure modes that aren't in the docs

Skipping the provider workflow. If your tool requires providers to change how they document, adoption stalls. The best appeals integrations are invisible to the clinician – they pull data that's already captured without adding documentation steps. The moment you ask a physician to fill in a new field "for the appeals tool," you've lost them.

Building denial prevention before proving evidence assembly. This is a sequencing mistake driven by ambition. Denial prevention requires deep integration on both the provider and payer sides, plus enough data to build reliable gap detection models. If you haven't proven you can save a nurse 45 minutes on appeal prep, you're not ready to pitch upstream prevention to enterprise buyers. Walk before you run.

Treating EHR integration as a phase two feature. This is the most expensive mistake on the list. Startups that build the submission layer first and bolt on EHR integration later end up constrained by early architecture decisions — the data model, the payer rules structure, the evidence packaging pipeline. All of these are shaped by whether you assumed manual data entry or programmatic retrieval. Retrofitting is painful, slow, and often means rebuilding the core of the product.

Phased roadmap

Phase 1 (Months 1–3): Read-only evidence assembly. Connect to one EHR environment. Pull clinical data for the most common denial types. Measure time savings for your first pilot customer. Ask them for their top five denial reason codes by volume and scope Phase 1 around those – it keeps the build tight and makes ROI easy to prove.

Phase 2 (Months 3–6): Structured submission and status tracking. Compile and submit appeal packages in structured formats. Integrate status tracking so users aren't manually checking payer portals. Begin supporting a second EHR vendor or additional sites.

Phase 3 (Months 6–12): Bi-directional payer-provider exchange. Payers request and receive structured clinical data through your platform. This is where payer partnerships start to matter, and your platform becomes the exchange layer, not just the submission tool.

Phase 4 (Months 12+): Upstream denial prevention. Pre-submission analysis catches documentation gaps before the prior auth goes out. Highest value, highest complexity. Don't start here, but make sure your Phase 1 architecture doesn't prevent you from getting here.

The real argument

The regulatory environment is moving toward FHIR-based exchange, and the CMS Prior Authorization Final Rule is accelerating it, but regulation isn't the reason to build this way.

The reason is simpler: the clinical data needed for most appeals already exists in the EHR, and the teams that build products around retrieving it programmatically will outperform the ones that build better paperwork.

Every architectural decision – how you model payer rules, how you handle data quality, how you scope your first integration – either moves you toward that or away from it. Start with read-only evidence assembly. Prove the time savings. Then expand upstream. The foundation you lay in Phase 1 determines whether Phase 4 is a natural extension or a rebuild.
