You're building a FHIR platform that serves multiple customers. Each one needs isolated data through the same API. There's no best practice for this — Darren Devitt's FHIR Architecture Decisions calls multi-tenancy a "deal breaker" requirement that can eliminate entire classes of FHIR servers.
Three models exist. We've built two of them in production with HAPI FHIR.
## Model 1: Separate Database Per Tenant
Each tenant gets their own database schema — or for true physical isolation, their own database host.
What you get: Strongest isolation. Independent scaling. DROP SCHEMA to delete a tenant. Compliance teams love it (especially separate hosts).
What breaks: You're managing N connection pools, N migrations, N backups. HAPI's HikariCP defaults to 10 connections — we exhausted the pool with 8 concurrent workers and left HAPI in a zombie state. Cold starts per tenant add 30-45 seconds when a database hasn't been accessed recently.
When to use it: Regulations mandate physical separation. Tenants are large enough to justify the overhead.
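The pool-exhaustion failure above is avoidable by bounding concurrency below the pool size on the client side. A minimal sketch of that idea — the pool size and worker counts here are illustrative, not HAPI settings, and the job body is a stub:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 10          # HikariCP's default maximumPoolSize
MAX_IN_FLIGHT = POOL_SIZE - 2  # leave headroom for HAPI's background jobs

gate = threading.Semaphore(MAX_IN_FLIGHT)

def run_tenant_job(tenant_id: str) -> str:
    # Acquire the gate before touching the tenant database, so no matter
    # how many workers are scheduled, at most MAX_IN_FLIGHT connections
    # are ever in use at once.
    with gate:
        # ... open connection, run migration/backup for tenant_id ...
        return f"done: {tenant_id}"

with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(run_tenant_job, [f"tenant_{i}" for i in range(20)]))
```

The executor can be as wide as you like; the semaphore is what keeps the database honest.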
## Model 2: Shared Database, Partitioned Tables
One database, but every resource row gets a PARTITION_ID column. HAPI supports this natively:
```yaml
partitioning:
  database_partition_mode_enabled: true
  request_tenant_partitioning_mode: true
  allow_references_across_partitions: true
```
Creating a partition:

```http
POST /fhir/$partition-management-create-partition

{
  "resourceType": "Parameters",
  "parameter": [
    {"name": "id", "valueInteger": 101},
    {"name": "name", "valueCode": "tenant_acme"}
  ]
}
```
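In practice the gateway wraps these calls. A sketch of the payload builder and tenant routing, assuming HAPI's URL-based tenant identification (the base URL and tenant names are placeholders; HAPI also supports header-based and custom interceptor strategies):

```python
BASE_URL = "https://fhir.example.com/fhir"  # placeholder

def create_partition_payload(partition_id: int, name: str) -> dict:
    # Body for POST {BASE_URL}/$partition-management-create-partition
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "id", "valueInteger": partition_id},
            {"name": "name", "valueCode": name},
        ],
    }

def tenant_url(tenant: str, path: str) -> str:
    # With URL-based tenant selection, the partition name becomes the
    # first path segment after the FHIR base.
    return f"{BASE_URL}/{tenant}/{path}"
```

Once the partition exists, `tenant_url("tenant_acme", "Patient")` is where that tenant's resources live.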
What you get: Logical isolation at the storage layer. One database to manage. Cross-partition references for shared data (practitioner directories, formularies).
What breaks: HAPI bugs surface here. #1099: `patient=UUID` triggers "Non-unique ID" errors — fix by rewriting to `patient:Patient._id=UUID`. #6665: bulk export background jobs lose partition context entirely — we rebuilt the Bulk Data Access IG in Python.
No built-in access control. HAPI doesn't know who's calling — your API gateway handles auth and routing.
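The #1099 workaround is mechanical enough to live in the gateway. A sketch of the query-string rewrite (the function name is ours):

```python
def rewrite_patient_param(query: str) -> str:
    # Works around HAPI issue #1099 in partitioned mode: a bare
    # patient=UUID search can raise "Non-unique ID" errors, while
    # patient:Patient._id=UUID resolves unambiguously.
    parts = []
    for pair in query.split("&"):
        key, sep, value = pair.partition("=")
        if key == "patient" and sep:
            key = "patient:Patient._id"
        parts.append(f"{key}{sep}{value}")
    return "&".join(parts)
```

Applied to every inbound search, it's invisible to clients: `patient=abc-123&_count=10` goes out as `patient:Patient._id=abc-123&_count=10`.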
When to use it: Many tenants, moderate data volumes. You want logical isolation without N databases.
## Model 3: Shared Database, Tag-Based Filtering
One database, no partitions. Every resource gets a `meta.tag` for its tenant. Your gateway appends `_tag=tenant_acme` to every query:

```python
extra_params["_tag"] = f"https://yourapp.com/tenant|{tenant_id}"
```
What you get: Simplest to implement. Zero overhead per tenant. Works with any FHIR server, not just HAPI.
What breaks: No real isolation — the boundary is your gateway's tag-filtering logic. A missed `_tag` parameter on one endpoint leaks data across tenants.

Worse: HAPI ignores `_tag` on instance reads. `GET /fhir/Patient/abc-123` returns the resource regardless of tags. You need post-fetch verification:

```python
resource = resp.json()
resource_tags = {
    f'{t["system"]}|{t["code"]}'
    for t in resource.get("meta", {}).get("tag", [])
}
if not resource_tags.intersection(allowed_tags):
    return 404  # present cross-tenant reads as not-found
```
When to use it: Prototyping. Internal tools. Small deployments. A starting point before migrating to partitions.
## How to Choose
| Concern | Separate DB | Partitions | Tags |
|---|---|---|---|
| Isolation | Strongest | Logical (storage) | Logical (query) |
| Compliance | Happy | Usually OK | Nervous |
| Ops cost per tenant | High | Low | Zero |
| Performance isolation | Full | Shared DB | Shared everything |
| HAPI bugs | Fewer | More | Fewer |
| Works with any server | Yes | HAPI-specific | Yes |
| Instance read safety | Inherent | Inherent | Post-fetch check |
The direction of migration matters: you can move from tags → partitions → separate databases, but not easily the other way. Start with the simplest model that meets your compliance requirements.
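A tags → partitions migration reduces to re-reading each tenant's resources by `_tag` and re-creating them under the partition. The tag-stripping step can be sketched like this (the tag system URL matches the example above; the helper name is ours):

```python
TAG_SYSTEM = "https://yourapp.com/tenant"  # same system as the _tag example

def strip_tenant_tag(resource: dict, tenant_id: str) -> dict:
    # Once the partition carries tenancy, the meta.tag marker is
    # redundant; drop it and keep every other tag untouched.
    meta = resource.get("meta", {})
    tags = [
        t for t in meta.get("tag", [])
        if not (t.get("system") == TAG_SYSTEM and t.get("code") == tenant_id)
    ]
    migrated = dict(resource)          # shallow copy; source left intact
    migrated["meta"] = {**meta, "tag": tags}
    return migrated
```

Each stripped resource is then POSTed to the tenant's partitioned endpoint; going the other direction would mean re-tagging and merging partitions, which is why the migration is effectively one-way.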
Whichever model you choose, the FHIR server handles storage. Auth, routing, and access control are your gateway's job. We wrote about what that gateway looks like.
mock.health handles multi-tenant isolation so you can focus on your integration. Isolated FHIR data with SMART auth — no partition management required. Try it free →
