If you've ever asked an LLM to "find the LOINC code for procalcitonin" or "list the active ingredients in Janumet," you've probably watched it confidently invent a code that doesn't exist. Medical terminologies are exactly the kind of structured, frequently-updated reference data that language models are bad at memorizing and good at looking up — if you give them the right tool.
medical-terminologies-mcp is a Model Context Protocol server that gives any MCP-compatible client (Claude Desktop, Claude Code, Continue, and others) unified access to seven medical terminology systems:
- ICD-11 (WHO International Classification of Diseases, 11th Revision)
- LOINC (Logical Observation Identifiers Names and Codes)
- RxNorm (NIH normalized clinical drug names)
- MeSH (NLM Medical Subject Headings)
- ATC (WHO Anatomical Therapeutic Chemical, served via NLM RxClass)
- CID-10 (Brazilian Portuguese translation of ICD-10, DataSUS V2008 — bundled)
- SNOMED CT (Systematized Nomenclature of Medicine, optional, license required)
Twenty-six tools work out of the box with no authentication, covering LOINC, RxNorm, MeSH, ATC, and CID-10, plus a bundled authoritative ICD-10 → ICD-11 mapping, a cross-terminology batch validator, and versioning + cross-revision diff tools. ICD-11 live lookup needs free WHO API credentials (a five-minute signup), which brings the default count to 31. SNOMED is gated behind an explicit feature flag and requires an IHTSDO license plus a self-hosted Snowstorm instance — more on why below.
The server also exposes 3 MCP Prompts that orchestrate tool calls into named user actions (find-medical-code, drug-info, cid10-portuguese-lookup) and 3 Resources for in-process reference content (server metadata, CID-10 chapter listing, per-terminology license disclaimers) — MCP clients render Prompts as one-click actions and read Resources by URI.
This post walks through three concrete clinical and research workflows where the server earns its place in the toolbelt.
Setup in 60 seconds
{
"mcpServers": {
"medical-terminologies": {
"command": "npx",
"args": ["-y", "medical-terminologies-mcp"],
"env": {
"WHO_CLIENT_ID": "your-who-id",
"WHO_CLIENT_SECRET": "your-who-secret"
}
}
}
}
Drop that into your Claude Desktop config (%APPDATA%\Claude\claude_desktop_config.json on Windows, ~/Library/Application Support/Claude/claude_desktop_config.json on macOS), restart Claude, and 31 tools appear. WHO credentials are optional — without them the 5 ICD-11 live-lookup tools throw a clear configuration error and the other 26 work fine (including the authoritative ICD-10 → ICD-11 mapping, which uses bundled WHO data and needs no auth).
Prefer a hosted endpoint? The server runs on Cloudflare Workers at the edge — point any Streamable-HTTP MCP client at https://medical-terminologies-mcp.sidneybissoli.workers.dev/mcp and skip the local install. Also listed on Smithery, Glama, and mcpservers.org.
Use case 1: sepsis triage — finding the right LOINC for procalcitonin
A clinical scenario. An ED resident is documenting a sepsis workup and needs the LOINC code for serum procalcitonin to attach to the lab order in the EMR. They could open loinc.org, run a search, scroll through results, and copy the code. Or they could ask their LLM-powered scribe.
You: What's the LOINC code for procalcitonin in serum?
Claude (calling loinc_search):
## LOINC Search Results for "procalcitonin"
Found 23 total results (showing 10):
1. **33959-8** - Procalcitonin [Mass/volume] in Serum or Plasma
Component: Procalcitonin | Property: MCnc | Timing: Pt | System: Ser/Plas
Class: CHEM | Status: ACTIVE
2. **75241-0** - Procalcitonin [Mass/volume] in Body fluid
...
The LLM picks 33959-8 and proposes it. The clinician confirms or asks for details:
You: Give me the full LOINC details for 33959-8.
Claude (calling loinc_details):
# 33959-8 - Procalcitonin [Mass/volume] in Serum or Plasma
| Attribute | Value |
|-----------|-------|
| Component | Procalcitonin |
| Property | MCnc |
| Timing | Pt |
| System | Ser/Plas |
| Scale Type | Qn |
| Method | (none) |
| Class | CHEM |
| Status | ACTIVE |
What this replaces in practice: a 30-second context switch from EMR to a browser tab, every time a clinician needs to pin down a specific code. Multiplied across the day, across a team, that's real time.
The same workflow applies to any biomarker, vital sign, or observation: lactate (2524-7), troponin I high-sensitivity (67151-1), SpO₂ (59408-5). The tool is loinc_search for the discovery step, loinc_details for the verification step. Both are deterministic API calls against NLM Clinical Tables — the LLM doesn't memorize codes, it looks them up.
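There's also a cheap offline sanity check available before any lookup: LOINC documents the digit after the hyphen as a Mod-10 (Luhn) check digit computed over the leading digits. A minimal sketch (the function name is mine, not a tool in the server):

```typescript
// Sketch: offline sanity-check of a LOINC code's shape and check digit.
// LOINC's trailing digit is a Mod-10 (Luhn) check digit over the leading
// digits, so malformed or mistyped codes can be caught before an API call.
function isValidLoincFormat(code: string): boolean {
  const m = /^(\d{1,7})-(\d)$/.exec(code);
  if (!m) return false;
  const [, payload, check] = m;
  let sum = 0;
  let double = true; // double every second digit, starting from the rightmost payload digit
  for (let i = payload.length - 1; i >= 0; i--) {
    let d = Number(payload[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return (10 - (sum % 10)) % 10 === Number(check);
}

console.log(isValidLoincFormat("33959-8")); // procalcitonin, Ser/Plas → true
console.log(isValidLoincFormat("2524-7"));  // lactate → true
console.log(isValidLoincFormat("33959-9")); // wrong check digit → false
```

This catches transcription errors, not semantic ones: a structurally valid code can still be the wrong concept, which is what loinc_details is for.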
Use case 2: medication reconciliation — breaking down a combination product
A scenario familiar to any inpatient pharmacist or hospitalist: a patient is admitted on Janumet (a brand combo of sitagliptin and metformin), and the team needs to reconcile this against the formulary, check for class-level interactions, and document the active ingredients separately.
You: I need to break down Janumet for the admission med rec.
What are the active ingredients and their drug classes?
Claude (calling rxnorm_search, then rxnorm_ingredients, then atc_classify):
Step 1 — search for Janumet:
**860975** - Janumet (sitagliptin 50 MG / metformin hydrochloride 500 MG Oral Tablet)
Type: SBD | Synonym: sitagliptin/metformin
Step 2 — get the ingredients:
| RxCUI | Name | Type |
|-------|------|------|
| 593411 | sitagliptin | Single Ingredient |
| 6809 | metformin | Single Ingredient |
Step 3 — get ATC classes for each ingredient:
For sitagliptin (593411): A10BH (Dipeptidyl-Peptidase IV Inhibitors)
For metformin (6809): A10BA (Biguanides)
A10BD (Combinations of oral blood glucose lowering drugs)
Three tool calls, structured data, no hallucination risk. The LLM can compose this into a med rec note, flag that both ingredients sit in the same A10B (oral glucose-lowering) ATC bucket, and surface the ATC codes for any pharmacy system that uses ATC.
What this enables that bare LLMs don't: the chain brand → ingredients → ATC classes is a graph traversal across a real database. The model isn't recalling that sitagliptin is a DPP-4 inhibitor; it's looking it up against the same source the FDA uses.
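The ATC hierarchy itself is mechanical: levels 1 through 5 correspond to prefix lengths of 1, 3, 4, 5, and 7 characters. A small helper (mine, not a server tool) can flag when two ingredients share a therapeutic bucket, as sitagliptin and metformin do at A10B:

```typescript
// Sketch: compare two ATC codes at a given level of the hierarchy.
// ATC is positional: level 1 → 1 char, level 2 → 3, level 3 → 4,
// level 4 → 5, level 5 → 7 characters.
const ATC_PREFIX_LENGTH: Record<number, number> = { 1: 1, 2: 3, 3: 4, 4: 5, 5: 7 };

function atcPrefix(code: string, level: number): string {
  const len = ATC_PREFIX_LENGTH[level];
  if (!len || code.length < len) {
    throw new Error(`cannot take level ${level} of ATC code "${code}"`);
  }
  return code.slice(0, len).toUpperCase();
}

function sameAtcClass(a: string, b: string, level: number): boolean {
  return atcPrefix(a, level) === atcPrefix(b, level);
}

// Sitagliptin (A10BH) and metformin (A10BA) share the level-3 subgroup
// A10B (blood-glucose-lowering drugs)...
console.log(sameAtcClass("A10BH", "A10BA", 3)); // → true
// ...but split at level 4: DPP-4 inhibitors vs biguanides.
console.log(sameAtcClass("A10BH", "A10BA", 4)); // → false
```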
Use case 3: evidence synthesis — building a precise PubMed search with MeSH
A scenario for a researcher running a systematic review on the impact of primary health care interventions on avoidable hospitalizations. The quality of a literature search hinges on hitting the right MeSH descriptors and qualifiers — not on free-text keyword guesswork.
You: I'm building a search strategy for a systematic review on
primary health care and avoidable hospitalizations. Help me find
the right MeSH descriptors and applicable qualifiers.
Claude (calling mesh_search):
| MeSH ID | Label |
|---------|-------|
| D011320 | Primary Health Care |
| D006297 | Health Services Accessibility |
| D004472 | Hospitalization |
| D006308 | Health Services Misuse |
You: Get me the tree position and allowed qualifiers for Primary Health Care.
Claude (calling mesh_tree, then mesh_qualifiers):
# Tree Numbers for D011320
### Health Care
- `N04.590.233.727`
- `N02.421.143.827`
# Allowed Qualifiers for D011320
| Qualifier ID | Label |
|--------------|-------|
| Q000208 | ethics |
| Q000379 | methods |
| Q000523 | organization & administration |
| Q000592 | standards |
| Q000656 | trends |
| Q000706 | statistics & numerical data |
| Q000941 | utilization |
*Use qualifiers with descriptors like: "Primary Health Care/utilization"*
That last detail — the seven allowed qualifiers, with /utilization and /statistics & numerical data highlighted — is exactly what a precise PubMed search needs. The model now has enough to draft ("Primary Health Care/utilization"[MeSH] OR "Primary Health Care/statistics & numerical data"[MeSH]) AND "Hospitalization/prevention and control"[MeSH] and explain why each leg matters.
The use case generalizes: any research informatics workflow that needs controlled vocabulary mapping (PubMed, Cochrane, OVID) benefits from mesh_search + mesh_qualifiers + mesh_tree. For systematic reviews specifically, the qualifier list is the part that's hardest to remember and easiest to get wrong.
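Assembling the final clause is plain string composition. A sketch, with a hypothetical buildMeshClause helper that follows the "Descriptor/qualifier"[MeSH] syntax drafted above:

```typescript
// Sketch: compose MeSH descriptor/qualifier pairs into a PubMed clause.
// buildMeshClause is a hypothetical helper, not a tool in the server; it
// just follows the "Descriptor/qualifier"[MeSH] pattern.
function buildMeshClause(descriptor: string, qualifiers: string[]): string {
  if (qualifiers.length === 0) return `"${descriptor}"[MeSH]`;
  const legs = qualifiers.map((q) => `"${descriptor}/${q}"[MeSH]`);
  return legs.length > 1 ? `(${legs.join(" OR ")})` : legs[0];
}

const strategy = [
  buildMeshClause("Primary Health Care", ["utilization", "statistics & numerical data"]),
  buildMeshClause("Hospitalization", ["prevention and control"]),
].join(" AND ");

console.log(strategy);
// ("Primary Health Care/utilization"[MeSH] OR "Primary Health Care/statistics & numerical data"[MeSH]) AND "Hospitalization/prevention and control"[MeSH]
```

The value of the tool chain is upstream of this step: mesh_qualifiers guarantees that each qualifier you feed in is actually allowed for that descriptor.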
What's new in 1.4.0 (data-integrity release)
Three additions that change the depth of what the server can answer:
- map_icd10_to_icd11 now returns an authoritative WHO mapping, not a text-search heuristic. The 2025-01 release of the WHO transition tables is bundled (5.4 MB raw / 0.95 MB gzipped), covering 11,243 ICD-10 categories. 1,461 of those have multiple WHO-documented ICD-11 candidates — the tool surfaces the primary mapping plus all alternatives, with Foundation/Linearization URIs ready to navigate into ICD-11.
- validate_codes is a new batch validator. Pass up to 50 { code, terminology } pairs and get back per-item { valid, active, title, replaced_by, source, error }. Designed for retrospective database analysis: flag codes that no longer exist, surface ICD-10 → ICD-11 replacements at scale, grade activity status where the source terminology exposes it (SNOMED's active flag, LOINC's STATUS field).
- terminology_versions + terminology_diff for pipeline maintainers. The first lists current version, release date, publisher, and update cadence across all 8 terminologies. The second is guidance-only for terminologies without bundled history — except for ICD-10, where the bundled transition tables let it surface a real cross-revision summary (1:1 mappings vs splits vs average alternatives when split).
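On the client side, the validate_codes contract described above might be typed like this. The field names come from the release notes; the exact types and the terminology identifier strings are my assumptions, and the chunking helper is illustrative:

```typescript
// Sketch: client-side types for a validate_codes batch. Field names from
// the release notes; types and terminology identifiers are assumptions.
type Terminology =
  | "icd10" | "icd11" | "loinc" | "rxnorm" | "mesh" | "atc" | "cid10" | "snomed";

interface CodeToValidate {
  code: string;
  terminology: Terminology;
}

interface ValidationResult {
  valid: boolean;
  active?: boolean;     // only where the source exposes status (SNOMED active, LOINC STATUS)
  title?: string;
  replaced_by?: string; // e.g. the ICD-11 successor of a retired ICD-10 code
  source?: string;
  error?: string;
}

// The validator caps a request at 50 pairs, so a large retrospective
// dataset gets split before submission:
function chunkBatch(items: CodeToValidate[], max = 50): CodeToValidate[][] {
  const out: CodeToValidate[][] = [];
  for (let i = 0; i < items.length; i += max) out.push(items.slice(i, i + max));
  return out;
}

const dataset: CodeToValidate[] = Array.from({ length: 120 }, (_, i) => ({
  code: `A${String(i).padStart(2, "0")}`,
  terminology: "icd10",
}));
console.log(chunkBatch(dataset).map((c) => c.length)); // → [ 50, 50, 20 ]
```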
Together with the per-tool language parameter added to SNOMED + MeSH, that's four sub-tasks shipped as one data-integrity release. The earlier 1.3.0 added MCP Prompts (orchestration templates) and Resources (in-process reference content) — both visible in clients like LobeChat and Claude Desktop as one-click actions.
What the server doesn't claim to do
A few things worth being explicit about, because the README is honest about them and the LLM should be too:
- map_loinc_to_snomed returns guidance, not a mapping. Direct LOINC ↔ SNOMED CT mappings live in the UMLS Metathesaurus (license required) or the LOINC SNOMED CT Expression Association files (LOINC license required).
- map_snomed_to_icd10 also returns guidance only today. The authoritative source is SNOMED International's ICD-10 Complex Map refset (447562003), which needs a Snowstorm instance with the refset loaded — planned as Phase 13.7 once that's tractable.
- SNOMED tools are off by default because the historical public IHTSDO Snowstorm endpoint was retired (HTTP 410 Gone). Operators with an IHTSDO license and a self-hosted Snowstorm instance flip them on with ENABLE_SNOMED_TOOLS=true SNOMED_BASE_URL=....
- None of this is a substitute for clinical judgment. It's a lookup layer for already-known codes, not a diagnostic tool.
Under the hood, briefly
For developers curious about the engineering: TypeScript on Node 20+, bundled with esbuild, built around a token-bucket rate limiter (5 req/s for WHO, 10 req/s for NLM, 20 req/s for RxNorm) and exponential-backoff retry with ±25% jitter. WHO OAuth tokens are cached using the actual expires_in from the API response, not a hardcoded TTL. Every default tool declares outputSchema and returns structuredContent alongside markdown — so MCP clients that consume structured data get typed objects, not parsed prose. The 313-test Vitest suite (unit + contract via nock + integration against live APIs gated by env flag) gates CI on PR. A daily integration cron catches upstream API drift — work that already caught three silent production regressions where the API had drifted and the client gracefully returned empty data.
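The token-bucket and jittered-backoff ideas are straightforward to sketch. The 5/10/20 req/s figures come from the description above; the implementation below is illustrative, not the server's actual code:

```typescript
// Sketch of the token-bucket idea: capacity refills continuously at
// ratePerSec, each request spends one token, callers wait when empty.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private ratePerSec: number, now: number = Date.now()) {
    this.tokens = capacity;
    this.last = now;
  }

  private refill(now: number): void {
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.ratePerSec
    );
    this.last = now;
  }

  /** Milliseconds the caller should wait before sending; 0 means go now. */
  take(now: number = Date.now()): number {
    this.refill(now);
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return 0;
    }
    return ((1 - this.tokens) / this.ratePerSec) * 1000;
  }
}

// Exponential backoff with ±25% jitter, as described for retries
// (the 500 ms base is an assumption):
function backoffMs(attempt: number, baseMs = 500): number {
  const exp = baseMs * 2 ** attempt;
  const jitter = (Math.random() * 0.5 - 0.25) * exp; // ±25%
  return exp + jitter;
}

const whoBucket = new TokenBucket(5, 5); // WHO: 5 req/s
```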
Two transports: stdio (default — Claude Desktop, IDE clients) and Streamable HTTP. The HTTP path runs on Cloudflare Workers at the edge by default — single web-standard fetch handler, ~0.95 MB gzipped including the bundled CID-10 (DataSUS V2008) and ICD-10 → ICD-11 (WHO 2025-01) datasets, well within Cloudflare's 3 MB compressed script limit. Same source tree; per-isolate cache + rate limiter (KV + Durable Object swap available when traffic crosses the threshold).
It's MIT licensed. The medical terminology content has its own licenses, all linked in the README and surfaced as the info://licenses Resource.
Try it
- npm: https://www.npmjs.com/package/medical-terminologies-mcp
- GitHub: https://github.com/SidneyBissoli/medical-terminologies-mcp
- Hosted endpoint: https://medical-terminologies-mcp.sidneybissoli.workers.dev/mcp (Streamable HTTP, Cloudflare Workers)
- Smithery: https://smithery.ai/server/@SidneyBissoli/medical-terminologies-mcp
- Glama: https://glama.ai/mcp/servers/SidneyBissoli/medical-terminologies-mcp
- mcpservers.org: https://mcpservers.org/servers/sidneybissoli/medical-terminologies-mcp
- MCP Registry: io.github.SidneyBissoli/medical-terminologies-mcp
WHO API credentials (free): https://icd.who.int/icdapi.
Issues, PRs, and use-case reports welcome — especially from clinical informatics teams, research informatics groups, and public-health data analysts using LLMs in real workflows. There's a real gap between "LLM as scribe" and "LLM with reliable terminology access," and closing it is what this server is for.