AI doesn't fix messy thinking. It accelerates it.
Give an LLM a vague idea and you get a vague response. Give it a well-structured document with clear terminology, scoped problems, and tracked gaps — and it becomes the most productive collaborator you've ever worked with.
The bottleneck in 2026 isn't writing code. AI writes code fast. The bottleneck is knowing what to build — and being able to articulate it clearly enough that both humans and AI can execute on it.
This post introduces a framework for doing exactly that. It's not about AI tools. It's about how you organize your thoughts so that any complex problem becomes approachable.
## The Framework: 7 Phases

```
Dream → Concrete → Divide → Gaps → Fix → Realize → Share
  ↑                                                    |
  └──────────────────── feedback loop ─────────────────┘
```
Each phase transforms your thinking. Each one produces an artifact. Here's how they work.
## Phase 1: Dream
You have an idea. It's half-formed, ambitious, probably unrealistic in its current shape. That's fine.
Prompt AI with whatever is in your head — even a single sentence — and let it generate a dream document. This is aspirational, not precise. The point is to get the idea out of your head and into a document where you can see it, react to it, and start shaping it.
Example prompt:

```
I want to build an AI-powered medical diagnosis platform
that helps doctors catch rare diseases earlier. A patient
describes symptoms, the system triages, suggests possible
diagnoses, and routes to the right specialist.

Generate a dream document with:
- The vision (what does this look like when it's done?)
- Current landscape (what exists today and what's missing?)
- Key concepts I'll need to define
- Open questions
```
What you get back:
A structured dream doc. Maybe 2-3 pages. It captures your intent, maps the landscape (UpToDate, Isabel Healthcare, Ada Health — what they do well, where they fall short), and surfaces questions you hadn't thought of yet. "What does 'catch earlier' mean? Faster triage? Better differential diagnosis? Flagging patterns across patient history?"
You didn't need to organize your thoughts first — AI did the first pass. Now you have something to react to.
The dream document is not a plan. It's a compass. It tells you the direction, not the steps.
Artifact: dreams/ai-diagnosis-platform.md
## Phase 2: Concrete
Now you take the dream and turn it into a concrete reference document. This is structured, precise, and scoped. Define your terms. Draw your architecture. Specify what's in and what's out.
This is where AI earns its keep. Feed it the dream doc:
```
Here's my dream doc: [paste].

Turn this into a concrete reference document with:
- Defined terms and concepts
- Architecture sections with clear boundaries
- Scope (what this covers, what it doesn't)
- Open questions marked as [GAP] placeholders
```
Example result:
What started as "I want AI to help doctors catch rare diseases" becomes a reference document covering:
- Symptom intake engine: How patients describe symptoms, structured vs. free-text, multi-language support
- Diagnosis model: Differential diagnosis ranking, confidence scores, explainability requirements
- Specialist routing: Matching diagnoses to specialties, urgency tiers, availability awareness
- Patient records: EHR integration, data formats (FHIR/HL7), privacy boundaries
- Compliance layer: HIPAA, FDA Software as Medical Device (SaMD), audit logging
- Clinical UI: Doctor-facing dashboard, patient-facing intake, mobile vs. desktop
- Scope boundaries: "V1 does NOT include: prescription management, insurance billing, lab ordering"
This document is now your single source of truth. Not the code. Not chat history. Not someone's memory. The document.
Artifact: docs/Diagnosis_Platform.md
## Phase 3: Divide
Your concrete doc is getting long. Some sections are 10 pages on their own. That's the signal to divide.
Break the concrete document into focused division documents, each deep enough to stand alone but connected to the whole.
```
Here's my concrete doc: [paste].

Which sections are complex enough to deserve their own
focused document? For each, suggest a title and scope.
```
Example:
A medical diagnosis platform reference doc naturally divides into:
| Division Doc | Covers |
|---|---|
| `Symptom_Engine.md` | Intake flow, symptom normalization, multi-language parsing |
| `Diagnosis_Model.md` | Differential ranking, confidence thresholds, explainability |
| `Specialist_Routing.md` | Specialty matching, urgency tiers, referral workflows |
| `Patient_Records.md` | EHR integration, FHIR/HL7 formats, data retention |
| `Compliance.md` | HIPAA controls, SaMD classification, audit trail |
| `Clinical_UI.md` | Doctor dashboard, patient intake, mobile responsiveness |
These aren't just knowledge divisions — they're your code modules. Each division doc maps directly to a real unit of code: a microservice, a package in a monorepo, a module in a monolith, or a bounded context in a domain-driven design. The doc describes what the module does, the code implements it.
```
Division Doc            →  Code Module
─────────────────────────────────────────────
Symptom_Engine.md       →  packages/symptom-engine/
Diagnosis_Model.md      →  packages/diagnosis-model/
Specialist_Routing.md   →  packages/specialist-routing/
Patient_Records.md      →  packages/patient-records/
Compliance.md           →  packages/compliance/
Clinical_UI.md          →  apps/clinical-dashboard/
```
Once docs map to code, you create an index document — a single file that sits at the root of your project and maps every doc to its code module. Think of it as the table of contents for your entire system. When an AI agent (or a new team member) needs to work on routing, they look at the index, find Specialist_Routing.md → packages/specialist-routing/, and have full context before touching a line of code.
```markdown
## Project Index

| Working In | Primary Doc | Gaps |
|---|---|---|
| `packages/symptom-engine/` | [Symptom_Engine.md](docs/Symptom_Engine.md) | [Symptom_Engine_Gaps.md](docs/Symptom_Engine_Gaps.md) |
| `packages/diagnosis-model/` | [Diagnosis_Model.md](docs/Diagnosis_Model.md) | [Diagnosis_Model_Gaps.md](docs/Diagnosis_Model_Gaps.md) |
| `packages/specialist-routing/` | [Specialist_Routing.md](docs/Specialist_Routing.md) | [Specialist_Routing_Gaps.md](docs/Specialist_Routing_Gaps.md) |
| `packages/patient-records/` | [Patient_Records.md](docs/Patient_Records.md) | [Patient_Records_Gaps.md](docs/Patient_Records_Gaps.md) |
| `packages/compliance/` | [Compliance.md](docs/Compliance.md) | [Compliance_Gaps.md](docs/Compliance_Gaps.md) |
| `apps/clinical-dashboard/` | [Clinical_UI.md](docs/Clinical_UI.md) | [Clinical_UI_Gaps.md](docs/Clinical_UI_Gaps.md) |
```
This index becomes the most important file in the repo. It's the bridge between organized thought and organized code.
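To make that concrete, here is a minimal sketch of how a script or an agent could resolve a code path to its docs from an index table like the one above. The `docs_for` helper, the row-parsing regex, and the two hard-coded rows are illustrative assumptions, not real tooling:

```python
import re

# Hypothetical index parser. INDEX holds two example rows in the exact
# three-column layout shown above; a real script would read the index file.
INDEX = """\
| `packages/symptom-engine/` | [Symptom_Engine.md](docs/Symptom_Engine.md) | [Symptom_Engine_Gaps.md](docs/Symptom_Engine_Gaps.md) |
| `packages/compliance/` | [Compliance.md](docs/Compliance.md) | [Compliance_Gaps.md](docs/Compliance_Gaps.md) |
"""

ROW = re.compile(
    r"\|\s*`([^`]+)`\s*"                # code path in backticks
    r"\|\s*\[[^\]]+\]\(([^)]+)\)\s*"    # primary doc link target
    r"\|\s*\[[^\]]+\]\(([^)]+)\)\s*\|"  # gaps doc link target
)

def docs_for(path: str) -> dict:
    """Return the primary doc and gaps doc for a code path from the index."""
    for line in INDEX.splitlines():
        m = ROW.match(line.strip())
        if m and m.group(1) == path:
            return {"doc": m.group(2), "gaps": m.group(3)}
    raise KeyError(f"{path} is not in the index: add it before touching code")
```

An agent working in `packages/compliance/` calls `docs_for("packages/compliance/")` and gets both documents it must read before writing code.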
Artifact: docs/Symptom_Engine.md, docs/Compliance.md, etc., + root CLAUDE.md index
## Phase 4: Gaps
Here's where the framework really works. Once you have concrete docs and division docs, you can see what's missing.
Gaps aren't failures. They're the framework doing its job. You can't see what's missing until you've written what's there.
Track gaps with gap codes — short identifiers that make them searchable and referenceable:
```markdown
### GAP-DX004: Model suggests diagnosis but doesn't explain reasoning

- **Phase**: Diagnosis output
- **Area**: Diagnosis model — result presentation layer
- **Impact**: Doctor sees "Possible: Lupus (82% confidence)"
  but no explanation of which symptoms contributed or why
  alternatives were ruled out. Doctor can't trust or verify
  the suggestion. Useless in clinical practice.
- **Root Cause**: Model outputs a ranked list with scores
  but no reasoning chain. Explainability was never specified
  in the architecture — it was assumed to be a UI concern.
- **Fix**: Add an explainability module that maps each
  diagnosis to contributing symptoms, relevant patient
  history, and ruled-out alternatives. Display as
  "evidence for / evidence against" in the clinical UI.
- **Priority**: Critical
- **Status**: Open
```
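The fix proposed in GAP-DX004 can be sketched in a few lines. The `explain` helper, the symptom names, and the indicator sets are all invented for illustration; a real explainability module would draw on the model's actual reasoning chain, not a lookup table:

```python
# Sketch of the GAP-DX004 fix: split a diagnosis's known indicators into
# evidence the patient exhibits (for) and indicators they lack (against).
# All symptom names and indicator sets here are invented, not clinical data.

def explain(diagnosis: str, patient_symptoms: set, known_indicators: dict) -> dict:
    """Build an 'evidence for / evidence against' report for one diagnosis."""
    indicators = known_indicators[diagnosis]
    return {
        "diagnosis": diagnosis,
        "evidence_for": sorted(indicators & patient_symptoms),
        "evidence_against": sorted(indicators - patient_symptoms),
    }

indicators = {"Lupus": {"joint pain", "fatigue", "malar rash", "photosensitivity"}}
report = explain("Lupus", {"joint pain", "fatigue", "fever"}, indicators)
# report["evidence_for"]     -> ['fatigue', 'joint pain']
# report["evidence_against"] -> ['malar rash', 'photosensitivity']
```

The doctor can now see not just the ranked suggestion but which observed symptoms support it and which expected indicators are absent.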
The gap code format gives you:
| Field | Purpose |
|---|---|
| Code | Searchable ID (GAP-DX004, GAP-C001) with category prefix |
| Phase | When in the system this surfaces |
| Area | Which module or concept is affected |
| Impact | What users experience when this gap exists |
| Root Cause | Why the gap exists — not symptoms, causes |
| Fix | Proposed solution (can be a phased plan) |
| Priority | Critical / High / Medium / Low |
| Status | Open / In Progress / Fixed (with date) |
Category prefixes keep gaps organized: GAP-DX for diagnosis, GAP-SE for symptom engine, GAP-C for compliance, GAP-UI for clinical interface. You'll develop your own categories as your project grows.
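As a rough illustration, a gap record with these fields maps naturally onto a small data structure. The `Gap` class, the example entries, and the category-prefix parsing below are hypothetical, a sketch of how you might query gaps programmatically:

```python
from dataclasses import dataclass

# Minimal gap record matching the fields in the table above. The two
# example gaps and the prefix scheme (DX, C, ...) are illustrative.

@dataclass
class Gap:
    code: str        # e.g. "GAP-DX004"
    title: str
    priority: str    # Critical / High / Medium / Low
    status: str      # Open / In Progress / Fixed

    @property
    def category(self) -> str:
        """Category prefix between 'GAP-' and the number, e.g. 'DX'."""
        return self.code.split("-")[1].rstrip("0123456789")

gaps = [
    Gap("GAP-DX004", "No reasoning chain in diagnosis output", "Critical", "Open"),
    Gap("GAP-C001", "Audit log lacks access timestamps", "High", "Open"),
]

# The payoff of structured gap codes: queries like "all open critical gaps".
open_critical = [g.code for g in gaps if g.status == "Open" and g.priority == "Critical"]
```

Even if you never script against them, the discipline of the format makes gaps greppable: `grep -r "GAP-DX" docs/` finds every diagnosis-model gap.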
Artifact: docs/Diagnosis_Model_Gaps.md
## Phase 5: Fix → Update
Fix the gaps. But not all at once.
Always ask for a phased implementation plan. Gaps have dependencies. Some are prerequisites for others. Trying to fix everything simultaneously creates chaos.
```
Here are my open gaps: [paste gap doc].
Here's the main concrete doc for context: [paste relevant section].

Create a phased implementation plan:
- Phase 1: Critical gaps that unblock other work
- Phase 2: High-priority gaps
- Phase 3: Medium-priority improvements

For each phase, estimate scope and list dependencies.
```
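Phasing by dependency is essentially a topological sort. Here is a sketch using Python's standard-library `graphlib` (3.9+); the gap codes and dependency edges are invented for illustration:

```python
from graphlib import TopologicalSorter

# Each gap maps to the set of gaps it depends on. Invented example:
# the UI work can't start until explainability output exists.
deps = {
    "GAP-DX004": set(),           # explainability: no prerequisites
    "GAP-UI002": {"GAP-DX004"},   # UI needs the explainability output first
    "GAP-C001":  set(),           # audit logging: independent
    "GAP-UI003": {"GAP-UI002"},   # builds on the new UI panel
}

def phases(deps: dict) -> list:
    """Group gaps into phases: each phase depends only on earlier phases."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    result = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # everything unblocked right now
        result.append(ready)
        ts.done(*ready)
    return result
```

For the invented dependencies above, `phases(deps)` yields three phases: the two independent gaps first, then `GAP-UI002`, then `GAP-UI003`, which is exactly the shape of plan you want the AI to produce.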
When a gap is fixed:

- **Update the gap doc** — mark status as `Fixed (2026-02-22)` and note what was done
- **Update the main concrete doc** — the reference doc must reflect reality
- **Archive the old version** — move superseded docs to `archive/`. Don't delete them. They're the history of your thinking.
The concrete document evolves. Version 1 had 20 gaps. Version 2 closes 15 and discovers 5 new ones. Version 3 closes those and the doc stabilizes. This is convergence — each pass gets you closer to a complete, honest description of your system.
Artifact: Updated docs/Diagnosis_Platform.md, archived archive/gaps/Diagnosis_Model_Gaps_v1.md
## Phase 6: Realize
Build real things from your docs. Projects are proof that the framework works.
Each project tests your docs against reality. When something breaks, ask two questions: where's the bug, and which doc is wrong? Failures trace back to gaps in the docs, not just bugs in the code.
Example:
From one set of diagnosis platform docs, you build pilot deployments: one for an urgent care clinic (fast triage, high volume, common conditions), one for a rural hospital (limited specialists on-site, telemedicine routing), one for a pediatric practice (age-adjusted symptom interpretation, different reference ranges).
Each deployment tests the docs against a different reality. When the rural hospital pilot revealed that the specialist routing assumed specialists were in-house — but rural hospitals route to remote telemedicine providers with different availability windows — you didn't just patch the code. You:
- Created `GAP-SR002` in the routing gaps doc
- Documented the root cause (the routing model assumed same-building availability, not async telemedicine)
- Fixed it with a phased plan (Phase 1: add a remote-provider availability model; Phase 2: async consultation workflow)
- Updated `Specialist_Routing.md`
Now the urgent care and pediatric deployments automatically handle telemedicine routing. Every project makes the docs better. Every doc improvement makes future projects more reliable.
### Cross-module composition
Here's where the divide phase pays compound interest. Because each module has its own self-contained doc, you can mix and match modules across entirely different projects. Not every project needs every module. New projects are assembled by selecting which division docs to include.
| Project | Modules Used |
|---|---|
| Urgent care triage kiosk | `Symptom_Engine` + `Clinical_UI` |
| Rural telemedicine platform | `Symptom_Engine` + `Diagnosis_Model` + `Specialist_Routing` |
| Clinical research tool | `Diagnosis_Model` + `Patient_Records` + `Compliance` |
| Patient self-check app | `Symptom_Engine` + `Clinical_UI` (patient-facing subset) |
| Hospital compliance auditor | `Compliance` + `Patient_Records` |
The urgent care kiosk doesn't need the full diagnosis model — it just needs fast symptom intake and a clean UI to hand off to the doctor. The research tool doesn't need specialist routing — it needs deep diagnosis data with compliant record access. Each project pulls the modules it needs, and the docs tell you exactly what you're getting.
This is the same principle behind monorepo packages, microservice composition, or even Unix pipes — small, well-documented units that combine into larger systems. The difference is that here, the composition starts at the documentation level. You decide what a project needs by reading docs, not by reading code.
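A minimal sketch of doc-level composition: pick modules from the catalog, validate that each one has a division doc, and emit the reading list for the project. The `CATALOG` set and the `assemble` helper are illustrative assumptions, not real tooling:

```python
# Module names follow the division-doc tables above; the validation
# logic is a sketch of how composition could be checked mechanically.
CATALOG = {"Symptom_Engine", "Diagnosis_Model", "Specialist_Routing",
           "Patient_Records", "Compliance", "Clinical_UI"}

def assemble(project: str, modules: set) -> dict:
    """Validate a module selection and return the docs a builder must read."""
    unknown = modules - CATALOG
    if unknown:
        raise ValueError(f"{project}: no division doc for {sorted(unknown)}")
    return {"project": project,
            "docs": sorted(f"docs/{m}.md" for m in modules)}

kiosk = assemble("urgent-care-kiosk", {"Symptom_Engine", "Clinical_UI"})
# kiosk["docs"] -> ['docs/Clinical_UI.md', 'docs/Symptom_Engine.md']
```

The failure mode is the interesting part: if a project asks for a module with no division doc, that's not a config error, it's a signal that a doc needs to be written first.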
Artifact: projects/urgent-care-pilot/, projects/rural-hospital-pilot/, etc.
## Phase 7: Share
Write about your process. Teaching forces you to understand what you actually did.
Blog posts, READMEs, tutorials, internal docs — the medium doesn't matter. What matters is that you explain your framework to someone who doesn't have your context. This surfaces assumptions you didn't know you were making.
Sharing also closes the loop. A blog post about your diagnosis explainability approach might inspire a reader to suggest symptom-graph visualization instead of flat evidence lists. That becomes a new dream. The cycle restarts.
Artifact: Blog posts, tutorials, public documentation
## The Takeaway
The framework is simple:
Dream it → Write it down → Break it apart → Find what's missing → Fix it → Build it → Share it.
In the AI age, organized thought is the highest-leverage skill you can develop. Not prompt engineering. Not knowing which model to use. Clarity of thought — the ability to take a messy idea and refine it into something precise enough that both humans and AI can execute on it.
A note on maintenance: docs drift. Code changes, APIs shift, dependencies update. Treat drift as another gap — audit periodically, update docs when code changes, and archive old versions instead of deleting them. The history of your thinking is as valuable as the thinking itself.
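One way to catch drift mechanically is a periodic audit that cross-checks the index against the repo. This sketch assumes the example layout from earlier; the `INDEX` mapping and the `audit` helper are hypothetical, to be adapted to your actual file layout:

```python
from pathlib import Path

# Drift audit sketch: every entry in the index must have both its doc and
# its code directory on disk. INDEX mirrors two rows of the earlier example.
INDEX = {
    "packages/symptom-engine": "docs/Symptom_Engine.md",
    "packages/compliance": "docs/Compliance.md",
}

def audit(root: Path) -> list:
    """Return drift findings; an empty list means docs and code still agree."""
    findings = []
    for code_dir, doc in INDEX.items():
        if not (root / code_dir).is_dir():
            findings.append(f"doc {doc} has no code module {code_dir}")
        if not (root / doc).is_file():
            findings.append(f"module {code_dir} has no doc {doc}")
    return findings
```

Run it in CI or on a schedule: each finding is a candidate gap entry, which folds doc maintenance back into the same gap workflow as everything else.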