## What I Built
MediGraph is a medical question-answering system that compares
three AI pipelines (LLM Only, Basic RAG, and GraphRAG) and
demonstrates that knowledge-graph retrieval dramatically reduces
token consumption.
## The Problem
LLMs are expensive: every query consumes tokens, and companies
pay more each quarter just to keep AI running in production.
## My Solution: GraphRAG with TigerGraph
Instead of dumping entire documents into the LLM prompt,
GraphRAG traverses a knowledge graph and returns only the
exact facts needed.
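The contrast can be sketched in a few lines. The document text and fact string below are illustrative stand-ins (not from the real dataset), and a whitespace word count stands in for a real tokenizer:

```python
# What LLM Only / Basic RAG would send: a whole document chunk.
DOCUMENT = (
    "Influenza, commonly known as the flu, is an infectious disease "
    "caused by influenza viruses. Symptoms range from mild to severe "
    "and include fever, cough, and fatigue. Antiviral drugs such as "
    "oseltamivir may be prescribed by a physician in some cases."
)

# What GraphRAG sends: only the facts a graph traversal returned.
GRAPH_FACTS = "Influenza | symptoms: fever, cough, fatigue | drugs: oseltamivir"

def rough_tokens(text: str) -> int:
    """Crude proxy for token count (real systems use the model tokenizer)."""
    return len(text.split())

print(rough_tokens(DOCUMENT), "vs", rough_tokens(GRAPH_FACTS))
```

The compact fact string is what reaches the LLM, which is where the token savings in the results table come from.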
## Results
| Pipeline | Tokens | Cost per query |
|---|---|---|
| LLM Only | 813 | $0.000081 |
| Basic RAG | 173 | $0.000018 |
| GraphRAG | 125 | $0.000015 |
GraphRAG uses 70-85% fewer tokens than LLM Only (84.6% in the table above)!
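The savings figure follows directly from the token counts in the table; a quick check of the arithmetic:

```python
# Token counts copied from the results table above.
tokens = {"LLM Only": 813, "Basic RAG": 173, "GraphRAG": 125}

def savings_vs_baseline(pipeline: str, baseline: str = "LLM Only") -> float:
    """Percent token reduction of `pipeline` relative to `baseline`."""
    return 100 * (tokens[baseline] - tokens[pipeline]) / tokens[baseline]

print(f"{savings_vs_baseline('GraphRAG'):.1f}%")   # -> 84.6%
print(f"{savings_vs_baseline('Basic RAG'):.1f}%")  # -> 78.7%
```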
## Tech Stack
- TigerGraph Savanna: graph database
- Groq API: LLM inference
- FAISS: vector search
- Streamlit: dashboard
- Python 3.11
## How It Works
1. User selects a disease
2. TigerGraph traverses Disease → Symptoms → Drugs
3. The compact, structured context is passed to the LLM
4. The LLM generates a precise answer
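The steps above can be sketched end to end. The Groq call is stubbed out here (the facts, prompt wording, and helper names are illustrative, not the production code), since the point is how little context the prompt carries:

```python
# Step 2 stand-in: the facts a TigerGraph traversal would return.
FACTS = {
    "Influenza": {"symptoms": ["fever", "cough"], "drugs": ["oseltamivir"]},
}

def build_prompt(disease: str) -> str:
    """Steps 2-3: look up graph facts, pack them into a compact prompt."""
    f = FACTS[disease]
    return (
        "Answer using only these facts.\n"
        f"Disease: {disease}. "
        f"Symptoms: {', '.join(f['symptoms'])}. "
        f"Drugs: {', '.join(f['drugs'])}.\n"
        f"Question: How is {disease} treated?"
    )

def answer(disease: str, llm=lambda p: f"[LLM answer for: {p.splitlines()[-1]}]") -> str:
    """Step 4: in production, `llm` would be a call to Groq's chat API."""
    return llm(build_prompt(disease))

print(answer("Influenza"))
```

Swapping the stubbed `llm` for a real Groq client changes nothing else in the flow: the prompt stays a handful of graph facts plus the question.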