chinmayi-r-hegde

How I Built a GraphRAG System That Saves 70-85% LLM Tokens Using TigerGraph

What I Built

MediGraph is a medical question-answering system that compares
three AI pipelines (LLM Only, Basic RAG, and GraphRAG) and
shows that graph-based retrieval dramatically reduces token
consumption.

The Problem

LLM APIs bill per token, so every query has a direct cost.
Bloated prompts multiply that cost across every request, and at
production scale the bill grows every quarter.

My Solution — GraphRAG with TigerGraph

Instead of dumping entire documents into the LLM prompt,
GraphRAG traverses a knowledge graph and returns only the
exact facts needed.
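The intuition fits in a few lines. The document text and fact triples below are illustrative stand-ins (not from the real dataset), and the whole-word token count is a crude estimate rather than the model's tokenizer, but the ratio makes the point:

```python
# Illustrative only: compare the footprint of a whole document
# against the handful of graph facts GraphRAG would extract.

full_document = (
    "Influenza, commonly known as the flu, is a contagious respiratory "
    "illness caused by influenza viruses. Symptoms include fever, cough, "
    "sore throat, muscle aches, and fatigue. Treatment options include "
    "oseltamivir and zanamivir. The flu spreads mainly through droplets "
    "produced when infected people cough, sneeze, or talk."
)

# Compact facts, as edges traversed from the knowledge graph.
graph_facts = [
    "Influenza HAS_SYMPTOM fever",
    "Influenza HAS_SYMPTOM cough",
    "Influenza TREATED_BY oseltamivir",
]

def estimate_tokens(text: str) -> int:
    """Crude estimate: one token per whitespace-separated word."""
    return len(text.split())

doc_tokens = estimate_tokens(full_document)
fact_tokens = estimate_tokens(" ".join(graph_facts))
print(doc_tokens, fact_tokens)  # the fact context is a small fraction of the document
```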

Results

| Pipeline  | Tokens | Cost (USD) |
| --------- | ------ | ---------- |
| LLM Only  | 813    | $0.000081  |
| Basic RAG | 173    | $0.000018  |
| GraphRAG  | 125    | $0.000015  |

GraphRAG saves 70-85% tokens vs LLM Only!
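The claim checks out against the table's own numbers:

```python
# Savings computed directly from the token counts in the table above.
llm_only, basic_rag, graphrag = 813, 173, 125

graphrag_savings = (llm_only - graphrag) / llm_only * 100
basic_rag_savings = (llm_only - basic_rag) / llm_only * 100

print(f"GraphRAG saves {graphrag_savings:.1f}% vs LLM Only")   # ~84.6%
print(f"Basic RAG saves {basic_rag_savings:.1f}% vs LLM Only") # ~78.7%
```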

Tech Stack

  • TigerGraph Savanna — graph database
  • Groq API — LLM inference
  • FAISS — vector search
  • Streamlit — dashboard
  • Python 3.11

How It Works

  1. User selects a disease
  2. TigerGraph traverses Disease → Symptoms → Drugs
  3. Compact structured context is passed to LLM
  4. LLM generates precise answer
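The steps above can be sketched as follows. The query name, parameter names, and the shape of the traversal result are assumptions for illustration; the real repo may differ. The pyTigerGraph call for step 2 is shown as a comment since it needs a live TigerGraph instance, while step 3 is runnable:

```python
# Steps 1-2: with pyTigerGraph, the traversal would look roughly like
# (hypothetical query name "disease_context"):
#
#   import pyTigerGraph
#   conn = pyTigerGraph.TigerGraphConnection(host="https://<instance>",
#                                            graphname="MediGraph")
#   result = conn.runInstalledQuery("disease_context",
#                                   params={"disease": disease})

def build_context(disease: str, traversal: dict) -> str:
    """Step 3: flatten a Disease -> Symptoms -> Drugs traversal into a
    compact, structured context string for the LLM prompt."""
    lines = [f"Disease: {disease}"]
    lines.append("Symptoms: " + ", ".join(traversal.get("symptoms", [])))
    lines.append("Drugs: " + ", ".join(traversal.get("drugs", [])))
    return "\n".join(lines)

# Simulated step-2 output for a hypothetical disease:
traversal = {"symptoms": ["fever", "cough"], "drugs": ["oseltamivir"]}
context = build_context("Influenza", traversal)
print(context)
# Step 4 would pass `context` to the Groq API as part of the prompt.
```

Because the context is a handful of structured lines rather than whole documents, the prompt stays small regardless of how much source text sits behind the graph.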

GitHub

https://github.com/chinmayi-r-hegde/graphrag-hackathon
