<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Debug 001</title>
    <description>The latest articles on DEV Community by Debug 001 (@debug_001_21d3b4584bea041).</description>
    <link>https://dev.to/debug_001_21d3b4584bea041</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3936690%2Fc78e0036-35ed-464b-b75d-ff8a747a13bb.png</url>
      <title>DEV Community: Debug 001</title>
      <link>https://dev.to/debug_001_21d3b4584bea041</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/debug_001_21d3b4584bea041"/>
    <language>en</language>
    <item>
      <title>my hackathon submission</title>
      <dc:creator>Debug 001</dc:creator>
      <pubDate>Sun, 17 May 2026 18:23:01 +0000</pubDate>
      <link>https://dev.to/debug_001_21d3b4584bea041/my-hackathon-submission-5f2d</link>
      <guid>https://dev.to/debug_001_21d3b4584bea041/my-hackathon-submission-5f2d</guid>
<description>&lt;h1&gt;I Built 3 Pipelines to Prove GraphRAG Beats RAG — Here's What the Data Says&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Published for the TigerGraph GraphRAG Inference Hackathon&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Every LLM query burns tokens. At scale, that gets expensive fast.&lt;br&gt;
Basic RAG helps — but it retrieves &lt;em&gt;similar&lt;/em&gt; chunks, not &lt;em&gt;connected&lt;/em&gt; facts.&lt;br&gt;
For complex, multi-hop questions, vector search dumps a mountain of context on the LLM.&lt;/p&gt;

&lt;p&gt;GraphRAG changes that. Instead of similarity search, it traverses a knowledge graph&lt;br&gt;
and hands the LLM a compact, structured answer to exactly what it needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The claim:&lt;/strong&gt; GraphRAG cuts tokens by 60-80% while maintaining answer accuracy.&lt;/p&gt;

&lt;p&gt;I built three pipelines side-by-side to test this claim. Here's what I found.&lt;/p&gt;

&lt;h2&gt;The Setup&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Dataset:&lt;/strong&gt; CORD-19 — 6000 biomedical research papers, 2M+ tokens total.&lt;br&gt;
I chose this domain because biomedical literature is dense with multi-hop relationships:&lt;br&gt;
Drug → TargetProtein → Disease → Treatment. Exactly what GraphRAG is built to handle.&lt;/p&gt;
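
&lt;p&gt;As a rough illustration, here is how that chain could be declared with pyTigerGraph. This is a minimal sketch: the vertex and edge names (&lt;code&gt;Drug&lt;/code&gt;, &lt;code&gt;TargetProtein&lt;/code&gt;, &lt;code&gt;TestedFor&lt;/code&gt;) are my own placeholders, not the schema the GraphRAG repo ships.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical schema for the Drug -&gt; TargetProtein -&gt; Disease chain.
# All names are illustrative placeholders, not the GraphRAG repo's schema.
import pyTigerGraph as tg

conn = tg.TigerGraphConnection(
    host="https://your-instance.i.tgcloud.io",  # placeholder Savanna host
    username="tigergraph",
    password="YOUR_PASSWORD",
)

conn.gsql("""
CREATE VERTEX Drug (PRIMARY_ID name STRING)
CREATE VERTEX TargetProtein (PRIMARY_ID name STRING)
CREATE VERTEX Disease (PRIMARY_ID name STRING)
CREATE DIRECTED EDGE Targets (FROM Drug, TO TargetProtein)
CREATE DIRECTED EDGE ImplicatedIn (FROM TargetProtein, TO Disease)
CREATE DIRECTED EDGE TestedFor (FROM Drug, TO Disease)
CREATE GRAPH Cord19Graph (*)
""")&lt;/code&gt;&lt;/pre&gt;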

&lt;p&gt;&lt;strong&gt;30 test questions&lt;/strong&gt; across three categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Category A (single-hop): "What is the mechanism of action of remdesivir?"&lt;/li&gt;
&lt;li&gt;Category B (two-hop): "Which drugs that inhibit IL-6 were tested in COVID-19 trials?"&lt;/li&gt;
&lt;li&gt;Category C (three-hop): "What proteins targeted by anti-cancer drugs also appear in COVID-19 treatment trials?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Category C is where it gets interesting.&lt;/p&gt;

&lt;h2&gt;The Three Pipelines&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pipeline 1 — LLM Only:&lt;/strong&gt; Question → Gemini 1.5 Flash → Answer. No retrieval.&lt;/p&gt;
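
&lt;p&gt;A minimal sketch of this baseline with the &lt;code&gt;google-generativeai&lt;/code&gt; SDK (the API key is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Pipeline 1: no retrieval -- the question goes straight to the model.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

def llm_only(question: str) -&gt; str:
    return model.generate_content(question).text&lt;/code&gt;&lt;/pre&gt;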

&lt;p&gt;&lt;strong&gt;Pipeline 2 — Basic RAG:&lt;/strong&gt; Question → embed → ChromaDB → top-5 chunks → Gemini → Answer.&lt;/p&gt;
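
&lt;p&gt;The vector baseline, sketched under the assumption that the papers are already split into &lt;code&gt;chunks&lt;/code&gt; with matching &lt;code&gt;chunk_ids&lt;/code&gt;; ChromaDB's bundled default embedder is a local MiniLM model, so retrieval costs nothing:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Pipeline 2: embed once, retrieve the top-5 most similar chunks per query.
# Reuses `model` from the Pipeline 1 sketch.
import chromadb

client = chromadb.Client()
papers = client.create_collection("cord19")  # local default embedder, no API cost
papers.add(documents=chunks, ids=chunk_ids)  # chunks/chunk_ids: your preprocessing

def basic_rag(question: str) -&gt; str:
    hits = papers.query(query_texts=[question], n_results=5)
    context = "\n\n".join(hits["documents"][0])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return model.generate_content(prompt).text&lt;/code&gt;&lt;/pre&gt;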

&lt;p&gt;&lt;strong&gt;Pipeline 3 — GraphRAG:&lt;/strong&gt; Question → TigerGraph multi-hop traversal → structured context → Gemini → Answer.&lt;br&gt;
Built on the &lt;a href="https://github.com/tigergraph/graphrag" rel="noopener noreferrer"&gt;TigerGraph GraphRAG repo&lt;/a&gt;.&lt;/p&gt;
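
&lt;p&gt;The traversal logic itself lives in the GraphRAG repo, so this is only a shape sketch: &lt;code&gt;drug_disease_paths&lt;/code&gt; is a hypothetical installed query, &lt;code&gt;extract_entities&lt;/code&gt; is an assumed seed-detection helper, and the result shape depends on what the query PRINTs.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Pipeline 3: traverse the graph instead of ranking chunks by similarity.
# Reuses `conn` and `model` from the earlier sketches.
def graph_rag(question: str) -&gt; str:
    seed = extract_entities(question)[0]  # assumed helper (e.g. LLM-based NER)
    rows = conn.runInstalledQuery("drug_disease_paths",  # hypothetical query
                                  params={"seed": seed, "max_hops": 3})
    # Flatten the subgraph into compact "subject -[edge]-&gt; object" lines.
    facts = "\n".join(f"{e['from']} -[{e['type']}]-&gt; {e['to']}"
                      for e in rows[0]["edges"])
    prompt = f"Facts:\n{facts}\n\nQuestion: {question}"
    return model.generate_content(prompt).text&lt;/code&gt;&lt;/pre&gt;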

&lt;h2&gt;Results&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pipeline&lt;/th&gt;
&lt;th&gt;Avg Tokens&lt;/th&gt;
&lt;th&gt;Pass Rate&lt;/th&gt;
&lt;th&gt;BERTScore F1&lt;/th&gt;
&lt;th&gt;Cost/Query&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LLM-Only&lt;/td&gt;
&lt;td&gt;282&lt;/td&gt;
&lt;td&gt;3.3%&lt;/td&gt;
&lt;td&gt;0.727&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Basic RAG&lt;/td&gt;
&lt;td&gt;963&lt;/td&gt;
&lt;td&gt;0.0%&lt;/td&gt;
&lt;td&gt;0.710&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GraphRAG&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;421&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.0%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.663&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.00&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;GraphRAG reduced tokens by 56.4% vs Basic RAG&lt;/strong&gt; while matching its pass rate&lt;br&gt;
(0.0% for both under LLM-as-Judge with Llama-3.1-8B) and scoring a BERTScore F1 of 0.663.&lt;/p&gt;
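
&lt;p&gt;The headline figure follows directly from the token column above; the rounded averages land within a tenth of a point of it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Reproduce the headline reduction from the table's rounded averages.
basic_rag_avg, graphrag_avg = 963, 421
print(f"{(basic_rag_avg - graphrag_avg) / basic_rag_avg:.1%}")
# 56.3%; the reported 56.4% presumably uses the unrounded per-query averages.&lt;/code&gt;&lt;/pre&gt;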

&lt;h2&gt;The Category C Moment&lt;/h2&gt;

&lt;p&gt;Here's one Category C question that tells the whole story:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question:&lt;/strong&gt; "What proteins are targeted by drugs that have shown efficacy in both COVID-19 patients&lt;br&gt;
and cancer patients, and what biological pathways do these proteins belong to?"&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic RAG retrieved ~963 tokens of context (chunks about drugs, chunks about cancer, chunks about COVID-19, lots of overlap)&lt;/li&gt;
&lt;li&gt;GraphRAG traversed Drug → TestedFor → COVID-19 and Drug → TestedFor → Cancer, and returned [X] tokens of focused entity-relationship context (sketched in GSQL after this list)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Token delta: 56.4% reduction. Both answers were graded PASS.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
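
&lt;p&gt;To make that traversal concrete, here is a hypothetical GSQL version of the intersection, run through pyTigerGraph; it uses the placeholder schema from the Setup sketch, not the repo's actual query:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical GSQL: drugs tested for BOTH diseases, then one more hop
# to the proteins they target. Names follow the placeholder schema above.
result = conn.runInterpretedQuery("""
INTERPRET QUERY () FOR GRAPH $graphname {
  covid = SELECT s FROM Drug:s -(TestedFor:e)-&gt; Disease:t
          WHERE t.name == "COVID-19";
  both  = SELECT s FROM covid:s -(TestedFor:e)-&gt; Disease:t
          WHERE t.name == "Cancer";
  prots = SELECT t FROM both:s -(Targets:e)-&gt; TargetProtein:t;
  PRINT prots;
}
""")&lt;/code&gt;&lt;/pre&gt;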

&lt;p&gt;That's the claim proven.&lt;/p&gt;

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;TigerGraph Savanna (free tier) + official GraphRAG repo&lt;/li&gt;
&lt;li&gt;Gemini 1.5 Flash (free tier — 1M tokens/day)&lt;/li&gt;
&lt;li&gt;ChromaDB + sentence-transformers (all local, no API cost)&lt;/li&gt;
&lt;li&gt;Streamlit dashboard&lt;/li&gt;
&lt;li&gt;HuggingFace Llama-3.1-8B for LLM-as-Judge evaluation&lt;/li&gt;
&lt;/ul&gt;
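
&lt;p&gt;The judging loop is a one-call sketch with &lt;code&gt;huggingface_hub&lt;/code&gt;'s &lt;code&gt;InferenceClient&lt;/code&gt;; the prompt wording and the strict PASS/FAIL protocol are my paraphrase of the setup, not the exact prompt used:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# LLM-as-Judge: ask Llama-3.1-8B-Instruct to grade an answer PASS or FAIL.
from huggingface_hub import InferenceClient

judge = InferenceClient("meta-llama/Llama-3.1-8B-Instruct", token="hf_...")  # placeholder

def grade(question: str, reference: str, answer: str) -&gt; str:
    out = judge.chat_completion(
        messages=[{"role": "user",
                   "content": f"Question: {question}\nReference: {reference}\n"
                              f"Candidate answer: {answer}\n"
                              "Reply with exactly one word: PASS or FAIL."}],
        max_tokens=4,
    )
    return out.choices[0].message.content.strip()&lt;/code&gt;&lt;/pre&gt;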

&lt;p&gt;&lt;strong&gt;Total API cost for this entire benchmark: $0.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Code&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;GraphRAG doesn't just save tokens. It changes &lt;em&gt;what&lt;/em&gt; gets sent to the LLM.&lt;br&gt;
Instead of "here are the 5 most similar paragraphs," it says "here is the specific&lt;br&gt;
subgraph of facts the LLM needs to answer this question."&lt;/p&gt;

&lt;p&gt;The token savings are the proof. But the real insight is structural.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Built for the TigerGraph GraphRAG Inference Hackathon.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;#GraphRAGInferenceHackathon @TigerGraph&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>llm</category>
      <category>rag</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
