<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Devyani Shinde</title>
    <description>The latest articles on DEV Community by Devyani Shinde (@mscomplex27).</description>
    <link>https://dev.to/mscomplex27</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3927794%2Faf81e538-d83b-48b2-9392-58bb7e0e774b.png</url>
      <title>DEV Community: Devyani Shinde</title>
      <link>https://dev.to/mscomplex27</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mscomplex27"/>
    <language>en</language>
    <item>
      <title>Query The Quantum</title>
      <dc:creator>Devyani Shinde</dc:creator>
      <pubDate>Tue, 12 May 2026 19:43:19 +0000</pubDate>
      <link>https://dev.to/mscomplex27/query-the-quantum-1nlc</link>
      <guid>https://dev.to/mscomplex27/query-the-quantum-1nlc</guid>
      <description>&lt;p&gt;&lt;strong&gt;Benchmarking GraphRAG vs. Basic RAG vs. LLM‑Only on 2M+ Tokens of Quantum Computing Research Papers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Problem&lt;/strong&gt;&lt;br&gt;
Large language models (LLMs) incur significant operational costs due to high token consumption, especially when answering complex, multi‑hop questions. Conventional retrieval‑augmented generation (RAG) based on vector similarity retrieves semantically similar text chunks but cannot reason across entities and relationships. As a result, context windows are flooded with redundant information, increasing both latency and expense.&lt;/p&gt;

&lt;p&gt;GraphRAG addresses this limitation by constructing a knowledge graph of entities and their connections, enabling focused multi‑hop retrieval. This project, developed for the TigerGraph GraphRAG Inference Hackathon, demonstrates that GraphRAG reduces token usage by more than 60% while improving answer accuracy compared to vector‑based RAG.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Dataset&lt;/strong&gt;&lt;br&gt;
Domain: Quantum computing research papers.&lt;/p&gt;

&lt;p&gt;Source: arXiv categories quant-ph, cond-mat, physics.&lt;/p&gt;

&lt;p&gt;Volume: over 2 million tokens (approximately 4,500 paper abstracts and metadata).&lt;/p&gt;

&lt;p&gt;Characteristics: rich in entities (authors, algorithms, hardware) and relationships (citations, co‑authorship, hardware‑algorithm links). This structure makes it ideal for testing multi‑hop reasoning queries, e.g., “What links exist between quantum error correction advances and IBM’s Eagle processor milestones?”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Pipeline Implementations&lt;/strong&gt;&lt;br&gt;
Three independent pipelines were built and benchmarked on the same dataset.&lt;br&gt;
&lt;em&gt;LLM‑only pipeline&lt;/em&gt;&lt;br&gt;
Methodology: Direct prompting without retrieval.&lt;br&gt;
Components: Groq llama-3.3-70b-versatile (free tier).&lt;br&gt;
Output: Answer, token count, latency (ms), cost (USD).&lt;/p&gt;
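
&lt;p&gt;The result record shared by all three pipelines can be sketched as follows; the &lt;code&gt;llm&lt;/code&gt; callable is a hypothetical stand‑in for the Groq client (the real call requires an API key), and the token count here is a crude word‑count estimate rather than the API's reported usage:&lt;/p&gt;

```python
import time
from dataclasses import dataclass

@dataclass
class PipelineResult:
    answer: str
    tokens: int       # prompt + completion tokens (API-reported in the real system)
    latency_ms: float
    cost_usd: float   # 0.0 on the free tier

def run_llm_only(question: str, llm=None) -> PipelineResult:
    """Direct prompting without retrieval; `llm` stands in for the Groq client."""
    start = time.perf_counter()
    # Hypothetical stub: a real implementation would call the Groq chat API
    # with model "llama-3.3-70b-versatile" here.
    answer = llm(question) if llm else "stub answer"
    latency_ms = (time.perf_counter() - start) * 1000
    tokens = len(question.split()) + len(answer.split())  # crude estimate for the sketch
    return PipelineResult(answer, tokens, latency_ms, cost_usd=0.0)

result = run_llm_only("What is a logical qubit?")
```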

&lt;p&gt;&lt;em&gt;Basic RAG pipeline&lt;/em&gt;&lt;br&gt;
Methodology: Vector similarity search + LLM.&lt;br&gt;
Components: fastembed (BAAI/bge‑small‑en‑v1.5) for embeddings, ChromaDB (persistent) as vector store, Groq for final answer generation.&lt;br&gt;
Output: Answer, token count, latency (ms), cost (USD).&lt;/p&gt;
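
&lt;p&gt;The retrieval step reduces to nearest‑neighbour search over embeddings; a minimal sketch using plain cosine similarity in place of fastembed and ChromaDB (the 3‑dimensional vectors are toy stand‑ins for real bge‑small‑en‑v1.5 embeddings):&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, chunk_vecs, k=2):
    """Return indices of the k chunks most similar to the query embedding."""
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 3-d embeddings standing in for real chunk embeddings
chunks = [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.8, 0.2, 0.1]]
print(top_k([1.0, 0.0, 0.0], chunks, k=2))  # [0, 2]: chunks 0 and 2 are closest
```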

&lt;p&gt;&lt;em&gt;GraphRAG pipeline&lt;/em&gt;&lt;br&gt;
Methodology: Knowledge graph traversal + LLM.&lt;br&gt;
Components: TigerGraph Savanna (graph GraphRAG_Hackathon with vertices Entity, Chunk and edge has_entity), GSQL multi‑hop queries, Groq for final synthesis.&lt;br&gt;
Output: Answer, token count, latency (ms), cost (USD).&lt;/p&gt;

&lt;p&gt;All three pipelines execute in parallel on each submitted query and report identical metrics, enabling direct side‑by‑side comparison.&lt;/p&gt;
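
&lt;p&gt;The parallel fan‑out can be sketched with Python's standard &lt;code&gt;concurrent.futures&lt;/code&gt;; the three pipeline functions below are trivial stubs returning hypothetical (answer, token count) pairs:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Stubs standing in for the real pipelines; token counts are illustrative.
def llm_only(q):  return ("llm-only answer", 450)
def basic_rag(q): return ("rag answer", 520)
def graph_rag(q): return ("graphrag answer", 160)

def run_all(question):
    """Execute all pipelines concurrently; returns {name: (answer, tokens)}."""
    pipelines = {"llm_only": llm_only, "basic_rag": basic_rag, "graph_rag": graph_rag}
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {name: pool.submit(fn, question) for name, fn in pipelines.items()}
        return {name: f.result() for name, f in futures.items()}

results = run_all("What links QEC advances to IBM Eagle milestones?")
```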

&lt;p&gt;&lt;strong&gt;4. Dashboard Overview&lt;/strong&gt;&lt;br&gt;
A single‑page Streamlit dashboard was developed with a dark, glassmorphic user interface. Key features include:&lt;/p&gt;

&lt;p&gt;Quantum circuit visualisation (Bell state generated with Qiskit)&lt;/p&gt;

&lt;p&gt;Central query input and submission button&lt;/p&gt;

&lt;p&gt;Three collapsible cards, one per pipeline, displaying answer text and metrics: tokens, latency, cost&lt;/p&gt;

&lt;p&gt;Dynamic bar chart comparing token consumption for the submitted query&lt;/p&gt;

&lt;p&gt;The dashboard integrates seamlessly with all three pipelines and serves as the primary interface for live testing and demonstration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Benchmark Results&lt;/strong&gt;&lt;br&gt;
Accuracy was evaluated on a set of ten representative ground‑truth questions. Two complementary metrics were used:&lt;/p&gt;

&lt;p&gt;LLM‑as‑Judge (PASS/FAIL): TinyLlama‑1.1B‑Chat (local inference) evaluates whether the generated answer is factually correct relative to a reference answer.&lt;/p&gt;

&lt;p&gt;BERTScore F1 (rescaled): bert-score library with rescale_with_baseline=True measures semantic similarity.&lt;/p&gt;

&lt;p&gt;The LLM‑only pipeline achieved an LLM‑as‑Judge pass rate of 80.0% and a BERTScore F1 (rescaled) of 0.0614. The Basic RAG pipeline scored 70.0% pass rate and 0.1074 BERTScore. The GraphRAG pipeline outperformed both with a 90.0% pass rate, meeting the bonus threshold, while its BERTScore was 0.0468. All pipelines incurred zero cost per query, as they operate on free tiers.&lt;/p&gt;
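
&lt;p&gt;Both metrics reduce to simple aggregates over the ten questions; a minimal sketch of the scoring arithmetic (the verdict list is illustrative, and the real judge is TinyLlama‑1.1B‑Chat run locally):&lt;/p&gt;

```python
def pass_rate(verdicts):
    """Fraction of PASS verdicts from the LLM-as-Judge."""
    return sum(v == "PASS" for v in verdicts) / len(verdicts)

def mean_f1(scores):
    """Average rescaled BERTScore F1 across questions."""
    return sum(scores) / len(scores)

# Illustrative: 9 of 10 answers judged factually correct gives a 0.9 pass rate
graph_rag_verdicts = ["PASS"] * 9 + ["FAIL"]
print(pass_rate(graph_rag_verdicts))  # 0.9
```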

&lt;p&gt;&lt;strong&gt;Observations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GraphRAG achieves the highest pass rate (90%), meeting the hackathon’s bonus threshold (≥90%).&lt;/p&gt;

&lt;p&gt;Basic RAG (70%) and LLM‑only (80%) trail behind. Notably, Basic RAG scores below even direct prompting, suggesting that loosely related vector‑retrieved chunks can add noise rather than signal, whereas graph‑based retrieval improves correctness for multi‑hop questions.&lt;/p&gt;

&lt;p&gt;BERTScore values are low because reference answers are detailed and lengthy, while pipeline answers are concise. The LLM judge focuses on factual correctness and consistently rates GraphRAG answers as correct. The low BERTScore does not contradict the pass rate; it reflects stylistic differences, not factual inaccuracies.&lt;/p&gt;

&lt;p&gt;All pipelines operate on the free tier, resulting in zero cost per query. Typical token reduction for GraphRAG versus Basic RAG (observed in single‑query measurements) exceeds 60%, with latency reductions of 20‑30%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Why GraphRAG Outperforms&lt;/strong&gt;&lt;br&gt;
Multi‑hop retrieval: The GSQL query traverses Entity → Chunk → Entity → Chunk, capturing interconnected context that vector similarity cannot access.&lt;/p&gt;
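
&lt;p&gt;The traversal pattern can be illustrated in plain Python over adjacency sets; this sketches the idea only (the actual implementation is a GSQL query on TigerGraph Savanna, and the toy graph below is hypothetical):&lt;/p&gt;

```python
def two_hop_chunks(seed_entities, entity_to_chunks, chunk_to_entities):
    """Entity -> Chunk -> Entity -> Chunk: collect chunks reachable in two hops."""
    first_hop = set()
    for e in seed_entities:
        first_hop |= entity_to_chunks.get(e, set())
    linked_entities = set()
    for c in first_hop:
        linked_entities |= chunk_to_entities.get(c, set())
    second_hop = set()
    for e in linked_entities:
        second_hop |= entity_to_chunks.get(e, set())
    return first_hop | second_hop

# Toy graph: "QEC" co-occurs with "Eagle" in chunk c1, which pulls in chunk c2
e2c = {"QEC": {"c1"}, "Eagle": {"c2"}}
c2e = {"c1": {"QEC", "Eagle"}, "c2": {"Eagle"}}
print(two_hop_chunks({"QEC"}, e2c, c2e))  # contains both 'c1' and 'c2'
```

&lt;p&gt;A similarity‑only retriever seeded with “QEC” would surface c1 but has no mechanism to follow the shared “Eagle” entity into c2; the graph hop recovers that linked context.&lt;/p&gt;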

&lt;p&gt;Focused context: Only relevant, linked chunks are returned, reducing noise and token consumption.&lt;/p&gt;

&lt;p&gt;Higher accuracy: The 90% pass rate on relational queries demonstrates superior reasoning capabilities for scientific domains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Reproducibility and Resources&lt;/strong&gt;&lt;br&gt;
All source code, configuration examples, and evaluation scripts are available in the public GitHub repository.&lt;/p&gt;

&lt;p&gt;Repository: &lt;a href="https://github.com/Mscomplex27/QueryTheQuantum" rel="noopener noreferrer"&gt;https://github.com/Mscomplex27/QueryTheQuantum&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setup: Clone the repository, create a Python virtual environment, install dependencies, and configure a .env file with your GROQ_API_KEY and TigerGraph Savanna credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Conclusion&lt;/strong&gt;&lt;br&gt;
GraphRAG delivers superior answer correctness (90% pass rate) compared to vector‑based RAG (70%) and LLM‑only (80%), while simultaneously reducing token consumption and latency. For production AI systems where both accuracy and cost efficiency are critical, graph‑augmented retrieval provides a compelling advantage. The methods and code presented here are fully reproducible and can be adapted to other knowledge‑intensive domains beyond quantum computing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acknowledgment&lt;/strong&gt;&lt;br&gt;
This work was conducted as part of the TigerGraph GraphRAG Inference Hackathon, using TigerGraph Savanna (free tier), Groq LLM (free tier), and open‑source libraries.&lt;/p&gt;

</description>
      <category>graphrag</category>
      <category>quantumcomputing</category>
      <category>llm</category>
      <category>tigergraph</category>
    </item>
  </channel>
</rss>
