<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nikhil Agrawal</title>
    <description>The latest articles on DEV Community by Nikhil Agrawal (@nikhil_agrawal_dc58a32b09).</description>
    <link>https://dev.to/nikhil_agrawal_dc58a32b09</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3330419%2Fef006519-7789-4e1d-ab17-f36673b32292.jpeg</url>
      <title>DEV Community: Nikhil Agrawal</title>
      <link>https://dev.to/nikhil_agrawal_dc58a32b09</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nikhil_agrawal_dc58a32b09"/>
    <language>en</language>
    <item>
      <title>Your RAG is Basic. Here's the KG-RAG Pattern We Used to Build a Real AI Agent.</title>
      <dc:creator>Nikhil Agrawal</dc:creator>
      <pubDate>Thu, 14 Aug 2025 08:49:27 +0000</pubDate>
      <link>https://dev.to/nikhil_agrawal_dc58a32b09/your-rag-is-basic-heres-the-kg-rag-pattern-we-used-to-build-a-real-ai-agent-3hej</link>
      <guid>https://dev.to/nikhil_agrawal_dc58a32b09/your-rag-is-basic-heres-the-kg-rag-pattern-we-used-to-build-a-real-ai-agent-3hej</guid>
      <description>&lt;p&gt;Let's be honest. Slapping a vector search on top of an LLM is the "hello world" of GenAI. It's a great start, but it breaks down fast when faced with real-world, interconnected data. You can't answer multi-hop questions, and you're constantly fighting to give the LLM enough context.&lt;/p&gt;

&lt;p&gt;We hit that wall. So we re-architected. Here's the pattern we implemented: Knowledge Graph RAG (KG-RAG). It's less about a single tool and more about orchestrating specialized data stores.&lt;/p&gt;

&lt;p&gt;The Stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source of Truth: MongoDB&lt;/li&gt;
&lt;li&gt;Vector Store (Semantic Search): Weaviate&lt;/li&gt;
&lt;li&gt;Graph Store (Context &amp;amp; Relationships): Neo4j&lt;/li&gt;
&lt;li&gt;LLM/Embeddings: Google Gemini&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Problem with Vector-Only RAG:&lt;br&gt;
A query like "What did team A conclude about the feature B launch?" is hard. Vector search might find docs about team A and docs about feature B, but it struggles to guarantee the retrieved context contains team A's conclusions about feature B.&lt;/p&gt;

&lt;p&gt;The KG-RAG Solution:&lt;br&gt;
We built a ChatService orchestrator that follows a "Graph-First" retrieval pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpabcieuuuda0wrr6pzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpabcieuuuda0wrr6pzj.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the pseudo-code for the backend logic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function handleQuery(userQuery) {
  // 1. Identify entities in the query (e.g., "team A", "feature B")
  const entities = extractEntities(userQuery);

  // 2. Query the knowledge graph for context and specific chunk IDs
  // CYPHER: MATCH (t:Team {name:"team A"})-[]-&amp;gt;(d:Doc)-[]-&amp;gt;(c:Chunk)
  // WHERE (d)-[:DISCUSSES]-&amp;gt;(:Feature {name:"feature B"}) RETURN c.id
  const contextualChunkIds = await neo4j.run(cypherQuery, { entities });

  // 3. Perform a filtered vector search in Weaviate.
  // This is way more accurate than a broad search.
  const relevantChunks = await weaviate.search({
    vector: embed(userQuery),
    filters: { chunkId: { $in: contextualChunkIds } }
  });

  // 4. Synthesize the prompt and generate with the LLM
  const prompt = createPrompt(userQuery, relevantChunks);
  const answer = await gemini.generate(prompt);

  // 5. Also return the graph data from step 2 for UI visualization
  return { chatAnswer: answer, graphData: getGraphFrom(contextualChunkIds) };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
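&lt;p&gt;To make the graph-first flow concrete, here is a runnable Python sketch with in-memory stand-ins for the graph store, vector store, and LLM. Every name and data value below is illustrative, not the real Neo4j, Weaviate, or Gemini client API:&lt;/p&gt;

```python
# Minimal sketch of graph-first retrieval with in-memory stand-ins.
# GRAPH, CHUNKS, and every helper name are illustrative, not real client APIs.

GRAPH = {
    # maps (team, feature) to the chunk IDs the graph links to both entities
    ("team A", "feature B"): ["c1", "c7"],
}

CHUNKS = {
    "c1": "Team A concluded the feature B launch should slip two weeks.",
    "c5": "Team A offsite notes, unrelated to feature B.",
    "c7": "Feature B launch retro, signed off by team A.",
}

def graph_lookup(team, feature):
    """Step 2: ask the 'knowledge graph' for chunks linking both entities."""
    return GRAPH.get((team, feature), [])

def filtered_vector_search(query, allowed_ids, top_k=2):
    """Step 3: 'vector' search restricted to graph-approved chunks.
    Crude token overlap stands in for cosine similarity here."""
    q_tokens = set(query.lower().split())
    ranked = sorted(
        allowed_ids,
        key=lambda cid: len(q_tokens.intersection(CHUNKS[cid].lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def handle_query(query, team, feature):
    chunk_ids = graph_lookup(team, feature)          # graph first
    hits = filtered_vector_search(query, chunk_ids)  # then filtered search
    context = " ".join(CHUNKS[cid] for cid in hits)
    # Step 4, the LLM call, is stubbed out: just return the grounded context.
    return {"chatAnswer": context, "graphData": chunk_ids}

result = handle_query(
    "What did team A conclude about the feature B launch?",
    team="team A", feature="feature B",
)
```

&lt;p&gt;The point of the sketch: the vector search only ever ranks chunks the graph has already connected to both entities, so a similar-sounding but unrelated chunk can never crowd the real answer out of the context window.&lt;/p&gt;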

&lt;p&gt;This pattern is a game-changer. It grounds the LLM in structured, factual relationships before feeding it semantic context, dramatically improving accuracy.&lt;/p&gt;

&lt;p&gt;The best part? The UI is now a dual-pane view: a standard chatbot on the left and an interactive 3D graph on the right, both powered by the same API call. Clicking a node on the graph fires a new query. It's the interactive dev loop we've always wanted for data.&lt;/p&gt;

&lt;p&gt;Stop building basic RAG toys. Start thinking about orchestration.&lt;/p&gt;

</description>
      <category>rag</category>
      <category>ai</category>
      <category>llm</category>
      <category>python</category>
    </item>
    <item>
      <title>I spent hours debugging a None error in my RAG chatbot. The fix was one line.</title>
      <dc:creator>Nikhil Agrawal</dc:creator>
      <pubDate>Mon, 07 Jul 2025 07:30:22 +0000</pubDate>
      <link>https://dev.to/nikhil_agrawal_dc58a32b09/i-spent-hours-debugging-a-none-error-in-my-rag-chatbot-the-fix-was-one-line-4bml</link>
      <guid>https://dev.to/nikhil_agrawal_dc58a32b09/i-spent-hours-debugging-a-none-error-in-my-rag-chatbot-the-fix-was-one-line-4bml</guid>
      <description>&lt;p&gt;I was so sure my Gemini API key was invalid.&lt;br&gt;
I'm building a RAG chatbot with Flask, Weaviate, and Gemini. The main API for summarizing docs worked perfectly. The chatbot API? RuntimeError: Gemini model is not initialized.&lt;br&gt;
I checked my .env file a dozen times. I generated new API keys. I put print() statements everywhere. The initialization code was definitely running and succeeding at startup! So why was my model None when the API route was called?&lt;br&gt;
Turns out, it was a classic Flask circular dependency issue.&lt;/p&gt;

&lt;p&gt;My app/__init__.py was doing this (the wrong way):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from .routes import main  # &amp;lt;-- Imports routes at the top

def create_app():
    app = Flask(__name__)
    # ...
    init_gemini_model(app)  # &amp;lt;-- Initializes the model
    app.register_blueprint(main)
    return app
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The problem is that Python executes the top-level routes import the moment the package loads, before create_app is ever called. So my RAG service captured GEMINI_MODEL at import time, while it was still None.&lt;br&gt;
The one-line fix was to change the import structure (the right way):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_app():
    app = Flask(__name__)
    # ...
    init_gemini_model(app)

    from . import routes  # &amp;lt;-- Import INSIDE the function
    app.register_blueprint(routes.main)
    return app
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;By importing the routes only after the model is initialized, the circular dependency is broken. The error vanished.&lt;br&gt;
It's a huge reminder that sometimes the bug isn't in your fancy AI logic, but in the fundamentals of the framework you're using. Hope this saves someone else a few hours of head-scratching!&lt;/p&gt;
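&lt;p&gt;The underlying failure mode is easy to reproduce without Flask at all. Here's a toy sketch (module and function names are illustrative, not my actual project) of why a value captured at import time stays None even after initialization succeeds:&lt;/p&gt;

```python
# Toy reproduction of the bug in a single file; no Flask needed.
# All names here are illustrative.

# --- state that would live in an extensions module ---
GEMINI_MODEL = None

def init_gemini_model():
    """Runs later, at app startup, and replaces the module-level value."""
    global GEMINI_MODEL
    GEMINI_MODEL = "gemini-client"

# --- the buggy 'routes' pattern: value captured at import time ---
captured_at_import = GEMINI_MODEL  # still None when this line runs

def broken_handler():
    if captured_at_import is None:
        raise RuntimeError("Gemini model is not initialized")
    return captured_at_import

# --- the working pattern: look the value up lazily, at call time ---
def working_handler():
    if GEMINI_MODEL is None:
        raise RuntimeError("Gemini model is not initialized")
    return GEMINI_MODEL

init_gemini_model()  # "startup" happens after the imports above
```

&lt;p&gt;broken_handler still raises at request time because it holds the None it grabbed during import; working_handler reads the module-level name at call time and sees the initialized client. Deferring the routes import has the same effect as the lazy lookup.&lt;/p&gt;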

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2yd39raaf7vzwhkzisk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2yd39raaf7vzwhkzisk.png" alt="Image description" width="800" height="762"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjcvzk6a4wx2ufiiflqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjcvzk6a4wx2ufiiflqn.png" alt="Image description" width="800" height="1353"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>webdev</category>
      <category>ai</category>
      <category>flask</category>
    </item>
  </channel>
</rss>
