<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sriharsha Makineni</title>
    <description>The latest articles on DEV Community by Sriharsha Makineni (@sriharsha_makineni).</description>
    <link>https://dev.to/sriharsha_makineni</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3806462%2F719e6f7c-fd5c-4665-a13b-6ef3d4c3a41f.jpg</url>
      <title>DEV Community: Sriharsha Makineni</title>
      <link>https://dev.to/sriharsha_makineni</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sriharsha_makineni"/>
    <language>en</language>
    <item>
      <title>Graph RAG vs Vector RAG: A Practitioner's Guide to Choosing the Right Architecture</title>
      <dc:creator>Sriharsha Makineni</dc:creator>
      <pubDate>Sat, 04 Apr 2026 17:50:45 +0000</pubDate>
      <link>https://dev.to/sriharsha_makineni/graph-rag-vs-vector-rag-a-practitioners-guide-to-choosing-the-right-architecture-1lg1</link>
      <guid>https://dev.to/sriharsha_makineni/graph-rag-vs-vector-rag-a-practitioners-guide-to-choosing-the-right-architecture-1lg1</guid>
      <description>&lt;p&gt;If you've been building LLM-powered applications for any amount of time, you've hit the retrieval problem. Your model is smart, but it doesn't know your data. You need to give it context and how you structure that context retrieval is one of the most consequential architectural decisions you'll make.&lt;/p&gt;

&lt;p&gt;Most teams reach for vector RAG by default. It's well-documented, there are great libraries for it, and it works for a wide range of use cases. But there's another approach, Graph RAG, that outperforms vector search on specific problem types, in ways that aren't always obvious until you've been burned by vector RAG's limitations.&lt;/p&gt;

&lt;p&gt;This isn't a "one is better than the other" post. It's a guide to understanding what each approach actually does, where each one breaks down, and how to make the right call for your specific problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WHAT RAG ACTUALLY SOLVES&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LLMs have a knowledge boundary. Retrieval Augmented Generation (RAG) solves this by retrieving relevant context from an external knowledge source at inference time and injecting it into the prompt. The quality of what you retrieve determines almost everything about the quality of the final answer. This is where vector RAG and Graph RAG diverge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VECTOR RAG: HOW IT WORKS AND WHERE IT SHINES&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vector RAG is the dominant approach for a reason:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Index your data - chunk documents and encode each chunk as a dense vector embedding&lt;/li&gt;
&lt;li&gt;Query - encode the user's question as a vector&lt;/li&gt;
&lt;li&gt;Retrieve - find the k chunks closest to the query vector&lt;/li&gt;
&lt;li&gt;Generate - pass retrieved chunks + query to the LLM&lt;/li&gt;
&lt;/ol&gt;
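&lt;p&gt;A minimal sketch of those four steps, using a toy bag-of-words "embedding" in place of the learned embedding model and vector database you'd use in practice (all chunk text here is illustrative):&lt;/p&gt;

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts. Real systems use a learned
    # embedding model and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Index: chunk documents and embed each chunk.
chunks = [
    "Vector search finds semantically similar text.",
    "Graph traversal follows explicit relationships.",
    "Chunking splits documents into retrievable units.",
]
index = [(c, embed(c)) for c in chunks]

# 2-3. Query and retrieve: embed the question, take the top-k chunks.
def retrieve(question, k=2):
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# 4. Generate: the retrieved chunks plus the question go into the LLM prompt.
print(retrieve("How does semantic similarity search work"))
```

&lt;p&gt;The moving parts map one-to-one onto the list above; swapping the toy embedding for a real model doesn't change the shape of the pipeline.&lt;/p&gt;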

&lt;p&gt;Vector RAG shines when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your knowledge base is a collection of documents without strong relational structure&lt;/li&gt;
&lt;li&gt;Questions are semantically similar to the content being retrieved&lt;/li&gt;
&lt;li&gt;You need breadth across a large corpus&lt;/li&gt;
&lt;li&gt;Speed and simplicity matter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Where it struggles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-hop questions requiring connection across multiple documents&lt;/li&gt;
&lt;li&gt;Questions about relationships between entities&lt;/li&gt;
&lt;li&gt;Queries requiring logical chain traversal&lt;/li&gt;
&lt;li&gt;Chunks retrieved in isolation, which can be misleading without surrounding context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GRAPH RAG: A DIFFERENT MENTAL MODEL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Graph RAG encodes knowledge as a graph — nodes representing entities or concepts, edges representing relationships. Instead of finding closest vectors, you traverse the graph, following edges to discover structurally related information.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Think of it as&lt;/u&gt;: vector search finds books that sound like your question. Graph RAG follows the citations and cross-references to build a complete picture.&lt;/p&gt;
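&lt;p&gt;To make the traversal idea concrete, here's a minimal sketch over a hand-built toy graph. The entity names are invented, and a real Graph RAG system would use a graph database and an LLM-driven extraction pipeline rather than a dict:&lt;/p&gt;

```python
from collections import deque

# Toy knowledge graph: entities as nodes, typed relationships as edges.
graph = {
    "AuthService": [("depends_on", "UserDB"), ("calls", "TokenService")],
    "TokenService": [("depends_on", "KeyStore")],
    "UserDB": [],
    "KeyStore": [],
}

def related(entity, max_hops=2):
    """Collect every fact reachable from an entity within max_hops edges."""
    seen, frontier, facts = {entity}, deque([(entity, 0)]), []
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for rel, neighbor in graph.get(node, []):
            facts.append((node, rel, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return facts

# "What does AuthService ultimately depend on?" needs the two-hop fact
# TokenService -> KeyStore, which similarity search over isolated chunks
# has no structural way to find.
print(related("AuthService"))
```

&lt;p&gt;The retrieval result is a set of explicit facts with their connections intact, not a ranked list of lookalike chunks.&lt;/p&gt;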

&lt;p&gt;Graph RAG shines when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your domain has strong entity relationships (people, orgs, concepts, events)&lt;/li&gt;
&lt;li&gt;Questions require multi-hop reasoning&lt;/li&gt;
&lt;li&gt;You need to trace a chain of logic or causality&lt;/li&gt;
&lt;li&gt;Context matters — same info means different things based on its connections&lt;/li&gt;
&lt;li&gt;Your knowledge base has hierarchical or taxonomic structure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;WHERE EACH ONE BREAKS DOWN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vector RAG limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chunk boundary problem — splitting loses structural context&lt;/li&gt;
&lt;li&gt;Relationship blindness — finds similarity, not connection&lt;/li&gt;
&lt;li&gt;Semantic drift in specialized domains&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Graph RAG limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Construction cost — significantly more work than chunking&lt;/li&gt;
&lt;li&gt;Schema design is hard — you must model entities and relationships explicitly&lt;/li&gt;
&lt;li&gt;Doesn't handle truly unstructured content well&lt;/li&gt;
&lt;li&gt;Latency — graph traversal can be slower&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;HOW TO CHOOSE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start with Vector RAG if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large volumes of unstructured text&lt;/li&gt;
&lt;li&gt;Questions are primarily semantic — "find me information about X"&lt;/li&gt;
&lt;li&gt;You need to ship quickly&lt;/li&gt;
&lt;li&gt;No strong entity relationships in the domain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reach for Graph RAG if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inherently relational domain (people, orgs, processes, dependencies)&lt;/li&gt;
&lt;li&gt;Getting bad results on multi-hop or relationship questions&lt;/li&gt;
&lt;li&gt;Need explainability — graph traversal gives traceable reasoning&lt;/li&gt;
&lt;li&gt;Connections between information are as important as the information itself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consider a hybrid when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both unstructured documents and structured relational data&lt;/li&gt;
&lt;li&gt;Mix of semantic and multi-hop questions&lt;/li&gt;
&lt;li&gt;You can use vector search to find entry points into the graph, then traverse from there&lt;/li&gt;
&lt;/ul&gt;
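&lt;p&gt;One way that hybrid entry-point pattern can look in code. This is a sketch only: token overlap stands in for embedding similarity, and the entities, texts, and relationships are all invented for illustration:&lt;/p&gt;

```python
# Hybrid sketch: vector-style search picks an entry node, then graph
# traversal expands outward from it.
graph = {
    "GDPR": [("regulates", "UserData")],
    "UserData": [("stored_in", "EU-Cluster")],
    "EU-Cluster": [],
}
entity_text = {
    "GDPR": "European data protection regulation",
    "UserData": "personal information collected from users",
    "EU-Cluster": "servers located in Frankfurt",
}

def similarity(query, text):
    # Stand-in for cosine similarity over embeddings: token overlap.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q.intersection(t))

def answer_context(query, hops=2):
    # Step 1: semantic search finds the best entry node into the graph.
    entry = max(entity_text, key=lambda e: similarity(query, entity_text[e]))
    # Step 2: traverse outward to collect structurally connected facts.
    facts, node = [], entry
    for _ in range(hops):
        edges = graph.get(node, [])
        if not edges:
            break
        rel, nxt = edges[0]
        facts.append(f"{node} {rel} {nxt}")
        node = nxt
    return entry, facts

print(answer_context("Which regulation governs European personal data"))
```

&lt;p&gt;The semantic step handles the fuzzy matching that graphs are bad at; the traversal step handles the multi-hop structure that embeddings are blind to.&lt;/p&gt;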

&lt;p&gt;&lt;strong&gt;A PRACTICAL EXAMPLE: KNOWLEDGE-GROUNDED PERSONALIZATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One area where Graph RAG consistently outperforms vector RAG is adaptive personalization. In a coaching or learning system, a user's profile isn't a document — it's a web of relationships: skills they have, skills they're building, goals, gaps. "What should this person work on next?" requires traversing that graph, not finding semantically similar chunks.&lt;/p&gt;

&lt;p&gt;Vector RAG finds content that looks like the user's question. Graph RAG finds content that fits the user's actual situation. That's a meaningful difference when personalization is the core value proposition.&lt;/p&gt;
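&lt;p&gt;A compressed sketch of that profile-as-graph idea. The schema, skill names, and goal are all invented; the point is that "what's next" falls out of traversing relationships, not matching text:&lt;/p&gt;

```python
# Hypothetical user-profile graph for a coaching system.
profile = {
    "user": {"has_skill": ["python"], "goal": ["ml-engineer"]},
    "ml-engineer": {"requires": ["python", "statistics", "ml-basics"]},
    "ml-basics": {"requires": ["statistics"]},
}

def next_skills(user="user"):
    """Walk goal -> requirements and subtract what the user already has."""
    have = set(profile[user]["has_skill"])
    needed, stack = [], list(profile[user]["goal"])
    while stack:
        node = stack.pop(0)
        for skill in profile.get(node, {}).get("requires", []):
            if skill not in have and skill not in needed:
                needed.append(skill)
                stack.append(skill)  # a gap may have prerequisites of its own
    return needed

print(next_skills())
```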

&lt;p&gt;&lt;strong&gt;THE BOTTOM LINE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vector RAG is the right default for most document-heavy, semantically-driven use cases. Graph RAG earns its complexity when relationships between entities are central to the problem.&lt;/p&gt;

&lt;p&gt;Before choosing, ask yourself one question: Is the answer I need in a piece of text, or is it in the relationship between pieces of text?&lt;/p&gt;

&lt;p&gt;That distinction will point you in the right direction.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>architecture</category>
    </item>
    <item>
      <title>ARC-AGI-3 Proves AI Still Can't Replace Human Judgment - And That's the Point</title>
      <dc:creator>Sriharsha Makineni</dc:creator>
      <pubDate>Mon, 30 Mar 2026 02:47:28 +0000</pubDate>
      <link>https://dev.to/sriharsha_makineni/arc-agi-3-proves-ai-still-cant-replace-human-judgment-and-thats-the-point-1g1o</link>
      <guid>https://dev.to/sriharsha_makineni/arc-agi-3-proves-ai-still-cant-replace-human-judgment-and-thats-the-point-1g1o</guid>
      <description>&lt;p&gt;Every few months, something drops that cuts through the AI hype and forces the conversation back to reality. This week, that something was ARC-AGI-3.&lt;/p&gt;

&lt;p&gt;The results were blunt: every frontier AI model scored below 1%. Every human scored 100%.&lt;/p&gt;

&lt;p&gt;Let that sink in for a second. Not some humans. Not specially trained humans. Every single person who attempted it, regardless of background, aced it. Meanwhile, the most powerful AI systems in existence, the same ones passing bar exams and writing production code, failed almost completely.&lt;/p&gt;

&lt;p&gt;If you've been building AI systems for any length of time, your gut reaction was probably somewhere between "I told you so" and "okay, but what does this mean for what I'm shipping?" That's exactly the question worth digging into.&lt;/p&gt;

&lt;h2&gt;WHAT ARC-AGI-3 ACTUALLY TESTS&lt;/h2&gt;

&lt;p&gt;ARC-AGI isn't your typical benchmark, and that's the point. It's not about trivia recall, coding ability, or text summarization. It was specifically designed to test abstract reasoning from first principles: the kind of task where you're shown a small number of examples of a visual pattern transformation and have to figure out the underlying rule from scratch.&lt;/p&gt;

&lt;p&gt;No prior knowledge helps. No retrieval helps. You can't Google your way to the answer. You have to look at the examples, abstract the logic, and apply it to a new case you've never seen before.&lt;/p&gt;

&lt;p&gt;Humans find this intuitive. We abstract patterns constantly; it's one of the most fundamental things our brains do. Show a child three examples of a rule they've never been taught, and they'll generalize it. Show a frontier LLM the same examples, and it will confidently give you a wrong answer based on a pattern it half-remembers from training.&lt;/p&gt;

&lt;p&gt;That's not a knowledge gap. That's a reasoning gap.&lt;/p&gt;

&lt;h2&gt;THIS ISN'T A SCALE PROBLEM&lt;/h2&gt;

&lt;p&gt;The predictable response to any AI failure is: "give it more data, more parameters, more compute." It's become almost a mantra. But that narrative is getting harder to sustain.&lt;/p&gt;

&lt;p&gt;We've been scaling aggressively for years. Each new model family came with promises of emergent capabilities and breakthrough reasoning. Some of those promises were real - coding, analysis, writing, and structured reasoning have all improved substantially. But ARC-AGI has barely moved despite years of scaling.&lt;/p&gt;

&lt;p&gt;LeCun has been arguing this point for a while: next-token prediction has a fundamental ceiling for certain types of reasoning, and throwing more compute at it won't fix the architecture. His new venture just raised $1 billion to pursue Energy-Based Models as an alternative. Whether EBMs actually work at scale remains to be seen, but the underlying observation is increasingly hard to dismiss.&lt;/p&gt;

&lt;p&gt;If the architecture is the constraint, you can't benchmark-engineer or fine-tune your way out of it. And for engineers building production systems, that has real consequences.&lt;/p&gt;

&lt;h2&gt;HOW THIS PLAYS OUT IN PRODUCTION&lt;/h2&gt;

&lt;p&gt;Here's what happens when teams treat the latest model as a near-complete replacement for human judgment.&lt;/p&gt;

&lt;p&gt;The system works beautifully on the easy cases — which is most of the traffic. Speed is good, costs are low. Then edge cases start showing up. And AI failures at the boundary don't fail quietly. They fail confidently. The model produces an answer that looks perfectly reasonable, stated with full certainty, that happens to be completely wrong in a way that violates a rule it was never taught to reason about from scratch.&lt;/p&gt;

&lt;p&gt;These aren't just wrong answers. They're wrong answers that pass basic sanity checks. They slip through unless someone who actually understands the domain is paying attention.&lt;/p&gt;

&lt;p&gt;That's the gap ARC-AGI-3 is measuring. Not "can this model do most things well?" It clearly can. The real question is "can it reason to a correct answer when it has never encountered this specific structure before, with no retrieved knowledge to fall back on?" The answer, consistently, is no.&lt;/p&gt;

&lt;h2&gt;HITL ISN'T A CRUTCH - IT'S AN ARCHITECTURAL DECISION&lt;/h2&gt;

&lt;p&gt;Human-in-the-Loop has an image problem. In a lot of technical conversations, it gets framed as the fallback you use before the AI is good enough - something you eventually eliminate as the model improves.&lt;/p&gt;

&lt;p&gt;ARC-AGI-3 is strong evidence that this framing is wrong. Not all HITL is about compensating for a model that isn't there yet. Some of it is about designing honestly around what AI systems structurally cannot do.&lt;/p&gt;

&lt;p&gt;The question isn't whether to include humans in the loop. It's where, how much, and for what. Getting that right is actually a sophisticated engineering problem, and most teams rush past it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Route the hard cases, not all the cases. Build a reliable routing layer using confidence scoring, uncertainty quantification, or task-specific heuristics to identify inputs where the model is likely to fail and route only those for human review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Treat every human correction as a training signal. Every override or validation is a labeled data point. Systems that capture this systematically get better over time. Systems that treat human review as a one-way gate leave the most valuable feedback on the table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Design for graceful degradation, not silent failure. The worst failure mode is confident wrongness. A well-designed system knows when it's uncertain and triggers a handoff rather than producing a confident bad output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be honest about the boundary. There's a category of task where the model shouldn't be the final decision-maker, not because the model is bad, but because the task requires novel abstract reasoning. Identifying that boundary honestly is the job.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
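&lt;p&gt;Items 1 and 3 above can be sketched together as a confidence-gated router. This is a shape, not an implementation: the threshold value is arbitrary, and the placeholder model call stands in for real confidence signals like logprobs, ensembles, or a calibrated verifier:&lt;/p&gt;

```python
# Sketch of a confidence-gated HITL routing layer.
CONFIDENCE_THRESHOLD = 0.85  # illustrative; tune against real failure data

def model_predict(task):
    # Placeholder for a real model call returning (answer, confidence).
    return task.get("answer", "unknown"), task.get("confidence", 0.0)

def route(task, review_queue):
    answer, confidence = model_predict(task)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"answer": answer, "source": "model"}
    # Below threshold: degrade gracefully by handing off to a human, and
    # keep the case so the eventual correction becomes a training signal.
    review_queue.append(task)
    return {"answer": None, "source": "human_review"}

queue = []
print(route({"answer": "approve", "confidence": 0.97}, queue))
print(route({"answer": "approve", "confidence": 0.40}, queue))
print(len(queue))
```

&lt;p&gt;The key property is that uncertainty triggers a handoff instead of a confident answer, and every handoff is captured rather than discarded.&lt;/p&gt;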

&lt;h2&gt;THE DEEPER TAKEAWAY&lt;/h2&gt;

&lt;p&gt;ARC-AGI-3 is useful precisely because it forces clarity on a question teams tend to answer optimistically: what can AI actually do, and what is it structurally limited at?&lt;/p&gt;

&lt;p&gt;The honest answer in 2026 is that AI systems are genuinely capable and getting better, and they have a real ceiling on certain types of abstract reasoning that hasn't moved despite years of scaling. Both things are true at the same time.&lt;/p&gt;

&lt;p&gt;The teams that build the most reliable AI systems aren't the ones who bet hardest on that ceiling disappearing. They're the ones who design honestly around it, keeping humans in the loop not because the model is bad, but because they've thought carefully about where human judgment is irreplaceable.&lt;/p&gt;

&lt;p&gt;Human judgment isn't going away. The job is to figure out exactly where it matters most, and build accordingly.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>architecture</category>
      <category>hitl</category>
    </item>
  </channel>
</rss>
