<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bart Amin</title>
    <description>The latest articles on DEV Community by Bart Amin (@bartamin).</description>
    <link>https://dev.to/bartamin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3885628%2F907faca2-dbb5-4fda-a432-8e1f5a280299.png</url>
      <title>DEV Community: Bart Amin</title>
      <link>https://dev.to/bartamin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bartamin"/>
    <language>en</language>
    <item>
      <title>CDRAG: RAG with LLM-guided document retrieval — outperforms standard cosine retrieval on legal QA</title>
      <dc:creator>Bart Amin</dc:creator>
      <pubDate>Sat, 18 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/bartamin/cdrag-rag-with-llm-guided-document-retrieval-outperforms-standard-cosine-retrieval-on-legal-qa-4o84</link>
      <guid>https://dev.to/bartamin/cdrag-rag-with-llm-guided-document-retrieval-outperforms-standard-cosine-retrieval-on-legal-qa-4o84</guid>
      <description>&lt;p&gt;Hi all,&lt;/p&gt;

&lt;p&gt;I developed an extension of the CRAG (Clustered RAG) framework that uses LLM-guided, cluster-aware retrieval. Standard RAG retrieves the top-K most similar documents from the entire corpus using cosine similarity. While effective, this approach is blind to the semantic structure of the document collection and may under-retrieve documents that are relevant at a higher level of abstraction.&lt;/p&gt;
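&lt;p&gt;For reference, the standard-RAG baseline described above can be sketched in a few lines of NumPy (the function and variable names here are illustrative, not taken from the repo):&lt;/p&gt;

```python
import numpy as np

def standard_rag_retrieve(query_vec, doc_vecs, k):
    """Baseline RAG retrieval: top-k cosine similarity over the whole corpus."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity of every doc to the query
    top = np.argsort(scores)[::-1][:k]  # indices of the k best-scoring docs
    return [(int(i), float(scores[i])) for i in top]

# Toy corpus: 6 random embedding vectors; the query is a scaled copy of doc 2,
# so doc 2 must come back first (cosine similarity ignores vector length).
rng = np.random.default_rng(1)
docs = rng.normal(size=(6, 4))
query = 2.0 * docs[2]
hits = standard_rag_retrieve(query, docs, 3)
```

&lt;p&gt;Note that the baseline spends its entire budget k on a single global ranking; this is exactly the behaviour CDRAG replaces.&lt;/p&gt;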

&lt;p&gt;CDRAG (Clustered Dynamic RAG) addresses this with a two-stage process: offline indexing (steps 1-2) and query-time routing (steps 3-4):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pre-cluster all (embedded) documents into semantically coherent groups&lt;/li&gt;
&lt;li&gt;Extract LLM-generated keywords per cluster to summarise its content&lt;/li&gt;
&lt;li&gt;At query time, route the query through an LLM that selects relevant clusters and allocates a document budget across them&lt;/li&gt;
&lt;li&gt;Perform cosine-similarity retrieval within the selected clusters only&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This distributes the retrieval budget intelligently across the corpus rather than spreading it blindly over all documents.&lt;/p&gt;
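&lt;p&gt;The query-time stage can be sketched as follows. The LLM routing call itself is out of scope here, so the per-cluster budgets it would produce are passed in as a plain dict; all names are illustrative rather than taken from the repo:&lt;/p&gt;

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, doc_ids, k):
    """Return the k most cosine-similar docs as (doc_id, score) pairs."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(scores)[::-1][:k]
    return [(doc_ids[i], float(scores[i])) for i in order]

def cdrag_retrieve(query_vec, doc_vecs, cluster_labels, cluster_budgets):
    """Retrieve only inside the clusters the routing LLM selected.

    cluster_budgets maps cluster id to the number of documents the router
    allocated to that cluster (supplied directly here, mocking the LLM).
    """
    hits = []
    for cid, budget in cluster_budgets.items():
        member_idx = np.flatnonzero(cluster_labels == cid)
        if budget > 0 and member_idx.size > 0:
            k = min(budget, member_idx.size)
            hits.extend(cosine_top_k(query_vec, doc_vecs[member_idx],
                                     member_idx.tolist(), k))
    # Merge per-cluster results into one ranked list.
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Toy corpus: 8 docs in 3 clusters; the query points at doc 3 in cluster 1.
docs = np.array([[1.0, 0.0, 0.0, 0.0], [0.9, 0.1, 0.0, 0.0], [0.8, 0.2, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0], [0.0, 0.9, 0.1, 0.0],
                 [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.9, 0.1], [0.0, 0.0, 1.0, 0.2]])
labels = np.array([0, 0, 0, 1, 1, 2, 2, 2])
query = np.array([0.05, 1.0, 0.05, 0.0])
# Suppose the router picked cluster 1 (budget 2) and cluster 2 (budget 1).
hits = cdrag_retrieve(query, docs, labels, {1: 2, 2: 1})
```

&lt;p&gt;Cluster 0 is never scored, so its documents cannot eat into the budget; that per-cluster allocation is the point of the routing step.&lt;/p&gt;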

&lt;p&gt;Evaluated on 100 legal questions from the legal RAG bench dataset, scored by an LLM judge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faithfulness: +12% over standard RAG&lt;/li&gt;
&lt;li&gt;Overall quality: +8%&lt;/li&gt;
&lt;li&gt;Outperforms standard RAG on 5 of 6 metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code and a full writeup are available on GitHub. I'd be interested to hear whether others have explored similar cluster-routing approaches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/BartAmin/Clustered-Dynamic-RAG" rel="noopener noreferrer"&gt;https://github.com/BartAmin/Clustered-Dynamic-RAG&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvderlfy1d5dsl2pw7hyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvderlfy1d5dsl2pw7hyo.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>agents</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
