Hi all,
I developed an extension to standard RAG that uses LLM-guided, cluster-aware retrieval. Standard RAG retrieves the top-K most similar documents from the entire corpus using cosine similarity. While effective, this approach is blind to the semantic structure of the document collection and may under-retrieve documents that are relevant at a higher level of abstraction.
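For reference, the baseline this compares against can be sketched in a few lines. This is a minimal, hypothetical implementation of plain top-K cosine retrieval over the whole corpus (the function name and interface are my own, not from the project):

```python
import numpy as np

def top_k_retrieve(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 5) -> list[int]:
    """Return indices of the k documents most cosine-similar to the query."""
    # Normalise so that a dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = d @ q
    # Rank every document in the corpus, regardless of topic structure.
    return np.argsort(sims)[::-1][:k].tolist()
```

The key limitation the post addresses is visible here: the ranking is global, so a handful of large, dense topic regions can crowd out documents from smaller but relevant ones.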
CDRAG (Clustered Dynamic RAG) addresses this with a two-stage retrieval process:
1. Pre-cluster all (embedded) documents into semantically coherent groups
2. Extract LLM-generated keywords per cluster to summarise its content
3. At query time, route the query through an LLM that selects relevant clusters and allocates a document budget across them
4. Perform cosine similarity retrieval within those clusters only
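The query-time stage (steps 3 and 4) can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: `budget_per_cluster` stands in for the LLM router's output (a hypothetical interface mapping each selected cluster id to its allocated document count), and cluster assignments are assumed to be precomputed in step 1.

```python
import numpy as np

def route_and_retrieve(
    query_emb: np.ndarray,
    doc_embs: np.ndarray,
    cluster_ids: np.ndarray,
    budget_per_cluster: dict[int, int],
) -> list[int]:
    """Retrieve documents only from the clusters the router selected.

    budget_per_cluster: cluster id -> number of documents to pull,
    as allocated by the LLM router (stubbed here).
    """
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = d @ q
    selected: list[int] = []
    for cid, k in budget_per_cluster.items():
        members = np.flatnonzero(cluster_ids == cid)        # docs in this cluster
        ranked = members[np.argsort(sims[members])[::-1]]   # best-first within cluster
        selected.extend(ranked[:k].tolist())
    return selected
```

Because the budget is spent per cluster rather than globally, a small cluster the router deems relevant is guaranteed representation even when a larger cluster contains many marginally higher-scoring documents.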
This allows the retrieval budget to be distributed intelligently across the corpus rather than spread blindly over all documents.
Evaluated on 100 legal questions from the legal RAG bench dataset, scored by an LLM judge:
Faithfulness: +12% over standard RAG
Overall quality: +8%
Outperforms on 5/6 metrics
Code and full writeup available on GitHub. Interested to hear whether others have explored similar cluster-routing approaches.