Pavan Belagatti
Agentic RAG for Developers!

In the rapidly evolving field of artificial intelligence, Agentic RAG has emerged as a game-changing approach to information retrieval and generation. This advanced technique combines the power of Retrieval Augmented Generation (RAG) with autonomous agents, offering a more dynamic and context-aware method to process and generate information. As businesses and researchers seek to enhance their AI capabilities, understanding and implementing Agentic RAG has become crucial to staying ahead in the competitive landscape.

This guide delves into the intricacies of mastering Agentic RAG using two powerful tools: LangChain and CrewAI. It explores the evolution from traditional RAG to its agentic counterpart, highlighting the key differences and benefits. The article also examines how LangChain serves as the foundation for implementing Agentic RAG and demonstrates the ways CrewAI can be leveraged to create more sophisticated and efficient AI systems.

The Evolution of RAG: From Traditional to Agentic

Limitations of traditional RAG

Naive RAG

Traditional Retrieval Augmented Generation (RAG) systems combine Large Language Models (LLMs) with vector databases to overcome the limitations of off-the-shelf LLMs. However, these systems struggle with multi-step tasks and are not well suited to complex use cases. They work fine for simple Q&A chatbots, support bots, and the like, but as soon as things get more complex, the traditional RAG approach falls short. In particular, it often struggles to contextualize retrieved data, leading to superficial responses that do not fully address the nuances of a query.

Introducing Agentic RAG

Agentic RAG

Agentic RAG is an evolution of traditional RAG that integrates AI agents into the pipeline. The approach employs autonomous agents that analyze initial findings and strategically select the most effective tools for further data retrieval. These agents can break a complex task down into several subtasks that are easier to handle. They also possess memory (such as chat history), so they know what has already happened and what steps need to be taken next.

These agents can also call an API or external tool whenever one is needed to solve a task. They can apply logic and reasoning, then take actions accordingly, which is what makes the agentic RAG approach so powerful. The system deconstructs complex queries into manageable segments, assigns a specific agent to each part, and maintains seamless coordination among them.
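As a rough, framework-agnostic sketch of that loop, the snippet below decomposes a query into subtasks, routes each one to a tool, and records the steps in memory. The `decompose` and `route` helpers are illustrative stand-ins for LLM-driven planning, not any specific library's API:

```python
# Conceptual sketch of the agentic RAG loop described above.
# Tool names and helpers are illustrative, not part of any framework.

def decompose(query: str) -> list[str]:
    # A real agent would use an LLM to split a complex query;
    # here we naively split on "and" for illustration.
    return [part.strip() for part in query.split(" and ")]

def route(subtask: str) -> str:
    # Pick a tool per subtask: a vector store for known topics,
    # web search for everything else.
    return "vectorstore" if "pricing" in subtask else "web_search"

def run_agentic_rag(query: str, memory: list[str]) -> list[tuple[str, str]]:
    plan = []
    for subtask in decompose(query):
        tool = route(subtask)
        memory.append(f"{subtask} -> {tool}")  # chat-history-style memory
        plan.append((subtask, tool))
    return plan

memory: list[str] = []
plan = run_agentic_rag("summarize competitor pricing and find recent reviews", memory)
```

In a real system each `(subtask, tool)` pair would be handed to an agent that executes the tool call and feeds the result back into the shared memory.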

Key benefits and use cases of Agentic RAG

Naive RAG vs Agentic RAG

Agentic RAG offers numerous advantages over traditional systems. Its autonomous agents work independently, allowing for efficient handling of complex queries in parallel. The system's adaptability enables dynamic adjustment of strategies based on new information or evolving user needs. In marketing, Agentic RAG can analyze customer data to generate personalized communications and provide real-time competitive intelligence. It also enhances decision-making in campaign management and improves search engine optimization strategies.

LangChain: The Backbone of Agentic RAG

Overview of LangChain

LangChain

LangChain has emerged as a powerful framework for building Large Language Model (LLM) applications, and its ecosystem of integrations has grown rapidly. It offers broad compatibility with models, vector stores, and other platforms compared to many alternative frameworks. At its core, LangChain composes LLM calls with retrieval, memory, and tool use, and its modular design allows for flexibility and adaptability in processing natural language interactions.

Essential components for Agentic RAG

LangChain's architecture supports both short-term and long-term memory, which is crucial for Agentic RAG systems. Short-term memory relies on in-context learning, while long-term memory leverages external vector stores for durable retention and fast retrieval of large amounts of information. These components help LangChain-based systems track context, tone, and nuance within conversations, leading to more natural interactions.
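The two memory styles can be contrasted with a toy sketch: a bounded list standing in for the in-context window, and a similarity-searched store standing in for a vector database. These are conceptual stand-ins, not LangChain APIs:

```python
# Toy contrast of short-term vs long-term memory (not LangChain APIs).
from difflib import SequenceMatcher

# Short-term memory: recent turns kept in the prompt context (bounded list).
short_term: list[str] = []

def remember(turn: str, window: int = 3) -> None:
    short_term.append(turn)
    del short_term[:-window]  # keep only the last `window` turns

# Long-term memory: an external store searched by similarity.
long_term: list[str] = []

def recall(query: str) -> str:
    # A vector store would use embeddings; string similarity stands in here.
    return max(long_term, key=lambda doc: SequenceMatcher(None, query, doc).ratio())

for turn in ["hi", "what is RAG?", "thanks", "what about agents?"]:
    remember(turn)  # oldest turn falls out of the window

long_term.extend(["RAG combines retrieval with generation",
                  "Agents can call tools autonomously"])
```

The key difference the sketch shows: short-term memory forgets by design (the window slides), while long-term memory keeps everything and retrieves by relevance.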

Integrating LangChain with external tools

To implement Agentic RAG, LangChain can be integrated with various external tools. This integration introduces intelligent agents that can plan, reason, and learn over time. A typical system includes document agents for question answering and summarization, plus a meta-agent that oversees and coordinates their efforts. This hierarchical structure improves performance on tasks that require strategic planning and nuanced decision-making.
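The document-agent/meta-agent hierarchy can be pictured with a minimal dispatch sketch. The agent names and the routing rule below are hypothetical, purely for illustration:

```python
# Toy sketch of the document-agent / meta-agent hierarchy described above.
# Document names and the dispatch rule are illustrative, not LangChain APIs.

document_agents = {
    "billing.pdf": lambda q: f"billing answer to: {q}",
    "product.pdf": lambda q: f"product answer to: {q}",
}

def meta_agent(question: str) -> str:
    # The meta-agent decides which document agent should handle the question.
    doc = "billing.pdf" if "invoice" in question else "product.pdf"
    return document_agents[doc](question)

answer = meta_agent("Where is my invoice?")
```

In a real LangChain setup, each document agent would wrap a retriever over one document, and the meta-agent would be an LLM-backed router rather than a keyword check.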

Leveraging CrewAI for Advanced Agentic RAG

Introduction to CrewAI

CrewAI

CrewAI is an open-source framework designed to create and manage teams of intelligent agents. Unlike traditional chatbots, these agents can collaborate and share information, tackling complex tasks together. CrewAI serves as a sophisticated platform that empowers organizations to structure their AI operations effectively, simulating the roles and responsibilities of a software development team.

Implementing multi-agent workflows

CrewAI facilitates multi-agent workflows by allowing users to define tasks, roles, goals, and backstories for agents. This approach enhances productivity, decision-making processes, and product design within organizations. The framework supports various collaboration models, including sequential, hierarchical, and asynchronous workflows. By leveraging CrewAI, teams can streamline operations and maximize efficiency through coordinated efforts.
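The sequential collaboration model can be sketched in plain Python: each agent has a role and a goal, and each task's output becomes context for the next. This is a conceptual illustration of the pattern, not CrewAI's actual API:

```python
# Plain-Python sketch of a sequential, CrewAI-style workflow.
# Class shapes, roles, and the lambda "work" functions are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    goal: str
    work: Callable[[str], str]  # stands in for an LLM call

@dataclass
class Task:
    description: str
    agent: Agent

def run_sequential(tasks: list[Task], context: str = "") -> str:
    for task in tasks:
        # Each agent works on its task description plus the running context.
        context = task.agent.work(f"{task.description}\n{context}")
    return context

researcher = Agent("Researcher", "gather facts", lambda p: "facts: RAG uses retrieval")
writer = Agent("Writer", "draft copy", lambda p: f"draft based on [{p.splitlines()[-1]}]")
result = run_sequential([Task("research RAG", researcher), Task("write summary", writer)])
```

The point of the sketch is the handoff: the writer never sees the raw query, only the researcher's output, which is exactly what a sequential CrewAI process coordinates for you.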

Optimizing agent interactions and decision-making

CrewAI optimizes agent interactions through features like role-playing, focus maintenance, and tool utilization. The platform incorporates guardrails for safety measures and protocols, ensuring reliable and ethical operations. Memory capabilities enable agents to store and recall past interactions, enhancing decision-making processes. By integrating CrewAI with advanced language models like Groq's Llama3-70B, organizations can further improve content generation and task performance.

Agentic RAG Workflow Tutorial 

We are going to see how agents can be involved in a RAG system to retrieve the most relevant information by calling tools.
I'll be using SingleStore Notebooks (similar to Google Colab or Jupyter Notebooks, but with added features) to run my code. You can do the same: SingleStore has a free shared tier, so you can sign up and start using the service for free.

Sign up now and get started with your notebook.

Once you create your SingleStore notebook, add the code below and run it step by step.

SingleStore Notebooks

Install the required libraries



!pip install crewai==0.28.8 crewai_tools==0.1.6 langchain_community==0.0.29 sentence-transformers langchain-groq --quiet




import os

from crewai import Agent, Crew, Task
from crewai_tools import PDFSearchTool, tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI



Mention the Groq API Key



import os

# Set the API key
os.environ['GROQ_API_KEY'] = 'Add Your Groq API Key'



Mention the LLM being used



llm = ChatOpenAI(
    openai_api_base="https://api.groq.com/openai/v1",
    openai_api_key=os.environ['GROQ_API_KEY'],
    model_name="llama3-8b-8192",
    temperature=0.1,
    max_tokens=1000,
)




import requests

# Download the "Attention Is All You Need" paper to use as our source PDF
pdf_url = 'https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf'
response = requests.get(pdf_url)

with open('attention_is_all_you_need.pdf', 'wb') as file:
    file.write(response.content)


Create a RAG tool variable to pass our PDF



rag_tool = PDFSearchTool(pdf='attention_is_all_you_need.pdf',
    config=dict(
        llm=dict(
            provider="groq", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama3-8b-8192",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="huggingface", # or openai, ollama, ...
            config=dict(
                model="BAAI/bge-small-en-v1.5",
                #task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    )
)




rag_tool.run("How did self-attention mechanism evolve in large language models?")



Mention the Tavily API Key



import os

# Set the Tavily API key
os.environ['TAVILY_API_KEY'] = 'Add Your Tavily API Key'




web_search_tool = TavilySearchResults(max_results=3)




web_search_tool.run("What is self-attention mechanism in large language models?")



Define a Tool



@tool
def router_tool(question):
  """Router Function"""
  if 'self-attention' in question:
    return 'vectorstore'
  else:
    return 'web_search'


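The routing logic above is a simple keyword check. As a plain-Python illustration (independent of the crewai `@tool` decorator), it behaves like this:

```python
def route(question: str) -> str:
    # Same keyword check as router_tool: questions mentioning self-attention
    # go to the PDF vector store; everything else goes to web search.
    return 'vectorstore' if 'self-attention' in question else 'web_search'

route("What is self-attention in Transformers?")  # routed to the vector store
route("Latest AI industry news")                  # routed to web search
```

In a production router, you would typically let an LLM classify the question instead of matching keywords, but the contract is the same: return the name of the tool to invoke.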

Create Agents to work with



Router_Agent = Agent(
  role='Router',
  goal='Route user question to a vectorstore or web search',
  backstory=(
    "You are an expert at routing a user question to a vectorstore or web search."
    "Use the vectorstore for questions on concepts related to the self-attention mechanism in Transformers."
    "You do not need to be stringent with the keywords in the question related to these topics. Otherwise, use web-search."
  ),
  verbose=True,
  allow_delegation=False,
  llm=llm,
)



Here is the complete step-by-step video tutorial to follow along.

Here is the complete notebook code.
