
Jeffrey Ip for Confident AI

Originally published at confident-ai.com

What is Retrieval Augmented Generation (RAG)? 🚀

TL;DR

In this article, I’m going to talk about what RAG is and how to implement a RAG-based LLM application (yes, with a complete code sample 😚)

Let’s dive right in.


DeepEval - open-source evaluation framework for LLM applications

DeepEval is a framework that helps engineers evaluate the performance of their LLM applications by providing default metrics to measure hallucination, relevancy, and much more.

We are just starting out, and we really want to help more developers build safer AI apps. Would you mind giving it a star to help spread the word, please? 🥺❤️🥺

🌟 DeepEval on GitHub



What is RAG?

Retrieval augmented generation (RAG) is a technique in NLP that allows LLMs like ChatGPT to generate customized outputs that are outside the scope of the data they were trained on. An LLM application without RAG is akin to asking ChatGPT to summarize an email without providing the actual email as context.

A RAG system consists of two primary components: the retriever and the generator.

The retriever is responsible for searching through the knowledge base for the pieces of information most relevant to the given input, which are referred to as retrieval results. The generator then utilizes these retrieval results to craft a series of prompts based on a predefined prompt template, producing a coherent and relevant response to the input.

Here’s a diagram of a RAG architecture.

A typical RAG architecture

In most cases, your “knowledge base” consists of vector embeddings stored in a vector database like ChromaDB, and your “retriever” will 1) embed the given input at runtime, 2) search through the vector space containing your data to find the top K most relevant retrieval results, and 3) rank the results based on relevancy (or distance to your vectorized input embedding). These results are then processed into a series of prompts and passed on to your “generator”, which is your LLM of choice (GPT-4, Llama 2, etc.).
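
To make that flow concrete, here is a minimal sketch of the retrieve-then-generate loop. The collection follows ChromaDB’s query API, while llm stands in for whichever chat model you call; the function name and prompt wording are illustrative, and a full runnable version is built step by step later in this article.

def rag_answer(question, collection, llm, k=3):
    # 1) Embed the question and fetch the top K most relevant chunks from the vector database
    results = collection.query(query_texts=[question], n_results=k)
    retrieved_chunks = results["documents"][0]

    # 2) Stitch the retrieval results into a prompt using a predefined template
    context = "\n\n".join(retrieved_chunks)
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

    # 3) The generator (your LLM of choice) produces the final response
    return llm(prompt)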


For more curious users, here are the models a retriever commonly employs to extract the most pertinent retrieval results:

  1. Neural Network Embeddings (e.g. OpenAI or Cohere’s embedding models): ranks documents based on their proximity in a multidimensional vector space, enabling an understanding of textual relationships and relevance between an input and the document corpus.

  2. Best Match 25 (BM25): a probabilistic retrieval model that enhances text retrieval precision. By considering term frequencies with inverse document frequencies, it takes into account term significance, ensuring that both common and rare terms influence the relevance ranking.

  3. TF-IDF (Term Frequency — Inverse Document Frequency): calculates the significance of a term within a document relative to the broader corpus. By weighing a term’s occurrence in a document against its rarity across the corpus, it ensures a comprehensive relevance ranking (see the small scoring sketch after this list).

  4. Hybrid Search: optimizes the relevance of the search results by assigning distinctive weights to different methodologies, such as Neural Network Embeddings, BM25, and TF-IDF.
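
To make the classical scoring approaches above a bit more tangible, here is a small TF-IDF ranking sketch. It assumes scikit-learn is installed, is separate from the chatbot we build below, and uses a made-up corpus and query.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG combines a retriever with a generator.",
    "Vector databases store embeddings for similarity search.",
    "BM25 is a probabilistic ranking function.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)                 # TF-IDF weights per document
query_vector = vectorizer.transform(["what is a retriever?"])  # vectorize the query the same way

# Rank documents by cosine similarity to the query vector
scores = cosine_similarity(query_vector, doc_vectors)[0]
best_index = scores.argmax()
print(corpus[best_index], scores[best_index])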


Applications

RAG has various applications across different fields due to its ability to combine retrieval and generation of text for enhanced responses. Having worked with numerous companies building LLM applications at Confident AI, here are the top four use cases I’ve seen:

  1. Customer support / user onboarding chatbots: no surprises here, retrieve data from internal documents to generate more personalized responses. Click here to read a full tutorial on how to build one yourself using LlamaIndex.

  2. Data extraction: interestingly, we can use RAG to extract relevant data from documents such as PDFs. You can find a tutorial on how to do it here.

  3. Sales enablement: retrieve data from LinkedIn profiles and email threads to generate more personalized outreach messages

  4. Content creation and enhancement: retrieve data from past message conversations to generate suggested message replies

In the following code walkthrough, we’ll be building a very generalized chatbot, and you’ll be able to customize its functionality into any of the use cases listed above by tweaking the prompts and the data stored in your vector database.


Project Setup

For this project, we’re going to build a question-answering (QA) chatbot based on your knowledge base. We’re not going to cover the part on how to index your knowledge base, as that’s a discussion for another day.

We’re going to be using Python, ChromaDB for our vector database, and OpenAI for both vector embeddings and chat completions. We’ll build the chatbot on top of your favorite Wikipedia page.

First, set up a new project directory and install the dependencies we need.

mkdir rag-llm-app
cd rag-llm-app
python3 -m venv venv
source venv/bin/activate

Your terminal should now start with something like this:

(venv)

Installing dependencies

pip install openai chromadb

Next, create a new main.py file — the entry point to your LLM application.

touch main.py

Getting your API keys

Lastly, go ahead and get your OpenAI API key here if you don’t already have one, and set it as an environment variable:

export OPENAI_API_KEY="your-openai-api-key"

You’re good to go! Let’s start coding.


Building a RAG-based LLM application

Begin by creating a Retriever class that will retrieve the most relevant data from ChromaDB for a given user question.

Open main.py and paste in the following code:

import os

import chromadb
from chromadb.utils import embedding_functions
import openai

# Create an in-memory ChromaDB client and check that it's alive
client = chromadb.Client()
client.heartbeat()

class Retriever:
    def __init__(self):
        pass

    def get_retrieval_results(self, input, k):
        # Embedding function ChromaDB uses under the hood to vectorize the input at query time
        openai_ef = embedding_functions.OpenAIEmbeddingFunction(
            api_key=os.environ["OPENAI_API_KEY"],
            model_name="text-embedding-ada-002",
        )
        # "my_collection" is assumed to already contain your indexed knowledge base
        collection = client.get_collection(name="my_collection", embedding_function=openai_ef)
        retrieval_results = collection.query(
            query_texts=[input],
            n_results=k,
        )
        return retrieval_results["documents"][0]

Here, openai_ef is the embedding function used under the hood by ChromaDB to vectorize an input. When a user sends a question to your chatbot, a vector embedding is created from this question using OpenAI’s text-embedding-ada-002 model. This vector embedding is then used by ChromaDB to perform a vector similarity search in the collection vector space, which contains data from your knowledge base (remember, we’re assuming you’ve already indexed data for this tutorial). This process allows you to search for the top K most relevant retrieval results for any given input.
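
In case you’re wondering what “already indexed” means here, below is a rough sketch (continuing from the code above) of how my_collection could be populated ahead of time, followed by a sample call to the retriever. The document texts and IDs are placeholders.

# Reuse the same embedding function so indexing and querying share one vector space
openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key=os.environ["OPENAI_API_KEY"],
    model_name="text-embedding-ada-002",
)

# Indexing step (normally done ahead of time, not inside the chatbot itself)
collection = client.get_or_create_collection(name="my_collection", embedding_function=openai_ef)
collection.add(
    documents=["First paragraph of your Wikipedia page...", "Second paragraph..."],
    ids=["chunk-1", "chunk-2"],
)

# With data in place, the retriever can fetch the top 2 most relevant chunks
retriever = Retriever()
print(retriever.get_retrieval_results("What is this page about?", k=2))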

Now that you’ve created your retriever, paste in the following code to create a generator:

...

class Generator:
    def __init__(self, openai_model="gpt-4"):
        self.openai_model = openai_model
        self.prompt_template = """
            You're a helpful assistant with a thick country accent. Answer the question below using the provided context, and if you don't know the answer, say you don't know.

            Context: {text}

            Question: {question}
        """

    def generate_response(self, input, retrieval_results):
        # Wrap each retrieval result, together with the user's question, in the prompt template
        prompts = []
        for result in retrieval_results:
            prompt = self.prompt_template.format(text=result, question=input)
            prompts.append(prompt)
        # Reverse so the most relevant retrieval result sits closest to the end of the conversation
        prompts.reverse()

        response = openai.ChatCompletion.create(
            model=self.openai_model,
            messages=[{"role": "user", "content": prompt} for prompt in prompts],
            temperature=0,
        )

        return response["choices"][0]["message"]["content"]

Here, we construct a series of prompts in the generate_response method by combining the user’s question with each of the retrieval_results provided by the retriever we built earlier. We then send this series of prompts to OpenAI to generate an answer. Using RAG, your QA chatbot can now produce more customized outputs by enhancing the generation with retrieval results!
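
As a quick sanity check, calling the generator on a couple of made-up retrieval results would look something like this (the question and chunks are placeholders, and the call requires your OPENAI_API_KEY to be set):

generator = Generator()
fake_retrieval_results = [
    "The Eiffel Tower is 330 metres tall.",
    "It was completed in 1889 for the World's Fair.",
]
# Each chunk is wrapped in the prompt template together with the question and sent to the model
print(generator.generate_response("How tall is the Eiffel Tower?", fake_retrieval_results))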

To wrap things up, let’s put everything together:

...

class Chatbot:
    def __init__(self):
        self.retriever = Retriever()
        self.generator = Generator()

    def answer(self, input):
        # Fetch the top 3 most relevant chunks, then generate a grounded answer
        retrieval_results = self.retriever.get_retrieval_results(input, k=3)
        return self.generator.generate_response(input, retrieval_results)


# Creating an instance of the Chatbot class
chatbot = Chatbot()

while True:
    user_input = input("You: ")  # Taking user input from the CLI
    response = chatbot.answer(user_input)
    print(f"Chatbot: {response}")

That’s all folks! You just built your very first RAG-based chatbot.

Conclusion

In this article, you’ve learnt what RAG is, some use cases for RAG, and how to build your own RAG-based LLM application. However, you might have noticed that building your own RAG application is pretty complicated, and indexing your data is often a non-trivial task. Luckily, there are existing open-source frameworks like LangChain and LlamaIndex that allow you to implement what we’ve demonstrated in a much simpler way.
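
For example, a rough sketch of the same QA flow in LlamaIndex looks something like this, based on its quickstart at the time of writing. The exact imports may differ depending on your LlamaIndex version, and ./data is a placeholder folder containing your documents.

from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load your documents, index them, and query them with a single engine
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What is this page about?"))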

If you like the article, don’t forget to give us a star on Github ❤️: https://github.com/confident-ai/deepeval

You can also find the full code example here: https://github.com/confident-ai/blog-examples/tree/main/rag-llm-app

Till next time!

Top comments (7)

Pavan Belagatti

Thanks for sharing valuable content on RAG. I have also started writing articles on GenAI and LLMs. It's so fascinating to see how these new technologies work.

Jeffrey Ip

That's good to hear, are you mainly writing tutorials/guides or something else?

Shreya

Detailed blog !! Keep sharing your knowledge !!

Jeffrey Ip

Thank you! don't forget to star our repo <3

Nevo David

Great stuff! Thank you for posting!

Jeffrey Ip

Glad you liked it!

Ranjan Dailata • Edited

Great blog post. It's a RAG time and let's party with "canopy"

github.com/pinecone-io/canopy/tree...