<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yasir Mushtaq</title>
    <description>The latest articles on DEV Community by Yasir Mushtaq (@yasir23).</description>
    <link>https://dev.to/yasir23</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1452726%2Fd61ca938-a9d1-46d4-bfa1-85d9a38fe5e3.jpeg</url>
      <title>DEV Community: Yasir Mushtaq</title>
      <link>https://dev.to/yasir23</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yasir23"/>
    <language>en</language>
    <item>
      <title>RAG app with GROQ with unlimited rate limit!</title>
      <dc:creator>Yasir Mushtaq</dc:creator>
      <pubDate>Sun, 05 May 2024 18:09:16 +0000</pubDate>
      <link>https://dev.to/yasir23/rag-app-with-groq-with-unlimited-rate-limit-2fgn</link>
      <guid>https://dev.to/yasir23/rag-app-with-groq-with-unlimited-rate-limit-2fgn</guid>
      <description>&lt;p&gt;Today, we’re diving into the exciting world of LangChain RAG applications, a revolutionary approach to building intelligent and informative AI experiences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is LangChain RAG?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine an LLM that can not only access its vast internal knowledge but can also tap into the specific data you provide. That’s the magic of Retrieval-Augmented Generation (RAG) with LangChain. Here’s the gist (with a quick toy sketch after the list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Retrieval: LangChain’s RAG framework lets you create a custom index of your data, be it documents, code, or any other relevant information.&lt;/li&gt;
&lt;li&gt;Smart Assistant: When a user interacts with your application, LangChain retrieves the most relevant data points from your index based on their query.&lt;/li&gt;
&lt;li&gt;Supercharged LLM: This retrieved data is then fed to the LLM, essentially giving it context and boosting its ability to understand and respond to the user’s specific needs.&lt;/li&gt;
&lt;/ul&gt;
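
&lt;p&gt;To make that loop concrete, here is a toy, self-contained sketch in plain Python. This is &lt;em&gt;not&lt;/em&gt; LangChain code: the word-overlap “retriever” and the hard-coded chunks are purely illustrative stand-ins for a real embedding index, just to show the retrieve-then-augment pattern.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Toy illustration of the RAG loop (not LangChain code):
# the "index" is a plain list of chunks and retrieval is naive word overlap.
def retrieve(question, chunks, k=2):
    q_words = set(question.lower().split())
    scored = [(len(q_words.intersection(c.lower().split())), c) for c in chunks]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

def rag_prompt(question, chunks):
    context = "\n\n".join(retrieve(question, chunks))
    # This augmented prompt is what gets sent to the LLM
    return f"Context:\n{context}\n\nQuestion: {question}"

chunks = ["Groq serves open models at very high speed.",
          "LangChain chains retrieval and generation steps together."]
print(rag_prompt("How fast is Groq?", chunks))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;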

&lt;p&gt;&lt;strong&gt;Why Use LangChain RAG?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The benefits of LangChain RAG are numerous. Here are a few to get you excited:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Domain Expertise: Inject your LLM with the knowledge of your specific field, allowing it to answer complex questions and generate human-quality text tailored to your domain.&lt;/li&gt;
&lt;li&gt;Private Data Power: Use your own private data that LLMs wouldn’t normally have access to, unlocking a whole new level of personalization and accuracy.&lt;/li&gt;
&lt;li&gt;Always Up-to-Date: Keep your LLM sharp by incorporating the latest information from your data source, ensuring your application stays relevant.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Building Your First LangChain RAG App with no rate limit!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LangChain framework offers a user-friendly platform to build your RAG application. Here’s a quick peek at the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Preparation: Structure your data for efficient retrieval. This might involve cleaning, transforming, and indexing your information.&lt;/li&gt;
&lt;li&gt;LangChain Pipeline: Craft a LangChain pipeline that retrieves relevant data points based on user queries and feeds them to your chosen LLM.&lt;/li&gt;
&lt;li&gt;Interactive Interface: Design a user-friendly interface where users can interact with your AI-powered application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;So let’s dive into the code:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can start with a hosted IDE such as Google Colab, which provides free RAM and GPUs, giving you the freedom to experiment with AI/ML.&lt;/p&gt;

&lt;p&gt;Click the following link:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/"&gt;Google Colab&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s start coding!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Code along with me side by side&lt;/em&gt; (press Windows + Left Arrow to snap Colab to one half of the screen and Windows + Right Arrow to snap this blog to the other).&lt;/p&gt;

&lt;p&gt;Install the required LangChain frameworks in Colab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;! pip install langchain_community tiktoken langchain-groq langchainhub chromadb langchain langchain_core sentence-transformers pypdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sign up at &lt;a href="https://groq.com/"&gt;Groq.com&lt;/a&gt; for your free API key!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;os.environ['GROQ_API_KEY'] = '&amp;lt;your API key&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
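

&lt;p&gt;If you’d rather not hard-code the key into a notebook cell, a safer pattern (my suggestion, not part of the original setup) is to prompt for it at runtime with Python’s built-in &lt;code&gt;getpass&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
from getpass import getpass

# Prompt for the key interactively so it never appears in the saved notebook
os.environ['GROQ_API_KEY'] = getpass('Enter your Groq API key: ')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;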



&lt;p&gt;Run the following code to upload a document in Colab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from google.colab import files
uploaded = files.upload()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
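

&lt;p&gt;&lt;code&gt;files.upload()&lt;/code&gt; returns a dict mapping each uploaded filename to its contents, so you can grab the exact name to pass to the PDF loader below (a small convenience sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 'uploaded' maps filename to file contents (bytes)
pdf_path = list(uploaded.keys())[0]
print(pdf_path)  # use this name in PyPDFLoader below
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;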



&lt;p&gt;RAG code for talking to your PDF document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import bs4
from langchain import hub
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_groq import ChatGroq
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_core.embeddings import Embeddings
from langchain_community.document_loaders import TextLoader
from langchain_community.document_loaders import PyPDFLoader


#### INDEXING ####


# Load pdf
loader = PyPDFLoader("./press ctrl+space")
pages = loader.load_and_split()


# Split

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(pages)


# Embed

vectorstore = Chroma.from_documents(documents=splits,
                                    embedding=HuggingFaceEmbeddings())

retriever = vectorstore.as_retriever()


#### RETRIEVAL and GENERATION ####


# Prompt
prompt = hub.pull("rlm/rag-prompt")


# LLM

llm = ChatGroq(model_name="mixtral-8x7b-32768", temperature=0)


# Post-processing

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


# Chain

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)


# Question

rag_chain.invoke("")# write a question related to pdf data!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
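

&lt;p&gt;For example, assuming the PDF you uploaded is a research paper (a hypothetical document, purely for illustration), the final call might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ask a question grounded in the uploaded document
answer = rag_chain.invoke("What is the main conclusion of this paper?")
print(answer)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;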



&lt;p&gt;Hooray! You’ve completed the project: talking to your PDFs with a Retrieval-Augmented Generation model.&lt;/p&gt;

&lt;p&gt;I write blogs on open source projects!&lt;/p&gt;

&lt;p&gt;Stay tuned for the next blog, and follow!&lt;/p&gt;

&lt;p&gt;Next, I will build with Llama 3 and other open source models.&lt;/p&gt;

&lt;p&gt;If you run into any problems, you can join my Discord community. We build projects daily, so I named it &lt;a href="https://discord.com/invite/z38ZJ55nbv"&gt;Projects Every&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe:)&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
