<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sanjeeb Kumar Sahoo</title>
    <description>The latest articles on DEV Community by Sanjeeb Kumar Sahoo (@ksanjeeb).</description>
    <link>https://dev.to/ksanjeeb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F993419%2F1a3d571b-799d-4dc6-8f7e-30693ece6925.png</url>
      <title>DEV Community: Sanjeeb Kumar Sahoo</title>
      <link>https://dev.to/ksanjeeb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ksanjeeb"/>
    <language>en</language>
    <item>
      <title>Building a PDF AI app with LangChain &amp; OpenAI</title>
      <dc:creator>Sanjeeb Kumar Sahoo</dc:creator>
      <pubDate>Fri, 03 Nov 2023 11:13:35 +0000</pubDate>
      <link>https://dev.to/ksanjeeb/building-a-generative-ai-app-with-langchainjsts-fa8</link>
      <guid>https://dev.to/ksanjeeb/building-a-generative-ai-app-with-langchainjsts-fa8</guid>
      <description>&lt;p&gt;Hello Dev...&lt;/p&gt;

&lt;p&gt;Let's start by grasping the essential basics before we begin our project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"&lt;em&gt;What is Generative AI?&lt;/em&gt;"&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Generative AI is a technology that uses algorithms to create content, such as text, images, or even music, on its own. It learns from existing data and then produces new things, like a smart copycat with a creative streak.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;"&lt;em&gt;What is an LLM?&lt;/em&gt;"&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLM stands for "Large Language Model." It's a program that can understand and generate human-like text. Think of it as a virtual assistant that can chat with you, write articles, translate languages, and answer questions by drawing on the large body of text it was trained on. It's a powerful tool that lets computers understand and use language more like humans do.&lt;/p&gt;

&lt;p&gt;Here are a few examples&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPT-3.5 / GPT-4: Developed by OpenAI, GPT-3.5 is a powerful language model (GPT-4 is even more powerful) that can generate human-like text. It's used in chatbots, content generation, and more.&lt;/li&gt;
&lt;li&gt;Claude 2: Created by Anthropic, it is an alternative to the GPT family of language models.&lt;/li&gt;
&lt;li&gt;PaLM 2: Pathways Language Model 2 (PaLM 2) is a language model developed by Google.&lt;/li&gt;
&lt;li&gt;LLaMA: This is the LLM developed by Meta AI. Meta recently released an open-source version, known as Llama 2.
Other open-source LLMs include GPT-NeoX-20B, GPT-J, OPT-175B, BLOOM, and MPT-30B.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check out more open-source models here - &lt;a href="https://huggingface.co/blog/os-llms"&gt;https://huggingface.co/blog/os-llms&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this app, we'll work with the OpenAI model.&lt;/p&gt;

&lt;p&gt;OpenAI offers an API that allows us to interact with any of their models. You can find more information here - &lt;a href="https://platform.openai.com/examples"&gt;https://platform.openai.com/examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To use the OpenAI API, you need to sign up for an account on OpenAI's platform. It is not free, but you will receive a $5 signup bonus to experiment with the API. If you want to use it more extensively, you will need to add funds to your account. (Note that the signup bonus applies only to certain models, such as GPT-3.5, not GPT-4.)&lt;/p&gt;

&lt;p&gt;Investing just $5 in the OpenAI API opens the door to numerous experiments; you can learn and build a lot for that investment.&lt;/p&gt;

&lt;p&gt;If you only need basic functionality, you can use the OpenAI API directly.&lt;/p&gt;

&lt;p&gt;Explore what's possible with some example applications- &lt;a href="https://platform.openai.com/examples"&gt;https://platform.openai.com/examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When it comes to implementing Large Language Models (LLMs) on custom data, the direct OpenAI API can be a bit challenging to work with. That's where &lt;strong&gt;LangChain&lt;/strong&gt; comes to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;"What is LangChain?"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LangChain is a framework that makes it easy to build AI-powered applications using large language models (LLMs). It's not restricted to OpenAI; you can use any LLM.&lt;br&gt;
It provides a number of features that simplify development, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Chaining LLMs: LangChain allows you to chain multiple LLMs together to create more complex and sophisticated applications. For example, you could chain one LLM to translate a text from one language to another, and then chain another LLM to summarize the translated text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using tools: LangChain can be used with other tools and resources, such as Wikipedia and Zapier. This makes it possible to build AI-powered applications that can interact with the real world in more meaningful ways. For example, you could build an application that uses LangChain to generate a list of restaurants near the user, and then uses Zapier to book a table at the user's chosen restaurant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Conversational API: LangChain provides a conversational interface to its API. This makes it easier to interact with the API and to develop AI-powered applications that can have more natural conversations with users. For example, you could build an application that uses LangChain to answer customer questions in a more natural and engaging way.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a simple analogy to help you understand LangChain:&lt;br&gt;
Imagine that you are building a kitchen. You need to use a variety of different appliances and tools to cook a meal, such as a stove, a refrigerator, and a knife. LangChain is like a kitchen for AI developers. It provides a set of tools and resources that make it easier to build AI-powered applications.&lt;/p&gt;
&lt;/blockquote&gt;
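&lt;p&gt;The "chaining" idea above can be pictured as simple function composition, where each step's output feeds the next. Here is a minimal sketch in plain JavaScript; the two step functions are mocks standing in for real LLM calls, not LangChain APIs:&lt;/p&gt;

```javascript
// Mock "LLM" steps: stand-ins for a translation call and a summarization call.
const translateStep = async (text) => `[translated] ${text}`;
const summarizeStep = async (text) => `[summary] ${text}`;

// runChain pipes an input through a list of async steps, in order,
// which is essentially what chaining LLMs does.
async function runChain(steps, input) {
    let value = input;
    for (const step of steps) {
        value = await step(value);
    }
    return value;
}
```

&lt;p&gt;Running the chain on some text would first "translate" and then "summarize" it, mirroring the translate-then-summarize example above.&lt;/p&gt;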

&lt;p&gt;&lt;strong&gt;&lt;em&gt;"What is a VectorDB?"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A vector database is a type of database that stores and retrieves data in the form of vectors. Vectors are mathematical representations of data points, and they can be used to represent a wide variety of different types of data, including text, images, audio, and video.&lt;/p&gt;

&lt;p&gt;Vector databases are particularly well-suited for applications that involve similarity search. Similarity search is the task of finding the most similar data points to a given query data point. For example, a vector database could be used to find the most similar images to a given query image, or the most similar text documents to a given query text document.&lt;/p&gt;

&lt;p&gt;Here is an explanation of vector databases in terms of text with an example:&lt;br&gt;
Imagine you have a database of text documents, such as news articles, blog posts, or product descriptions. You can use a vector database to represent each document as a vector. This vector can contain information about the document's content, such as the words that appear in the document, the frequency of those words, and the relationships between those words.&lt;br&gt;
Once you have represented your documents as vectors, you can use a vector database to perform similarity search. This means that you can find the most similar documents to a given query document.&lt;br&gt;
For example, let's say you have a vector database of news articles. You want to find the most similar articles to a query article about the latest iPhone release. You can use the vector database to perform a similarity search, and the database will return the articles with the most similar vectors.&lt;/p&gt;

&lt;p&gt;Here is a simple example of how to represent a text document as a vector:&lt;br&gt;
Document: "The latest iPhone release is rumored to have a new triple-lens camera system and a longer battery life."&lt;br&gt;
Vector: [0.5, 0.3, 0.2, 0.1, 0.05]&lt;br&gt;
The vector elements represent the following:&lt;br&gt;
0.5: The frequency of the word "iPhone" in the document.&lt;br&gt;
0.3: The frequency of the word "camera" in the document.&lt;br&gt;
0.2: The frequency of the word "battery" in the document.&lt;br&gt;
0.1: The frequency of the word "release" in the document.&lt;br&gt;
0.05: The frequency of the word "latest" in the document.&lt;br&gt;
(In our case we are using LanceDB, but you can use any vector database.)&lt;/p&gt;
&lt;/blockquote&gt;
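&lt;p&gt;To make the idea concrete, here is a toy version of that similarity search in plain JavaScript: each document becomes a word-count vector over a small fixed vocabulary, and cosine similarity ranks how close two vectors are. (Real vector databases use learned embeddings, not raw word counts; this is only an illustration.)&lt;/p&gt;

```javascript
// A tiny fixed vocabulary; each vector position counts one of these words.
const vocabulary = ["iphone", "camera", "battery", "release", "pizza"];

// Turn a text into a word-count vector over the vocabulary.
function toVector(text) {
    const words = text.toLowerCase().split(/\W+/);
    return vocabulary.map((term) => words.filter((w) => w === term).length);
}

// Cosine similarity: dot product divided by the product of vector lengths.
function cosineSimilarity(a, b) {
    const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
    const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0));
    const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0));
    return dot / (normA * normB);
}
```

&lt;p&gt;With this, a query vector for "new iphone camera" scores much closer to an article about an iPhone release than to one about pizza, which is exactly the ranking a vector database returns.&lt;/p&gt;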

&lt;p&gt;&lt;strong&gt;&lt;em&gt;"What are Embeddings?"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Embeddings are lists of numbers that represent words or other data. These numbers help computers understand what words mean and how similar they are to each other.&lt;br&gt;
Embeddings are fundamentally &lt;strong&gt;vectors&lt;/strong&gt;, i.e. numerical representations of data, so to store them we need a VectorDB.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;"What are OpenAI Embeddings?"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;OpenAI embeddings are a specific type of embedding trained on an extensive dataset of text and code, enabling them to represent both natural language and programming.&lt;br&gt;
For example, you could use OpenAI embeddings to build a search engine that finds the most similar text documents to a given query document. You could also use OpenAI embeddings to build a recommendation system that recommends products or content to users based on their past behavior and preferences.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yT5r_Ilz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqdojcditwgn892l1udo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yT5r_Ilz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqdojcditwgn892l1udo.png" alt="Image description" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's build it.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Tech Stack -&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;1&lt;/strong&gt;.  &lt;strong&gt;NextJS&lt;/strong&gt; (Full-stack javascript framework, easily scalable, and supports SSR and multiple other features) &lt;br&gt;
&lt;strong&gt;2&lt;/strong&gt;.  &lt;strong&gt;LangChain&lt;/strong&gt; (LLM AI javascript/python framework, supports chaining, multi LLM, text search, embedding, and many other features)&lt;br&gt;
&lt;strong&gt;3&lt;/strong&gt;.  &lt;strong&gt;LanceDB&lt;/strong&gt; ( Vector Database to store embeddings, which can be further used by LLMs)&lt;br&gt;
Instead of LanceDB, you can also opt for Pinecone (a cloud vector database) for testing; it offers some free storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Creating a NextJS project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-next-app@latest your-app-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Installing dependencies.&lt;br&gt;
First, install the JavaScript version of the Lance database (LanceDB):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install vectordb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Similarly, install the JavaScript version of LangChain, since we'll be using Next.js as the backend. If your application is more AI-focused, you may prefer Python over JavaScript.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -S langchain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can read the LangChain documentation for more details - &lt;br&gt;
&lt;a href="https://js.langchain.com/docs/get_started"&gt;https://js.langchain.com/docs/get_started&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now run the App using&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Use Node.js &lt;strong&gt;v16&lt;/strong&gt; or above, preferably the &lt;strong&gt;latest&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now It's time to do some coding...&lt;/p&gt;

&lt;p&gt;Or else, you can clone the project from:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ksanjeeb"&gt;
        ksanjeeb
      &lt;/a&gt; / &lt;a href="https://github.com/ksanjeeb/PDF-AI"&gt;
        PDF-AI
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
       PDF Chat AI with Langchain and OpenAI
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1 id="user-content-pdf-ai"&gt;&lt;a class="heading-link" href="https://github.com/ksanjeeb/PDF-AI#pdf-ai"&gt;PDF-AI&lt;/a&gt;&lt;/h1&gt;
&lt;p&gt;PDF Chat AI with Langchain and OpenAI&lt;/p&gt;
&lt;/div&gt;

  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ksanjeeb/PDF-AI"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;For this app, let's consider 4 pages:&lt;/p&gt;

&lt;p&gt;1) Landing Page&lt;br&gt;
2) Selecting the Type of Content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YLS5qMls--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xixtj6zkabp6dk5fplz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YLS5qMls--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xixtj6zkabp6dk5fplz4.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2nJeaqjm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfbp3xgnhvws5zoxliek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2nJeaqjm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfbp3xgnhvws5zoxliek.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) Uploading PDF File&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hSkgUzS2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdmhqtfldcubu20rtxhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hSkgUzS2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdmhqtfldcubu20rtxhk.png" alt="Image description" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g3KZnH21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7bdttp4azsd1u76s5hl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g3KZnH21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7bdttp4azsd1u76s5hl.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) Chat with AI Page&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qusdgTDf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekpcbqbqkafqgnz578ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qusdgTDf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekpcbqbqkafqgnz578ti.png" alt="Image description" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's code it (or clone the PDF-AI repo linked earlier).&lt;/p&gt;
&lt;p&gt;File Structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jWK5Rato--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ham37subxpojfz0w409g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jWK5Rato--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ham37subxpojfz0w409g.png" alt="Image description" width="800" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the UI side, we have 3 components.&lt;br&gt;
I'm using TailwindCSS for styling; you can use any alternative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;UI Pages/Components :-&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1) &lt;strong&gt;index.js&lt;/strong&gt; (Landing Page)&lt;/p&gt;

&lt;p&gt;Click Here:- &lt;a href="https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/index.js"&gt;https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/index.js&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;2) &lt;strong&gt;upload.js&lt;/strong&gt; (Select the type and Uploading PDF)&lt;/p&gt;

&lt;p&gt;Click Here:- &lt;a href="https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/upload.js"&gt;https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/upload.js&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) &lt;strong&gt;chat.js&lt;/strong&gt; (Chat with AI page)&lt;/p&gt;

&lt;p&gt;Click Here :- &lt;a href="https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/chat.js"&gt;https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/chat.js&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's write the code for the brain (Backend)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Backend/API :-&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So for the basic version of the app we need 4 APIs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Upload API for PDF Embedding&lt;/strong&gt;:&lt;br&gt;
This API allows users to upload a PDF file. The system will process the file, generate embeddings, and store them in the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;List API for File Metadata&lt;/strong&gt;:&lt;br&gt;
Users can access this API to view the metadata of the uploaded PDF file. It provides details such as the file name and associated metadata, making it ideal for displaying in the user interface.&lt;br&gt;
Note: this project supports one file at a time. To support multiple files, implement a database that stores metadata for each file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Delete API for File Embedding&lt;/strong&gt;:&lt;br&gt;
For the scenario where we deal with one file at a time, this API allows users to delete the embedding associated with that file in the database. This step is necessary before uploading a new file to ensure no interference with previous embeddings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chat API with Prompt-Based Responses&lt;/strong&gt;:&lt;br&gt;
This API responds to user prompts. The user provides a prompt, and the system generates a response using embedding-based vector search and an LLM (GPT-3.5 or GPT-4).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
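&lt;p&gt;From the UI side, each of these endpoints is just a fetch call. As a small sketch, this helper builds the request options for the chat endpoint (the "index" and "prompt" field names follow what the query API reads from the request body; the helper itself is illustrative, not part of the repo):&lt;/p&gt;

```javascript
// Build fetch options for POST /api/v1/query.
// "index" is the vector table name and "prompt" is the user's question,
// matching the fields the chat API reads from req.body.
function buildChatRequest(index, prompt) {
    return {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ index, prompt }),
    };
}
```

&lt;p&gt;The chat page can then call fetch("/api/v1/query", buildChatRequest(tableName, userPrompt)) and render the returned message.&lt;/p&gt;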

&lt;p&gt;In NextJS, all backend code must be written inside the /pages/api directory.&lt;/p&gt;

&lt;p&gt;Make sure to create a .env.local file in the root folder and add the OpenAI API key to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.env.local&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add all the secret keys here
OPENAI_API_KEY="Add Your OpenAI key"
# Creating a folder called lanceDB will store the vector data inside it
lanceDB_URI=lanceDB/vector-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can get the OpenAI API key from here by logging into your account.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Account -&amp;gt; View API Keys -&amp;gt; Generate New Key&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://platform.openai.com/account/api-keys" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vi4ufSe5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.openai.com/API/images/opengraph.png" height="420" class="m-0" width="800"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://platform.openai.com/account/api-keys" rel="noopener noreferrer" class="c-link"&gt;
          OpenAI Platform
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Explore developer resources, tutorials, API docs, and dynamic examples to get the most out of OpenAI's platform.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--PCIr1NPR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://openaiapi-site.azureedge.net/public-assets/d/33a3bccaea/favicon.svg" width="32" height="32"&gt;
        platform.openai.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;br&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/api/v1/uploadData.js&lt;/strong&gt; (For Uploading the File)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Import necessary modules and components
import { PDFLoader } from "langchain/document_loaders/fs/pdf";
import { CSVLoader } from "langchain/document_loaders/fs/csv";
import { connect } from "vectordb";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { LanceDB } from "langchain/vectorstores/lancedb";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Define the configuration for the API, specifying a request body size limit of 4MB
export const config = {
    api: {
        bodyParser: {
            sizeLimit: '4mb' 
        }
    }
}

// Helper function to create a valid string from the input (cleaning and formatting)
function makeValidString(inputString) {
    const pattern = /^[a-z0-9-]+$/;
    const lowerCaseString = inputString.toLowerCase();
    const cleanedString = lowerCaseString.replace('.pdf', '');
    const validCharacters = [];
    for (const char of cleanedString) {
        if (pattern.test(char)) {
            validCharacters.push(char);
        } else if (char === ' ') {
            validCharacters.push('-');
        }
    }
    const validString = validCharacters.join('');
    return validString;
}

// Helper function to determine the appropriate loader based on the file type
function determineLoader(type, context, res) {
    let file;
    switch (type) {
        case 'application/pdf':
            file = new Blob([context], { type: 'application/pdf' });
            return new PDFLoader(file);
        case 'application/csv':
            file = new Blob([context], { type: 'application/csv' });
            return new CSVLoader(file);
        case 'text/plain':
            file = new Blob([context], { type: 'text/plain' });
            return new TextLoader(file);
        case 'application/raw-text':
            return new TextLoader(context);
        default:
            // Handle unsupported file types by sending a response
            res.json({ success: false, error: "Unsupported file type" });
            return null;
    }
}

// Define the main function to handle POST requests
export default async function POST(req, res) {
    try {
        let base64FileString, fileName, fileType, tableName, buffer;

        if (req.body.isFile) {
            // Extract relevant information from the request body
            base64FileString = req.body.file;
            fileName = req.body.fileName;
            buffer = Buffer.from(base64FileString, 'base64');
        }

        // Generate a table name based on the file name or input data
        tableName = req.body.isFile ? makeValidString(fileName) : fileName;
        fileType = req.body.fileType;

        // Determine the content source (file or input data) and create the appropriate loader
        const context = req.body.isFile ? buffer : req.body.input;
        const loader = determineLoader(fileType, context, res);
        if (!loader) return; // Unsupported type: response already sent

        // Load and split the content using the chosen loader
        const splitDocuments = await loader.loadAndSplit();
        const pageContentList = [];
        const metaDataList = [];

        if (splitDocuments.length &amp;gt; 0) {
            // Extract page content and generate metadata for each split document
            splitDocuments?.forEach((item, index) =&amp;gt; {
                pageContentList.push(item.pageContent);
                metaDataList.push({
                    id: index
                });
            });
        }

        // Connect to the database and create a data schema
        const db = await connect(process.env.lanceDB_URI);
        const dataSchema = [
            { vector: Array(1536), text: fileType, id: 1 }
        ];

        // Create a table in the database
        const table = await db.createTable(tableName, dataSchema);

        // Convert the text to OpenAI embedding vectors and store them in the database table using LanceDB.fromTexts()
        await LanceDB.fromTexts(
            [...pageContentList],
            [...metaDataList],
            new OpenAIEmbeddings(),
            { table }
        );

        // Send a success response
        res.json({ success: true });
    } catch (err) {
        // Handle errors by sending a failure response
        res.json({ success: false, error: "" + err });
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the provided code, we establish a connection to a database and create a table for storing OpenAI embeddings...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const db = await connect(process.env.lanceDB_URI);
const dataSchema = [
    { vector: Array(1536), text: fileType, id: 1 }
];

const table = await db.createTable(tableName, dataSchema);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then, we split the content, create OpenAI embedding vectors, and store them in a table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await LanceDB.fromTexts(
    [...pageContentList],
    [...metaDataList],
    new OpenAIEmbeddings(),
    { table }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/api/v1/uploadData.js"&gt;https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/api/v1/uploadData.js&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/api/v1/query.js&lt;/strong&gt; (Chat API)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Import necessary modules and components
import { LanceDB } from "langchain/vectorstores/lancedb";
import { OpenAI } from "langchain/llms/openai";
import { VectorDBQAChain } from "langchain/chains";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { connect } from "vectordb";

// Define the main function that handles POST requests
export default async function POST(request, response) {
  try {
    // The request body contains the data from the client
    const body = request.body;

    // Connect to the LanceDB database using the provided URI
    const db = await connect(process.env.lanceDB_URI);

    // Open a table in the database based on the index specified in the request body
    const table = await db.openTable(body.index);

    // Create a LanceDB instance with OpenAI embeddings and the selected table
    const vectorStore = new LanceDB(new OpenAIEmbeddings(), { table });

    // Create an OpenAI model instance with the specified parameters
    const model = new OpenAI({
      modelName: "gpt-3.5-turbo",
      // Additional options can be added here
    });

    // Create a VectorDBQAChain instance from the model, vectorStore, and configuration
    const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
      k: 1,
      returnSourceDocuments: true,
    });

    // Call the chain with the provided query (prompt) and respond with the result
    const result = await chain.call({ query: body.prompt });
    response.json({ success: true, message: result });

  } catch (e) {
    // Handle any errors that occur during the process and respond with an error message
    response.json({ success: false, error: "Error :" + e });
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The client sends a POST request to the API route with the prompt to be answered. The server then uses the VectorDBQAChain instance to perform the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It searches the LanceDB vector store for the most similar documents to the prompt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It sends the top K documents to the OpenAI LLM for QA.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It returns the results of the QA query to the client.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(The option k sets how many of the most similar documents are retrieved from the vector store and sent to the OpenAI LLM for question answering.&lt;br&gt;
Here k is set to 1, so only the single most relevant document is passed to the model as context when it generates its answer.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/api/v1/query.js"&gt;https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/api/v1/query.js&lt;/a&gt;&lt;/p&gt;
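&lt;p&gt;On the client, the interesting part of the response is the chain output nested under the message field of the { success, message } envelope returned by the handler above. Here is a minimal sketch of unpacking it, assuming chain.call resolves with text and sourceDocuments fields (as it does when returnSourceDocuments is true); the helper name is made up for illustration:&lt;/p&gt;

```javascript
// Hypothetical helper: unpack the JSON returned by /api/v1/query.
// The { success, message } envelope comes from the API handler;
// message holds the raw chain.call result.
function parseQueryResponse(json) {
  if (!json.success) {
    throw new Error(json.error);
  }
  return {
    // The generated answer text
    answer: json.message.text,
    // Page content of the documents the answer was grounded on
    sources: (json.message.sourceDocuments || []).map((d) => d.pageContent),
  };
}
```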

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/api/v1/listIndex&lt;/strong&gt; (Listing existing Vector Table/Files)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { connect } from "vectordb";

export default async function GET(request, response) {
  try {
    const db = await connect(process.env.lanceDB_URI);
    // tableNames() returns the list of existing vector tables
    const tables = await db.tableNames();
    response.json({ success: true, data: tables });
  } catch (e) {
    response.json({ success: false, error: "Error :" + e });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/api/v1/listIndex.js"&gt;https://github.com/ksanjeeb/PDF-AI/blob/master/src/pages/api/v1/listIndex.js&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/api/v1/deleteIndex&lt;/strong&gt; (Deleting a Vector Table)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { connect } from "vectordb";

export default async function POST(request, response) {
    try {
        // The table to drop is named in the request body
        const indexName = request.body.index;
        const db = await connect(process.env.lanceDB_URI);
        await db.dropTable(indexName);
        response.json({ success: true });
    } catch (e) {
        response.json({ success: false, error: "Error :" + e });
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
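&lt;p&gt;To tie the three routes together, here is a small client-side wrapper sketch. The paths match the endpoints above, but the helper names are made up for illustration, and the fetchFn parameter (normally the browser's fetch) is injected so the helpers are easy to test:&lt;/p&gt;

```javascript
// Hypothetical client-side helpers for the three API routes above.
// fetchFn is injected (pass the browser fetch in a real app) so the
// helpers can be exercised with a mock.
async function callApi(fetchFn, path, body) {
  const res = await fetchFn(path, {
    method: body ? "POST" : "GET",
    headers: { "Content-Type": "application/json" },
    body: body ? JSON.stringify(body) : undefined,
  });
  return res.json();
}

// Ask a question against a previously indexed PDF
function askPdf(fetchFn, index, prompt) {
  return callApi(fetchFn, "/api/v1/query", { index, prompt });
}

// List the existing vector tables
function listIndexes(fetchFn) {
  return callApi(fetchFn, "/api/v1/listIndex");
}

// Drop a vector table
function deleteIndex(fetchFn, index) {
  return callApi(fetchFn, "/api/v1/deleteIndex", { index });
}
```

&lt;p&gt;For example, &lt;code&gt;await askPdf(fetch, "my-pdf", "Summarize page 2")&lt;/code&gt; posts the prompt against the &lt;code&gt;my-pdf&lt;/code&gt; table.&lt;/p&gt;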



&lt;p&gt;Hooray! Your app is ready! Congratulations!&lt;/p&gt;

&lt;p&gt;Similarly, you can implement text-based AI chat, read content from URLs, transcribe YouTube videos, and much more.&lt;br&gt;
Check these docs to learn more about the different data sources:- &lt;a href="https://js.langchain.com/docs/modules/data_connection/"&gt;https://js.langchain.com/docs/modules/data_connection/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://js.langchain.com/docs/integrations/document_loaders/file_loaders/"&gt;https://js.langchain.com/docs/integrations/document_loaders/file_loaders/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://js.langchain.com/docs/integrations/document_loaders/web_loaders/"&gt;https://js.langchain.com/docs/integrations/document_loaders/web_loaders/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also chain multiple LLM calls together for more complex apps.&lt;br&gt;
&lt;a href="https://js.langchain.com/docs/modules/chains/"&gt;https://js.langchain.com/docs/modules/chains/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Integrating with other vector DBs:&lt;br&gt;
&lt;a href="https://js.langchain.com/docs/integrations/vectorstores"&gt;https://js.langchain.com/docs/integrations/vectorstores&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Integrating with different LLMs and chat models:&lt;br&gt;
&lt;a href="https://js.langchain.com/docs/integrations/llms/"&gt;https://js.langchain.com/docs/integrations/llms/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://js.langchain.com/docs/integrations/chat/"&gt;https://js.langchain.com/docs/integrations/chat/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you so much.&lt;br&gt;
If you have any doubts, add them in the comments section.&lt;br&gt;
Happy Hacking.....&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@hdbernd?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash"&gt;Bernd 📷 Dittrich&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/a-close-up-of-a-white-wall-with-writing-on-it-1EhmvvIWNcg?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>langchain</category>
      <category>ai</category>
      <category>openai</category>
    </item>
  </channel>
</rss>
