Sathish Panthagani
Building a React.dev RAG chatbot using Vercel AI SDK

Introduction

In this blog post, I'll share my journey of building a React.dev AI assistant that retrieves information from the react.dev website and answers the user's prompts. The main goal is to use the information available on the react.dev website as the assistant's knowledge base.

Tech Stack

  • Next.js: A React-based framework for server-side rendering and routing.
  • OpenAI GPT models: The generative LLM powering the assistant, accessed via the OpenAI API.
  • Vercel AI SDK: A TypeScript toolkit designed to help developers build AI-powered applications; see their documentation here.
  • Supabase: A Postgres database used to store vector embeddings via the pgvector extension.
  • Tailwind CSS: A utility-first CSS framework that provides flexibility while keeping the UI clean and customisable.
  • Drizzle ORM: A library that provides utilities to interact with the database and run vector similarity searches against pgvector.

Prerequisites:

The following prerequisites are required to set up this project.

  • A Supabase database with the pgvector extension enabled - this will be used to store vector embeddings of the website content.
  • An OpenAI API key - if you don't have one, get it from here

Please follow this link to enable the pgvector extension for the database provisioned in Supabase.


Project setup

This project uses Next.js with Tailwind CSS for the frontend UI, and the Vercel AI SDK on the backend to integrate chat with the OpenAI API. It uses cheerio to scrape webpages and stores the embeddings in a Supabase pgvector database.

Creating NextJS Project

You can easily set up a Next.js project by running the following command, or follow the instructions from the Next.js [installation guide](https://nextjs.org/docs/app/getting-started/installation):

npx create-next-app@latest

Follow the on-screen instructions and it will create a Next.js project with a basic setup.

Installing required libraries

npm install @ai-sdk/openai ai drizzle-orm drizzle-zod zod @langchain/community

Environment configuration

Create a .env file in the root directory of your project to manage variables used in the project.

Add the following variables, replacing the placeholders with the actual keys.

# .env

OPENAI_API_KEY=YOUR_OPENAI_API_KEY
DATABASE_URL=YOUR_SUPABASE_DATABASE_URL
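Since both variables are required at startup, it can help to fail fast when one is missing. Here is a minimal sketch of such a guard (the `assertEnv` helper and its file are my own additions, not part of the project):

```typescript
// env.ts (hypothetical) - fail fast when a required env variable is missing.
const required = ['OPENAI_API_KEY', 'DATABASE_URL'] as const;

export function assertEnv(env: Record<string, string | undefined>): void {
  // Collect every required key that is absent or empty.
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}
```

Call `assertEnv(process.env)` once at startup so a misconfigured deployment fails with a clear message instead of a confusing runtime error later.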

RAG implementation

The assistant is based on Retrieval-Augmented Generation (RAG), with a knowledge base built from the content of the react.dev website. You can read more about RAG [here](https://sdk.vercel.ai/docs/guides/rag-chatbot#what-is-rag).

Step 1: Scrape the website content using the cheerio loader. The following code takes a URL as input and returns the document content scraped from the page.

import 'cheerio';
import { CheerioWebBaseLoader } from '@langchain/community/document_loaders/web/cheerio';
...
const pTagSelector = 'p';
const cheerioLoader = new CheerioWebBaseLoader(url, {
    selector: pTagSelector,
  });
const docs = await cheerioLoader.load();


Step 2: Split the website content into smaller chunks so that embeddings can be generated from shorter pieces of text.

import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
...
const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 2000,
    chunkOverlap: 100,
  });
const allSplits = await splitter.splitDocuments(docs);
...
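To make the chunking idea concrete, here is a simplified sketch of fixed-size chunking with overlap. This is only an illustration of the concept, not the actual RecursiveCharacterTextSplitter algorithm, which additionally tries to split on natural boundaries such as paragraphs and sentences:

```typescript
// A simplified illustration of chunking with overlap. Each chunk starts
// (chunkSize - overlap) characters after the previous one, so consecutive
// chunks share `overlap` characters of context.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) {
    throw new RangeError('overlap must be smaller than chunkSize');
  }
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap;
  }
  return chunks;
}
```

The overlap matters for RAG: without it, a sentence cut in half at a chunk boundary loses meaning in both halves, hurting retrieval quality.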

Step 3: Generate an embedding for each chunk and store it in the vector database, so that it can be queried from the chat API.

const dbResult = await createResources(
    allSplits.map((doc) => ({
      content: doc.pageContent,
      source: doc.metadata.source,
    }))
  );

// actions/resources.ts
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
// Adjust these import paths to match your project structure:
import { db } from '@/lib/db';
import { resources, insertResourceSchema, type NewResourceParams } from '@/lib/db/schema';

const embeddingModel = openai.embedding('text-embedding-ada-002');

export const generateEmbedding = async (value: string): Promise<number[]> => {
  const input = value.replaceAll('\n', ' ');
  const { embedding } = await embed({
    model: embeddingModel,
    value: input,
  });
  return embedding;
};

export const createResources = async (values: NewResourceParams[]) => {
  try {
    for (const input of values) {
      const { content, source } = insertResourceSchema.parse(input);

      const embedding = await generateEmbedding(content);
      await db.insert(resources).values({
        content,
        source,
        embedding,
      });
    }

    return 'Resources successfully created and embedded.';
  } catch (error) {
    return error instanceof Error && error.message.length > 0
      ? error.message
      : 'Error, please try again.';
  }
};
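The createResources function above assumes a resources table with a pgvector column. A possible Drizzle schema might look like the following sketch (the file path, index name, and column layout are my assumptions; the 1536 dimensions match what text-embedding-ada-002 returns):

```typescript
// lib/db/schema.ts (hypothetical path) - a sketch of the resources table.
import { index, pgTable, serial, text, vector } from 'drizzle-orm/pg-core';

export const resources = pgTable(
  'resources',
  {
    id: serial('id').primaryKey(),
    content: text('content').notNull(),
    source: text('source'),
    // text-embedding-ada-002 produces 1536-dimensional vectors.
    embedding: vector('embedding', { dimensions: 1536 }).notNull(),
  },
  (table) => [
    // An HNSW index speeds up cosine-distance similarity search.
    index('embedding_index').using('hnsw', table.embedding.op('vector_cosine_ops')),
  ],
);
```

Without a vector index, every similarity query scans the whole table; with HNSW, pgvector performs an approximate nearest-neighbour search, which is much faster on larger knowledge bases.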

Chat API implementation

Once the knowledge base is built from the react.dev website links, the document content is stored in a Supabase table. Using the AI SDK, the chat API can be implemented by following their documentation available here.

The following is sample code to stream text from the server:

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
// Adjust this path to wherever findRelevantContent lives in your project:
import { findRelevantContent } from '@/lib/ai/embedding';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    system: `You are a helpful assistant. Check your knowledge base before answering any questions.
    Only respond to questions using information from tool calls.
    if no relevant information is found in the tool calls, respond, "Sorry, I don't know."`,
    tools: {
      getInformation: tool({
        description: `get information from your knowledge base to answer questions.`,
        parameters: z.object({
          question: z.string().describe('the users question'),
        }),
        execute: async ({ question }) => findRelevantContent(question),
      }),
    },
  });

  return result.toDataStreamResponse();
}

With the above prompt, the model always consults the knowledge base by triggering the getInformation tool, which in turn executes the findRelevantContent function. This function queries the vector database using cosine similarity, finds the best matches, and returns the relevant context to the LLM, which then uses it to generate a response to the user's query.

import { cosineDistance, desc, gt, sql } from 'drizzle-orm';
// Adjust these import paths to match your project structure:
import { db } from '@/lib/db';
import { resources } from '@/lib/db/schema';
import { generateEmbedding } from '@/actions/resources';

const similarityThreshold = 0.5; // minimum cosine similarity to count as a match; tune for your content

export const findRelevantContent = async (userQuery: string) => {
  const userQueryEmbedded = await generateEmbedding(userQuery);
  const similarity = sql<number>`1 - (${cosineDistance(
    resources.embedding,
    userQueryEmbedded,
  )})`;
  const similarGuides = await db
    .select({ name: resources.content, similarity })
    .from(resources)
    .where(gt(similarity, similarityThreshold))
    .orderBy(t => desc(t.similarity))
    .limit(4);
  return similarGuides;
};
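For intuition, the `1 - cosineDistance(a, b)` expression in the query above computes the cosine similarity between the two vectors. In plain TypeScript it would look like this:

```typescript
// Cosine similarity = dot(a, b) / (|a| * |b|). It is 1 for vectors pointing
// the same way, 0 for orthogonal vectors, and -1 for opposite vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In production the computation happens inside Postgres via pgvector, which is what allows the index to be used; this snippet is only for understanding what the SQL expression means.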

Chat UI implementation

The AI SDK includes AI SDK UI, a framework-agnostic toolkit that streamlines the integration of advanced AI functionality into your applications. It contains the useChat hook, which helps integrate the chat API (created earlier) with little effort.

// src/app/page.tsx

...
const { messages, input, handleInputChange, handleSubmit, error } = useChat({
    maxSteps: 10,
  });
...
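For completeness, a minimal version of the page component might look like the sketch below (markup and styling are simplified; depending on your AI SDK version, useChat may be imported from `ai/react` or `@ai-sdk/react`):

```tsx
// src/app/page.tsx - a minimal sketch of the chat page, not the full UI.
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 10, // allow the model to call the tool and then answer
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask a question about React..."
        />
      </form>
    </div>
  );
}
```

The hook posts to `/api/chat` by default, which matches the route created in the previous section, so no extra wiring is needed.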

Once the UI part is implemented, it can be tested with any question about content from the react.dev website.

React Chatbot

Conclusion

The AI SDK has multiple features to integrate LLMs directly into any React application without much hassle. It provides utilities for both the frontend and the backend, which makes it unique and easy to integrate.

In this guide, I covered the process of implementing RAG using the AI SDK, which can help you build a knowledge base from any website.

A fully functional example of this project is included at the end of this article.

GitHub: sathish39893 / react-docs-ai-app

react chat bot with RAG on react docs from react.dev website

React Docs Generative AI chatbot

This chatbot is a generative AI chatbot that can answer questions about React documentation using RAG (retrieval augmented generation) pipeline.

It is built using Next.js and the AI SDK, with cheerio for web scraping and LangChain to split the text into chunks. It uses Drizzle ORM to store the embeddings of the scraped data, which are then used to generate answers to the questions asked by the user.

Getting Started

First, run the development server:

npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev

Open http://localhost:3000 with your browser to see the result.

This project uses next/font to automatically optimize and load Geist, a new font family for Vercel.

Features

  • Generative AI Chatbot: The chatbot is built using Vercel's AI SDK and can answer questions about React documentation.
  • RAG (retrieval augmented generation) pipeline: The chatbot uses cheerio to…
