<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Akintola Abiodun</title>
    <description>The latest articles on DEV Community by Akintola Abiodun (@abiodun_akintola).</description>
    <link>https://dev.to/abiodun_akintola</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2656804%2F46fe543d-5800-4145-a4fe-d460c0d6e806.jpg</url>
      <title>DEV Community: Akintola Abiodun</title>
      <link>https://dev.to/abiodun_akintola</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abiodun_akintola"/>
    <language>en</language>
    <item>
      <title>Check out my new post.</title>
      <dc:creator>Akintola Abiodun</dc:creator>
      <pubDate>Sun, 23 Feb 2025 18:33:15 +0000</pubDate>
      <link>https://dev.to/abiodun_akintola/check-out-my-new-post-21a3</link>
      <guid>https://dev.to/abiodun_akintola/check-out-my-new-post-21a3</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/abiodun_akintola" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2656804%2F46fe543d-5800-4145-a4fe-d460c0d6e806.jpg" alt="abiodun_akintola"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/abiodun_akintola/building-a-simplified-retrieval-augmented-generation-system-with-supabase-storage-and-openai-20c8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Building a simplified Retrieval Augmented Generation System with Supabase Storage and OpenAI Embeddings in Next.js&lt;/h2&gt;
      &lt;h3&gt;Akintola Abiodun ・ Feb 23&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#rag&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#nextjs&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#productivity&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>rag</category>
      <category>ai</category>
      <category>nextjs</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building a simplified Retrieval Augmented Generation System with Supabase Storage and OpenAI Embeddings in Next.js</title>
      <dc:creator>Akintola Abiodun</dc:creator>
      <pubDate>Sun, 23 Feb 2025 18:15:38 +0000</pubDate>
      <link>https://dev.to/abiodun_akintola/building-a-simplified-retrieval-augmented-generation-system-with-supabase-storage-and-openai-20c8</link>
      <guid>https://dev.to/abiodun_akintola/building-a-simplified-retrieval-augmented-generation-system-with-supabase-storage-and-openai-20c8</guid>
      <description>&lt;p&gt;First off, please forgive me for the ridiculously long title. 😅 I’ll be the first to admit, I’m terrible at naming things. Well… except for variables, of course! (Shoutout to all my beautifully named data1 and data2 variables. 💀)&lt;/p&gt;

&lt;p&gt;Okay, enough about my ironic talent for naming things. Let’s dive into today’s topic: building a simplified RAG system.&lt;/p&gt;

&lt;p&gt;RAG (Retrieval-Augmented Generation) is an incredibly useful technology with a wide range of applications. The ability to query your organization’s data using AI is a game-changer! It’s a breath of fresh air for anyone dealing with large amounts of information.&lt;/p&gt;

&lt;p&gt;If you’re new to this, you might wonder how it works or how to start building your own RAG system. Well, today is your lucky day! We will explore the key concepts and techniques for developing one.&lt;/p&gt;

&lt;p&gt;Imagine a hypothetical scenario in which a school owner manages a database filled with student records, tuition payments, and salary entries for staff. Searching for specific information can be tedious and strenuous, often requiring complex queries.&lt;/p&gt;

&lt;p&gt;This is where a RAG-based solution comes in! Instead of manually searching or writing complex SQL queries, the school could simply use an application or chatbot that lets them type what they need, like "show all pending tuition payments", and get instant results from their database. That’s a huge relief, eliminating the need for traditional manual searches and complicated queries.&lt;/p&gt;

&lt;p&gt;With this in mind, let’s dive into building a basic RAG system using Supabase and OpenAI embeddings!&lt;/p&gt;

&lt;p&gt;We will essentially create a system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uploads and processes different types of files (PDF, DOCX, XLSX, TXT)&lt;/li&gt;
&lt;li&gt;Converts document content into embeddings (more on this later!)&lt;/li&gt;
&lt;li&gt;Stores files and their embeddings (we will use these embeddings to generate the most probable response for the RAG chatbot we are creating)&lt;/li&gt;
&lt;li&gt;Finds relevant content using semantic search (using cosine similarity to compare embeddings)&lt;/li&gt;
&lt;li&gt;Powers a smart chatbot that can answer questions about your documents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we dive in, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Next.js project set up&lt;/li&gt;
&lt;li&gt;A Supabase account and project&lt;/li&gt;
&lt;li&gt;An OpenAI API key&lt;/li&gt;
&lt;li&gt;A basic understanding of TypeScript&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, let's install the necessary packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @supabase/supabase-js @ai-sdk/openai ai xlsx pdf-parse mammoth tesseract.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Step 1: File Upload and Storage
&lt;/h2&gt;

&lt;p&gt;First, let's create a component to handle file uploads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use client'

import { useCallback, useState } from 'react'
import { useDropzone } from 'react-dropzone'
import { uploadMedia } from '@/lib/upload-media'

// The Artifact will be the organization file that is being uploaded
interface Artifact {
  id?: number
  file?: File
  name: string
  description: string
  uploaded: boolean
}

export default function FileUpload() {
  const [isUploading, setIsUploading] = useState(false)

  // Handle file upload and embedding generation
  const uploadArtifact = async (artifact: Artifact) =&amp;gt; {
    if (!artifact.file) return
    setIsUploading(true)

    try {
      // First, upload file to Supabase storage
      const mediaPath = await uploadMedia(artifact.file)

      if (!mediaPath) {
        throw new Error('Failed to upload file')
      }

      // Generate embeddings via API route
      const formData = new FormData()
      formData.append("mediaPath", mediaPath)

      const response = await fetch("/api/generate-embeddings", {
        method: "POST",
        body: formData,
      })

      const data = await response.json()

      if (!data.success) {
        throw new Error('Failed to generate embeddings')
      }

      // Success! You can now store the embeddings in your database
      console.log('Embeddings generated:', data.embedding)
      //please store your generated embeddings in your database :)
    } catch (error) {
      console.error('Error:', error)
    } finally {
      setIsUploading(false)
    }
  }

  // Set up drag &amp;amp; drop
  const onDrop = useCallback((acceptedFiles: File[]) =&amp;gt; {
    const validFileTypes = [
      'text/plain',
      'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
      'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
      'application/pdf'
    ]

    const validFiles = acceptedFiles.filter(file =&amp;gt; validFileTypes.includes(file.type))

    if (validFiles.length &amp;gt; 0) {
      // Process each valid file
      validFiles.forEach(file =&amp;gt; {
        uploadArtifact({
          file,
          name: file.name,
          description: '',
          uploaded: false
        })
      })
    }
  }, [])

  const { getRootProps, getInputProps } = useDropzone({
    onDrop,
    accept: {
      'text/plain': ['.txt'],
      'application/vnd.openxmlformats-officedocument.wordprocessingml.document': ['.docx'],
      'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': ['.xlsx'],
      'application/pdf': ['.pdf']
    }
  })

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;div {...getRootProps()} className="border-2 border-dashed p-8 text-center"&amp;gt;
        &amp;lt;input {...getInputProps()} /&amp;gt;
        &amp;lt;p&amp;gt;Drop files here or click to select&amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
      {isUploading &amp;amp;&amp;amp; &amp;lt;p&amp;gt;Processing files...&amp;lt;/p&amp;gt;}
    &amp;lt;/div&amp;gt;
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let us create the upload function that sends the file to Supabase storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// lib/upload-media.ts
import { supabase } from '@/lib/utils'

export async function uploadMedia(file: File) {
  // Upload file to Supabase storage with a unique name
  const { data, error } = await supabase.storage
    .from('media')
    .upload(`${Date.now()}-${file.name}`, file)

  if (error) {
    console.error('Error uploading file:', error)
    return null
  }

  return data.path
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Understanding Embeddings
&lt;/h2&gt;

&lt;p&gt;OK, before we dive deeper, we should understand what embeddings actually are, right?&lt;/p&gt;

&lt;p&gt;Think of embeddings as a way to convert words or sentences into numbers while keeping their meaning intact. Imagine you are organizing a library, or let’s say your personal bookshelf at home, but instead of sorting books alphabetically, you arrange them based on how similar their content is. That’s exactly what embeddings do with text.&lt;/p&gt;

&lt;p&gt;A Simple Example:&lt;br&gt;
Let’s say we have two sentences:&lt;/p&gt;

&lt;p&gt;-"I love play football"&lt;br&gt;
-"Football is my favourite sport"&lt;/p&gt;

&lt;p&gt;These two sentences mean almost the same thing, right? Even though the exact words aren’t identical, the meaning is very similar. Embeddings capture that similarity and assign them closely related numbers in a mathematical space.&lt;/p&gt;

&lt;p&gt;Now, compare these:&lt;/p&gt;

&lt;p&gt;-"Walahi I would love to eat noodles right now"&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I love to play football"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ignoring my subtle hint that I'm currently craving noodles while writing this, these sentences have completely different meanings, so their embeddings would be very far apart in that space.&lt;/p&gt;

&lt;p&gt;How Does This Help?&lt;br&gt;
Now, you might be wondering, why does this even matter? Well, this is the magic behind how AI understands and finds similar content, even when the exact words don’t match word by word!&lt;/p&gt;

&lt;p&gt;Let us use our initial example, imagine you run a school database where teachers and staff need to look up student records. Instead of typing the exact name of a document, they can search naturally:&lt;/p&gt;

&lt;p&gt;"Show me unpaid tuition fees"&lt;/p&gt;

&lt;p&gt;Even if the actual database entry is labeled as "Pending student invoices", the system can still match the two because their embeddings are similar. No need for exact keywords: just ask in your own words, and the AI will figure it out. Kinda wild, if you ask me. &lt;/p&gt;

&lt;p&gt;Why Does This Matter for RAG?&lt;br&gt;
In our RAG (Retrieval-Augmented Generation) system, we’ll use embeddings to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store important information as numerical representations&lt;/li&gt;
&lt;li&gt;Compare new queries (the user's input) against existing stored data&lt;/li&gt;
&lt;li&gt;Retrieve the most relevant content for the AI to generate the best response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So basically, embeddings help AI "think" more like humans by recognizing meaning instead of just matching words. Yh yh I know, mind-blowing lol.&lt;/p&gt;
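To make this concrete, here is a tiny, self-contained sketch of the idea. The three-dimensional vectors below are made up for illustration (real embeddings such as text-embedding-3-small have 1,536 dimensions), but the cosine similarity math is exactly what we'll use later:

```typescript
// Made-up 3-dimensional "embeddings" for the example sentences above.
// Real embeddings have far more dimensions, but the math is identical.
const football1 = [0.9, 0.1, 0.0];    // "I love to play football"
const football2 = [0.85, 0.15, 0.05]; // "Football is my favourite sport"
const noodles = [0.05, 0.1, 0.9];     // "Walahi I would love to eat noodles right now"

// Cosine similarity: near 1 means "same direction" (similar meaning),
// near 0 means unrelated
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const magA = Math.sqrt(a.reduce((s, v) => s + v * v, 0));
  const magB = Math.sqrt(b.reduce((s, v) => s + v * v, 0));
  return dot / (magA * magB);
}

const similar = cosineSimilarity(football1, football2);  // roughly 1.0
const different = cosineSimilarity(football1, noodles);  // close to 0
```

The exact numbers don't matter; what matters is that the two football sentences score close to 1, while football vs. noodles scores near 0.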

&lt;p&gt;Moving on, here's our API route for generating embeddings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app/api/generate-embeddings/route.ts
import { createEmbedding } from '@/lib/create-embeddings'
import { type NextRequest, NextResponse } from 'next/server'
export const maxDuration = 60
export async function POST(req: NextRequest) {
  try {
    const formData = await req.formData()
    const filePath = formData.get('mediaPath') as string

    if (!filePath) {
      return NextResponse.json(
        { error: 'Missing file path' },
        { status: 400 }
      )
    }
    // Generate embeddings for the file content
    const embedding = await createEmbedding(filePath)
    return NextResponse.json({ success: true, embedding })
  } catch (error) {
    console.error('Error:', error)
    return NextResponse.json(
      { error: 'Internal server error' },
      { status: 500 }
    )
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Processing Different File Types
&lt;/h2&gt;

&lt;p&gt;Different file types need different handling; we cannot use the same process to extract content from PDFs as from XLSX or DOCX files. Here's how we extract text from various formats:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// lib/create-embeddings.ts
import { createOpenAI } from '@ai-sdk/openai';
import { createClient } from '@supabase/supabase-js';
import { embed } from 'ai';
import { execSync } from 'child_process';
import fs from 'fs';
import mammoth from 'mammoth';
import os from 'os';
import path from 'path';
import PdfParse from 'pdf-parse';
import Tesseract from 'tesseract.js';
import * as XLSX from 'xlsx';
import { chunkDocumentWithOverlap } from './chunkDocument';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Main function to create embeddings
export async function createEmbedding(filePath: string) {
  const { data, error } = await supabase.storage
    .from('media')
    .download(filePath)
  if (error || !data) throw new Error('Failed to download file')
  // Extract text from the file
  const text = await extractTextFromFile(data, filePath)

  // Split text into chunks for better processing
  const chunks = chunkDocumentWithOverlap(text)

  // Generate embeddings for each chunk
  const chunksWithEmbeddings = await Promise.all(
    chunks.map(async (chunk) =&amp;gt; {
      const { embedding } = await embed({
        model: openai.embedding('text-embedding-3-small'),
        value: chunk,
      })
      return { embedding, content: chunk }
    })
  )
  return chunksWithEmbeddings
}

async function extractTextFromFile(
  fileData: Blob,
  filePath: string
): Promise&amp;lt;string&amp;gt; {
  const fileExtension = path.extname(filePath).toLowerCase();
  const tempFilePath = path.join(os.tmpdir(), `temp_file${fileExtension}`);

  console.log(`Temp file path: ${tempFilePath}`);

  // Write the blob to a temporary file
  try {
    await fs.promises.writeFile(
      tempFilePath,
      Buffer.from(await fileData.arrayBuffer())
    );
    console.log('File written successfully.');
  } catch (err) {
    console.error('Error writing file:', err);
    throw new Error('Failed to write temporary file.');
  }

  // Check if the file exists
  const fileExists = fs.existsSync(tempFilePath);
  if (!fileExists) {
    console.error('Temp file was not created.');
    throw new Error('File was not created successfully.');
  }

  console.log('File exists:', fileExists);

  try {
    switch (fileExtension) {
      case '.pdf':
        return await extractTextFromPDF(tempFilePath);
      case '.docx':
        return await extractTextFromDOCX(tempFilePath);
      case '.xlsx':
        return extractTextFromXLSX(tempFilePath);
      case '.txt':
        console.log('Reading .txt file...');
        const textContent = await fs.promises.readFile(tempFilePath, 'utf-8');
        console.log('Extracted text:', textContent);
        return textContent;
      default:
        throw new Error('Unsupported file type');
    }
  } catch (err) {
    console.error('Error extracting text:', err);
    throw new Error('Failed to extract text.');
  } finally {
    // Clean up the temporary file
    try {
      await fs.promises.unlink(tempFilePath);
      console.log('Temp file deleted.');
    } catch (err) {
      console.error('Error deleting temp file:', err);
    }
  }
}

//function to clean up the text
const cleanText = (text : string) =&amp;gt; {
  return text
    .replace(/\x00/g, '') // Remove NULL bytes
    .replace(/[^\x20-\x7E\n]/g, ''); // Remove non-ASCII characters (optional)
};

// extract the text from pdf
async function extractTextFromPDF(filePath: string): Promise&amp;lt;string&amp;gt; {
  try {
    console.log('Trying pdf-parse...');
    const dataBuffer = await fs.promises.readFile(filePath);
    const data = await PdfParse(dataBuffer);

    let text = data.text.replace(/\uFFFD/g, '').trim(); // Remove invalid characters

    if (text &amp;amp;&amp;amp; text.length &amp;gt; 20) {
      console.log('Extracted text successfully using pdf-parse.');
      return cleanText(text);
    }

    throw new Error('Extracted text is empty or invalid.');
  } catch (error) {
    console.warn('pdf-parse failed:', error);
  }

  // Fallback: Using pdftotext (requires poppler-utils installed)
  try {
    console.log('Trying pdftotext...');
    const extractedText = execSync(`pdftotext -layout "${filePath}" -`, {
      encoding: 'utf-8',
    }).trim();
    if (extractedText.length &amp;gt; 20) {
      console.log('Extracted text successfully using pdftotext.');
      return extractedText;
    }

    throw new Error('pdftotext extraction returned empty text.');
  } catch (error) {
    console.warn('pdftotext failed:', error);
  }

  // Final Fallback: OCR with Tesseract.js (for scanned PDFs)
  try {
    console.log('Trying OCR (Tesseract.js)...');
    const {
      data: { text },
    } = await Tesseract.recognize(filePath, 'eng', {
      logger: (m) =&amp;gt; console.log(m), // Log OCR progress
    });

    if (text.length &amp;gt; 20) {
      console.log('Extracted text successfully using OCR.');
      return text;
    }

    throw new Error('OCR extraction returned empty text.');
  } catch (error) {
    console.error('OCR extraction failed:', error);
  }

  throw new Error('Failed to extract text from PDF using all methods.');
}

//extract text from docx files
async function extractTextFromDOCX(filePath: string): Promise&amp;lt;string&amp;gt; {
  try {
    const result = await mammoth.extractRawText({ path: filePath });
    return result.value;
  } catch (error) {
    console.error('Error parsing DOCX:', error);
    throw new Error('Failed to extract text from DOCX');
  }
}

//extract text from xlsx files
function extractTextFromXLSX(filePath: string): string {
  try {
    const workbook = XLSX.readFile(filePath);
    let text = '';
    workbook.SheetNames.forEach((sheetName) =&amp;gt; {
      const sheet = workbook.Sheets[sheetName];
      text += XLSX.utils.sheet_to_csv(sheet) + '\n\n';
    });
    return text;
  } catch (error) {
    console.error('Error parsing XLSX:', error);
    throw new Error('Failed to extract text from XLSX');
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Chunking Large Documents
&lt;/h2&gt;

&lt;p&gt;For large documents, we split them into smaller chunks with some overlap, so each chunk retains some context from its neighbours:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// lib/chunkDocument.ts
const MAX_CHUNK_SIZE = 2000;
const OVERLAP = 200;

export function chunkDocumentWithOverlap(text: string): string[] {
  const chunks: string[] = [];
  let startIndex = 0;

  while (startIndex &amp;lt; text.length) {
    let endIndex = Math.min(startIndex + MAX_CHUNK_SIZE, text.length);

    console.log(
      'Start:',
      startIndex,
      'End:',
      endIndex,
      'Total Length:',
      text.length,
      'Chunks Count:',
      chunks.length
    );

    if (endIndex &amp;lt; text.length) {
      // Try to find a space to break at, moving left from `endIndex`
      let breakPoint = endIndex;
      while (breakPoint &amp;gt; startIndex &amp;amp;&amp;amp; text[breakPoint] !== ' ') {
        breakPoint--;
      }

      // If no space was found, keep the original `endIndex`
      if (breakPoint &amp;gt; startIndex) {
        endIndex = breakPoint + 1; // Include the space
      }
    }

    chunks.push(text.slice(startIndex, endIndex));

    // Step back by `OVERLAP` so consecutive chunks share context,
    // while still guaranteeing forward progress
    startIndex = endIndex &amp;gt;= text.length ? endIndex : Math.max(endIndex - OVERLAP, startIndex + 1);
  }

  return chunks;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
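To see the overlap at work, here is a compact sketch of the same idea, parameterised and with the logging and word-boundary logic stripped out. The sizes are shrunk for the demo (the real code uses MAX_CHUNK_SIZE = 2000 and OVERLAP = 200):

```typescript
// Compact sketch of chunking with overlap: each new chunk re-covers the
// last `overlap` characters of the previous one.
function chunkWithOverlap(text: string, maxSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (text.length > start) {
    const end = Math.min(start + maxSize, text.length);
    chunks.push(text.slice(start, end));
    if (end >= text.length) break; // last chunk reached
    // Step back by `overlap` characters so neighbouring chunks share context
    start = Math.max(end - overlap, start + 1);
  }
  return chunks;
}

const sample = 'abcdefghij'.repeat(3); // 30 characters
const chunks = chunkWithOverlap(sample, 12, 4);
// 4 chunks; each chunk after the first repeats the last 4 characters
// of the one before it
```

Because of the overlap, a sentence that straddles a chunk boundary still appears whole in at least one chunk, which is what keeps the retrieved context coherent.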



&lt;h2&gt;
  
  
  Step 5: Finding Similar Content
&lt;/h2&gt;

&lt;p&gt;When a user asks a question, we need to find the most similar content among our stored embeddings. We will turn the user's input into an embedding, then look for similar embeddings using cosine similarity to compare vectors. Here's how we do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// lib/similarity.ts
import { openai } from '@ai-sdk/openai';

//make sure OPENAI_API_KEY is set up in your env
// Calculate similarity between two vectors using cosine similarity
export function cosineSimilarity(a: number[], b: number[]): number {
  // Calculate dot product
  const dotProduct = a.reduce((sum, _, i) =&amp;gt; sum + a[i] * b[i], 0)

  // Calculate magnitudes
  const magnitudeA = Math.sqrt(a.reduce((sum, val) =&amp;gt; sum + val * val, 0))
  const magnitudeB = Math.sqrt(b.reduce((sum, val) =&amp;gt; sum + val * val, 0))

  // Return similarity score
  return dotProduct / (magnitudeA * magnitudeB)
}

// Find most similar documents
export async function findMostSimilarArtifacts(
  input: string,
  artifacts: Chunk[],
  count: number
): Promise&amp;lt;Chunk[]&amp;gt; {
  // Get embedding for user input
  const { embedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: input,
  });


  // Calculate similarity scores and sort
  return artifacts
    .map((artifact) =&amp;gt; ({
      ...artifact,
      similarity: cosineSimilarity(embedding, artifact.embedding),
    }))
    .sort((a, b) =&amp;gt; b.similarity - a.similarity)
    .slice(0, count)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building the AI Chatbot Interface
&lt;/h2&gt;

&lt;p&gt;Now let's create an interactive chatbot that can answer questions about your documents using the AI SDK's &lt;code&gt;useChat&lt;/code&gt; hook. This will essentially tie everything together!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// components/DocumentChat.tsx
'use client'

import { useChat } from 'ai/react'
import { useState } from 'react'
import { Button } from "@/components/ui/button"
import { Card, CardContent, CardFooter, CardHeader, CardTitle } from "@/components/ui/card"
import { Input } from "@/components/ui/input"
import { ScrollArea } from "@/components/ui/scroll-area"
import { Send } from 'lucide-react'

export default function DocumentChat() {
  // Initialize the chat hook with our API endpoint
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat',
    // Initialize with a helpful system message
    initialMessages: [
      {
        id: 'welcome',
        role: 'system',
        content: "I'm your document assistant. Ask me anything about your uploaded documents!",
      },
    ],
  })

  return (
    &amp;lt;Card className="w-full max-w-2xl mx-auto"&amp;gt;
      &amp;lt;CardHeader&amp;gt;
        &amp;lt;CardTitle&amp;gt;Document Assistant&amp;lt;/CardTitle&amp;gt;
      &amp;lt;/CardHeader&amp;gt;

      &amp;lt;CardContent&amp;gt;
        &amp;lt;ScrollArea className="h-[600px] pr-4"&amp;gt;
          &amp;lt;div className="flex flex-col gap-4"&amp;gt;
            {messages.map((message) =&amp;gt; (
              &amp;lt;div
                key={message.id}
                className={`flex ${
                  message.role === 'user' ? 'justify-end' : 'justify-start'
                }`}
              &amp;gt;
                &amp;lt;div
                  className={`rounded-lg px-4 py-2 max-w-[80%] ${
                    message.role === 'user'
                      ? 'bg-primary text-primary-foreground'
                      : 'bg-muted'
                  }`}
                &amp;gt;
                  {message.content}
                &amp;lt;/div&amp;gt;
              &amp;lt;/div&amp;gt;
            ))}
            {isLoading &amp;amp;&amp;amp; (
              &amp;lt;div className="flex justify-start"&amp;gt;
                &amp;lt;div className="rounded-lg px-4 py-2 bg-muted"&amp;gt;
                  Thinking...
                &amp;lt;/div&amp;gt;
              &amp;lt;/div&amp;gt;
            )}
          &amp;lt;/div&amp;gt;
        &amp;lt;/ScrollArea&amp;gt;
      &amp;lt;/CardContent&amp;gt;

      &amp;lt;CardFooter&amp;gt;
        &amp;lt;form onSubmit={handleSubmit} className="flex w-full gap-2"&amp;gt;
          &amp;lt;Input
            value={input}
            onChange={handleInputChange}
            placeholder="Ask a question about your documents..."
            disabled={isLoading}
          /&amp;gt;
          &amp;lt;Button type="submit" disabled={isLoading}&amp;gt;
            &amp;lt;Send className="h-4 w-4" /&amp;gt;
          &amp;lt;/Button&amp;gt;
        &amp;lt;/form&amp;gt;
      &amp;lt;/CardFooter&amp;gt;
    &amp;lt;/Card&amp;gt;
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's create the API route that powers our chatbot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app/api/chat/route.ts
import { createOpenAI } from '@ai-sdk/openai'
import { embed, streamText } from 'ai'
import { NextResponse, type NextRequest } from 'next/server'
import { findMostSimilarArtifacts } from '@/lib/similarity'

// Initialize OpenAI
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
})

export async function POST(req: NextRequest) {
  try {
    const { messages } = await req.json()
    const lastMessage = messages[messages.length - 1]

    // Get storedArtifacts (your saved chunks with their embeddings) from your
    // database first, then find the most similar ones to use as context
    const relevantDocs = await findMostSimilarArtifacts(lastMessage.content, storedArtifacts, 3)

    // Build the context string from the relevant chunks
    const context = relevantDocs.map((a) =&amp;gt; a.content).join("\n\n")

    const result = streamText({
      model: openai('gpt-4o-mini'),
      messages,
      temperature: 0.7,
      maxTokens: 1000,
      system: `You are an AI assistant helping with retrieval-augmented generation. Provide concise and relevant information based on the provided context. The context: ${context}`,
    });

    return result.toDataStreamResponse();
  } catch (error) {
    console.error('Error in chat route:', error);
    return new Response('An error occurred while processing your request', {
      status: 500,
    });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So there you go, voilà: your own working RAG system that answers questions based on your document data.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It All Works Together
&lt;/h2&gt;

&lt;p&gt;So, from the top:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User uploads a document&lt;/li&gt;
&lt;li&gt;Document is stored in Supabase&lt;/li&gt;
&lt;li&gt;Text is extracted and split into chunks&lt;/li&gt;
&lt;li&gt;Each chunk gets converted to an embedding&lt;/li&gt;
&lt;li&gt;Embeddings are stored with their text&lt;/li&gt;
&lt;li&gt;When a user asks a question:
&lt;ul&gt;
&lt;li&gt;The question is converted to an embedding&lt;/li&gt;
&lt;li&gt;The system finds similar chunks using cosine similarity&lt;/li&gt;
&lt;li&gt;Relevant content is used to generate an informed answer&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
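The whole flow above can be sketched in a few runnable lines. The bag-of-words `fakeEmbed` below is only a stand-in for the OpenAI embedding call, the in-memory `store` array stands in for your database, and the toy vocabulary and documents are invented for illustration:

```typescript
// Toy embedding: word counts over a tiny fixed vocabulary.
// A stand-in for openai.embedding('text-embedding-3-small').
const vocab = ['tuition', 'pending', 'unpaid', 'salary', 'staff', 'payroll'];

function fakeEmbed(text: string): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return vocab.map((v) => words.filter((w) => w === v).length);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const mag = (x: number[]) => Math.sqrt(x.reduce((s, v) => s + v * v, 0));
  return dot / (mag(a) * mag(b) || 1); // guard against zero vectors
}

// Steps 1-5: "upload" documents, embed each chunk, store text + embedding together
const documents = [
  'Pending tuition: 12 students have unpaid tuition invoices',
  'Salary schedule for staff, February payroll',
];
const store = documents.map((content) => ({ content, embedding: fakeEmbed(content) }));

// Steps 6-9: embed the question, rank stored chunks by similarity,
// and hand the best match to the LLM as context
const question = 'show all pending tuition payments';
const ranked = store
  .map((chunk) => ({ ...chunk, similarity: cosine(fakeEmbed(question), chunk.embedding) }))
  .sort((a, b) => b.similarity - a.similarity);

const context = ranked[0].content; // the chunk you would feed to streamText()
```

Even with this crude "embedding", the tuition question retrieves the tuition document rather than the payroll one; swapping in real embeddings only sharpens the matching.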

&lt;p&gt;And that’s a wrap! You now have the foundation to build a powerful RAG system using Supabase and OpenAI embeddings. With this setup, you can store and retrieve knowledge efficiently, making your chatbot smarter and more helpful.&lt;/p&gt;

&lt;p&gt;The journey definitely should not stop here: keep experimenting, optimizing, and pushing the limits of what AI can do!&lt;/p&gt;

&lt;p&gt;Aja Aja Fighting! 💪&lt;/p&gt;

</description>
      <category>rag</category>
      <category>ai</category>
      <category>nextjs</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Just start it!</title>
      <dc:creator>Akintola Abiodun</dc:creator>
      <pubDate>Sat, 04 Jan 2025 18:24:01 +0000</pubDate>
      <link>https://dev.to/abiodun_akintola/just-start-it-1kgo</link>
      <guid>https://dev.to/abiodun_akintola/just-start-it-1kgo</guid>
      <description>&lt;p&gt;Have you ever been awake at 3am?., thinking about improving your life and becoming a better version of yourself? Do you feel an innate desire to start something, maybe a startup, an agency, or even learn a new programming language, but you are unsure where to begin 😔? We have all been there, myself included.&lt;/p&gt;

&lt;p&gt;This struggle is something most people face. I like to call it body-mind dissonance, when you are really ready to take action, but your mind literally doesn't know where to start. This is where mindless raw action comes into play. You don't need to have the entire plan figured out before you begin. Just start! Along the way, the full picture will come together. Success and growth happen step by step.&lt;/p&gt;

&lt;p&gt;Let me give you an example. If you want to earn a degree, no one bombards you with the entire course list on day one. You progress in steps, starting with an orientation. You go level by level, semester by semester, until you inevitably achieve your goal. That is what growth is about. That is what success is about: raw, consistent action, taken step by step.&lt;/p&gt;

&lt;p&gt;The same applies to anything you want to accomplish. If you want to learn a programming language, start by reading the docs; that’s a great first step. Then move on to the next milestone (it will come to you right away! 😉).&lt;/p&gt;

&lt;p&gt;If you want to start a business, begin by registering it; that is a significant milestone. If you have an idea for a project you’ve been thinking about, open your text editor and initialize the project (raw action!).&lt;/p&gt;

&lt;p&gt;Remember, you do not need to have the entire idea fully formed before you start. The key is to take that first step. So, just start it!&lt;/p&gt;

&lt;p&gt;And when you start, remember to be consistent and always look out for the next milestone to achieve. On a final note: What will you just start today?  I wish you all a very successful 2025. &lt;/p&gt;

</description>
      <category>growth</category>
      <category>development</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
