<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abayomi Olatunji</title>
    <description>The latest articles on DEV Community by Abayomi Olatunji (@abayomijohn273).</description>
    <link>https://dev.to/abayomijohn273</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F255138%2F6e29f4b7-4b3f-4b06-a587-21a585ac3402.png</url>
      <title>DEV Community: Abayomi Olatunji</title>
      <link>https://dev.to/abayomijohn273</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abayomijohn273"/>
    <language>en</language>
    <item>
      <title>Building an AI Assistant with Ollama and Next.js – Part 3 (RAG with LangChain, Pinecone and Ollama)</title>
      <dc:creator>Abayomi Olatunji</dc:creator>
      <pubDate>Sun, 01 Jun 2025 18:21:47 +0000</pubDate>
      <link>https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-part-3-rag-with-langchain-pinecone-and-ollama-dja</link>
      <guid>https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-part-3-rag-with-langchain-pinecone-and-ollama-dja</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🚨 This is &lt;strong&gt;Part 3&lt;/strong&gt; of the “Building an AI Assistant with Ollama and Next.js” series.&lt;br&gt;&lt;br&gt;
👉 Check out &lt;a href="https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-4c2d"&gt;Part 1 here&lt;/a&gt;&lt;br&gt;&lt;br&gt;
👉 Check out &lt;a href="https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-part-2-using-packages-1nli"&gt;Part 2 here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;🤖 Introduction&lt;/h2&gt;

&lt;p&gt;In the previous parts, we covered how to set up an AI assistant locally using &lt;strong&gt;Ollama&lt;/strong&gt;, &lt;strong&gt;Next.js&lt;/strong&gt;, and different package integrations. In this part, we’re diving deeper into building a &lt;strong&gt;Knowledge-Based AI assistant&lt;/strong&gt; using &lt;strong&gt;RAG (Retrieval-Augmented Generation)&lt;/strong&gt; with &lt;strong&gt;LangChain&lt;/strong&gt;, &lt;strong&gt;Ollama&lt;/strong&gt;, and &lt;strong&gt;Pinecone&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We’ll walk through how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load and preprocess documents&lt;/li&gt;
&lt;li&gt;Split and embed them into vector space&lt;/li&gt;
&lt;li&gt;Store the embeddings in Pinecone&lt;/li&gt;
&lt;li&gt;Query these vectors for smart retrieval&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;🔧 Tools Used&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;TailwindCSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cursor.sh/" rel="noopener noreferrer"&gt;Cursor IDE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://js.langchain.com/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pinecone.io/" rel="noopener noreferrer"&gt;Pinecone Vector Database&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.npmjs.com/package/pdf-parse" rel="noopener noreferrer"&gt;PDF-Parse&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/mammoth" rel="noopener noreferrer"&gt;Mammoth.js&lt;/a&gt; for document reading&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;📘 What is RAG?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RAG&lt;/strong&gt; stands for &lt;strong&gt;Retrieval-Augmented Generation&lt;/strong&gt;. It’s a hybrid AI approach that improves response accuracy by combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval&lt;/strong&gt;: Searches for relevant documents or chunks from a knowledge base.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generation&lt;/strong&gt;: Uses a language model (like Gemma or LLaMA) to generate natural responses based on the retrieved content.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;🔁 Flow Summary&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Load&lt;/strong&gt; files (PDF, DOCX, TXT)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Split&lt;/strong&gt; them into readable chunks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embed&lt;/strong&gt; those chunks into vector representations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store&lt;/strong&gt; them in Pinecone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query&lt;/strong&gt; Pinecone using user input and generate context-aware answers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can read more about RAG in the LangChain documentation: &lt;a href="https://js.langchain.com/docs/tutorials/rag/" rel="noopener noreferrer"&gt;https://js.langchain.com/docs/tutorials/rag/&lt;/a&gt;&lt;/p&gt;
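&lt;p&gt;To build intuition for the retrieve-then-generate flow above, here is a tiny, dependency-free sketch. The keyword-overlap scoring and the sample chunks are purely illustrative stand-ins; in the real project, retrieval uses vector similarity over embeddings stored in Pinecone:&lt;/p&gt;

```typescript
// Toy knowledge base: pre-chunked text. (The real app embeds chunks and
// stores the vectors in Pinecone instead of keeping raw strings in memory.)
const chunks = [
  'Ollama runs large language models locally.',
  'Pinecone is a managed vector database.',
  'Next.js is a React framework for full-stack apps.',
];

// Naive relevance score: how many query words appear in the chunk.
// This stands in for cosine similarity between embeddings.
function score(query: string, chunk: string): number {
  const words = query.toLowerCase().split(/\W+/).filter(Boolean);
  const chunkWords = new Set(chunk.toLowerCase().split(/\W+/));
  return words.filter((w) => chunkWords.has(w)).length;
}

// Retrieve the top-k chunks, then inject them as context into the prompt
// that is handed to the language model.
function buildPrompt(query: string, k = 2): string {
  const context = [...chunks]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k)
    .join('\n\n');
  return `Use the following context to answer:\n\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt('What is a vector database?'));
```

&lt;p&gt;The generation step is then just a normal LLM call with this augmented prompt, which is exactly what the API route later in this article does with the Pinecone results.&lt;/p&gt;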




&lt;h2&gt;🧩 Key Packages and Docs for further reading&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Package&lt;/th&gt;
&lt;th&gt;Use&lt;/th&gt;
&lt;th&gt;Docs&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;langchain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Framework for chaining LLMs with tools&lt;/td&gt;
&lt;td&gt;&lt;a href="https://js.langchain.com" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;@pinecone-database/pinecone&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pinecone client&lt;/td&gt;
&lt;td&gt;&lt;a href="https://docs.pinecone.io/" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;@langchain/pinecone&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;LangChain-Pinecone integration&lt;/td&gt;
&lt;td&gt;&lt;a href="https://js.langchain.com/docs/integrations/vectorstores/pinecone/" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;@langchain/community/embeddings/ollama&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Ollama embeddings for LangChain&lt;/td&gt;
&lt;td&gt;&lt;a href="https://js.langchain.com/docs/integrations/text_embedding/ollama" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;pdf-parse&lt;/code&gt;, &lt;code&gt;mammoth&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;For loading and reading PDFs, DOCX, and TXT&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://www.npmjs.com/package/pdf-parse" rel="noopener noreferrer"&gt;pdf-parse&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/mammoth" rel="noopener noreferrer"&gt;mammoth&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;🧰 Tool Setup Overview&lt;/h2&gt;

&lt;h3&gt;🔧 1. Setting Up Pinecone&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.pinecone.io/start/" rel="noopener noreferrer"&gt;Create an account on Pinecone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Create an &lt;strong&gt;Index&lt;/strong&gt; with the following settings:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: e.g., &lt;code&gt;database_name&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Type&lt;/strong&gt;: &lt;code&gt;Dense&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dimension&lt;/strong&gt;: &lt;code&gt;1024&lt;/code&gt; (must match &lt;code&gt;mxbai-embed-large&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metric&lt;/strong&gt;: &lt;code&gt;Cosine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment&lt;/strong&gt;: &lt;code&gt;us-east-1-aws&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;You can select one of the existing embedding models offered in the setup options. I chose the custom setting so that the index matches the embedding model used in this project, i.e. &lt;code&gt;mxbai-embed-large&lt;/code&gt;.&lt;/p&gt;
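&lt;p&gt;For intuition on the &lt;code&gt;Cosine&lt;/code&gt; metric chosen above: Pinecone ranks stored vectors by cosine similarity to the query embedding, and the dimension check mirrors why the index dimension must match the embedding model's output size. A minimal sketch in plain TypeScript (not Pinecone's internal code, just the formula it applies):&lt;/p&gt;

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for dense vectors.
// Pinecone computes this between the query embedding and each stored vector;
// with mxbai-embed-large, both sides are 1024-dimensional.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

&lt;p&gt;This is also why a query embedded with a different model than the one used at ingestion time returns poor matches: the geometry the similarity relies on is model-specific.&lt;/p&gt;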

&lt;h3&gt;🛠 2. Configure &lt;code&gt;.env&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Add these to your &lt;code&gt;.env.local&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PINECONE_API_KEY=your-api-key
PINECONE_INDEX_NAME=database_name
PINECONE_ENVIRONMENT=us-east-1-aws
OLLAMA_MODEL=gemma3:1b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;🚀 3. Launch Ollama and Embedding Model&lt;/h3&gt;

&lt;p&gt;Make sure Ollama is installed, then run the chat model in your terminal (you can use any LLM of your choice):&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama run gemma3:1b&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Install the embedding model with:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama pull mxbai-embed-large&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;LangChain will reference this local model through Ollama via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new OllamaEmbeddings({
  model: 'mxbai-embed-large',
  baseUrl: 'http://localhost:11434'
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: you can find more chat models at &lt;a href="https://js.langchain.com/docs/integrations/chat/" rel="noopener noreferrer"&gt;https://js.langchain.com/docs/integrations/chat/&lt;/a&gt; and &lt;a href="https://ollama.com/search" rel="noopener noreferrer"&gt;https://ollama.com/search&lt;/a&gt;. You can also explore other embedding models at &lt;a href="https://js.langchain.com/docs/integrations/text_embedding/" rel="noopener noreferrer"&gt;https://js.langchain.com/docs/integrations/text_embedding/&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;🧪 How It Works – Step by Step&lt;/h2&gt;

&lt;p&gt;Below is a breakdown of what we're trying to achieve at each step, followed by the code snippets to use.&lt;/p&gt;

&lt;h3&gt;Step 1: Upload and Process Document&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The user uploads a &lt;code&gt;.pdf&lt;/code&gt;, &lt;code&gt;.docx&lt;/code&gt;, or &lt;code&gt;.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;We load the file using LangChain loaders.&lt;/li&gt;
&lt;li&gt;The text is split into chunks using &lt;code&gt;RecursiveCharacterTextSplitter&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Chunks are returned as an array of LangChain &lt;code&gt;Document&lt;/code&gt; objects.&lt;/li&gt;
&lt;/ul&gt;
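&lt;p&gt;To see what the splitter does conceptually, here is a simplified fixed-window version. The real &lt;code&gt;RecursiveCharacterTextSplitter&lt;/code&gt; also prefers paragraph and sentence boundaries; this sketch only shows the size/overlap mechanics behind the &lt;code&gt;chunkSize: 1000, chunkOverlap: 200&lt;/code&gt; settings used below:&lt;/p&gt;

```typescript
// Simplified character splitter: emit windows of `chunkSize` characters,
// stepping forward by (chunkSize - overlap) so that consecutive chunks
// share some text and retrieval doesn't lose context at chunk borders.
function splitText(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize');
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

// Small numbers make the overlap visible: each chunk repeats the last
// 4 characters of the previous one.
console.log(splitText('abcdefghijklmnopqrstuvwxyz', 10, 4));
```

&lt;p&gt;The overlap is a trade-off: larger overlap preserves more cross-boundary context but stores more redundant vectors in the index.&lt;/p&gt;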

&lt;h3&gt;Step 2: Embed and Store in Pinecone&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Chunks are embedded via &lt;code&gt;OllamaEmbeddings&lt;/code&gt; using &lt;code&gt;mxbai-embed-large&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Vectors are stored in the Pinecone index under a namespace.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Step 3: Query for Context&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When a user types a question, we run a vector similarity search.&lt;/li&gt;
&lt;li&gt;Relevant chunks are retrieved from Pinecone.&lt;/li&gt;
&lt;li&gt;Chunks are combined into a context block.&lt;/li&gt;
&lt;li&gt;The context is injected into the prompt as a system message for the LLM.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;utils/documentProcessing.ts

import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
import { Document } from '@langchain/core/documents';
import { PineconeStore } from '@langchain/pinecone';
import { Pinecone } from '@pinecone-database/pinecone';
import { DocxLoader } from 'langchain/document_loaders/fs/docx';
import { PDFLoader } from 'langchain/document_loaders/fs/pdf';
import { TextLoader } from 'langchain/document_loaders/fs/text';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const embeddings = new OllamaEmbeddings({ model: 'mxbai-embed-large', baseUrl: 'http://localhost:11434' });
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });

export async function processDocument(file: File | Blob, fileName: string): Promise&amp;lt;Document[]&amp;gt; {
  let documents: Document[];
  if (fileName.endsWith('.pdf')) documents = await new PDFLoader(file).load();
  else if (fileName.endsWith('.docx')) documents = await new DocxLoader(file).load();
  else if (fileName.endsWith('.txt')) documents = await new TextLoader(file).load();
  else throw new Error('Unsupported file type');

  return await textSplitter.splitDocuments(documents);
}

export async function storeDocuments(documents: Document[]): Promise&amp;lt;void&amp;gt; {
  const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME!);
  await PineconeStore.fromDocuments(documents, embeddings, {
    pineconeIndex,
    maxConcurrency: 5,
    namespace: 'your_namespace', //optional
  });
}

export async function queryDocuments(query: string): Promise&amp;lt;Document[]&amp;gt; {
  const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME!);
  const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
    pineconeIndex,
    maxConcurrency: 5,
    namespace: 'your_namespace', //optional
  });

  return await vectorStore.similaritySearch(query, 4);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api/chat/upload/route.ts

import { processDocument, storeDocuments } from '@/utils/documentProcessing';
import { NextResponse } from 'next/server';

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file') as File;
  if (!file) return NextResponse.json({ error: 'No file provided' }, { status: 400 });

  const documents = await processDocument(file, file.name);
  await storeDocuments(documents);

  return NextResponse.json({
    message: 'Document processed and stored successfully',
    fileName: file.name,
    documentCount: documents.length
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api/chat/route.ts

import { queryDocuments } from '@/utils/documentProcessing';
import { Message, streamText } from 'ai';
import { NextRequest } from 'next/server';
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama();
const MODEL_NAME = process.env.OLLAMA_MODEL || 'gemma3:1b';

export async function POST(req: NextRequest) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1];
  const relevantDocs = await queryDocuments(lastMessage.content);

  const context = relevantDocs.map((doc) =&amp;gt; doc.pageContent).join('\n\n');
  const systemMessage: Message = {
    id: 'system',
    role: 'system',
    content: `You are a helpful AI assistant with access to a knowledge base. 
    Use the following context to answer the user's questions:\n\n${context}`,
  };

  const promptMessages = [systemMessage, ...messages];
  const result = await streamText({
    model: ollama(MODEL_NAME),
    messages: promptMessages
  });

  return result.toDataStreamResponse();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;For the UI part, here are the code snippets&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatInput.tsx

'use client'
interface ChatInputProps {
  input: string;
  handleInputChange: (e: React.ChangeEvent&amp;lt;HTMLTextAreaElement&amp;gt;) =&amp;gt; void;
  handleSubmit: (e: React.FormEvent&amp;lt;HTMLFormElement&amp;gt;) =&amp;gt; void;
  isLoading: boolean;
}

export default function ChatInput({ input, handleInputChange, handleSubmit, isLoading }: ChatInputProps) {

  return (
    &amp;lt;form onSubmit={handleSubmit} className="flex gap-4"&amp;gt;
      &amp;lt;textarea
        value={input}
        onChange={handleInputChange}
        placeholder="Ask a question about the documents..."
        className="flex-1 p-4 border border-gray-200 dark:border-gray-700 rounded-xl 
          bg-white dark:bg-gray-800 
          placeholder-gray-400 dark:placeholder-gray-500
          focus:outline-none focus:ring-2 focus:ring-blue-500 dark:focus:ring-blue-400
          resize-none min-h-[50px] max-h-32
          text-gray-700 dark:text-gray-200"
        rows={1}
        required
        disabled={isLoading}
      /&amp;gt;
      &amp;lt;button
        type="submit"
        disabled={isLoading}
        className={`px-6 py-2 rounded-xl font-medium transition-all duration-200
          ${isLoading 
            ? 'bg-gray-100 dark:bg-gray-700 text-gray-400 dark:text-gray-500 cursor-not-allowed'
            : 'bg-blue-500 hover:bg-blue-600 active:bg-blue-700 text-white shadow-sm hover:shadow'
          }`}
      &amp;gt;
        {isLoading ? (
          &amp;lt;span className="flex items-center gap-2"&amp;gt;
            &amp;lt;svg className="animate-spin h-4 w-4" viewBox="0 0 24 24"&amp;gt;
              &amp;lt;circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4" fill="none"/&amp;gt;
              &amp;lt;path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"/&amp;gt;
            &amp;lt;/svg&amp;gt;
            Processing
          &amp;lt;/span&amp;gt;
        ) : 'Send'}
      &amp;lt;/button&amp;gt;
    &amp;lt;/form&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatMessage.tsx

'use client'
import { Message } from 'ai';
import ReactMarkdown from 'react-markdown';

interface ChatMessageProps {
  message: Message;
}

export default function ChatMessage({ message }: ChatMessageProps) {
  return (
    &amp;lt;div
      className={`flex items-start gap-4 p-6 rounded-2xl shadow-sm transition-colors ${
        message.role === 'assistant'
          ? 'bg-white dark:bg-gray-800 border border-gray-100 dark:border-gray-700'
          : 'bg-blue-50 dark:bg-blue-900/30 border border-blue-100 dark:border-blue-800'
      }`}
    &amp;gt;
      &amp;lt;div className={`w-8 h-8 rounded-full flex items-center justify-center flex-shrink-0 ${
        message.role === 'assistant'
          ? 'bg-purple-100 text-purple-600 dark:bg-purple-900 dark:text-purple-300'
          : 'bg-blue-100 text-blue-600 dark:bg-blue-900 dark:text-blue-300'
      }`}&amp;gt;
        {message.role === 'assistant' ? '🤖' : '👤'}
      &amp;lt;/div&amp;gt;
      &amp;lt;div className="flex-1 min-w-0"&amp;gt;
        &amp;lt;div className="font-medium text-sm mb-2 text-gray-700 dark:text-gray-300"&amp;gt;
          {message.role === 'assistant' ? 'AI Assistant' : 'You'}
        &amp;lt;/div&amp;gt;
        &amp;lt;div className="prose dark:prose-invert prose-sm max-w-none"&amp;gt;
          &amp;lt;ReactMarkdown&amp;gt;{message.content}&amp;lt;/ReactMarkdown&amp;gt;
        &amp;lt;/div&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FileUpload.tsx

"use client"
import React, { useState } from 'react';

export default function FileUpload() {
  const [isUploading, setIsUploading] = useState(false);
  const [message, setMessage] = useState('');
  const [error, setError] = useState('');

  const handleFileUpload = async (e: React.ChangeEvent&amp;lt;HTMLInputElement&amp;gt;) =&amp;gt; {
    const file = e.target.files?.[0];
    if (!file) return;

    // Reset states
    setMessage('');
    setError('');
    setIsUploading(true);

    try {
      const formData = new FormData();
      formData.append('file', file);

      const response = await fetch('/api/chat/upload', {
        method: 'POST',
        body: formData,
      });

      const data = await response.json();

      if (!response.ok) {
        throw new Error(data.error || 'Error uploading file');
      }

      setMessage(`Successfully uploaded ${file.name}`);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Error uploading file');
    } finally {
      setIsUploading(false);
    }
  };

  return (
    &amp;lt;div className="mb-6"&amp;gt;
      &amp;lt;div className="flex flex-col sm:flex-row items-center gap-4"&amp;gt;
        &amp;lt;label
          className={`flex items-center gap-2 px-6 py-3 rounded-xl border-2 border-dashed
            transition-all duration-200 cursor-pointer
            ${isUploading 
              ? 'border-gray-300 bg-gray-50 dark:border-gray-700 dark:bg-gray-800/50'
              : 'border-blue-300 hover:border-blue-400 hover:bg-blue-50 dark:border-blue-700 dark:hover:border-blue-600 dark:hover:bg-blue-900/30'
            }`}
        &amp;gt;
          &amp;lt;svg 
            className={`w-5 h-5 ${isUploading ? 'text-gray-400' : 'text-blue-500'}`} 
            fill="none" 
            stroke="currentColor" 
            viewBox="0 0 24 24"
          &amp;gt;
            &amp;lt;path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M4 16v1a3 3 0 003 3h10a3 3 0 003-3v-1m-4-8l-4-4m0 0L8 8m4-4v12" /&amp;gt;
          &amp;lt;/svg&amp;gt;
          &amp;lt;span className={`font-medium ${isUploading ? 'text-gray-400' : 'text-blue-500'}`}&amp;gt;
            {isUploading ? 'Uploading...' : 'Upload Document'}
          &amp;lt;/span&amp;gt;
          &amp;lt;input
            type="file"
            className="hidden"
            accept=".pdf,.docx"
            onChange={handleFileUpload}
            disabled={isUploading}
          /&amp;gt;
        &amp;lt;/label&amp;gt;
        &amp;lt;span className="text-sm text-gray-500 dark:text-gray-400 flex items-center gap-2"&amp;gt;
          &amp;lt;svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24"&amp;gt;
            &amp;lt;path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" /&amp;gt;
          &amp;lt;/svg&amp;gt;
          Supported: PDF, DOCX
        &amp;lt;/span&amp;gt;
      &amp;lt;/div&amp;gt;

      {message &amp;amp;&amp;amp; (
        &amp;lt;div className="mt-4 p-4 bg-green-50 dark:bg-green-900/30 rounded-xl border border-green-100 dark:border-green-800"&amp;gt;
          &amp;lt;p className="text-sm text-green-600 dark:text-green-400 flex items-center gap-2"&amp;gt;
            &amp;lt;svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24"&amp;gt;
              &amp;lt;path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M5 13l4 4L19 7" /&amp;gt;
            &amp;lt;/svg&amp;gt;
            {message}
          &amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
      )}
      {error &amp;amp;&amp;amp; (
        &amp;lt;div className="mt-4 p-4 bg-red-50 dark:bg-red-900/30 rounded-xl border border-red-100 dark:border-red-800"&amp;gt;
          &amp;lt;p className="text-sm text-red-600 dark:text-red-400 flex items-center gap-2"&amp;gt;
            &amp;lt;svg className="w-4 h-4" fill="none" stroke="currentColor" viewBox="0 0 24 24"&amp;gt;
              &amp;lt;path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4m0 4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" /&amp;gt;
            &amp;lt;/svg&amp;gt;
            {error}
          &amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
      )}
    &amp;lt;/div&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatPage.tsx

"use client"
import { useChat } from 'ai/react';

import ChatInput from './ChatInput';
import ChatMessage from './ChatMessage';
import FileUpload from './FileUpload';

export default function ChatPage() {
  const { input, messages, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat',
    onError: (error) =&amp;gt; {
      console.error('Chat error:', error);
      alert('Error: ' + error.message);
    }
  });

  return (
    &amp;lt;div className="flex flex-col h-screen bg-gray-50 dark:bg-gray-900"&amp;gt;
      &amp;lt;div className="flex-1 max-w-5xl mx-auto w-full p-4 md:p-6 lg:p-8"&amp;gt;
        &amp;lt;div className="flex-1 overflow-y-auto mb-4 space-y-6"&amp;gt;
          &amp;lt;h1 className="text-3xl font-bold text-gray-900 dark:text-white text-center mb-8"&amp;gt;
            RAG-Powered Knowledge Base Chat
          &amp;lt;/h1&amp;gt;
          &amp;lt;div className="bg-white dark:bg-gray-800 rounded-xl shadow-lg p-6"&amp;gt;
            &amp;lt;FileUpload /&amp;gt;
          &amp;lt;/div&amp;gt;
          &amp;lt;div className="space-y-6"&amp;gt;
            {messages.map((message) =&amp;gt; (
              &amp;lt;ChatMessage key={message.id} message={message} /&amp;gt;
            ))}
          &amp;lt;/div&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div className="sticky bottom-0 bg-white dark:bg-gray-800 rounded-xl shadow-lg p-4"&amp;gt;
          &amp;lt;ChatInput 
            input={input} 
            handleInputChange={handleInputChange} 
            handleSubmit={handleSubmit} 
            isLoading={isLoading}
          /&amp;gt;
        &amp;lt;/div&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Voilà! You're ready to run your code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm run dev&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Click the &lt;code&gt;Upload Document&lt;/code&gt; button to upload the document you want to store. Once the upload succeeds, your Pinecone dashboard will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmb1losih584zxj5rocw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmb1losih584zxj5rocw.png" alt="Store data" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the document loaded, you can ask your AI assistant questions about its content and get context-aware responses. Here is a screenshot of my test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm6lw4k1ykv07hd4tc7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm6lw4k1ykv07hd4tc7j.png" alt="Assistant App UI" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy Coding 😎! Feel free to share your experience and feedback too. Cheers!&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building an AI Assistant with Ollama and Next.js - Part 2 (Using Packages)</title>
      <dc:creator>Abayomi Olatunji</dc:creator>
      <pubDate>Thu, 29 May 2025 03:35:41 +0000</pubDate>
      <link>https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-part-2-using-packages-1nli</link>
      <guid>https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-part-2-using-packages-1nli</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Missed Part 1?&lt;/strong&gt; Start with the basics and learn how to build an AI assistant using &lt;code&gt;Ollama&lt;/code&gt; locally in a Next.js app:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-4c2d"&gt;Building an AI Assistant with Ollama and Next.js - Part 1&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;🧠 Let's go — Part 2&lt;/h2&gt;

&lt;p&gt;In Part 1, we set up a local AI assistant using &lt;strong&gt;Ollama&lt;/strong&gt;, &lt;strong&gt;Next.js&lt;/strong&gt;, and the &lt;strong&gt;Gemma 3:1B model&lt;/strong&gt; with minimal setup.&lt;br&gt;&lt;br&gt;
In this article, we’ll explore &lt;strong&gt;two powerful and flexible methods&lt;/strong&gt; to integrate Ollama directly into your &lt;strong&gt;Next.js&lt;/strong&gt; project using JavaScript libraries.&lt;/p&gt;

&lt;p&gt;We'll walk through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing the necessary packages&lt;/li&gt;
&lt;li&gt;How each method works&lt;/li&gt;
&lt;li&gt;Benefits and differences between them&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;🛠 Tools Used&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js&lt;/strong&gt; – App framework for building fast React apps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TailwindCSS&lt;/strong&gt; – Styling made simple and responsive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cursor IDE&lt;/strong&gt; – Developer-friendly coding environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt; – Local model runner&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemma 3:1B Model&lt;/strong&gt; – Lightweight, open-source LLM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama.js&lt;/strong&gt; &lt;a href="https://github.com/ollama/ollama-js" rel="noopener noreferrer"&gt;https://github.com/ollama/ollama-js&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI SDK&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/ai" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/ai&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ollama-ai-provider&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/ollama-ai-provider" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/ollama-ai-provider&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;react-markdown&lt;/strong&gt; - &lt;a href="https://www.npmjs.com/package/react-markdown" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/react-markdown&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;🚀 Getting Started&lt;/h2&gt;

&lt;p&gt;Make sure you already have Ollama and the model installed. Run this in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run gemma3:1b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;📥 You can get the model from: &lt;a href="https://ollama.com/library" rel="noopener noreferrer"&gt;https://ollama.com/library&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;📦 Method 1 – Using ollama-js&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/ollama/ollama-js" rel="noopener noreferrer"&gt;ollama-js&lt;/a&gt; package is a lightweight Node client for interacting with the Ollama server directly from your code.&lt;/p&gt;

&lt;h3&gt;📌 Install:&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;📁 API Route in Next.js&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app/api/chat/route.js

import ollama from 'ollama';

export async function POST(req) {
  const { message } = await req.json();
  const response = await ollama.chat({
    model: 'gemma3:1b',
    messages: [{ role: 'user', content: message }],
  });

  return Response.json(response);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method works seamlessly with the existing UI implementation from &lt;a href="https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-4c2d"&gt;&lt;strong&gt;Part 1&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;✅ Benefits:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Minimal setup&lt;/li&gt;
&lt;li&gt;Direct control over the model and requests&lt;/li&gt;
&lt;li&gt;Great for full-stack or custom workflows&lt;/li&gt;
&lt;/ul&gt;
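Streaming with ollama-js is manual but straightforward: pass `stream: true` and iterate the returned chunks. Below is a hedged sketch; `collectChunks` is an illustrative helper, and the chunk shape assumes the `{ message: { content } }` objects that ollama-js yields when streaming:

```javascript
// Illustrative helper: accumulates the text content of streamed chat chunks.
// Each chunk is assumed to look like { message: { content: string } },
// matching what ollama-js yields when stream: true is set.
async function collectChunks(parts) {
  let text = '';
  for await (const part of parts) {
    text += part.message.content;
  }
  return text;
}

// Usage sketch (assumes a local Ollama server and the gemma3:1b model):
// import ollama from 'ollama';
// const stream = await ollama.chat({
//   model: 'gemma3:1b',
//   messages: [{ role: 'user', content: 'Hello!' }],
//   stream: true,
// });
// const reply = await collectChunks(stream);
```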




&lt;h2&gt;
  
  
  ⚡ Method 2 – Using &lt;code&gt;ai-sdk&lt;/code&gt; + &lt;code&gt;ollama-ai-provider&lt;/code&gt; + &lt;code&gt;react-markdown&lt;/code&gt; &lt;strong&gt;(Preferred)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This method uses &lt;a href="https://ai-sdk.dev" rel="noopener noreferrer"&gt;AI SDK&lt;/a&gt;, which abstracts a lot of complexity and provides a seamless experience, especially for frontend-focused applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  📌 Install the packages first:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install ai ollama-ai-provider react-markdown

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🧠 Usage Overview:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app/api/chat/route.ts

import { streamText } from 'ai';
import { NextRequest } from 'next/server';
import { createOllama } from 'ollama-ai-provider';

export const runtime = 'edge';

// Create Ollama provider with configuration
const ollamaProvider = createOllama();

// Configure the model name
const MODEL_NAME = process.env.OLLAMA_MODEL || 'gemma3:1b';

export async function POST(req: NextRequest) {
  try {
    const { messages } = await req.json();

    if (!messages || !Array.isArray(messages) || messages.length === 0) {
      return new Response('Invalid messages format', { status: 400 });
    }

    // Add system message if not present
    const messagesWithSystem = messages[0]?.role !== 'system' 
      ? [
          { 
            role: 'system', 
            content: 'You are a helpful AI assistant powered by Ollama. You help users with their questions and tasks.'
          },
          ...messages
        ]
      : messages;

    const result = await streamText({
      model: ollamaProvider(MODEL_NAME),
      messages: messagesWithSystem,
    });

    return result.toDataStreamResponse();
  } catch (error) {
    console.error('Chat API error:', error);
    return new Response(
      JSON.stringify({ error: 'Failed to process chat request' }), 
      { status: 500 }
    );
  }
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ChatInput.tsx

import { useEffect, useRef } from 'react';

interface ChatInput2Props {
  input: string;
  handleInputChange: (e: React.ChangeEvent&amp;lt;HTMLTextAreaElement&amp;gt;) =&amp;gt; void;
  handleSubmit: (e: React.FormEvent) =&amp;gt; void;
  isLoading: boolean;
}

export default function ChatInput2({
  input,
  handleInputChange,
  handleSubmit,
  isLoading
}: ChatInput2Props) {
  const textareaRef = useRef&amp;lt;HTMLTextAreaElement&amp;gt;(null);

  useEffect(() =&amp;gt; {
    if (textareaRef.current) {
      textareaRef.current.style.height = 'auto';
      textareaRef.current.style.height = `${textareaRef.current.scrollHeight}px`;
    }
  }, [input]);

  return (
    &amp;lt;form onSubmit={handleSubmit} className="flex items-end gap-4 border-t border-gray-700 bg-gray-800 p-4 sticky bottom-0"&amp;gt;
      &amp;lt;div className="relative flex-1"&amp;gt;
        &amp;lt;textarea
          ref={textareaRef}
          className="w-full resize-none rounded-xl border border-gray-600 bg-gray-700 p-4 pr-12 text-gray-100 placeholder-gray-400 focus:outline-none focus:ring-2 focus:ring-blue-500 max-h-[200px] min-h-[56px]"
          rows={1}
          placeholder="Type your message..."
          value={input}
          onChange={handleInputChange}
          disabled={isLoading}
        /&amp;gt;
        &amp;lt;button
          type="submit"
          disabled={isLoading || !input.trim()}
          className="absolute bottom-2 right-2 rounded-lg bg-blue-600 p-2 text-white hover:bg-blue-700 disabled:opacity-50 disabled:hover:bg-blue-600"
        &amp;gt;
          &amp;lt;svg
            xmlns="http://www.w3.org/2000/svg"
            fill="none"
            viewBox="0 0 24 24"
            strokeWidth={2}
            stroke="currentColor"
            className="w-5 h-5"
          &amp;gt;
            &amp;lt;path
              strokeLinecap="round"
              strokeLinejoin="round"
              d="M6 12L3.269 3.126A59.768 59.768 0 0121.485 12 59.77 59.77 0 013.27 20.876L5.999 12zm0 0h7.5"
            /&amp;gt;
          &amp;lt;/svg&amp;gt;
        &amp;lt;/button&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/form&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ChatMessage.tsx

import ReactMarkdown from 'react-markdown';

interface ChatMessage2Props {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

export default function ChatMessage2({ role, content }: ChatMessage2Props) {
  return (
    &amp;lt;div
      className={`flex ${
        role === 'user' ? 'justify-end' : 'justify-start'
      } mb-4`}
    &amp;gt;
      &amp;lt;div
        className={`max-w-[80%] rounded-xl p-4 shadow-md ${
          role === 'user'
            ? 'bg-blue-600 text-gray-100'
            : 'bg-gray-700 text-gray-100 border border-gray-600'
        }`}
      &amp;gt;
        &amp;lt;ReactMarkdown
          components={{
            p: ({ children }) =&amp;gt; &amp;lt;p className="mb-2 last:mb-0"&amp;gt;{children}&amp;lt;/p&amp;gt;,
            code: ({ children }) =&amp;gt; (
              &amp;lt;code
                className={`block p-2 rounded my-2 ${
                  role === 'user'
                    ? 'bg-blue-700 text-gray-100'
                    : 'bg-gray-800 text-gray-100'
                }`}
              &amp;gt;
                {children}
              &amp;lt;/code&amp;gt;
            ),
            ul: ({ children }) =&amp;gt; (
              &amp;lt;ul className="list-disc list-inside mb-2 text-gray-100"&amp;gt;{children}&amp;lt;/ul&amp;gt;
            ),
            ol: ({ children }) =&amp;gt; (
              &amp;lt;ol className="list-decimal list-inside mb-2 text-gray-100"&amp;gt;{children}&amp;lt;/ol&amp;gt;
            ),
          }}
        &amp;gt;
          {content}
        &amp;lt;/ReactMarkdown&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ChatPage.tsx

"use client"
import { useChat } from 'ai/react';
import ChatInput2 from './ChatInput2';
import ChatMessage2 from './ChatMessage2';

type MessageRole = 'system' | 'user' | 'assistant';

function normalizeRole(role: string): MessageRole {
  if (role === 'system' || role === 'user' || role === 'assistant') {
    return role as MessageRole;
  }
  return 'assistant';
}

export default function Chat2Page() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat2',
    initialMessages: [
      {
        id: 'system-1',
        role: 'system',
        content: 'You are a helpful AI assistant powered by Ollama. You can help users with various tasks and answer their questions.',
      },
    ],
  });

  return (
    &amp;lt;div className="container mx-auto max-w-4xl p-4 h-[calc(100vh-2rem)] bg-gray-900"&amp;gt;
      &amp;lt;div className="mb-4"&amp;gt;
        &amp;lt;h1 className="text-3xl font-bold text-gray-100"&amp;gt;AI Chat Assistant v2&amp;lt;/h1&amp;gt;
        &amp;lt;p className="text-gray-400"&amp;gt;
          Powered by Ollama with markdown support and streaming responses
        &amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;

      &amp;lt;div className="flex flex-col h-[calc(100%-8rem)]"&amp;gt;
        &amp;lt;div className="flex-1 overflow-y-auto rounded-xl border border-gray-700 bg-gray-800 p-4 mb-4"&amp;gt;
          {messages.map((message) =&amp;gt; (
            &amp;lt;ChatMessage2
              key={message.id}
              role={normalizeRole(message.role)}
              content={message.content}
            /&amp;gt;
          ))}
          {messages.length === 1 &amp;amp;&amp;amp; (
            &amp;lt;div className="flex h-full items-center justify-center text-gray-500"&amp;gt;
              Start a conversation by typing a message below
            &amp;lt;/div&amp;gt;
          )}
        &amp;lt;/div&amp;gt;

        &amp;lt;ChatInput2
          input={input}
          handleInputChange={handleInputChange}
          handleSubmit={handleSubmit}
          isLoading={isLoading}
        /&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app/chat/page.tsx

import ChatPage from '@/modules/chat/ChatPage';

export default function Chat() {
  return &amp;lt;ChatPage /&amp;gt;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ✅ Benefits:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Built-in support for streaming responses (for real-time UX)&lt;/li&gt;
&lt;li&gt;Works smoothly with React Server Components&lt;/li&gt;
&lt;li&gt;Clean abstraction that improves maintainability&lt;/li&gt;
&lt;li&gt;Easy markdown rendering with react-markdown&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📸 Outcome
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuodxq5xouwtu23bnadw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuodxq5xouwtu23bnadw0.png" alt="Output from Ollama and AI SDK" width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Summary: Which Method Should You Use?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;code&gt;ollama-js&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;
&lt;code&gt;ai-sdk&lt;/code&gt; + &lt;code&gt;ollama-ai-provider&lt;/code&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Setup Simplicity&lt;/td&gt;
&lt;td&gt;✅ Simple&lt;/td&gt;
&lt;td&gt;✅ Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Streaming Support&lt;/td&gt;
&lt;td&gt;❌ Manual&lt;/td&gt;
&lt;td&gt;✅ Built-in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frontend Friendly&lt;/td&gt;
&lt;td&gt;❌ More Backend Focused&lt;/td&gt;
&lt;td&gt;✅ Tailored for React&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Markdown Rendering&lt;/td&gt;
&lt;td&gt;❌ Manual&lt;/td&gt;
&lt;td&gt;✅ Easy via &lt;code&gt;react-markdown&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Recommended For&lt;/td&gt;
&lt;td&gt;Custom/Low-level Projects&lt;/td&gt;
&lt;td&gt;Production-ready AI UI&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  👋 What's Next?
&lt;/h2&gt;

&lt;p&gt;This is another way to build your AI assistant locally.&lt;br&gt;
👉 Continue to the next article to learn how to build this with LangChain and Ollama for more advanced AI workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Happy Coding....&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nextjs</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Building an AI Assistant with Ollama and Next.js - Part 1</title>
      <dc:creator>Abayomi Olatunji</dc:creator>
      <pubDate>Wed, 28 May 2025 23:48:06 +0000</pubDate>
      <link>https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-4c2d</link>
      <guid>https://dev.to/abayomijohn273/building-an-ai-assistant-with-ollama-and-nextjs-4c2d</guid>
      <description>&lt;h2&gt;
  
  
  Introduction 🧠💬
&lt;/h2&gt;

&lt;p&gt;Artificial Intelligence (AI) is reshaping how we interact with digital tools, and building your own local AI assistant has never been easier. In this guide, I’ll walk you through how I built a simple AI assistant using &lt;strong&gt;Next.js&lt;/strong&gt;, &lt;strong&gt;TailwindCSS&lt;/strong&gt;, and &lt;strong&gt;Ollama&lt;/strong&gt;, running the &lt;strong&gt;Gemma 3:1B&lt;/strong&gt; model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can run any model of your choice from the available models on &lt;a href="https://ollama.com/models" rel="noopener noreferrer"&gt;https://ollama.com/models&lt;/a&gt;; however, &lt;em&gt;you should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Whether you're a beginner or just looking for a lightweight and privacy-friendly AI implementation, you’ll find this guide approachable and relatable. No cloud APIs. No subscriptions. Just local magic.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧰 Tools Used
&lt;/h2&gt;

&lt;p&gt;Before we dive in, here are the key tools used in this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Next.js&lt;/strong&gt;&lt;/a&gt; – Our React framework of choice, with app router support.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;TailwindCSS&lt;/strong&gt;&lt;/a&gt; – For fast and beautiful styling.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cursor.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Cursor IDE&lt;/strong&gt;&lt;/a&gt; – A modern coding environment tailored for AI-assisted development. &lt;em&gt;(I will create an article to help you setup your IDE, use rules and also work with MCPs)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Ollama&lt;/strong&gt;&lt;/a&gt; – A simple way to run open-source large language models locally. Download and use directly from your terminal.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ollama.com/library/gemma3" rel="noopener noreferrer"&gt;&lt;strong&gt;Gemma 3:1B model&lt;/strong&gt;&lt;/a&gt; – A lightweight model great for running on most modern laptops. Using this just for testing purposes. Others includes Llama, DeepSeek, and Mistral, etc.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📦 Step 1: Set Up Your Next.js App
&lt;/h2&gt;

&lt;p&gt;Let’s start by creating a new Next.js project. Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-next-app@latest ollama-assistant --app
cd ollama-assistant
cursor .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the installer now lets you configure Tailwind, TypeScript, and other options as part of the setup process.&lt;/p&gt;

&lt;p&gt;Then run the app locally using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🤖 Step 2: Install and Run Ollama with Gemma
&lt;/h2&gt;

&lt;p&gt;Ollama makes it super simple to run models locally. Head over to &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;https://ollama.com/download&lt;/a&gt; and install it for your OS.&lt;/p&gt;

&lt;p&gt;Once installed, open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run gemma3:1b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will download and start the Gemma 3:1B model locally. Once the download is complete, the model will launch, and you can start chatting with it on the terminal, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x4bm0t90kd1ni8pvtum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x4bm0t90kd1ni8pvtum.png" alt="Terminal Screenshot of Ollama setup" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ Step 3: Connect Your App to Ollama
&lt;/h2&gt;

&lt;p&gt;Next, we’ll add a simple API route that communicates with the local Ollama server. Ollama provides REST API endpoints that allow you to interact with the downloaded models. Once the Ollama server is running, it exposes endpoints at &lt;code&gt;http://localhost:11434/api&lt;/code&gt;, where you can send your HTTP requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app/api/chat/route.ts

import { NextResponse } from 'next/server';

export async function POST(req: Request) {
  try {
    const { message } = await req.json();

    // Make request to Ollama API
    const response = await fetch('http://localhost:11434/api/generate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gemma3:1b',
        prompt: message,
        stream: false,
      }),
    });

    const data = await response.json();

    return NextResponse.json({
      response: data.response,
    });
  } catch (error) {
    console.error('Error:', error);
    return NextResponse.json(
      { error: 'Failed to process the request' },
      { status: 500 }
    );
  }
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This endpoint accepts a &lt;code&gt;message&lt;/code&gt; from the frontend and returns the model’s response.&lt;/p&gt;
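The request body that the route sends to Ollama can be factored into a small helper (illustrative, not from the original code), which also makes the model name easy to swap:

```javascript
// Illustrative helper: builds the JSON payload for Ollama's
// /api/generate endpoint. stream: false asks Ollama to return one
// complete response instead of a stream of chunks.
function buildGeneratePayload(message, model = 'gemma3:1b') {
  return {
    model,
    prompt: message,
    stream: false,
  };
}

// Usage sketch (assumes Ollama is running locally on port 11434):
// const res = await fetch('http://localhost:11434/api/generate', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildGeneratePayload('Hello!')),
// });
// const data = await res.json(); // data.response holds the model's reply
```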




&lt;h2&gt;
  
  
  🖼️ Step 4: Build the Chat Interface
&lt;/h2&gt;

&lt;p&gt;Let’s create a simple UI where users can type and get responses from our assistant.&lt;/p&gt;

&lt;p&gt;These files will live in a modules folder named &lt;code&gt;chat&lt;/code&gt; (you can use your preferred project folder structure).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatInput.tsx

import React, { useState } from 'react';

interface ChatInputProps {
  onSendMessage: (message: string) =&amp;gt; void;
}

const ChatInput: React.FC&amp;lt;ChatInputProps&amp;gt; = ({ onSendMessage }) =&amp;gt; {
  const [message, setMessage] = useState('');

  const handleSubmit = (e: React.FormEvent) =&amp;gt; {
    e.preventDefault();
    if (message.trim()) {
      onSendMessage(message);
      setMessage('');
    }
  };

  return (
    &amp;lt;form onSubmit={handleSubmit} className="border-t border-gray-200 dark:border-gray-700 p-4"&amp;gt;
      &amp;lt;div className="flex items-center gap-2"&amp;gt;
        &amp;lt;input
          type="text"
          value={message}
          onChange={(e) =&amp;gt; setMessage(e.target.value)}
          placeholder="Type your message..."
          className="flex-1 rounded-lg border border-gray-300 dark:border-gray-600 p-2 
                   bg-white dark:bg-gray-800 text-gray-900 dark:text-gray-100"
        /&amp;gt;
        &amp;lt;button
          type="submit"
          className="bg-blue-600 text-white px-4 py-2 rounded-lg hover:bg-blue-700 
                   transition-colors duration-200"
        &amp;gt;
          Send
        &amp;lt;/button&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/form&amp;gt;
  );
};

export default ChatInput; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatMessage.tsx

import React from 'react';

interface ChatMessageProps {
  message: string;
  isUser: boolean;
}

const ChatMessage: React.FC&amp;lt;ChatMessageProps&amp;gt; = ({ message, isUser }) =&amp;gt; {
  return (
    &amp;lt;div className={`flex ${isUser ? 'justify-end' : 'justify-start'} mb-4`}&amp;gt;
      &amp;lt;div
        className={`${
          isUser
            ? 'bg-blue-600 text-white rounded-l-lg rounded-tr-lg'
            : 'bg-gray-200 dark:bg-gray-700 text-gray-800 dark:text-gray-200 rounded-r-lg rounded-tl-lg'
        } px-4 py-2 max-w-[80%]`}
      &amp;gt;
        &amp;lt;p className="text-sm"&amp;gt;{message}&amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};

export default ChatMessage; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatPage.tsx 

"use client"
import React, { useEffect, useRef, useState } from 'react';
import ChatInput from './ChatInput';
import ChatMessage from './ChatMessage';

interface Message {
  text: string;
  isUser: boolean;
}

const ChatPage: React.FC = () =&amp;gt; {
  const [messages, setMessages] = useState&amp;lt;Message[]&amp;gt;([]);
  const [isLoading, setIsLoading] = useState(false);
  const messagesEndRef = useRef&amp;lt;HTMLDivElement&amp;gt;(null);

  const scrollToBottom = () =&amp;gt; {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
  };

  useEffect(() =&amp;gt; {
    scrollToBottom();
  }, [messages]);

  const handleSendMessage = async (message: string) =&amp;gt; {
    // Add user message
    setMessages(prev =&amp;gt; [...prev, { text: message, isUser: true }]);
    setIsLoading(true);

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ message }),
      });

      const data = await response.json();

      // Add AI response
      setMessages(prev =&amp;gt; [...prev, { text: data.response, isUser: false }]);
    } catch (error) {
      console.error('Error:', error);
      setMessages(prev =&amp;gt; [...prev, { text: 'Sorry, there was an error processing your request.', isUser: false }]);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    &amp;lt;div className="flex flex-col h-screen max-w-2xl mx-auto"&amp;gt;
      &amp;lt;div className="bg-white dark:bg-gray-800 shadow-lg rounded-lg m-4 flex-1 flex flex-col overflow-hidden"&amp;gt;
        &amp;lt;div className="p-4 border-b border-gray-200 dark:border-gray-700"&amp;gt;
          &amp;lt;h1 className="text-xl font-semibold text-gray-800 dark:text-white"&amp;gt;AI Assistant&amp;lt;/h1&amp;gt;
        &amp;lt;/div&amp;gt;

        &amp;lt;div className="flex-1 overflow-y-auto p-4"&amp;gt;
          {messages.map((msg, index) =&amp;gt; (
            &amp;lt;ChatMessage key={index} message={msg.text} isUser={msg.isUser} /&amp;gt;
          ))}
          {isLoading &amp;amp;&amp;amp; (
            &amp;lt;div className="flex justify-start mb-4"&amp;gt;
              &amp;lt;div className="bg-gray-200 dark:bg-gray-700 rounded-lg px-4 py-2"&amp;gt;
                &amp;lt;div className="animate-pulse flex space-x-2"&amp;gt;
                  &amp;lt;div className="w-2 h-2 bg-gray-400 rounded-full"&amp;gt;&amp;lt;/div&amp;gt;
                  &amp;lt;div className="w-2 h-2 bg-gray-400 rounded-full"&amp;gt;&amp;lt;/div&amp;gt;
                  &amp;lt;div className="w-2 h-2 bg-gray-400 rounded-full"&amp;gt;&amp;lt;/div&amp;gt;
                &amp;lt;/div&amp;gt;
              &amp;lt;/div&amp;gt;
            &amp;lt;/div&amp;gt;
          )}
          &amp;lt;div ref={messagesEndRef} /&amp;gt;
        &amp;lt;/div&amp;gt;

        &amp;lt;ChatInput onSendMessage={handleSendMessage} /&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};

export default ChatPage; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, the &lt;code&gt;page.tsx&lt;/code&gt; in the app folder will contain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import ChatPage from '@/modules/ChatPage';

export default function Chat() {
  return &amp;lt;ChatPage /&amp;gt;;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is how the user interface looks after running &lt;code&gt;npm run dev&lt;/code&gt; in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk0ll9s9nulptg4p3wrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk0ll9s9nulptg4p3wrc.png" alt="Website UI" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Voilà! You can continue to build on this.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 How It Works
&lt;/h2&gt;

&lt;p&gt;Here’s a quick summary of what’s happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You enter a message in the text area.&lt;/li&gt;
&lt;li&gt;The app sends your message to the /api/chat endpoint.&lt;/li&gt;
&lt;li&gt;The endpoint forwards it to Ollama’s local API (localhost:11434).&lt;/li&gt;
&lt;li&gt;Ollama responds with the model’s reply.&lt;/li&gt;
&lt;li&gt;The frontend displays the AI response instantly.&lt;/li&gt;
&lt;/ul&gt;
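The response-handling step of this round trip can be sketched with a small helper (illustrative, not from the original code) that mirrors how `handleSendMessage` falls back to an error message:

```javascript
// Illustrative helper mirroring the flow above: given the JSON returned by
// /api/chat, return the model's reply, or the same fallback text the UI
// shows when something goes wrong.
function extractReply(data) {
  return typeof data.response === 'string'
    ? data.response
    : 'Sorry, there was an error processing your request.';
}

// Usage sketch (assumes the /api/chat route from Step 3 is running):
// const res = await fetch('/api/chat', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ message: 'Hello!' }),
// });
// const replyText = extractReply(await res.json());
```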




&lt;h2&gt;
  
  
  💡 Why Use Ollama?
&lt;/h2&gt;

&lt;p&gt;✅ Privacy – Everything runs locally. No data leaves your device.&lt;br&gt;
✅ Speed – No network latency when calling the model.&lt;br&gt;
✅ Cost-effective – No token limits or monthly subscriptions.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✨ What's Next?
&lt;/h2&gt;

&lt;p&gt;While this method connects to the Ollama server externally (using the terminal), in the next article, I’ll show you how to build your assistant using the &lt;a href="https://github.com/ollama/ollama-js" rel="noopener noreferrer"&gt;ollamajs&lt;/a&gt; package directly in your codebase for even tighter integration. Stay tuned!&lt;/p&gt;

&lt;p&gt;📩 Feel free to drop a comment if you have any questions or need help setting things up!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nextjs</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Deploying NestJS Application using Vercel and Supabase</title>
      <dc:creator>Abayomi Olatunji</dc:creator>
      <pubDate>Fri, 25 Oct 2024 12:58:52 +0000</pubDate>
      <link>https://dev.to/abayomijohn273/deploying-nestjs-application-using-vercel-and-supabase-3n7m</link>
      <guid>https://dev.to/abayomijohn273/deploying-nestjs-application-using-vercel-and-supabase-3n7m</guid>
      <description>&lt;p&gt;Understand that deploying to Vercel is quite easy, however, there are some setups you need to take into consideration during deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;NestJS Project connected and working properly locally in the development environment with PostgreSQL &lt;/li&gt;
&lt;li&gt;Vercel Account for deployment&lt;/li&gt;
&lt;li&gt;Supabase Account (we will be setting up our PostgreSQL Database here)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let's start with the Supabase setup considering that your NestJS app is ready for deployment.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Supabase Account
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://supabase.com/" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; is an opensource firebase alternative with full support and seamless configuration of your PostgreSQL database, and it also provides additional features such as authentication, storage, etc.&lt;/p&gt;

&lt;p&gt;Set up a new account on Supabase and create a new project in the account. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa10o9s1bqamf87e9ful5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa10o9s1bqamf87e9ful5.png" alt="supabase project setup" width="800" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the setup is completed, click the &lt;strong&gt;Connect&lt;/strong&gt; button on the &lt;strong&gt;Home&lt;/strong&gt; page. This will show you different options for connecting the DB to your project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb2wjpr87uhow9w1fs9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb2wjpr87uhow9w1fs9j.png" alt="supabase db connect" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test the connection locally with the credentials provided to make sure everything works as expected.&lt;/p&gt;

&lt;p&gt;NOTE: Make sure the credentials are stored in your &lt;code&gt;.env&lt;/code&gt; file and never exposed (I believe you know this already 😉).&lt;/p&gt;

&lt;p&gt;Next, let's set up our Vercel account and deploy the project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vercel
&lt;/h3&gt;

&lt;p&gt;Typically, &lt;a href="https://vercel.com" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt; is known mostly to be used for front-end app deployment, however, it can also be used to deploy backend projects.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt; For medium to large-scale projects, use a service provider better suited to backend deployments instead.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On your Vercel account, create a new project and connect to your Git repository. Import your .env file and click the &lt;strong&gt;Deploy&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhhjhza6tls6q9x0s4k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhhjhza6tls6q9x0s4k1.png" alt="Vercel create project" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Voila, that's it 🎉🎉🎉.&lt;/em&gt;&lt;br&gt;
...&lt;/p&gt;
&lt;h3&gt;
  
  
  Common Issues You May Encounter
&lt;/h3&gt;
&lt;h4&gt;
  
  
  # Error: No Output Directory named "public"
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz8qxy0y2rnsn507vpz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz8qxy0y2rnsn507vpz1.png" alt="vercel error" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a common error because Vercel needs to know your output directory during the build process. To fix it, simply add a &lt;code&gt;vercel.json&lt;/code&gt; file with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "version": 2,

  "builds": [
    {
      "src": "src/main.ts",
      "use": "@vercel/node"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "src/main.ts",
      "methods": ["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]
    }
  ]
} 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Run the deployment again and that's all.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;...&lt;/p&gt;

&lt;h4&gt;
  
  
  Error: This Serverless Function has crashed
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouzds3gip80e7tcswxra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouzds3gip80e7tcswxra.png" alt="app crashed" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case, it was caused by a &lt;code&gt;module not found&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5tq7u4jvr3qsi3smep1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5tq7u4jvr3qsi3smep1.png" alt="error" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;...&lt;br&gt;
There are several ways to fix this problem:&lt;/p&gt;
&lt;h5&gt;
  
  
  Method 1 (Replace all your imports with relative paths)
&lt;/h5&gt;

&lt;p&gt;From&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { UsersService } from 'src/users/users.service';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { UsersService } from '../users/users.service';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...&lt;/p&gt;
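&lt;p&gt;For context, these absolute &lt;code&gt;src/...&lt;/code&gt; imports usually compile fine because the NestJS starter's &lt;strong&gt;tsconfig.json&lt;/strong&gt; sets a &lt;code&gt;baseUrl&lt;/code&gt;, which TypeScript honors at compile time but plain Node does not at runtime. A sketch of the relevant fragment (assuming the default Nest setup):&lt;/p&gt;

```json
{
  "compilerOptions": {
    // TypeScript resolves "src/users/users.service" against this baseUrl,
    // but the emitted JavaScript keeps the bare "src/..." specifier,
    // which Node cannot resolve when the function runs on Vercel.
    "baseUrl": "./"
  }
}
```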

&lt;h5&gt;
  
  
  Method 2 (Modify your vercel.json file and .gitignore file)
&lt;/h5&gt;

&lt;p&gt;I eventually went with this method because I didn't need to confine my app to using only relative path imports.&lt;/p&gt;

&lt;p&gt;So, modify the &lt;strong&gt;vercel.json&lt;/strong&gt; to this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "version": 2,

  "builds": [
    {
      "src": "dist/main.js",
      "use": "@vercel/node"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "dist/main.js",
      "methods": ["GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to your &lt;strong&gt;.gitignore&lt;/strong&gt; file and remove &lt;strong&gt;/dist&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Build the app locally (e.g. &lt;code&gt;npm run build&lt;/code&gt;), commit the generated &lt;strong&gt;dist&lt;/strong&gt; folder, then run a new deployment and that's all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy coding! 😎&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>node</category>
      <category>nestjs</category>
    </item>
    <item>
      <title>How to fix Nextjs image not loading on production</title>
      <dc:creator>Abayomi Olatunji</dc:creator>
      <pubDate>Wed, 25 Sep 2024 08:32:05 +0000</pubDate>
      <link>https://dev.to/abayomijohn273/how-to-fix-nextjs-image-not-loading-on-production-10k2</link>
      <guid>https://dev.to/abayomijohn273/how-to-fix-nextjs-image-not-loading-on-production-10k2</guid>
      <description>&lt;p&gt;Hello devs,&lt;br&gt;
I recently faced an issue around images not loading on production but working perfectly well on locals.&lt;/p&gt;

&lt;p&gt;Accessing the image in production gives this error message: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"url" parameter is valid but upstream response is invalid&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the above error, whichever Next.js version you are running, simply install &lt;strong&gt;sharp&lt;/strong&gt; and that's all!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i sharp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add sharp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>javascript</category>
      <category>nextjs</category>
      <category>react</category>
    </item>
    <item>
      <title>NextJS API working on locals but not working on production</title>
      <dc:creator>Abayomi Olatunji</dc:creator>
      <pubDate>Mon, 01 Nov 2021 10:25:05 +0000</pubDate>
      <link>https://dev.to/abayomijohn273/nextjs-api-working-on-locals-but-not-working-on-production-2je6</link>
      <guid>https://dev.to/abayomijohn273/nextjs-api-working-on-locals-but-not-working-on-production-2je6</guid>
      <description>&lt;p&gt;Hey, I am writing this post to share the experience I had dealing with NextJS API not working on production (that is, returning a 404 Bad Request).&lt;/p&gt;

&lt;p&gt;A 400 Bad Request simply means that the server cannot process a request due to a client error, such as a wrong URL or an issue in the service used in the request.&lt;/p&gt;

&lt;p&gt;For this particular use-case, the problem was related to the environment variables.&lt;/p&gt;

&lt;p&gt;The environment variables stored in the .env.local file weren't available after deploying to &lt;a href="https://vercel.com" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt;; the solution is to also set up your variables on Vercel.&lt;/p&gt;

&lt;p&gt;Let me show you a walkthrough;&lt;/p&gt;

&lt;p&gt;Add the variables your project needs to a .env file; you can check &lt;a href="https://nextjs.org/docs/basic-features/environment-variables" rel="noopener noreferrer"&gt;Environment Variables&lt;/a&gt; for more information. (NOTE: Make sure you add the file to &lt;strong&gt;.gitignore&lt;/strong&gt; so you don't expose it.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAMPLE ENV VARIABLE&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DB_USER=james
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
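&lt;p&gt;For context, here is a minimal sketch of how server-side Next.js code (an API route or &lt;code&gt;getServerSideProps&lt;/code&gt;) would read this variable — the &lt;code&gt;getDbUser&lt;/code&gt; helper is hypothetical:&lt;/p&gt;

```javascript
// Server-side code reads variables straight from process.env.
// DB_USER is the sample variable from above; note that a variable the
// browser needs must instead be prefixed with NEXT_PUBLIC_ so Next.js
// can inline it at build time.
function getDbUser() {
  return process.env.DB_USER ?? "(not set)";
}

console.log(getDbUser());
```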



&lt;p&gt;After deploying the app on Vercel, navigate to &lt;strong&gt;Settings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7veree7j0mn13gt9aloe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7veree7j0mn13gt9aloe.png" alt="Image Navigate to Settings" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Environment Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmqahfionpsjyuix591g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmqahfionpsjyuix591g.png" alt="Image Navigate to environment variable" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From there, you can add your environment variables and your web app will work as expected.&lt;/p&gt;

&lt;p&gt;I hope this is helpful to you.&lt;/p&gt;

&lt;p&gt;❤️❤️❤️&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>javascript</category>
      <category>react</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
