Abayomi Olatunji

Building an AI Assistant with Ollama and Next.js - Part 2 (Using Packages)

💡 Missed Part 1? Start with the basics and learn how to build an AI assistant using Ollama locally in a Next.js app:

👉 Building an AI Assistant with Ollama and Next.js - Part 1


🧠 Let's go: Part 2

In Part 1, we set up a local AI assistant using Ollama, Next.js, and the Gemma 3:1B model with minimal setup.

In this article, we'll explore two powerful and flexible methods to integrate Ollama directly into your Next.js project using JavaScript libraries.

We'll walk through:

  • Installing the necessary packages
  • How each method works
  • Benefits and differences between them

🛠 Tools Used

  • Ollama (local model runtime) with the Gemma 3:1B model
  • Next.js (App Router)
  • ollama-js
  • ai (the Vercel AI SDK) with ollama-ai-provider
  • react-markdown

🚀 Getting Started

Make sure you already have Ollama and the model installed. Run this in your terminal:

ollama run gemma3:1b

📥 You can get the model from: https://ollama.com/library
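
If the model isn't pulled yet, or you want to confirm the Ollama server is reachable (it listens on http://localhost:11434 by default), these two commands help:

ollama pull gemma3:1b
curl http://localhost:11434/api/tags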


📦 Method 1 – Using ollama-js

The ollama package (commonly known as ollama-js) is the official lightweight JavaScript client for interacting with the Ollama server directly from your code.

📌 Install:

npm install ollama

πŸ“ API Route in Next.js

// app/api/chat/route.js

import ollama from 'ollama';

export async function POST(req) {
  // Pull the user's message out of the request body
  const { message } = await req.json();

  // Send a single-turn chat request to the local Ollama server
  const response = await ollama.chat({
    model: 'gemma3:1b',
    messages: [{ role: 'user', content: message }],
  });

  return Response.json(response);
}


This method works seamlessly with the existing UI implementation from Part 1.
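
For reference, here's a minimal sketch of what the client-side call looks like. The helper name is hypothetical; the response shape follows what ollama.chat() returns, with the reply text in message.content:

// sendMessage is a hypothetical client-side helper for the route above
async function sendMessage(message: string): Promise<string> {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });

  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }

  // ollama.chat() resolves to an object with the reply in message.content
  const data = await res.json();
  return data.message.content;
}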

✅ Benefits:

  • Minimal setup
  • Direct control over the model and requests
  • Great for full-stack or custom workflows
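
Streaming isn't built in with this method (see the comparison table at the end), but ollama-js does support it: passing stream: true makes chat() return an async iterable of partial responses. Here's a rough sketch of a streaming variant of the route above:

// Hypothetical streaming variant of the Method 1 route
import ollama from 'ollama';

export async function POST(req) {
  const { message } = await req.json();

  // With stream: true, ollama.chat() yields partial responses as they arrive
  const stream = await ollama.chat({
    model: 'gemma3:1b',
    messages: [{ role: 'user', content: message }],
    stream: true,
  });

  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        controller.enqueue(encoder.encode(chunk.message.content));
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}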

⚡ Method 2 – Using ai-sdk + ollama-ai-provider + react-markdown (Preferred)

This method uses the Vercel AI SDK, which abstracts away much of the complexity and provides a seamless developer experience, especially for frontend-focused applications.

📌 Install the packages first:

npm install ai ollama-ai-provider react-markdown


🧠 Usage Overview:

// app/api/chat2/route.ts

import { streamText } from 'ai';
import { NextRequest } from 'next/server';
import { createOllama } from 'ollama-ai-provider';

// Run on the Edge runtime (fine for local development; note that a deployed
// edge function cannot reach an Ollama server on your localhost)
export const runtime = 'edge';

// Create Ollama provider with configuration
const ollamaProvider = createOllama();

// Configure the model name
const MODEL_NAME = process.env.OLLAMA_MODEL || 'gemma3:1b';

export async function POST(req: NextRequest) {
  try {
    const { messages } = await req.json();

    if (!messages || !Array.isArray(messages) || messages.length === 0) {
      return new Response('Invalid messages format', { status: 400 });
    }

    // Add system message if not present
    const messagesWithSystem = messages[0]?.role !== 'system' 
      ? [
          { 
            role: 'system', 
            content: 'You are a helpful AI assistant powered by Ollama. You help users with their questions and tasks.'
          },
          ...messages
        ]
      : messages;

    const result = await streamText({
      model: ollamaProvider(MODEL_NAME),
      messages: messagesWithSystem,
    });

    return result.toDataStreamResponse();
  } catch (error) {
    console.error('Chat API error:', error);
    return new Response(
      JSON.stringify({ error: 'Failed to process chat request' }), 
      { status: 500 }
    );
  }
} 
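Two quick configuration notes on this route. The model name comes from the OLLAMA_MODEL environment variable (falling back to gemma3:1b), so you can swap models without touching code:

# .env.local (optional: overrides the default model)
OLLAMA_MODEL=gemma3:1b

And if your Ollama server isn't at the default address, createOllama() accepts a baseURL option (the value below is, to my knowledge, the provider's default):

// Hypothetical: point the provider at a non-default Ollama host
const ollamaProvider = createOllama({
  baseURL: 'http://localhost:11434/api',
});
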
// ChatInput2.tsx

import { useEffect, useRef } from 'react';

interface ChatInput2Props {
  input: string;
  handleInputChange: (e: React.ChangeEvent<HTMLTextAreaElement>) => void;
  handleSubmit: (e: React.FormEvent) => void;
  isLoading: boolean;
}

export default function ChatInput2({
  input,
  handleInputChange,
  handleSubmit,
  isLoading
}: ChatInput2Props) {
  const textareaRef = useRef<HTMLTextAreaElement>(null);

  useEffect(() => {
    if (textareaRef.current) {
      textareaRef.current.style.height = 'auto';
      textareaRef.current.style.height = `${textareaRef.current.scrollHeight}px`;
    }
  }, [input]);

  return (
    <form onSubmit={handleSubmit} className="flex items-end gap-4 border-t border-gray-700 bg-gray-800 p-4 sticky bottom-0">
      <div className="relative flex-1">
        <textarea
          ref={textareaRef}
          className="w-full resize-none rounded-xl border border-gray-600 bg-gray-700 p-4 pr-12 text-gray-100 placeholder-gray-400 focus:outline-none focus:ring-2 focus:ring-blue-500 max-h-[200px] min-h-[56px]"
          rows={1}
          placeholder="Type your message..."
          value={input}
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading || !input.trim()}
          className="absolute bottom-2 right-2 rounded-lg bg-blue-600 p-2 text-white hover:bg-blue-700 disabled:opacity-50 disabled:hover:bg-blue-600"
        >
          <svg
            xmlns="http://www.w3.org/2000/svg"
            fill="none"
            viewBox="0 0 24 24"
            strokeWidth={2}
            stroke="currentColor"
            className="w-5 h-5"
          >
            <path
              strokeLinecap="round"
              strokeLinejoin="round"
              d="M6 12L3.269 3.126A59.768 59.768 0 0121.485 12 59.77 59.77 0 013.27 20.876L5.999 12zm0 0h7.5"
            />
          </svg>
        </button>
      </div>
    </form>
  );
} 
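One small UX tweak you might add (it's not in the original component): send on Enter and keep Shift+Enter for newlines by attaching an onKeyDown handler to the textarea:

// Hypothetical addition to ChatInput2: send on Enter, newline on Shift+Enter
const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
  if (e.key === 'Enter' && !e.shiftKey) {
    e.preventDefault();
    // requestSubmit() fires the form's submit event, which React's onSubmit picks up
    e.currentTarget.form?.requestSubmit();
  }
};

Then attach it with <textarea onKeyDown={handleKeyDown} ... />.
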
// ChatMessage2.tsx

import ReactMarkdown from 'react-markdown';

interface ChatMessage2Props {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

export default function ChatMessage2({ role, content }: ChatMessage2Props) {
  return (
    <div
      className={`flex ${
        role === 'user' ? 'justify-end' : 'justify-start'
      } mb-4`}
    >
      <div
        className={`max-w-[80%] rounded-xl p-4 shadow-md ${
          role === 'user'
            ? 'bg-blue-600 text-gray-100'
            : 'bg-gray-700 text-gray-100 border border-gray-600'
        }`}
      >
        <ReactMarkdown
          components={{
            p: ({ children }) => <p className="mb-2 last:mb-0">{children}</p>,
            code: ({ children }) => (
              <code
                className={`block p-2 rounded my-2 ${
                  role === 'user'
                    ? 'bg-blue-700 text-gray-100'
                    : 'bg-gray-800 text-gray-100'
                }`}
              >
                {children}
              </code>
            ),
            ul: ({ children }) => (
              <ul className="list-disc list-inside mb-2 text-gray-100">{children}</ul>
            ),
            ol: ({ children }) => (
              <ol className="list-decimal list-inside mb-2 text-gray-100">{children}</ol>
            ),
          }}
        >
          {content}
        </ReactMarkdown>
      </div>
    </div>
  );
} 
// ChatPage.tsx

"use client"
import { useChat } from 'ai/react';
import ChatInput2 from './ChatInput2';
import ChatMessage2 from './ChatMessage2';

type MessageRole = 'system' | 'user' | 'assistant';

function normalizeRole(role: string): MessageRole {
  if (role === 'system' || role === 'user' || role === 'assistant') {
    return role as MessageRole;
  }
  return 'assistant';
}

export default function Chat2Page() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat2',
    initialMessages: [
      {
        id: 'system-1',
        role: 'system',
        content: 'You are a helpful AI assistant powered by Ollama. You can help users with various tasks and answer their questions.',
      },
    ],
  });

  return (
    <div className="container mx-auto max-w-4xl p-4 h-[calc(100vh-2rem)] bg-gray-900">
      <div className="mb-4">
        <h1 className="text-3xl font-bold text-gray-100">AI Chat Assistant v2</h1>
        <p className="text-gray-400">
          Powered by Ollama with markdown support and streaming responses
        </p>
      </div>

      <div className="flex flex-col h-[calc(100%-8rem)]">
        <div className="flex-1 overflow-y-auto rounded-xl border border-gray-700 bg-gray-800 p-4 mb-4">
          {messages.map((message) => (
            <ChatMessage2
              key={message.id}
              role={normalizeRole(message.role)}
              content={message.content}
            />
          ))}
          {messages.length === 1 && (
            <div className="flex h-full items-center justify-center text-gray-500">
              Start a conversation by typing a message below
            </div>
          )}
        </div>

        <ChatInput2
          input={input}
          handleInputChange={handleInputChange}
          handleSubmit={handleSubmit}
          isLoading={isLoading}
        />
      </div>
    </div>
  );
} 
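The message list above doesn't follow the conversation as tokens stream in. A common fix (my addition, not part of the original code) is a small hook that scrolls a sentinel element into view whenever the messages change:

// useAutoScroll.ts: hypothetical helper hook to keep the chat pinned to the bottom
import { useEffect, useRef } from 'react';

export function useAutoScroll<T>(dep: T) {
  const bottomRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    // Scroll the sentinel into view whenever the watched value changes
    bottomRef.current?.scrollIntoView({ behavior: 'smooth' });
  }, [dep]);

  return bottomRef;
}

In Chat2Page, you'd call const bottomRef = useAutoScroll(messages); and render <div ref={bottomRef} /> as the last child of the scrollable message container.
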
// app/chat/page.tsx

import ChatPage from '@/modules/chat/ChatPage';

export default function Chat() {
  return <ChatPage />;
}

✅ Benefits:

  • Built-in support for streaming responses (for real-time UX)
  • Works smoothly with React Server Components
  • Clean abstraction that improves maintainability
  • Easy markdown rendering with react-markdown

📸 Outcome

[Screenshot: output from Ollama and the AI SDK]


🧠 Summary: Which Method Should You Use?

| Feature | ollama-js | ai-sdk + ollama-ai-provider |
| --- | --- | --- |
| Setup Simplicity | ✅ Simple | ✅ Moderate |
| Streaming Support | ❌ Manual | ✅ Built-in |
| Frontend Friendly | ❌ More backend-focused | ✅ Tailored for React |
| Markdown Rendering | ❌ Manual | ✅ Easy via react-markdown |
| Recommended For | Custom/low-level projects | Production-ready AI UI |

👋 What's Next?

That's another way to build your AI assistant locally, this time with ready-made packages.
👉 In the next article, we'll build the same assistant using LangChain and Ollama for more advanced AI workflows.

Happy Coding....
