Boakye Effah
Building an AI Chatbot with NextJS and Llama

Step by step approach to building a chatbot assistant


Introduction

In this guide, I'll walk you through creating an AI Support Assistant using Next.js 14 and Material-UI (MUI). The app features a chat interface that lets users converse with an AI assistant powered by the Llama model, served through Groq's API.

Requirements

Before getting started, ensure you have:

  • Node.js (version 16.x or later) installed

  • A code editor (VS Code is recommended)

  • An AI API key (this guide uses Groq; OpenAI or DeepSeek also work)

Components

  • Server-side API The server-side API manages client requests, processes messages, communicates with the Groq service, and streams the responses back to the client.
  • Client-side Interface The client-side interface is built with React components, offering a clean and user-friendly UI for chatbot interactions. It supports real-time response streaming and message history.

Application Code

First, clone the GitHub repository:

git clone https://github.com/kbyeffah/AI_Chatbot.git
cd AI_Chatbot

Installing dependencies

Ensure you have all these dependencies installed:

npm install next react react-dom
npm install groq-sdk @google/generative-ai@^0.16.0
npm install @mui/material@^5.16.6 @mui/icons-material@^5.16.7 @emotion/react@^11.13.0 @emotion/styled@^11.13.0
npm install react-markdown@^9.0.1 framer-motion
npx shadcn-ui@latest init
npx shadcn-ui@latest add button card textarea scroll-area

The shadcn-ui commands scaffold the components/ui files (Button, Card, Textarea, ScrollArea) that page.js imports.

Environment Configuration

Set up a .env.local file in your project root and add the API key for your chosen provider. This guide uses a Groq key generated from https://console.groq.com/, but an OpenAI or DeepSeek key works the same way:

NEXT_PUBLIC_GROQ_API_KEY=your_groq_api_key_here
NEXT_PUBLIC_OPENAI_API_KEY=your_openai_api_key_here

Note: the NEXT_PUBLIC_ prefix exposes a variable to the browser bundle. Since the key is only read in the server-side route, you can drop the prefix (e.g. GROQ_API_KEY) so it never ships to the client — just update the variable name in route.js to match.

Server-side code:

Using Groq;

app/api/chat/route.js

import { NextResponse } from "next/server";
import Groq from "groq-sdk";

const groq = new Groq({
  apiKey: process.env.NEXT_PUBLIC_GROQ_API_KEY
});


const systemPrompt = `
You are a friendly and knowledgeable academic assistant, 
a coding assistant, and a teacher specializing in AI and Machine Learning. 
Your role is to assist users with academic topics, provide detailed explanations, 
and support learning across various domains.
`;

export async function POST(req) {
  try {
    const { messages, msg } = await req.json(); 
    // Validate input
    if (!msg || typeof msg !== "string") {
      return NextResponse.json({ error: "Invalid request: msg is required" }, { status: 400 });
    }

    // Normalize history: accept both the { role, content } shape this
    // app's client sends and Gemini-style { role, parts: [{ text }] }
    const processedMessages = (messages ?? [])
      .filter((m) => m?.content || m?.parts?.[0]?.text)
      .map((m) => ({
        role: m.role === "model" || m.role === "assistant" ? "assistant" : "user",
        content: m.content ?? m.parts[0].text,
      }));

    // Construct message history with system instructions
    const enhancedMessages = [
      { role: "system", content: systemPrompt },
      ...processedMessages,
      { role: "user", content: msg },
    ];

    // Create a streaming response from Groq
    const stream = await groq.chat.completions.create({
      messages: enhancedMessages,
      model: "llama3-8b-8192",
      stream: true,
      max_tokens: 1024,
      temperature: 0.7,
    });

    // Create a readable stream for response streaming
    const responseStream = new ReadableStream({
      async start(controller) {
        const encoder = new TextEncoder();

        try {
          for await (const chunk of stream) {
            const content = chunk.choices[0]?.delta?.content;
            if (content) {
              controller.enqueue(encoder.encode(content));
            }
          }
          // Close only after a successful run; closing after
          // controller.error() would throw.
          controller.close();
        } catch (error) {
          console.error("Streaming error:", error);
          controller.error(error);
        }
      },
    });

    return new Response(responseStream, {
      headers: { "Content-Type": "text/plain; charset=utf-8" },
    });
  } catch (error) { 
    console.error("Chat API Error:", error);

    return NextResponse.json(
      { error: "An error occurred while processing your request", details: error.message },
      { status: 500 }
    );
  }
}


Explanation:

Importing dependencies;

  • NextResponse: Imported from "next/server"; used to send JSON responses from a Next.js Route Handler. The handler itself receives a standard Request object for reading the body.
  • Groq: This imports the groq-sdk, a package that allows communication with the Groq API for AI-based chat completion.

Initializing the Groq API;

  • new Groq({...}): This initializes the Groq API client using the API key stored in an environment variable (.env.local).
  • process.env.NEXT_PUBLIC_GROQ_API_KEY: Keeps the key out of the source code. Because the key is only read server-side, a name without the NEXT_PUBLIC_ prefix would also keep it out of the browser bundle.

Defining the System Prompt;

  • The system prompt defines the AI assistant’s personality and knowledge domain.
  • This prompt guides the AI's responses to ensure they are relevant and informative.

Handling Incoming POST Requests;

  • This exports a POST function to handle POST requests at this API route.
  • req: The incoming Request object, giving access to the JSON body via req.json().

Parsing and Validating User Input;

  • Extracts messages (conversation history) and msg (current user input) from the request body.
  • Checks that msg is present and is a string. If the input is missing or invalid, it returns a 400 Bad Request response.

Processing Message History;

  • Ensures messages is always an array (using messages ?? [] to handle undefined cases).
  • Filters out invalid messages that don't carry any text.
  • Maps each message into the { role, content } structure the Groq API expects: "model" and "assistant" roles become assistant turns; everything else is treated as a user message.
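The history-normalization step can be sketched standalone. This version (an illustration, not necessarily byte-identical to the route's code) accepts both the { role, content } shape the client sends and Gemini-style { role, parts: [{ text }] } entries; the sample messages are made up:

```javascript
// Normalize mixed-shape history into the { role, content } shape
// the Groq SDK expects; entries with no text are dropped.
function normalizeHistory(messages) {
  return (messages ?? [])
    .filter((m) => m?.content || m?.parts?.[0]?.text)
    .map((m) => ({
      role: m.role === "model" || m.role === "assistant" ? "assistant" : "user",
      content: m.content ?? m.parts[0].text,
    }));
}

const history = [
  { role: "user", content: "What is gradient descent?" },
  { role: "assistant", content: "An optimization algorithm..." },
  { role: "model", parts: [{ text: "Legacy-shaped reply." }] },
  { role: "model", parts: [] }, // dropped: no text
];

console.log(normalizeHistory(history));
// → [
//   { role: "user", content: "What is gradient descent?" },
//   { role: "assistant", content: "An optimization algorithm..." },
//   { role: "assistant", content: "Legacy-shaped reply." },
// ]
```

Note how "model" (Gemini's naming) and "assistant" (OpenAI-style naming) both map to assistant turns, so either client shape keeps its conversation context.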

Constructing the Message Payload;

  • Adds the system prompt at the beginning to guide the AI.
  • Includes previous messages for context (conversation history).
  • Appends the latest user message to be processed by the AI.

Sending the Request to Groq API;

  • messages: enhancedMessages → Sends the full chat history.
  • model: "llama3-8b-8192" → Specifies the AI model.
  • stream: true → Enables real-time streaming of responses.
  • max_tokens: 1024 → Limits the response length.
  • temperature: 0.7 → Controls response creativity.

Creating a Readable Stream for AI Responses;

  • Creates a streaming response to allow real-time AI-generated text
  • Iterates over the streaming response from Groq.
  • Extracts content from chunk.choices[0]?.delta?.content.
  • Encodes the content and sends it to the frontend.
  • Handles errors gracefully and closes the stream.
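The streaming pattern can be demonstrated standalone in Node 18+ (where ReadableStream, TextEncoder, and TextDecoder are globals). The chunks below are hard-coded stand-ins for the Groq deltas:

```javascript
// Build a byte stream from text chunks, mirroring what the route's
// ReadableStream does with Groq deltas.
function makeStream(chunks) {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const c of chunks) controller.enqueue(encoder.encode(c));
      controller.close();
    },
  });
}

// Drain a byte stream back into a string, mirroring the client's loop.
async function collect(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

collect(makeStream(["Gradient ", "descent ", "minimizes loss."]))
  .then((text) => console.log(text)); // "Gradient descent minimizes loss."
```

In the real app the producer and consumer run in different processes (server route and browser), but the enqueue/read contract is exactly this.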

Returning the Streaming Response;

  • Returns the streaming response to the client with a "text/plain" content type.

Error Handling;

  • Catches errors, logs them, and sends a 500 Internal Server Error response.

Client-side code:

app/page.js

"use client";

import { useState, useRef } from "react";
import { Card, CardContent } from "@/components/ui/card";
import { Button } from "@/components/ui/button";
import { Textarea } from "@/components/ui/textarea";
import { ScrollArea } from "@/components/ui/scroll-area";
import { motion } from "framer-motion";

export default function Home() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState("");
  const [loading, setLoading] = useState(false);
  const chatRef = useRef(null);

  const sendMessage = async () => {
    if (!input.trim()) return;

    const newMessages = [...messages, { role: "user", content: input }];
    setMessages(newMessages);
    setInput("");
    setLoading(true);

    try {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: newMessages, msg: input }),
      });

      if (!response.body) throw new Error("No response body");

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let receivedText = "";

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // One decoder with { stream: true } handles multi-byte
        // characters that are split across chunks.
        receivedText += decoder.decode(value, { stream: true });

        setMessages([...newMessages, { role: "assistant", content: receivedText }]);
      }
    } catch (error) {
      console.error("Error:", error);
    } finally {
      setLoading(false);
      chatRef.current?.scrollIntoView({ behavior: "smooth" });
    }
  };

  return (
    <motion.div
      className="max-w-2xl mx-auto mt-10 p-5 bg-gray-900 text-white rounded-2xl shadow-lg"
      initial={{ opacity: 0, y: 20 }}
      animate={{ opacity: 1, y: 0 }}
    >
      <h1 className="text-xl font-semibold text-center mb-4">💬 AI Chatbot</h1>

      <ScrollArea className="h-80 overflow-y-auto p-4 bg-gray-800 rounded-lg">
        {messages.map((msg, idx) => (
          <motion.div
            key={idx}
            className={`p-3 my-2 rounded-lg max-w-[80%] ${
              msg.role === "user" ? "bg-blue-600 self-end ml-auto" : "bg-gray-700 self-start"
            }`}
            initial={{ opacity: 0, y: 10 }}
            animate={{ opacity: 1, y: 0 }}
          >
            <p className="text-sm">{msg.content}</p>
          </motion.div>
        ))}
        <div ref={chatRef} />
      </ScrollArea>

      <Card className="mt-4 bg-gray-800">
        <CardContent className="p-4 flex gap-3">
          <Textarea
            value={input}
            onChange={(e) => setInput(e.target.value)}
            placeholder="Type a message..."
            className="flex-1 text-white bg-gray-900 border-none focus:ring-0"
          />
          <Button onClick={sendMessage} disabled={loading} className="bg-blue-500 hover:bg-blue-600">
            {loading ? "..." : "Send"}
          </Button>
        </CardContent>
      </Card>
    </motion.div>
  );
}

Modify the styling in page.js to your preference for a more stylish and engaging interface.

Explanation:

Chat Area;

  • Uses ScrollArea to handle overflow messages
  • Dynamically updates AI/user messages
  • Uses motion.div for smooth animations

Input & Send Button;

  • Uses Shadcn’s Textarea for input.
  • Send button triggers the API request.

Styling;

  • Dark theme (gray-900, gray-800, blue-600)
  • Rounded edges & smooth shadows.

How it works:

User interaction;

  • The user types a message in the chat input field.
  • Pressing "Send" submits the message (the Textarea has no Enter-to-send shortcut by default; one can be added with an onKeyDown handler).
  • The message is added to the chat history in the UI instantly.
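As a sketch (this helper is not in the original code; the names are illustrative), an onKeyDown handler that submits on Enter while keeping Shift+Enter for newlines could look like this:

```javascript
// Hypothetical helper: returns a keydown handler that calls
// sendMessage on plain Enter and leaves Shift+Enter as a newline.
function makeKeyDownHandler(sendMessage) {
  return (e) => {
    if (e.key === "Enter" && !e.shiftKey) {
      e.preventDefault(); // stop the newline from being inserted
      sendMessage();
    }
  };
}

// Usage in JSX:
// <Textarea onKeyDown={makeKeyDownHandler(sendMessage)} ... />
```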

Server Processing (API Route /api/chat);

  • The user message is sent to the Next.js API (/api/chat).
  • The API extracts the message history, ensuring context is preserved.
  • It sends the request to Groq API (Llama model) for text generation.

Response Streaming (Real-Time Updates);

  • Instead of waiting for the entire response, Groq sends data in chunks (streaming).
  • The client processes these chunks as they arrive, making the bot feel responsive.
  • The chat UI updates dynamically, displaying partial responses as they load.
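One subtlety of chunked decoding: a multi-byte UTF-8 character can be split across two network chunks, which is why reusing a single TextDecoder with { stream: true } is safer than creating a fresh decoder per chunk. A small demonstration (the byte values are a contrived stand-in for a split chunk boundary):

```javascript
// The two-byte UTF-8 sequence for "é" (0xC3 0xA9) split across chunks.
const chunkA = new Uint8Array([0x63, 0x61, 0x66, 0xc3]); // "caf" + half of "é"
const chunkB = new Uint8Array([0xa9]);                   // other half of "é"

// Naive: a fresh decoder per chunk mangles the split character
// into replacement characters.
const naive = new TextDecoder().decode(chunkA) + new TextDecoder().decode(chunkB);

// Correct: one decoder with { stream: true } buffers the partial
// sequence until the rest arrives.
const decoder = new TextDecoder();
const streamed =
  decoder.decode(chunkA, { stream: true }) +
  decoder.decode(chunkB, { stream: true });

console.log(streamed);          // "café"
console.log(naive === streamed); // false
```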

Display & UI Experience;

  • The chat interface updates in real-time.
  • User messages appear in a blue bubble (right-aligned).
  • AI responses appear in a gray bubble (left-aligned).
  • Auto-scroll ensures the latest messages remain visible.

Testing and Hosting

  • Start the server from your terminal: npm run dev
  • Open your browser and navigate to:
    http://localhost:3000

  • Deploy: Use platforms like Vercel, AWS Amplify, Netlify or Firebase Hosting to deploy the application
