Victor Pascal Dike
Building a Full-Stack AI Chatbot with FastAPI (Backend) and React (Frontend)

[Image: diagram of the connection between the chatbot, FastAPI, and React]

AI-powered chatbots are transforming how we interact with technology. From customer support to personal assistants, their applications are vast and growing. This article is a comprehensive guide to building a functional AI chatbot, showcasing the core principles of full-stack AI development.

We'll build the backend using FastAPI, a modern Python framework known for its speed, and a frontend interface with React, the most popular JavaScript library for building user interfaces. This step-by-step guide will walk you through connecting an AI model to a user-friendly frontend, covering the entire workflow from user input to AI-generated response.

What You'll Build

You will create a simple web-based chat application where you can type a message, send it to a backend service, get a response from an AI model (like OpenAI's GPT), and see the conversation displayed on the screen.

Prerequisites

  • Python 3.8+ and basic knowledge of Python.
  • Node.js and npm installed.
  • Basic understanding of React and JavaScript.
  • An OpenAI API key.

I. Key Concepts: The Building Blocks

Before we code, let's understand the core components that make up our chatbot.

A. Backend with FastAPI 🐍

The backend is the brain of our application. It handles user requests, communicates with the AI model, and sends back responses. We're using FastAPI for several key reasons:

  • High Performance: It's built on modern Python features for asynchronous operations, making it incredibly fast.
  • Easy to Use: The syntax is intuitive and simple, letting you build robust APIs with less code.
  • Automatic Docs: It automatically generates interactive API documentation (using Swagger UI), which is a huge help for testing.
  • Data Validation: It uses Python type hints to validate data automatically, reducing potential bugs.
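To see what that last point buys you, here is a hand-rolled sketch of the check that FastAPI (via pydantic) performs automatically from your type hints. The helper name is mine, purely for illustration; you won't write this yourself in the actual app:

```python
def validate_chat_input(payload: dict) -> dict:
    """Require a 'user_message' field containing a non-empty string,
    mimicking what FastAPI derives from a pydantic model's type hints."""
    message = payload.get("user_message")
    if not isinstance(message, str) or not message.strip():
        raise ValueError("user_message must be a non-empty string")
    return payload
```

With FastAPI, an invalid body is rejected with a descriptive 422 response before your endpoint code ever runs.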

B. Frontend with React ⚛️

The frontend is the user-facing interface. It's what users see and interact with to send messages and view the chatbot's replies. React is an excellent choice because:

  • Component-Based: It allows you to build encapsulated components that manage their own state, making UIs easier to build and maintain.
  • Virtual DOM: It intelligently updates only the parts of the page that have changed, leading to a faster and smoother user experience.
  • Huge Ecosystem: A massive community means you'll find countless libraries and resources for any feature you can imagine.

C. API Communication ↔️

The frontend and backend communicate through an API (Application Programming Interface). Here's how it works:

  1. User Action: The user types a message in the React app and hits "Send."
  2. HTTP Request: The React app sends the message inside an HTTP POST request to a specific URL on our FastAPI backend (e.g., /chat). The data is formatted as JSON.
  3. Backend Processing: The FastAPI backend receives the request, processes the message, and forwards it to the AI model's API.
  4. HTTP Response: Once the AI model responds, the backend sends its answer back to the React app, again in JSON format.
  5. UI Update: The React app receives the response and updates the chat window to display the new message.
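The bodies exchanged in steps 2 and 4 are plain JSON. A quick sketch of the payload shapes this tutorial's /chat endpoint uses:

```python
import json

# Step 2: the body the React app POSTs to /chat
request_body = {"user_message": "Hello!"}

# Step 4: the body the FastAPI backend sends back
response_body = {"bot_response": "Hi! How can I help you today?"}

# Both travel over the wire as JSON strings
wire_request = json.dumps(request_body)
```

Keeping the contract this small makes both sides easy to test and debug independently.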

D. Integrating a Large Language Model (LLM) 🤖

The chatbot's intelligence comes from a Large Language Model (LLM). For this tutorial, we'll use OpenAI's GPT models via their API.

  • APIs are Key: Most powerful LLMs are accessed through APIs. You send a prompt (your user's input), and the API returns the generated text.
  • Secure Your Keys: Accessing these APIs requires an API key. Never hardcode your API key directly in your source code. We'll use environment variables to keep it safe.
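The prompt you send is actually a list of role-tagged messages, not a bare string. A minimal sketch of the message list the backend will construct later in this tutorial (the helper name is my own, not part of the OpenAI SDK):

```python
def build_messages(user_message: str,
                   system_prompt: str = "You are a helpful assistant.") -> list:
    """Build the role-tagged message list used by chat-completion APIs:
    a 'system' message sets the bot's behavior, a 'user' message carries input."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]
```

Changing the system prompt is the simplest way to give your chatbot a distinct personality.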

II. Implementation: Let's Build It!

Now, let's get our hands dirty and build the chatbot.

A. Backend Setup (FastAPI)

First, we'll create the backend service.

1. Project Setup:
Create a project directory, set up a virtual environment, and install the necessary packages.

# Create and navigate into the project folder
mkdir chatbot-backend
cd chatbot-backend

# Create and activate a virtual environment
python3 -m venv .venv      # use "python -m venv .venv" on Windows
source .venv/bin/activate  # Linux/macOS
# .venv\Scripts\activate   # Windows

# Install libraries
pip install fastapi "uvicorn[standard]" python-dotenv openai

2. Store Your API Key:
Create a file named .env in the chatbot-backend directory. This file will hold your secret API key. Remember to add .env to your .gitignore file to avoid committing it to version control.

# .env
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
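Failing fast when the key is missing saves debugging time later. Here is a small optional sketch (the helper name is my own) you could call at startup, assuming the variable has already been loaded with load_dotenv() or exported in your shell:

```python
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named API key from the environment,
    or fail immediately with a clear message instead of a cryptic 401 later."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Add it to your .env file or export it in your shell."
        )
    return key
```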

3. Create main.py:
This file will contain all our backend logic.

# main.py
import os
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from openai import OpenAI
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Initialize the OpenAI client with the API key
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Initialize the FastAPI app
app = FastAPI()

# --- CORS Middleware ---
# This allows the frontend (running on a different port) to communicate with the backend.
origins = [
    "http://localhost:5173",  # The default port for Vite React apps
    "http://localhost:3000",  # A common port for React apps
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# --- Pydantic Model for Request Body ---
# This defines the expected structure of the incoming request data.
class ChatInput(BaseModel):
    user_message: str

# --- API Endpoints ---
@app.get("/")
async def health_check():
    """A simple health check endpoint."""
    return {"status": "ok"}

@app.post("/chat")
async def chat_endpoint(input_data: ChatInput):
    """The main chat endpoint that interacts with the OpenAI API."""
    try:
        # Create a chat completion request to OpenAI
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # Or another model like gpt-4
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": input_data.user_message},
            ],
        )
        # Extract the bot's response from the API result
        bot_response = completion.choices[0].message.content
        return {"bot_response": bot_response}

    except Exception as e:
        # Handle potential errors from the API call
        raise HTTPException(status_code=500, detail=str(e))

4. Run the Backend:
Start your FastAPI server with Uvicorn.

uvicorn main:app --reload

The --reload flag makes the server restart automatically after you save changes to the code. Your API is now live at http://127.0.0.1:8000, and you can try the /chat endpoint from the interactive Swagger UI docs at http://127.0.0.1:8000/docs.

B. Frontend Setup (React)

Now let's build the user interface.

1. Project Setup:
Create a new React application using Vite (a fast and modern build tool).

# Create a new React project (outside of your backend folder)
npm create vite@latest chatbot-frontend -- --template react
cd chatbot-frontend

# Install dependencies
npm install

2. Modify src/App.jsx:
Replace the contents of src/App.jsx (or App.js) with the code below. This component will manage the chat's state and handle communication with the backend.

// src/App.jsx
import React, { useState, useEffect } from 'react';
import './App.css';

function App() {
    const [userInput, setUserInput] = useState('');
    const [chatLog, setChatLog] = useState([]);
    const [loading, setLoading] = useState(false);

    // Load chat history from local storage when the component mounts
    useEffect(() => {
        const storedChatLog = localStorage.getItem('chatLog');
        if (storedChatLog) {
            setChatLog(JSON.parse(storedChatLog));
        }
    }, []);

    const handleSubmit = async (event) => {
        event.preventDefault();
        if (!userInput.trim()) return; // Prevent empty submissions

        // Add user message to chat log optimistically
        const userMessage = { type: 'user', text: userInput };
        const newChatLog = [...chatLog, userMessage];
        setChatLog(newChatLog);
        setUserInput(''); // Clear the input field
        setLoading(true);

        try {
            // Send the user's message to the backend
            const response = await fetch('http://localhost:8000/chat', {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify({ user_message: userInput }),
            });

            if (!response.ok) {
                throw new Error(`HTTP error! status: ${response.status}`);
            }

            const data = await response.json();
            const botMessage = { type: 'bot', text: data.bot_response };

            // Update chat log with the bot's response
            const finalChatLog = [...newChatLog, botMessage];
            setChatLog(finalChatLog);

            // Save the updated chat log to local storage
            localStorage.setItem('chatLog', JSON.stringify(finalChatLog));

        } catch (error) {
            console.error('Error fetching chat response:', error);
            const errorMessage = { type: 'error', text: 'Sorry, something went wrong. Please try again.' };
            setChatLog(prev => [...prev, errorMessage]); // Add error message to log
        } finally {
            setLoading(false); // Stop the loading indicator
        }
    };

    return (
        <div className="App">
            <h1>AI Chatbot</h1>
            <div className="chat-window">
                {chatLog.map((message, index) => (
                    <div key={index} className={`message ${message.type}`}>
                        {message.text}
                    </div>
                ))}
                {loading && <div className="message bot">Loading...</div>}
            </div>
            <form onSubmit={handleSubmit}>
                <input
                    type="text"
                    value={userInput}
                    onChange={(e) => setUserInput(e.target.value)}
                    placeholder="Type your message..."
                    disabled={loading}
                />
                <button type="submit" disabled={loading}>Send</button>
            </form>
        </div>
    );
}

export default App;

3. Add CSS Styling:
Create a file at src/App.css and add the following styles to make the chat interface look clean.

/* src/App.css */
.App {
    font-family: sans-serif;
    display: flex;
    flex-direction: column;
    align-items: center;
    justify-content: center;
    height: 100vh;
    padding: 20px;
    box-sizing: border-box;
}

h1 {
    color: #333;
}

.chat-window {
    width: 100%;
    max-width: 500px;
    height: 60vh;
    border: 1px solid #ccc;
    border-radius: 8px;
    overflow-y: scroll;
    padding: 10px;
    margin-bottom: 20px;
    display: flex;
    flex-direction: column;
    gap: 10px;
}

.message {
    padding: 8px 12px;
    border-radius: 15px;
    max-width: 70%;
    word-wrap: break-word;
}

.user {
    background-color: #007bff;
    color: white;
    align-self: flex-end;
    border-bottom-right-radius: 2px;
}

.bot, .error {
    background-color: #f0f0f0;
    color: #333;
    align-self: flex-start;
    border-bottom-left-radius: 2px;
}

.error {
    background-color: #f8d7da;
    color: #721c24;
}


form {
    display: flex;
    width: 100%;
    max-width: 500px;
}

input[type="text"] {
    flex-grow: 1;
    padding: 10px;
    border: 1px solid #ccc;
    border-radius: 4px;
    margin-right: 10px;
    font-size: 1rem;
}

button {
    padding: 10px 20px;
    border: none;
    border-radius: 4px;
    background-color: #007bff;
    color: white;
    cursor: pointer;
    font-size: 1rem;
}

button:disabled {
    background-color: #cccccc;
    cursor: not-allowed;
}

4. Run the Frontend:
Start the React development server.

npm run dev

Your React app should now be running, typically at http://localhost:5173.

III. Conclusion

Congratulations! You've successfully built a full-stack AI chatbot. By combining a powerful FastAPI backend with a dynamic React frontend, you've created a scalable, user-friendly AI application.

This project is a fantastic starting point. Here are some ideas for taking it further:

  • Streaming Responses: Modify the code to stream the AI's response token-by-token for a more interactive feel.
  • Add More Models: Integrate models from other providers like Hugging Face.
  • Improve the UI: Enhance the user interface with better styling, user avatars, and message timestamps.
  • Deploy It: Deploy your backend and frontend to services like Vercel, Netlify, or a cloud provider so others can use your chatbot.
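The streaming idea in the first bullet boils down to this: the frontend re-renders the bot message each time a new chunk arrives, instead of waiting for the full reply. A minimal sketch that only simulates the chunks (in the real app they would come from the API's streaming mode):

```python
def render_stream(chunks: list) -> list:
    """Accumulate streamed chunks into the successive strings the UI would
    display, one entry per re-render of the growing bot message."""
    text, frames = "", []
    for chunk in chunks:
        text += chunk
        frames.append(text)
    return frames

# Each element is one re-render of the growing bot message
print(render_stream(["Hel", "lo", ", wor", "ld!"]))
```

On the backend this pairs naturally with a streaming HTTP response; on the frontend, with reading the response body incrementally instead of awaiting the full JSON.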
