DEV Community

grbeno

Real-Time AI Chat with Django and React: Development

Building on the template presented in the previous part, in this second part I'd like to show you an example web application with an LLM-based chatbot.

I will demonstrate the chatbot using OpenAI's GPT-4o-mini model, but feel free to choose another model or provider; in that case, check its documentation for how to use it from Python.

Features

✅ Real-time asynchronous chat (Channels, WebSockets)
✅ AI/LLM API consuming (OpenAI)
✅ Chat memory (In-memory Channel Layer)


1. Backend

Open the CLI/terminal!

Assuming you already have a basic Django application with a React frontend, change to the project directory and activate the virtual environment if it is not already active.

cd <project_path>
venv/Scripts/activate

Let's start developing our new application.

Create the chat app.

python manage.py startapp chat

Install the necessary packages.

pip install environs openai channels daphne

You have to add the chat app and the newly installed packages, channels and daphne, to INSTALLED_APPS in the config/settings.py file. daphne should be listed right before django.contrib.staticfiles so that its ASGI-capable runserver command takes precedence.

# config/settings.py

# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'daphne',  # adding daphne!
    'django.contrib.staticfiles',
    'channels',
    'backend',
    'chat',  # adding chat!
]


You need an API key to access an LLM. My choice is the GPT-4o-mini model, so I will generate a key on OpenAI and store it in a .env file.

# .env

OPENAI_API_KEY=<your_api_key>

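For reference, env.read_env() loads the .env file into the process environment, so the key ends up in os.environ. A minimal stdlib-only sketch of that lookup, with a hypothetical get_api_key helper and a placeholder value:

```python
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key

# Simulate what env.read_env() does after parsing the .env file
os.environ["OPENAI_API_KEY"] = "sk-test-placeholder"
print(get_api_key())  # sk-test-placeholder
```

Failing loudly at startup is preferable to a confusing authentication error on the first API call.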

chat/chat_api.py

Create a file for the AiChat class, which calls the model via API.

The AiChat class should look like this:

# chat/chat_api.py

from openai import OpenAI
from environs import Env

# Load the environment variables
env = Env()
env.read_env()

client = OpenAI(api_key=env.str("OPENAI_API_KEY"))

class AiChat():

    _channels = {}  # In-Memory Channel Layer

    def __init__(self, prompt: str, model: str, channel: str) -> None:
        self.prompt = prompt
        self.model = model
        self.channel = channel

        ## In-Memory Channel Layer
        if self.channel not in AiChat._channels:
            AiChat._channels[self.channel] = [
                {"role": "system", "content": "You are a helpful and friendly assistant. Be as short and concise as you can!"},
            ]
        self.conversation = AiChat._channels[self.channel]

    def chat(self) -> str:
        if self.prompt:
            # The conversation is going on ...
            # Adding prompt to chat history
            self.conversation.append({"role": "user", "content": self.prompt})
            # The OpenAI's chat completion generates answers to your prompts.
            completion = client.chat.completions.create(
                model=self.model,
                messages=self.conversation
            )
            answer = completion.choices[0].message.content
            # Adding answer to chat history
            self.conversation.append({"role": "assistant", "content": answer})
            return answer


The __init__ constructor has three parameters, which come from the WebSocket consumer (see below). The first two are straightforward; the third is the channel name.

Consumers will generate a unique channel name for themselves, and start listening on it for events. (Channels documentation)

Another important part of the AiChat class is the class-level _channels dictionary, which keeps each conversation in process memory, keyed by channel name, so the chat history survives between messages. This in-memory approach is simple, but it is not recommended for production use.
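The memory mechanism itself does not depend on OpenAI at all; it is just a class-level dictionary keyed by channel name, so every instance created with the same channel name shares one history list. A stripped-down sketch of that pattern (the MemoryStore name is mine, not from the article's code):

```python
class MemoryStore:
    """Per-channel chat history kept in process memory (not for production)."""
    _channels: dict[str, list[dict]] = {}

    def __init__(self, channel: str) -> None:
        # A new channel starts with the system instruction only
        if channel not in MemoryStore._channels:
            MemoryStore._channels[channel] = [
                {"role": "system", "content": "You are a helpful assistant."},
            ]
        self.conversation = MemoryStore._channels[channel]

    def add(self, role: str, content: str) -> None:
        self.conversation.append({"role": role, "content": content})

# Two instances with the same channel name share one history
a = MemoryStore("channel-1")
a.add("user", "Hi!")
b = MemoryStore("channel-1")
print(len(b.conversation))  # 2: the system message plus the user turn

# A different channel name starts fresh
c = MemoryStore("channel-2")
print(len(c.conversation))  # 1: just the system message
```

Because the dict lives on the class, the history disappears whenever the process restarts, which is exactly why a persistent channel layer (e.g. Redis) is preferred in production.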

As you might have noticed, AiChat has one method, chat, which handles the conversation via the OpenAI client whenever a prompt is received from the client.
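One caveat the chat method leaves open: the conversation list grows without bound, and the whole history is re-sent to the API on every turn. If that becomes a problem, you could trim the history before each completion request; the sketch below (the trim_history name and keep parameter are my own, not part of the article's code) keeps the system message plus the last few entries:

```python
def trim_history(conversation: list[dict], keep: int = 10) -> list[dict]:
    """Return the system message plus the last `keep` entries of the history."""
    if len(conversation) <= keep + 1:
        return conversation
    return [conversation[0]] + conversation[-keep:]

# Build an artificially long history: 1 system message + 40 turns
history = [{"role": "system", "content": "Be brief."}]
for i in range(20):
    history.append({"role": "user", "content": f"q{i}"})
    history.append({"role": "assistant", "content": f"a{i}"})

trimmed = trim_history(history, keep=6)
print(len(trimmed))        # 7: the system message + the last 6 entries
print(trimmed[0]["role"])  # system
```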

The next step is adding CHANNEL_LAYERS to the end of the config/settings.py file.

# config/settings.py

# Channels
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels.layers.InMemoryChannelLayer",
    }
}

chat/views.py

Now, let's see the Websocket Consumer mentioned before.

# chat/views.py

import json
import asyncio
from channels.generic.websocket import AsyncWebsocketConsumer
from .chat_api import AiChat

class ChatConsumer(AsyncWebsocketConsumer):

    async def connect(self):
        await self.accept()

    async def disconnect(self, close_code):
        print('Disconnected:', close_code)

    async def receive(self, text_data):
        # text data from the client
        text_data_json = json.loads(text_data)
        prompt = text_data_json["prompt"]
        # choose a model
        model = 'gpt-4o-mini'

        # Response: the OpenAI call is blocking, so run it in a worker
        # thread to avoid stalling the event loop
        model_response = AiChat(prompt, model, self.channel_name)  # instantiate
        response = await asyncio.to_thread(model_response.chat)  # run the model

        # Send the response to the client
        await self.send(text_data=json.dumps({
            'prompt': prompt,
            'response': response,
        }))


Our custom ChatConsumer class inherits all the functionality defined in Channels' AsyncWebsocketConsumer.

ChatConsumer establishes the connection, handles disconnection, and receives data from the client (only the prompt in this case). Finally, it sends the prompt, together with the response generated by AiChat, back to the client.
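The wire format between ChatConsumer and the React client is plain JSON over the WebSocket: the client sends {"prompt": ...} and the server replies with {"prompt": ..., "response": ...}. A minimal sketch of both halves of that exchange using only the json module (handle_receive and the echo response are stand-ins for the consumer and the model call):

```python
import json

def handle_receive(text_data: str) -> str:
    """Mimic ChatConsumer.receive: parse the prompt, build the reply payload."""
    payload = json.loads(text_data)
    prompt = payload["prompt"]
    response = f"echo: {prompt}"  # stand-in for AiChat.chat()
    return json.dumps({"prompt": prompt, "response": response})

# What the frontend sends via socket.send(...)
outgoing = json.dumps({"prompt": "Hello, AI!"})
reply = json.loads(handle_receive(outgoing))
print(reply["prompt"])    # Hello, AI!
print(reply["response"])  # echo: Hello, AI!
```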

Let's now take a look at the other files in Django. You have to create chat/routing.py and update config/asgi.py and config/settings.py.

chat/routing.py

# chat/routing.py

from django.urls import re_path
from . import views

websocket_urlpatterns = [
    re_path(r"ws/chat/$", views.ChatConsumer.as_asgi(), name="chat"),
]


config/asgi.py

# config/asgi.py

import os

from django.core.asgi import get_asgi_application  
from channels.routing import ProtocolTypeRouter, URLRouter  
from channels.auth import AuthMiddlewareStack
from chat.routing import websocket_urlpatterns

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')  

application = ProtocolTypeRouter({  
  "http": get_asgi_application(),  
  "websocket": AuthMiddlewareStack(  
        URLRouter(  
            websocket_urlpatterns  
        )  
    ),  
})


config/settings.py

Finally, there is one more update in the settings file. Add ASGI_APPLICATION and comment out WSGI_APPLICATION, keeping it in case it is needed later in the development process.

# config/settings.py

# WSGI_APPLICATION = 'config.wsgi.application'
ASGI_APPLICATION = 'config.asgi.application'


2. Frontend

After changing to frontend/, install Marked, which formats the responses on the user interface.

cd frontend
npm install marked

src/AI/Chat.jsx

Create an AI directory inside src and add Chat.jsx to it.

The WebSocketChat React component establishes a WebSocket connection between the frontend and backend, sending new prompts (inputMessage) to the backend and accumulating the exchanged prompts and responses (prevMessages) for display.

The component renders an interface with a main wrapper that shows the WebSocket connection status, the conversation between the user and the AI, and the text input for sending prompts.

// src/AI/Chat.jsx

import React, { useEffect, useState, useCallback, useRef } from 'react';
import { marked } from 'marked';
import './Chat.css';

const WebSocketChat = () => {
    const [responseMessages, setResponseMessages] = useState([]);
    const [inputMessage, setInputMessage] = useState('');
    const [connectionStatus, setConnectionStatus] = useState('Disconnected');
    const socketRef = useRef(null);

    // apply markdown to response messages
    const createMarkup = (markdown) => {
        return { __html: marked(markdown) };
    };

    // Initialize WebSocket connection
    useEffect(() => {
        const websocket = new WebSocket('ws://localhost:8000/ws/chat/');
        socketRef.current = websocket;

        websocket.onopen = () => {
            console.log('Connected to WebSocket');
            setConnectionStatus('Connected');
        };

        websocket.onclose = () => {
            console.log('Disconnected from WebSocket');
            setConnectionStatus('Disconnected');
        };

        websocket.onerror = (error) => {
            console.error('WebSocket error:', error);
            setConnectionStatus('Error');
        };

        // Listen for messages
        socketRef.current.addEventListener('message', (event) => {
            const response = JSON.parse(event.data);
            setResponseMessages(prevMessages => [...prevMessages, { prompt: response.prompt, message: response.response }]);
        });

        // Cleanup on component unmount
        return () => {
            websocket.close();
        };

    }, []);

    // Send message handler
    const sendMessage = useCallback(() => {
        if (socketRef.current && socketRef.current.readyState === WebSocket.OPEN && inputMessage.trim()) {
            socketRef.current.send(JSON.stringify({
                prompt: inputMessage,
            }));
            setInputMessage('');
        }
    }, [inputMessage]);

    // Handle pressing Enter key in the input field
    const handleKeyDown = useCallback((event) => {
        if (event.key === 'Enter') {
            sendMessage();
        }
    }, [sendMessage]);

    return (
        <div className="wrapper">
            <div>
                <h2 style={{ color: '#03101d', fontFamily: "sans-serif" }}>AI Chat</h2>
            </div>

            <div className={`status ${
                connectionStatus === 'Connected' ? 'connected' : 
                connectionStatus === 'Error' ? 'error' : 'disconnected'
            }`}>
                Websocket status: {connectionStatus}
            </div>

            {responseMessages.map((item, index) => (
                <div key={index} className="messages">
                    <div>
                        <span className="prompt">{item.prompt}</span>
                        <span className="response" dangerouslySetInnerHTML={createMarkup(item.message)} />
                    </div>
                </div>
            ))}

            <div className="input-wrapper">
                <input
                    type="text"
                    value={inputMessage}
                    onChange={(e) => setInputMessage(e.target.value)}
                    onKeyDown={handleKeyDown}
                    placeholder="Type a message..."
                />
                <button
                    onClick={sendMessage}
                    disabled={!socketRef.current || socketRef.current.readyState !== WebSocket.OPEN}
                >
                    Send
                </button>
            </div>
        </div>
    );
};

export default WebSocketChat;


src/AI/Chat.css

/* src/AI/Chat.css */

input {
    width: 30%;
    height: 70px;
    padding: 0.5em;
    border: none;
    font-size: 1em;
}

button {
    display: flex;
    flex-direction: column;
    align-items: center;
    width: 31%;
    padding: 0.5em;
    margin-top: 2.0em;
    background-color: rgba(29, 60, 107, 0.5);
    color: white;
    border: none;
    font-size: 1em;
    cursor: pointer;
}

/* classes */

.wrapper {
    display: flex;
    flex-direction: column;
    align-items: center;
}

.messages {
    display: flex;
    flex-direction: column;
    width: 32%;
    margin: 0.7em;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    font-size: 1.2em;
    line-height: 1.3em;
}

.prompt {
    display: block;
    padding: 0.7em;
    margin: 0.5em;
    color: white;
    background-color: rgb(52 142 59 / 70%);
}

.response {
    display: block;
    padding: 0.7em;
    margin: 0.5em;
    background-color: rgb(255 255 255 / 50%);
}

.input-wrapper {
    display: flex;
    flex-direction: column;
    align-items: center;
    width: 100%;
}

.status {
    margin: 0.6em;
    font-size: 1.3em;
}

.connected {
    color: rgb(9, 11, 139);
}

.disconnected {
    color: rgb(224, 13, 41);
}


src/provider.jsx

The Provider component sets up the routing: /ws/chat/ renders the WebSocketChat component, and / renders the main page via the App component.

Make sure you are still in the frontend directory and install react-router-dom.

npm install react-router-dom
// src/provider.jsx

import React from 'react';
import { BrowserRouter as Router, Route, Routes } from 'react-router-dom';
import App from './App';
import WebSocketChat from './AI/Chat';

const Provider = () => {
    return (
        <Router>
            <Routes>
                <Route path="ws/chat/" element={<WebSocketChat />} />
                <Route path="/" element={<App />} />
            </Routes>
        </Router>
    );
};

export default Provider;


src/main.jsx

// src/main.jsx

import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import Provider from './provider'

createRoot(document.getElementById('root')).render(
  <StrictMode>
    <Provider />
  </StrictMode>,
)


The last steps

npm run build

Change back to the project directory, start the development server, and open the app in your browser.

cd ..
python manage.py runserver
http://localhost:8000/ws/chat

You can now chat with your new AI chatbot! 🎉

[Image: chatbot-channels-websocket]

Thank you for your attention so far. Let's meet in the next part, where we will deploy our newly created AI chat on Railway. ☺️
