DEV Community

grbeno

How to Create a Real-Time AI Chat with Django Channels and React

In this article I'd like to present a web application example in which I implemented an LLM-based chatbot.

I will demonstrate this with OpenAI's GPT-4o-mini model, but feel free to choose another model or provider and adapt the code to its documentation.

This application is based on a boilerplate Django web application that uses React as the frontend, which I published earlier. 👉 You can read it here

However, this post can still be useful even if you prefer to rely on your own template and environment.

The chat assistant will keep a history of the conversation, implemented asynchronously using Django's Channels package and WebSockets in React.

✅ Asynchronous chat
✅ AI/LLM API
✅ Chat memory


1. Backend

Assuming you already have a basic Django application with a React frontend, change to the project directory and activate the virtual environment.

cd <project_path>
venv/Scripts/activate

On macOS/Linux, use source venv/bin/activate instead.

Now let's create the chat app.

python manage.py startapp chat

Install the necessary packages.

pip install environs openai channels daphne

You have to add the chat app and the newly installed packages, channels and daphne, to INSTALLED_APPS in the config/settings.py file. Daphne must be listed right before django.contrib.staticfiles.

# config/settings.py

# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'daphne',  # adding daphne!
    'django.contrib.staticfiles',
    'channels',
    'backend',
    'accounts',
    'chat',  # adding chat!
]


To create an AI chat, you need an API key for an LLM. My choice is the GPT-4o-mini model, so I will generate a key on OpenAI and store it in a .env file.

# .env

OPENAI_API_KEY=<your_api_key>


chat/chat_api.py

Create a file for the AiChat class, which calls the model through the API.

The AiChat class should look like this:

# chat/chat_api.py

from openai import OpenAI
from environs import Env

# Load the environment variables
env = Env()
env.read_env()

client = OpenAI(api_key=env.str("OPENAI_API_KEY"))

class AiChat():

    _channels = {}  # in-memory store of conversations, keyed by channel name

    def __init__(self, prompt: str, model: str, channel: str) -> None:
        self.prompt = prompt
        self.model = model
        self.channel = channel

        ## Start a new conversation for this channel if needed
        if self.channel not in AiChat._channels:
            AiChat._channels[self.channel] = [
                {"role": "system", "content": "You are a helpful and friendly assistant. Be as short and concise as you can!"},
            ]
        self.conversation = AiChat._channels[self.channel]

    def chat(self) -> str:
        if self.prompt:
            # The conversation is going on ...
            # Adding prompt to chat history
            self.conversation.append({"role": "user", "content": self.prompt})
            # OpenAI's chat completion endpoint generates an answer to the prompt.
            completion = client.chat.completions.create(
                model=self.model,
                messages=self.conversation
            )
            answer = completion.choices[0].message.content
            # Adding answer to chat history
            self.conversation.append({"role": "assistant", "content": answer})
            return answer

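A side note on memory: the _channels store above grows without bound, keeping every message from every connection for the life of the process. A minimal sketch of capping the history while preserving the system message — trim_history is a hypothetical helper, not part of the article's code:

```python
def trim_history(conversation: list[dict], max_messages: int = 20) -> list[dict]:
    """Keep the system message plus the most recent messages.

    Assumes the first entry is the system message, as in AiChat.__init__.
    """
    if len(conversation) <= max_messages:
        return conversation
    # Keep the first (system) message and the newest (max_messages - 1) entries.
    return [conversation[0]] + conversation[-(max_messages - 1):]

history = [{"role": "system", "content": "sys"}] + [
    {"role": "user", "content": str(i)} for i in range(30)
]
print(len(trim_history(history)))  # 20
```

You could call such a helper before each completions request so the token count per call stays bounded.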

The __init__ constructor has three parameters, which come from the WebSocket consumer (see below). The first two are straightforward; the third is the channel name.

Each consumer generates a unique channel name for itself and starts listening on it for events (see the Channels documentation).

Another important part of the AiChat class is the class-level _channels dictionary, an in-memory store that keeps each connection's conversation so the context can be retrieved between messages. Don't confuse it with Channels' channel layer, which we configure next.

Add CHANNEL_LAYERS to the end of the config/settings.py file.

# config/settings.py

# Channels
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels.layers.InMemoryChannelLayer",
    }
}

It's worth noting that you should switch from the in-memory channel layer to Redis before deploying your web application to production. For more info, see the Channels documentation on channel layers.
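For production, the Redis-backed layer from the channels_redis package is the usual choice; a sketch, assuming Redis is listening on 127.0.0.1:6379:

```python
# config/settings.py -- production sketch (requires: pip install channels_redis)

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}
```

The host and port are assumptions; point them at wherever your Redis instance runs.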

chat/views.py

Now, let's look at the WebSocket consumer mentioned before.

# chat/views.py

import json
from channels.generic.websocket import AsyncWebsocketConsumer
from .chat_api import AiChat

class ChatConsumer(AsyncWebsocketConsumer):

    async def connect(self):
        await self.accept()

    async def disconnect(self, close_code):
        print('Disconnected:', close_code)

    async def receive(self, text_data):
        # text data from the client
        text_data_json = json.loads(text_data)
        prompt = text_data_json["prompt"]
        # choose a model 
        model = 'gpt-4o-mini' 

        # Response
        model_response = AiChat(prompt, model, self.channel_name)  # instantiate
        response = model_response.chat()  # run the model

        # Send the response to the client
        await self.send(text_data=json.dumps({
            'prompt': prompt,
            'response': response,
        }))

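One caveat: AiChat.chat() makes a blocking HTTP call inside the async receive handler, which stalls the event loop for every connected client while the model responds. A hedged sketch of the offloading pattern using asyncio.to_thread — blocking_call here is a stand-in for model_response.chat(), not the article's actual code:

```python
import asyncio
import time

def blocking_call(prompt: str) -> str:
    # Stand-in for AiChat.chat(): a synchronous, slow operation.
    time.sleep(0.1)
    return f"echo: {prompt}"

async def receive(prompt: str) -> str:
    # Offload the blocking work to a thread so the event loop stays free.
    return await asyncio.to_thread(blocking_call, prompt)

print(asyncio.run(receive("hi")))  # prints "echo: hi"
```

In the consumer, the equivalent change would be awaiting asyncio.to_thread(model_response.chat) instead of calling it directly.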

Let's now take a look at the other files in Django. You have to create chat/routing.py and update config/asgi.py and config/settings.py.

chat/routing.py

# chat/routing.py

from django.urls import re_path
from . import views

websocket_urlpatterns = [
    re_path(r"ws/chat/$", views.ChatConsumer.as_asgi(), name="chat"),
]


config/asgi.py

# config/asgi.py

import os

from django.core.asgi import get_asgi_application  
from channels.routing import ProtocolTypeRouter, URLRouter  
from channels.auth import AuthMiddlewareStack
from chat.routing import websocket_urlpatterns

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')  

application = ProtocolTypeRouter({  
  "http": get_asgi_application(),  
  "websocket": AuthMiddlewareStack(  
        URLRouter(  
            websocket_urlpatterns  
        )  
    ),  
})


config/settings.py

Finally, there is one more update to the settings file: add ASGI_APPLICATION and comment out WSGI_APPLICATION (keep the line around in case you need it later in the development process).

# config/settings.py

# WSGI_APPLICATION = 'config.wsgi.application'
ASGI_APPLICATION = 'config.asgi.application'


2. Frontend

Install Marked to render the Markdown-formatted responses on the user interface.

cd frontend
npm install marked

src/AI/Chat.jsx

Create an AI directory and Chat.jsx inside src.

The WebSocketChat React component establishes a WebSocket connection between the frontend and the backend, sending each new prompt (inputMessage) to the backend while keeping the context of the prompts and responses exchanged so far (prevMessages).

The component renders a main wrapper showing the WebSocket connection status, the conversation between the user and the AI, and a text input for sending prompts. Note that marked does not sanitize its output, and the response is injected with dangerouslySetInnerHTML; for anything beyond a local demo, run the HTML through a sanitizer such as DOMPurify before rendering.

// src/AI/Chat.jsx

import React, { useEffect, useState, useCallback, useRef } from 'react';
import { marked } from 'marked';
import './Chat.css';

const WebSocketChat = () => {
    const [responseMessages, setResponseMessages] = useState([]);
    const [inputMessage, setInputMessage] = useState('');
    const [connectionStatus, setConnectionStatus] = useState('Disconnected');
    const socketRef = useRef(null);

    // apply markdown to response messages
    const createMarkup = (markdown) => {
        return { __html: marked(markdown) };
    };

    // Initialize WebSocket connection
    useEffect(() => {
        const websocket = new WebSocket('ws://localhost:8000/ws/chat/');
        socketRef.current = websocket;

        websocket.onopen = () => {
            console.log('Connected to WebSocket');
            setConnectionStatus('Connected');
        };

        websocket.onclose = () => {
            console.log('Disconnected from WebSocket');
            setConnectionStatus('Disconnected');
        };

        websocket.onerror = (error) => {
            console.error('WebSocket error:', error);
            setConnectionStatus('Error');
        };

        // Listen for messages
        socketRef.current.addEventListener('message', (event) => {
            const response = JSON.parse(event.data);
            setResponseMessages(prevMessages => [...prevMessages, { prompt: response.prompt, message: response.response }]);
        });

        // Cleanup on component unmount
        return () => {
            websocket.close();
        };

    }, []);

    // Send message handler
    const sendMessage = useCallback(() => {
        if (socketRef.current && socketRef.current.readyState === WebSocket.OPEN && inputMessage.trim()) {
            socketRef.current.send(JSON.stringify({
                prompt: inputMessage,
            }));
            setInputMessage('');
        }
    }, [inputMessage]);

    return (
        <div className="wrapper">
            <div>
                <h2 style={{ color: '#03101d', fontFamily: "sans-serif" }}>AI Chat</h2>
            </div>

            <div className={`status ${
                connectionStatus === 'Connected' ? 'connected' : 
                connectionStatus === 'Error' ? 'error' : 'disconnected'
            }`}>
                Websocket status: {connectionStatus}
            </div>

            {responseMessages.map((item, index) => (
                <div key={index} className="messages">
                    <div>
                        <span className="prompt">{item.prompt}</span>
                        <span className="response" dangerouslySetInnerHTML={createMarkup(item.message)} />
                    </div>
                </div>
            ))}

            <div className="input-wrapper">
                <input
                    type="text"
                    value={inputMessage}
                    onChange={(e) => setInputMessage(e.target.value)}
                    placeholder="Type a message..."
                />
                <button
                    onClick={sendMessage}
                    disabled={!socketRef.current || socketRef.current.readyState !== WebSocket.OPEN}
                >
                    Send
                </button>
            </div>
        </div>
    );
};

export default WebSocketChat;

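The core of the message handler — parse the server's JSON and append a {prompt, message} pair — can be isolated as a pure function, which keeps it easy to test outside the component. appendResponse is a hypothetical name, not part of the component above:

```javascript
// Pure helper mirroring the component's onmessage logic (hypothetical name).
function appendResponse(prevMessages, eventData) {
    const response = JSON.parse(eventData);
    return [...prevMessages, { prompt: response.prompt, message: response.response }];
}

const next = appendResponse([], JSON.stringify({ prompt: "hi", response: "hello" }));
console.log(next.length, next[0].prompt); // 1 hi
```

Inside the component, the state updater would then be setResponseMessages(prev => appendResponse(prev, event.data)).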

src/AI/Chat.css

/* src/AI/Chat.css */

input {
    width: 30%;
    height: 70px;
    padding: 0.5em;
    border: none;
    font-size: 1em;
}

button {
    display: flex;
    flex-direction: column;
    align-items: center;
    width: 31%;
    padding: 0.5em;
    margin-top: 2.0em;
    background-color: rgba(29, 60, 107, 0.5);
    color: white;
    border: none;
    font-size: 1em;
    cursor: pointer;
}

/* classes */

.wrapper {
    display: flex;
    flex-direction: column;
    align-items: center;
}

.messages {
    display: flex;
    flex-direction: column;
    width: 32%;
    margin: 0.7em;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    font-size: 1.2em;
    line-height: 1.3em;
}

.prompt {
    display: block;
    padding: 0.7em;
    margin: 0.5em;
    color: white;
    background-color: rgb(52 142 59 / 70%);
}

.response {
    display: block;
    padding: 0.7em;
    margin: 0.5em;
    background-color: rgb(255 255 255 / 50%);
}

.input-wrapper {
    display: flex;
    flex-direction: column;
    align-items: center;
    width: 100%;
}

.status {
    margin: 0.6em;
    font-size: 1.3em;
}

.connected {
    color: rgb(9, 11, 139);
}

.disconnected {
    color: rgb(224, 13, 41);
}


src/provider.jsx

The Provider component sets up the routing: /ws/chat/ renders the WebSocketChat component, and / renders the main page via the App component.

Make sure you are still in the frontend directory and install react-router-dom.

npm install react-router-dom
// src/provider.jsx

import React from 'react';
import { BrowserRouter as Router, Route, Routes } from 'react-router-dom';
import App from './App';
import WebSocketChat from './AI/Chat';

const Provider = () => {
    return (
        <Router>
            <Routes>
                <Route path="ws/chat/" element={<WebSocketChat />} />
                <Route path="/" element={<App />} />
            </Routes>
        </Router>
    );
};

export default Provider;


src/main.jsx

// src/main.jsx

import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import Provider from './provider'

createRoot(document.getElementById('root')).render(
  <StrictMode>
    <Provider />
  </StrictMode>,
)


The last steps

npm run build
cd ..
python manage.py runserver

Then open http://localhost:8000/ws/chat/ in your browser.

You can now chat with your new AI chatbot! 🎉

[Image: chatbot-channels-websocket — the finished chat interface]

Thank you for your attention! ☺️
