Real-Time AI Chat with Django and React: Deployment

In this final part of the series, I will show how to deploy the real-time AI chat built with Django Channels and React to Railway.

I'll update the base application with the following steps, which are necessary for production use:

🔸 Using environment variables
🔸 Changing In-memory Channels Layer to Redis Channel Layer
🔸 Adding STATIC_ROOT to the Django settings to serve the React build as static files
🔸 Creating a Dockerfile for the app and a docker-compose.yml for Redis

I'll deploy the chat application to the Railway platform, which is relatively cheap and easy to use.

Create an account on Railway: You can do that on the Railway website, but I would be thankful if you used my referral link here.

✨ Features

✅ Asynchronous chat (Channels, WebSockets)
✅ AI/LLM API consuming (OpenAI)
✅ Chat memory using Redis layer
✅ Production: Docker, Railway

🧰 Prerequisites

🔹 Docker
🔹 GitHub account
🔹 Railway account

GitHub repository: https://github.com/grbeno/aichat


Environment variables

Add SECRET_KEY, DEBUG, and ALLOWED_HOSTS to the .env file.

Update config/settings.py with the following lines.

# config/settings.py

from environs import Env

# Load the environment variables
env = Env()
env.read_env()
# config/settings.py

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str("SECRET_KEY")

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DEBUG", default=True)

ALLOWED_HOSTS = env.list("ALLOWED_HOSTS")
# .env

OPENAI_API_KEY=<your_api_key>
SECRET_KEY=<your_secret_key>
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1
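If you want to double-check that environs parses these values the way the settings expect, an optional sanity check run from the project root could look like the snippet below (this script is an illustration, not part of the project):

# check_env.py -- optional sanity check (illustrative, not part of the repo).
# Verifies that environs loads .env and parses the types used in settings.py.
from environs import Env

env = Env()
env.read_env()  # loads variables from a .env file in the current directory

print(env.str("SECRET_KEY")[:4] + "...")  # read as a string (truncated for safety)
print(env.bool("DEBUG", default=True))    # "True"/"False" parsed into a real bool
print(env.list("ALLOWED_HOSTS"))          # comma-separated string -> list of hosts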

Change In-memory Channels Layer to Redis Channel Layer

As the Django Channels documentation says:

channels_redis is the only official Django-maintained channel layer supported for production use.

According to this, the first step is installing the dependencies, then updating the config/settings.py and chat/chat_api.py files.

pip install redis channels_redis

First, configure the Redis Channel Layer in config/settings.py:

# config/settings.py

# Channels
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(env.str('REDISHOST', default="redis"), 6379)],
        },
    },
}
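To verify that the Redis channel layer is actually reachable (for example once the Redis container from the compose file shown later is running), you can do a quick round-trip test from the Django shell, along the lines of this sketch:

# Run inside `python manage.py shell`; assumes a Redis instance is reachable
# at the host configured above (e.g. the `redis` service from docker-compose).
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
async_to_sync(channel_layer.send)("test_channel", {"type": "hello"})
print(async_to_sync(channel_layer.receive)("test_channel"))  # -> {'type': 'hello'}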

I marked the lines I changed with # Redis Channel Layer comments:

# chat/chat_api.py

from openai import OpenAI
from environs import Env
import redis  # Redis Channel Layer
import json  # Redis Channel Layer

# Load the environment variables
env = Env()
env.read_env()

client = OpenAI()
client.api_key=env.str("OPENAI_API_KEY")

class AiChat():

    # Redis Channel Layer
    # Use the same host setting as config/settings.py (defaults to the `redis` service)
    _redis_client = redis.Redis(host=env.str('REDISHOST', default='redis'), port=6379, db=0)

    def __init__(self, prompt: str, model: str, channel: str) -> None:
        self.prompt = prompt
        self.model = model
        self.channel = channel

        # Redis Channel Layer

        # The system message used to start a new conversation
        initial_data = [{"role": "system", "content": "You are helpful and friendly. Be as short and concise as you can!"}]

        # Check if the channel exists in Redis; if not, store the initial conversation
        if not self._redis_client.exists(channel):
            self._redis_client.set(channel, json.dumps(initial_data))

        # Retrieve the conversation from Redis
        conversation_data = self._redis_client.get(channel)
        self.conversation = json.loads(conversation_data) if conversation_data else initial_data

    def chat(self) -> str:
        if self.prompt:
            # The conversation is going on ...
            # Adding prompt to chat history
            self.conversation.append({"role": "user", "content": self.prompt})
            # Redis Channel Layer
            self._redis_client.set(self.channel, json.dumps(self.conversation))
            # The OpenAI's chat completion generates answers to your prompts.
            completion = client.chat.completions.create(
                model=self.model,
                messages=self.conversation
            )
            answer = completion.choices[0].message.content
            # Adding answer to chat history
            self.conversation.append({"role": "assistant", "content": answer})
            # Redis Channel Layer
            self._redis_client.set(self.channel, json.dumps(self.conversation))
            return answer

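As a quick way to see the Redis-backed chat memory in action, you can call the class manually from the Django shell. The snippet below is only an illustration: the channel name and the model are assumptions, and the real values come from the WebSocket consumer built in the earlier parts.

# Run inside `python manage.py shell`, with Redis reachable and OPENAI_API_KEY set.
from chat.chat_api import AiChat

# "demo-channel" stands in for the channel name the consumer would pass in;
# the model name is an assumption -- use whichever model the app is configured for.
chat = AiChat(prompt="Hello!", model="gpt-4o-mini", channel="demo-channel")
print(chat.chat())         # the assistant's answer
print(chat.conversation)   # system + user + assistant messages persisted in Redis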

Checking host and protocol dynamically

Use Django's get_host() method together with the HTTP_X_FORWARDED_PROTO header to determine the current host and protocol (development or production).

In order to implement this:

  1. Complete the React template view
  2. Add SECURE_PROXY_SSL_HEADER to the config/settings.py
  3. Add WS_URL to frontend/index.html
  4. Update the websocket constructor
# backend/views.py

from django.views.generic import TemplateView

# React home page
class React(TemplateView):
    template_name = 'index.html'

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)

        # Check for forwarded protocol (for proxies like Railway)
        forwarded_proto = self.request.META.get('HTTP_X_FORWARDED_PROTO')
        is_secure = self.request.is_secure() or forwarded_proto == 'https'

        # Set WS/WSS protocol
        ws_protocol = 'wss://' if is_secure else 'ws://'
        context['WS_URL'] = f"{ws_protocol}{self.request.get_host()}"

        return context
# config/settings.py

# Add this to the end
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
<!-- frontend/index.html -->

<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Vite + React</title>
    <script> 
      window.WS_URL = "{{ WS_URL|safe}}";
    </script>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="/src/main.jsx"></script>
  </body>
</html>
// frontend/src/AI/Chat.jsx

//Update websocket constructor
const websocket = new WebSocket(window.WS_URL + '/ws/chat/');

Preparing for deployment

Some preparations are still needed before you build the Docker image and start deploying the application.

pip install whitenoise twisted[tls,http2]
  • whitenoise is needed for serving static files.
  • The tls and http2 extras are needed for the Daphne ASGI server on Railway.

After the installation, add WhiteNoise to the MIDDLEWARE list, right after SecurityMiddleware.

# config/settings.py

MIDDLEWARE = [
    # ...
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ...
]

The collectstatic command (run during deployment) collects the static files into the staticfiles/ directory, which is set by STATIC_ROOT.

# config/settings.py

STATIC_URL = 'assets/'

STATIC_ROOT = str(BASE_DIR.joinpath('staticfiles'))

STATICFILES_DIRS = [ str(BASE_DIR.joinpath('static', 'assets')) ]

Dockerfile

Before building the Docker image, we need to create or update requirements.txt. The dependencies and their versions listed in this file will be installed in the Docker container.

pip freeze > requirements.txt

Create a Dockerfile in the project directory.

# Pull base image
FROM python:3.11-slim-bullseye

# Set work directory
WORKDIR /app

# Install dependencies
COPY ./requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy project
COPY . /app/

Inside the Dockerfile, we set the work directory, install the Django dependencies, and finally copy the remaining files that aren't excluded by .dockerignore.

Dockerignore

If you use a virtual environment for development, test your application, or build React outside Docker, you may have files that you don't need inside the image. In this case, use a .dockerignore file; you can find one in the attached repo, and a sketch follows below.
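A minimal sketch of such a .dockerignore (the entries are assumptions based on a typical setup; the repo linked above contains the actual file):

# .dockerignore -- sketch only, adjust to your project
.git
.venv
__pycache__/
*.pyc
node_modules
# secrets are injected via env_file / Railway variables, not baked into the image
.env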

Docker Compose

Create two Docker Compose YAML files: one for local development and one for production on Railway.

docker-compose-dev.yml

services:
  backend:
    build: .
    container_name: ws_aichat
    command: >
      sh -c "
      python manage.py collectstatic --noinput &&
      python manage.py runserver 0.0.0.0:8000
      "
    ports:
      - 8000:8000
    env_file:
      - ./.env
    depends_on:  
      - redis

  redis:
    image: redis:latest
    container_name: ws_aichat_redis
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - '6379:6379'

docker-compose-railway.yml

services:
  redis:
    image: redis:latest
    container_name: ws_aichat_redis
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - '6379:6379'
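Both compose files mount a redis.conf from the project root into the container. The actual file is in the attached repo; a minimal sketch of what such a config might contain (the exact directives here are an assumption):

# redis.conf -- minimal sketch; the repo contains the actual file
# Listen on all interfaces inside the Docker / Railway private network
bind 0.0.0.0
port 6379
# No password inside the private network; set `requirepass <password>`
# if the instance is ever reachable from outside
protected-mode no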

Running on Docker in development mode

The first step is to open Docker Desktop (or start the Docker daemon), then build the image from the Dockerfile:

docker build .

Run docker-compose in development mode; the app will then be available at http://localhost:8000/ and the WebSocket endpoint at ws://localhost:8000/ws/chat/.

docker-compose -f docker-compose-dev.yml up --build

Remove the containers, if needed:

docker-compose -f docker-compose-dev.yml down

Git, GitHub

The first step is to create a GitHub repository.

Initialize the repo in your project's directory. Make sure you are already there or change to it using the command cd <your-project-path>.

git init

Add the GitHub repository's URL as the remote.

git remote add origin <https://repo-url/>

Add the files to commit. Use .gitignore for listing files and directories you don't want to commit.

git add .

Commit and write a custom message that is related to the current task.

git commit -m 'your-message'

Push the project to the GitHub repo.

git push -u origin main

Deploying to Railway

Sign in with your GitHub account and go to the dashboard.

Create a new project using the "New" button and select the GitHub repository (Deploy from GitHub repo) that you want to deploy. After that, the deployment process will start.

Drag docker-compose-railway.yml from your file browser (it should contain only the Redis service) and drop it onto the canvas.

Generate a domain (you can customize it) on port 8080.

Copy the domain and set it as the value of the ALLOWED_HOSTS variable.

Also set up the other environment variables (+ New Variable): OPENAI_API_KEY and SECRET_KEY. You can find "Variables" by clicking on the "web" service.

Under "Settings" you can find the "Deploy/Start Command". Set it to daphne -b 0.0.0.0 -p 8080 config.asgi:application.

Click the "Deploy" button; once the build and deployment have finished, you can open the web application at its URL in the browser.


Conclusion

🚀 First, I had to find a solution for the chat's context window and for a distributed real-time app, which is why I chose Django Channels and its channel layers. It is worth noting that this package is even more versatile when it comes to asynchronous programming, for example in high-concurrency scenarios and heavy I/O workloads.

🎯 Finally, my goal was to create a simple asynchronous chatbot application with basic features that is ready for deployment, scaling up, and building enhanced features on top.

Thank you for your attention! I hope the series was useful for you. I tried to be as understandable and concise as I could. Please let me know if you have any issues or questions. I will try to help and respond as soon as possible.
