In this post, I will walk you through how to create chat applications using OpenAI's GPT-3.5-turbo on three different platforms: Streamlit, Chainlit, and Gradio. I will provide the complete code for each platform and explain how it works.
Introduction
Chat applications have become an integral part of modern web applications, providing users with instant support and information. With OpenAI's powerful GPT-3.5-turbo model, building an intelligent chatbot is easier than ever. I'll demonstrate how to create a chat interface using three popular Python libraries: Streamlit, Chainlit, and Gradio.
Prerequisites
Before you begin, ensure you have the following:
- Python installed on your system
- An OpenAI API key (You can get one by signing up on the OpenAI website)
Common Functionality
I will use a common function to interact with the OpenAI GPT-3.5-turbo API. This function will be used in all three implementations.
Install the required libraries:

```shell
pip install openai
pip install streamlit
pip install chainlit
pip install gradio
```
Import the OpenAI library:

```python
import openai
```

This line imports the `openai` library, which provides the functions used to make API calls to OpenAI's language models.
Set the OpenAI API key:

```python
openai.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxx"
```

This line sets the API key used to authenticate with OpenAI's API. Replace `"sk-xxxxxxxxxxxxxxxxxxxxxxxxxx"` with your actual OpenAI API key, and avoid committing real keys to version control.
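Hardcoding the key works for a quick demo, but a safer habit is to read it from an environment variable. Below is a minimal sketch of that idea; `load_api_key` is a hypothetical helper name, not part of the `openai` library.

```python
import os

# Hypothetical helper: read the key from the OPENAI_API_KEY environment
# variable so the secret never appears in source code or version control.
def load_api_key():
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key

# In the scripts below you could then write:
# openai.api_key = load_api_key()
```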
Define the function to get a response from OpenAI's model:

```python
def get_response(text):
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}]
    )
    return response.choices[0].message.content.strip()
```
This block defines a function `get_response`, which takes a string `text` as input and returns a response generated by OpenAI's model.

- **Function definition:** `def get_response(text):` defines a function named `get_response` that accepts a single parameter, `text`.
- **Create API request:** `response = openai.chat.completions.create(...)` makes an API request to OpenAI to generate a completion for the given input text. The `model` parameter specifies which model to use, in this case `"gpt-3.5-turbo"`. The `messages` parameter is a list of message objects; each object has a `role` and `content`, and here a single message marks the text as user input.
- **Return response:** `return response.choices[0].message.content.strip()` extracts the content of the first message from the response and removes any leading or trailing whitespace using `.strip()`. `response.choices` is a list of possible completions generated by the model; we take the first completion (`choices[0]`), then access its `message` and that message's `content`.
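To make that extraction step concrete, here is an offline sketch that mocks the shape of the response object with plain classes (the `_Message`, `_Choice`, and `_Response` names are illustrative, not part of the `openai` library), showing exactly what `response.choices[0].message.content.strip()` navigates through.

```python
# Mock objects mirroring the attribute path used above -- no API call needed.
class _Message:
    def __init__(self, content):
        self.content = content

class _Choice:
    def __init__(self, message):
        self.message = message

class _Response:
    def __init__(self, choices):
        self.choices = choices

response = _Response([_Choice(_Message("  Hello there!  "))])

# Same attribute chain as in get_response: first choice -> message -> content.
print(response.choices[0].message.content.strip())  # Hello there!
```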
Main block to handle user input and display chatbot responses:

```python
if __name__ == "__main__":
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["bye", "exit"]:
            break
        response = get_response(user_input)
        print("Chatbot: ", response)
```
This block is the main part of the script and runs when the script is executed directly.

- **Check if script is main:** `if __name__ == "__main__":` checks whether the script is being run as the main module. If it is, the code inside this block executes.
- **Infinite loop:** `while True:` creates an infinite loop that keeps running until explicitly broken out of.
- **Get user input:** `user_input = input("You: ")` prompts the user for input and stores it in the `user_input` variable.
- **Check for exit condition:** `if user_input.lower() in ["bye", "exit"]:` checks whether the input is "bye" or "exit" (in any letter case). If it is, the loop breaks, ending the program.
- **Get response from OpenAI:** `response = get_response(user_input)` calls the `get_response` function with the user's input to get a response from the OpenAI model.
- **Print chatbot response:** `print("Chatbot: ", response)` prints the chatbot's response to the console.
This script effectively creates a simple chatbot using OpenAI's GPT-3.5-turbo model, allowing for interactive text-based conversations.
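One limitation worth noting: `get_response` sends only the latest message, so the model forgets earlier turns. A hedged sketch of one fix is below; `build_messages` is a hypothetical helper that keeps a running list of message dicts in the same shape the `messages` parameter expects, trimmed so the request doesn't grow without bound.

```python
# Hypothetical helper for multi-turn memory: accumulate message dicts and
# keep only the most recent exchanges before sending them to the API.
def build_messages(history, user_text, max_turns=10):
    """Append the new user turn; keep only the most recent exchanges."""
    history = history + [{"role": "user", "content": user_text}]
    # One exchange = one user + one assistant message, so keep 2 * max_turns.
    return history[-2 * max_turns:]
```

In the loop, you would pass the result as `messages=build_messages(history, user_input)` and append the model's reply back to `history` with role `"assistant"`.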
Building Chat Applications with OpenAI's GPT-3.5-turbo using Streamlit, Chainlit, and Gradio
1. Streamlit Implementation
Streamlit is a powerful library for creating web applications with minimal effort. Below is the complete code for a Streamlit chat application.
```python
import streamlit as st
import openai

# Set your OpenAI API key
openai.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxx"

# Function to get response from OpenAI
def get_response(text):
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}]
    )
    return response.choices[0].message.content.strip()

# Streamlit UI
st.title("Chat with OpenAI GPT-3.5-turbo")

user_input = st.text_input("You: ")

if st.button("Send"):
    if user_input:
        response = get_response(user_input)
        st.write(f"Chatbot: {response}")
```
Explanation

- **Import libraries:** Import `streamlit` and `openai`.
- **OpenAI API key:** Set your OpenAI API key.
- **get_response function:** Define a function to send user input to the OpenAI API and return the response.
- **Streamlit UI:** Create a simple UI with a text input box and a button. When the button is clicked, the user's input is sent to the `get_response` function, and the response is displayed.
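Because Streamlit reruns the whole script on every interaction, `st.write` only ever shows the latest reply. Streamlit's `st.session_state` can persist the conversation across reruns; here is a hedged sketch of the idea, with `append_turn` as a hypothetical plain-Python helper so the history logic can be followed on its own.

```python
# Hypothetical helper: record one user/bot exchange in a history list.
def append_turn(history, user_text, bot_text):
    history.append({"user": user_text, "bot": bot_text})
    return history

# Inside the Streamlit app, the same idea would look roughly like:
#
#     if "history" not in st.session_state:
#         st.session_state.history = []
#     if st.button("Send") and user_input:
#         reply = get_response(user_input)
#         append_turn(st.session_state.history, user_input, reply)
#     for turn in st.session_state.history:
#         st.write(f"You: {turn['user']}")
#         st.write(f"Chatbot: {turn['bot']}")
```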
2. Chainlit Implementation
Chainlit is another library that simplifies the creation of web applications. Here’s how to create a chat application using Chainlit.
```python
import chainlit as cl
import openai

# Set your OpenAI API key
openai.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxx"

# Function to get response from OpenAI
def get_response(text):
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}]
    )
    return response.choices[0].message.content.strip()

# Called whenever the user sends a message in the Chainlit UI.
# Recent Chainlit versions pass a cl.Message object, not a plain string.
@cl.on_message
async def main(message: cl.Message):
    response = get_response(message.content)
    await cl.Message(content=response).send()
```
Explanation

- **Import libraries:** Import `chainlit` and `openai`.
- **OpenAI API key:** Set your OpenAI API key.
- **get_response function:** Define the same function to get the response from OpenAI.
- **Chainlit event:** Use the `@cl.on_message` decorator to define an asynchronous function that processes incoming messages. When a message is received, the function reads its content, gets a response from OpenAI, and sends it back to the chat.
3. Gradio Implementation
Gradio provides an easy way to create web interfaces. Here’s the complete code for a Gradio chat application.
```python
import gradio as gr
import openai

# Set your OpenAI API key
openai.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxx"

# Function to get response from OpenAI
def get_response(text):
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}]
    )
    return response.choices[0].message.content.strip()

# Gradio interface
def chat_interface(user_input):
    return get_response(user_input)

iface = gr.Interface(
    fn=chat_interface,
    # Note: gr.inputs.Textbox was removed in Gradio 4; use gr.Textbox instead.
    inputs=gr.Textbox(lines=2, placeholder="Enter your message here..."),
    outputs="text",
    title="Chat with OpenAI"
)

iface.launch()
```
Explanation

- **Import libraries:** Import `gradio` and `openai`.
- **OpenAI API key:** Set your OpenAI API key.
- **get_response function:** Define the same function to get the response from OpenAI.
- **Gradio interface:** Define a function `chat_interface` that takes user input and returns the response from OpenAI. Create a Gradio interface with text input and text output, then launch it.
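Gradio also ships a higher-level `gr.ChatInterface` component that manages the chat box and message history for you; its callback receives the new message plus the running history. The sketch below stubs `get_response` with an echo so it runs offline; in the real app it would be the OpenAI call from the listing above, and `chat_fn` is just an illustrative name.

```python
# Stub standing in for the OpenAI call, so this sketch runs without a key.
def get_response(text):
    return f"echo: {text}"

def chat_fn(message, history):
    # history holds earlier (user, bot) exchanges maintained by Gradio;
    # this stateless sketch answers each message independently.
    return get_response(message)

# In the real app:
# gr.ChatInterface(chat_fn, title="Chat with OpenAI").launch()
```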
Running the Applications
To run these applications, save each code snippet in a separate Python file and execute it.
- **Streamlit:** Save the code in a file, e.g., `streamlit_chat.py`, and run `streamlit run streamlit_chat.py`.
- **Chainlit:** Save the code in a file, e.g., `chainlit_chat.py`, and run `chainlit run chainlit_chat.py` (Chainlit apps are launched with the `chainlit` CLI, not plain `python`).
- **Gradio:** Save the code in a file, e.g., `gradio_chat.py`, and run `python gradio_chat.py`.
Each command will start a local web server, and you can access the chat application via the provided URL.
You can find the complete GitHub repository here.
If you find this project helpful, consider giving it a ⭐ star and forking it to contribute or stay updated!
Conclusion
In this post, I've shown how to create a chat application using OpenAI's GPT-3.5-turbo on three different platforms: Streamlit, Chainlit, and Gradio. Each platform has its strengths, and you can choose the one that best fits your needs. With minimal code, you can create a powerful and interactive chat interface for your users.
Happy coding 😀