DEV Community

Kush Dhuvad

How to make your own AI chatbot for absolute beginners?

GitHub link - AI Chatbot

Have you ever thought of integrating an AI chatbot into your internal systems, where the AI has knowledge of all of your documents, so you can use it to get answers from your own documents instead of generic ones? It all starts with building your own AI chatbot using LangChain.

In this article, we'll build an AI chatbot that gives generic answers based on what the underlying model was trained on. In the next article, we will learn how to use our own documents as a reference for the chatbot.

In this article, I'll show you how to use LangChain and an OpenAI API key to run your own AI chatbot on Streamlit using Python.

First things first, we need to understand the requirements for this project. Create a new folder and add two files to it: app.py and requirements.txt. Add the following libraries to requirements.txt, then run pip install -r requirements.txt to install everything the project needs.

langchain-openai>=0.1.0

langchain-core>=0.1.0

openai>=1.0.0

streamlit>=1.30.0

python-dotenv>=1.0.0

langchain-community>=0.1.0

Before we get started with building the chatbot, we need to understand what LangChain is and how we are going to use it to build our own chatbot.

LangChain

Think of this as the "LEGO" set for AI. It helps us connect different pieces (like prompts and models) together into a "chain."

The core concept of LangChain is the Chain. Imagine a literal chain with different links. Each link performs a specific task.

In your code, the chain looks like this:

chain = prompt | llm | output_parser

Link 1 (Prompt): Takes your raw text ("What is the moon?") and wraps it in instructions ("You are a helpful assistant...").

Link 2 (LLM): Takes that wrapped package and sends it to the AI brain (OpenAI) to get an answer.

Link 3 (Output Parser): Takes the raw, messy data the AI sends back and turns it into clean text for your website.
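To make the three links concrete, here is a plain-Python sketch of the same pipeline as ordinary function composition. No LangChain is involved, and fake_llm is a stand-in for the real API call:

```python
def prompt(question: str) -> str:
    # Link 1: wrap the raw text in instructions
    return f"You are a helpful assistant. Question: {question}"

def fake_llm(wrapped: str) -> dict:
    # Link 2: pretend to call the model; real code would hit the OpenAI API
    return {"content": f"Answer to [{wrapped}]"}

def output_parser(response: dict) -> str:
    # Link 3: pull clean text out of the raw response
    return response["content"]

def chain(question: str) -> str:
    # Equivalent of: chain = prompt | llm | output_parser
    return output_parser(fake_llm(prompt(question)))

print(chain("What is the moon?"))
```

LangChain's | operator does exactly this kind of composition for you, plus error handling, streaming, and tracing.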

Why do we need LangChain?

Without LangChain, you would have to write dozens of lines of manual code to:

Connect to the API.
Format the JSON data correctly.
Check for errors if the AI fails.
Remember what was said three sentences ago (Memory).

LangChain reduces all that work into a single line. It allows you to swap out parts easily. If you decide you don't like the "Gemma" model and want to use "GPT-4," you only have to change one word in your code; the rest of the "chain" stays exactly the same.

We will use LangSmith, LangChain's tracing tool, to monitor the API responses from the AI model. For that, go to the website given below, create a new project, and generate an API key.

Langchain - LangSmith setup

Once you set up your account, LangSmith will record a trace for every question you ask the AI chatbot you build.

Let's understand the code now.

1. Understanding required libraries

from langchain_openai import ChatOpenAI  # The chat model (the "brain")
from langchain_core.prompts import ChatPromptTemplate  # Prompt templates
from langchain_core.output_parsers import StrOutputParser  # Output parser
import streamlit as st  # Streamlit for the web UI
import os
from dotenv import load_dotenv  # Loads API keys from the .env file

load_dotenv()

2. Importing the essential keys
Make a separate file called .env and set up your keys there. We never put API keys in the main code, to avoid exposing secret keys.

OPENAI_API_KEY=

LANGCHAIN_API_KEY=

LANGCHAIN_PROJECT="Test Project"

How to generate an OpenAI API key?

Go to the website given below and sign up; you may have to add $5 of credit to your account before you can generate an API key. Once the credit is in your account, you can generate a new API key from the website. You can also run Llama models locally for free, but that requires a high-spec system, and responses may be slow on lower-end machines.

OpenAI API Key - OpenAI API key

Now we'll import the API keys in our code.

os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")  #Importing OpenAI API key 

#Langsmith Tracing

os.environ["LANGCHAIN_TRACING_V2"] = "true" #Enabling tracing to monitor the API response.
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY") #Importing Langchain API key for tracing

3. Setting up the AI prompt

Here you can decide what kind of AI you want. You can ask it to act as a doctor or an engineer to tailor its responses.

prompt=ChatPromptTemplate.from_messages([
    ("system","You are a helpful assistant that helps answer questions about the world."),
    ("human","Question:{question}")
])

4. Setting up Streamlit UI for the web

st.title("Chatbot with Langchain and Streamlit")
input_text=st.text_input("Ask a question about the world:")

5. Calling the OpenAI model and setting up the chain

llm=ChatOpenAI(model="gpt-3.5-turbo") #We are using the OpenAI model GPT-3.5 Turbo since it is cost-effective 

output_parser=StrOutputParser() #we are parsing the output to view on the web. 

chain=prompt | llm | output_parser #Setting up the chain so that the prompt goes to the LLM and then we get the output response

6. Invoking the response

When the user presses Enter, we invoke the chain so that the user question goes to the LLM and outputs a response.

if input_text:
    st.write(chain.invoke({"question":input_text}))

This gives you a working AI chatbot: ask it any question and it will respond using the GPT-3.5 Turbo model.

Use the command below in your terminal to run the AI chatbot.

streamlit run app.py

Try changing the System Prompt from 'helpful assistant' to 'grumpy pirate' and see how the personality shifts!

I'm attaching the GitHub link to the final code below; feel free to refer to it if you have any questions.

GitHub link - AI Chatbot

Buy Me a Coffee - Buy Me a Coffee
