How to create your own custom ChatGPT-like chatbot in less than 5 minutes using your own data and no OpenAI API

Wilbert Misingo

INTRODUCTION

In today's world, businesses are constantly looking for new ways to improve their customer service and engagement. One way to do this is by creating a chatbot that can quickly and accurately answer customer questions. In this article, we will show you how to create a chatbot that is based on your own company documents using Python and some powerful AI tools.

Chatbots have become an increasingly popular way to provide instant and personalized customer support. Building a chatbot that can understand and respond to user queries based on your company's own documents can greatly enhance the efficiency and effectiveness of your customer service.

In my previous article, which can be found here, I described how you can create a chatbot using your custom data and the OpenAI API. I later wrote another one, which can be found here, on integrating that chatbot with your WhatsApp business number.

Thus, in this article, we will guide you through the process of creating such a chatbot using the code provided below.

To achieve this goal, I initially considered fine-tuning the GPT model on my own data. But fine-tuning is highly expensive and requires a sizable dataset of examples. It is also impractical to re-tune the model every time the documents change. Perhaps more importantly, fine-tuning teaches the model a new skill rather than letting it "know" all the information contained in the documents. Consequently, fine-tuning is not the best approach for (multi-)document QA.

Prompt engineering, which includes context in the prompts, is the second strategy that springs to mind. For instance, instead of asking a question directly, I could insert the original document's text before the question itself. However, the GPT model has a limited context window and can only process a few thousand tokens per prompt (about 4,000 tokens, or roughly 3,000 words). Given that we have tens of thousands of customer feedback emails and hundreds of product documents, it is impossible to fit all that information into the prompt. And because pricing is based on the number of tokens you use, passing a lengthy context to the API is also expensive.

Because the prompt restricts the number of input tokens, I came up with the idea of first using an algorithm to search the documents and select the pertinent extracts, and then passing only those relevant contexts to the GPT model along with my questions. While researching this idea, I found a library called llama-index (formerly known as gpt-index) that accomplishes exactly what I wanted and is easy to use.

And since the OpenAI API is a bit expensive, I thought of a way to create a similar chatbot with the aid of OpenAI API alternatives: free, open-source models such as GPT4All, OpenAssistant, etc.

IMPLEMENTATION

Step 01: Preparing your training data

The first step is to gather all the documents that you want to use to create the chatbot. These documents can include product manuals, FAQs, and other helpful resources that your customers may need to reference. Once you have gathered your documents, you need to organize them into a folder called 'data' and save them in a format that can be easily read by Python.
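For instance, a minimal 'data' folder might look like this (the filenames here are purely hypothetical):

data/
β”œβ”€β”€ product_manual.pdf
β”œβ”€β”€ faqs.txt
└── warranty_policy.docx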

Step 02: Installing all required libraries

To build a chatbot, we need to use some Python libraries that are specifically designed for natural language processing and machine learning. In this code snippet, we are using the llama_index, transformers, and langchain libraries. You can install these libraries using pip:

$ pip install llama_index
$ pip install transformers
$ pip install langchain

Step 03: Importing Libraries and Modules

To begin, we need to import the necessary libraries and modules that will be used throughout the chatbot creation process. The code snippet below demonstrates the required imports:

import torch
from langchain.llms.base import LLM
from llama_index import SimpleDirectoryReader, GPTListIndex, PromptHelper
from llama_index import LLMPredictor, ServiceContext, QuestionAnswerPrompt
from transformers import pipeline
from typing import Optional, List, Mapping, Any

Step 04: Defining Prompt Variables

Next, we define some variables that will be used as prompt variables for the chatbot. These variables determine the maximum input size, the number of desired output tokens, and the maximum overlap between chunks. Here is the code segment that defines the prompt variables:

max_input_size = 2048
num_output = 256
max_chunk_overlap = 20

Step 05: Defining and Using the Prompt Helper

The PromptHelper class helps in handling prompts and chunking long documents. We initialize the prompt helper by passing the previously defined prompt variables. The code snippet below demonstrates the creation of the prompt helper:

prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)

Step 06: Creating a Custom Language Model (LLM)

In order to generate responses, we need to download and load a pre-trained language model. The code snippet below defines a custom LLM class that uses the facebook/opt-iml-max-30b model from Hugging Face:

class CustomLLM(LLM):
    model_name = "facebook/opt-iml-max-30b"
    # Hugging Face text-generation pipeline running on the GPU
    pipeline = pipeline("text-generation", model=model_name, device="cuda:0", model_kwargs={"torch_dtype": torch.bfloat16})

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        prompt_length = len(prompt)
        # num_output is the prompt variable defined in Step 04
        response = self.pipeline(prompt, max_new_tokens=num_output)[0]["generated_text"]
        # Return only the newly generated text, without the echoed prompt
        return response[prompt_length:]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"name_of_model": self.model_name}

    @property
    def _llm_type(self) -> str:
        return "custom"

NB:

Before using this, you may want to choose a model that suits your needs by considering:

  1. The license of the model
  2. The size of the model (a smaller alternative is sketched right after this list)
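If the 30B-parameter model above is too large for your hardware (a reader in the comments below ran into exactly this), you can swap in a smaller checkpoint and fall back to the CPU when no GPU is available. Here is a minimal sketch of the changed class attributes, assuming the facebook/opt-iml-max-1.3b checkpoint and a transformers version that accepts a device string; the rest of the class stays the same:

class CustomLLM(LLM):
    # Smaller checkpoint (~1.3B parameters instead of 30B)
    model_name = "facebook/opt-iml-max-1.3b"
    # Use the GPU when available, otherwise fall back to the CPU
    pipeline = pipeline("text-generation", model=model_name,
                        device="cuda:0" if torch.cuda.is_available() else "cpu",
                        model_kwargs={"torch_dtype": torch.bfloat16})
    # ... _call, _identifying_params and _llm_type as defined above ...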

Step 07: Initializing the Language Model and Service Context

Once we have defined our custom LLM, we can initialize it and create a service context. The service context encapsulates the necessary components for our chatbot, including the LLM and prompt helper. Here's the code to initialize the LLM and service context:

llm_predictor = LLMPredictor(llm=CustomLLM())
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)

Step 08: Defining the Question-Answer Prompt Template

To structure the interaction with the chatbot, we define a template for the question-answer prompt. This template includes placeholders for the context information and the user's question. The code snippet below shows the template definition:

QA_PROMPT_TMPL = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)

QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)
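Note: in recent llama-index releases, QuestionAnswerPrompt has been deprecated and is now just a type alias of PromptTemplate (see the comments at the end of this article). If your installed version no longer exposes it, a rough equivalent, assuming llama-index 0.8 or later:

from llama_index.prompts import PromptTemplate

QA_PROMPT = PromptTemplate(QA_PROMPT_TMPL)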

Step 09: Loading training data

To make the chatbot knowledgeable about your company, you need to load your company documents into the chatbot's index. The code snippet below demonstrates loading the data from a specified directory:

documents = SimpleDirectoryReader('./data').load_data()
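If you want to restrict loading to a single file type (a question that also comes up in the comments below), SimpleDirectoryReader accepts a required_exts argument in recent llama-index versions. A small sketch, assuming PDF-only data:

# Only pick up PDF files from the data folder
documents = SimpleDirectoryReader('./data', required_exts=['.pdf']).load_data()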

Step 10: Generating the Index

Once the documents are loaded, we generate an index using the GPTListIndex class. The index is responsible for efficiently retrieving relevant information based on user queries. Here's the code to generate the index:

index = GPTListIndex.from_documents(documents, service_context=service_context)
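A side note: GPTListIndex scans every chunk at query time. For larger document collections, a vector index, which embeds the chunks and retrieves only the top-k most relevant ones per query, may scale better. A sketch, with one caveat: the default embedding model is OpenAI's, so to stay fully OpenAI-free you would also need to configure a local embedding model in the service context:

from llama_index import GPTVectorStoreIndex

# Embeds document chunks and retrieves only the most relevant ones per query
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)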

Step 11: Saving and Loading the Index

To avoid re-indexing the documents every time the chatbot is restarted, we can save the index to disk and load it later. Here's how you can save and load the index:

index.save_to_disk('index.json')
index = GPTListIndex.load_from_disk('index.json')
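Note: save_to_disk and load_from_disk come from older llama-index releases. In newer versions (0.6 and later), persistence goes through the storage context instead; a rough equivalent, assuming the newer API:

from llama_index import StorageContext, load_index_from_storage

# Save the index to a directory
index.storage_context.persist(persist_dir='./storage')

# Load it back later
storage_context = StorageContext.from_defaults(persist_dir='./storage')
index = load_index_from_storage(storage_context, service_context=service_context)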

Step 12: Querying the Chatbot and Getting a Response

Finally, we can interact with the chatbot by querying it with user input. The chatbot will process the query and provide a response based on the indexed company documents. The code snippet below demonstrates querying the chatbot and printing the response:

# Build a query engine that applies our custom question-answer prompt
query_engine = index.as_query_engine(text_qa_template=QA_PROMPT)
response = query_engine.query("Hello, what is your function?")
print(response)
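To make this feel more like a chat session, you can wrap the query call in a simple input loop; a minimal sketch:

# Minimal REPL-style loop around the query engine
while True:
    user_input = input("You: ")
    if user_input.lower() in ("exit", "quit"):
        break
    response = query_engine.query(user_input)
    print(f"Bot: {response}")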

CONCLUSION

Congratulations! You have successfully created a chatbot based on your own company documents. This chatbot can provide accurate and relevant responses to user queries, leveraging the power of machine learning and natural language processing.

Remember, the provided code is just a starting point, and you can customize and extend it according to your specific requirements. Building a chatbot is an iterative process, so feel free to experiment and enhance the functionality based on user feedback and additional data.

Happy coding!

Do you have a project πŸš€ that you want me to assist with? Email me 🀝😊: wilbertmisingo@gmail.com
Have a question, or want to be the first to know about my posts?
Follow βœ… me on GitHub
Follow βœ… me on Twitter/X 𝕏
Follow βœ… me on LinkedIn πŸ’Ό

Top comments (10)

Jack

Hello!
I have a problem from step 06 onwards:
I set model_name = "facebook/opt-iml-max-1.3b", as 30b seemed too big.
When i run this piece of code it gives me this error "The Kernel crashed while executing code in the the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details."
I haven't found anything on the internet. At some point I re-ran the code and it seemed to work but now it gives me this error again.
Can you please help me solve this problem?
Thank you and good day

Wilbert Misingo

Hello Jack, sorry for the late reply. I think the problem may be that your PC runs out of processing power; for better results I would recommend running the script on Google Colab with a GPU enabled.

Mark Arb

Hi Wilbert,

I'm having issues with the "from llama_index import LLMPredictor, ServiceContext, QuestionAnswerPrompt" import: it cannot import name 'QuestionAnswerPrompt' from 'llama_index'. I've searched Google for a solution and had no luck. I've got Python 3.11 installed. Are you able to help here with a work-around?

Thanks in advance.
Mark

Wilbert Misingo

Hello Mark,

If you are using the latest version of llama-index, this is because legacy prompt subclasses such as QuestionAnswerPrompt and RefinePrompt have been deprecated (they are now type aliases of PromptTemplate). You can now directly specify PromptTemplate(template) to construct custom prompts, but you still have to make sure the template string contains the expected parameters (e.g. {context_str} and {query_str}) when replacing a default question-answer prompt.

docs.llamaindex.ai/en/stable/modul...

Please check the link above for a quick overview of the changes and what the new code is supposed to look like.

I hope this helps; feel free to reach out again if anything comes up.

Cheers.

Dipankar Shaw

I was planning to create a personal chatbot assistant. This post will help me to do that

Wilbert Misingo

Thanks Dipankar, I am glad you have found this helpful

Aleksadnar Devedzic

Hello!
Amazing code!
One question: in what format should the files in the data folder be?

Like, one CSV file, or multiple .txt files?

Wilbert Misingo

Hello Aleksadnar, regarding the file format of the data files in the data folder, the SimpleDirectoryReader('./data') function accepts data files in formats such as .pdf, .txt, .docx, and .csv, although a few modifications can be made by passing certain arguments to the function to allow only one kind of data file to be processed, e.g. .pdf only.

You could learn more from here gpt-index.readthedocs.io/en/latest...

anumber8

Great article, congratulations.

As this is a very new concept, it would be good if you could kindly share your sample code on github. Thanks

Wilbert Misingo

Thanks!!

I am really glad that you have found it of great use.

As for pushing the code to GitHub, sadly I didn't, since after the project downloaded the LLM from Hugging Face, I ended up with a very large project.

I also didn't think of sharing the code on GitHub since I had already shared the project's code snippets in the article.