Wilbert Misingo


How you can create your own custom chatbot with your own custom data using Google Gemini API all for free

INTRODUCTION

Until now, the conventional method for building multimodal models has been to train separate components for different modalities and then stitch them together to approximate some of this functionality. These models can excel at certain tasks, like describing images, but they struggle with more conceptual and sophisticated reasoning.

Gemini, Google's most adaptable model to date, runs well on a wide range of platforms, from data centers to mobile phones. Its cutting-edge capabilities will greatly change how developers and enterprise customers build and scale with AI.

Google has optimized Gemini 1.0, its first version, for three different sizes:

  1. Gemini Ultra, the largest and most capable model for highly complex tasks.
  2. Gemini Pro, the best model for scaling across a wide range of tasks.
  3. Gemini Nano, the most efficient model for on-device tasks.

This makes Gemini one of the most capable models in the world, creating an opportunity for people to explore it in many ways. To that end, Google has also released a generous free tier that can really help people build some cool things.

In this article, you will learn how to create your own custom chatbot from your own data using the Google Gemini API free tier.

IMPLEMENTATION

Step 01: Getting your API Key

To get started, you first need an API key, which will be used to authenticate requests to the model. To create one, sign up and generate a new key in Google AI Studio (on the Google MakerSuite platform).

Step 02: Installing Libraries

To build the chatbot, we need a few Python libraries designed for natural language processing and machine learning. These libraries facilitate the implementation of the chatbot.

pip install -q llama_index google-generativeai chromadb pypdf transformers


Step 03: Importing Libraries

To begin, we need to import the necessary libraries and modules that will be used throughout the chatbot creation process. The code snippet below demonstrates the required imports.


# NB: these import paths match pre-0.10 llama-index releases (e.g. 0.9.x);
# the package layout changed in later versions, so pin your version accordingly
from llama_index import SimpleDirectoryReader, VectorStoreIndex, ServiceContext
from llama_index.llms import Gemini
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
from llama_index.prompts import PromptTemplate
from IPython.display import Markdown, display
import chromadb
import os


Step 04: Loading data from the knowledge base

To make the chatbot knowledgeable about your data, you need to load your documents into the chatbot's index. The documents can be in different formats; in this demo, all the documents were PDFs. The code snippet below demonstrates loading the data from a specified directory.

documents = SimpleDirectoryReader("./data").load_data()

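SimpleDirectoryReader decides what to parse based on file extensions. Just to illustrate the idea, here is a rough stdlib-only sketch (the helper name and extension list are my own, not part of llama_index) of the kind of scan it performs:

```python
from pathlib import Path

def list_supported_files(data_dir, extensions=(".pdf", ".txt", ".md")):
    """Roughly mirrors the reader's scan: files in the directory whose
    extension is one we expect to be able to parse."""
    return sorted(p.name for p in Path(data_dir).iterdir()
                  if p.suffix.lower() in extensions)
```

Anything with an unsupported extension is simply skipped, so a stray `.docx` in the folder won't end up in the index.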

Step 05: Initializing the embedding database

ChromaDB, a versatile tool for storing vector representations and embeddings, serves as the backbone of our system. Initializing ChromaDB involves creating a client and establishing a collection for storing document embeddings. This sets the stage for efficient storage and retrieval of vector representations.


# persist embeddings to disk so they can be reused between runs
db = chromadb.PersistentClient(path="./embeddings/chroma_db")
chroma_collection = db.get_or_create_collection("quickstart")

# wrap the collection in a llama_index vector store and build the
# storage context used when constructing the index in Step 07
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)


Step 06: Initializing the model

Initializing Gemini and creating a service context involves setting up the necessary environment and defining how the model will interact with and process both user inputs and the data source.


os.environ['GOOGLE_API_KEY'] = 'PUT YOUR GOOGLE API KEY HERE'  # better: set this outside the script
llm = Gemini()
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=800, chunk_overlap=20, embed_model="local")

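The `chunk_size` and `chunk_overlap` settings control how documents are split before being embedded. As a rough character-based illustration (llama_index's actual splitter is token-aware and smarter about sentence boundaries, so this is only a sketch of the idea):

```python
def chunk_text(text, chunk_size=800, chunk_overlap=20):
    """Split text into pieces of at most chunk_size characters, where
    consecutive chunks share chunk_overlap characters so that context
    isn't cut off mid-thought at a chunk boundary."""
    chunks = []
    step = chunk_size - chunk_overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks
```

Larger chunks keep more context together per embedding; a small overlap keeps sentences that straddle a boundary retrievable from either side.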

Step 07: Creating the Vector Store Index

With the foundational components in place, the next step is to create the vector store index from the loaded documents. This process involves indexing the documents using the specified vector store and service context.


index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)


Step 08: Defining Prompt Template and Configuring Query Engine

To facilitate question answering, a prompt template is defined. This helps the bot understand how it should interact with the user, e.g. the tone and role you set for it. The query engine is then configured to use this template. This step lays the groundwork for meaningful interactions with the indexed data.


template = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)

qa_template = PromptTemplate(template)

query_engine = index.as_query_engine(text_qa_template=qa_template)

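At query time, the engine fills `{context_str}` with the retrieved document chunks and `{query_str}` with the user's question. You can see the effect with plain string formatting (the context and question here are made up, purely to show the substitution):

```python
template = (
    "We have provided context information below. \n"
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)

# fill both placeholders the way the query engine would
prompt = template.format(
    context_str="The Earth is an oblate spheroid.",
    query_str="What is the shape of the Earth?",
)
print(prompt)
```

This is the final text the LLM actually sees, which is why tweaking the template is the easiest way to change the bot's tone and behaviour.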

Step 09: Performing a query

The culmination of our journey involves performing a sample query and displaying the result.


response = query_engine.query("What is the shape of the earth?")
print(response)

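If you want to ask several questions against the same index, a small helper keeps the loop tidy (`ask_all` is a hypothetical convenience function of mine, not part of llama_index):

```python
def ask_all(engine, questions):
    """Run each question through the query engine and map it to the
    stringified response."""
    return {q: str(engine.query(q)) for q in questions}

# usage, assuming the query_engine built in the previous step:
# answers = ask_all(query_engine, [
#     "What is the shape of the earth?",
#     "How old is the earth?",
# ])
```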

CONCLUSION

The implementation of a text-to-vector indexing system using ChromaDB, Gemini, and VectorStore opens up a realm of possibilities for advanced NLP applications. This comprehensive guide serves as a foundation for building sophisticated text-based applications. As you continue your exploration, feel free to experiment with different document sets, query templates, and parameters to tailor the system to your specific requirements.

Happy coding!

Do you have a project 🚀 that you want me to assist with? Email me 🤝😊: wilbertmisingo@gmail.com
Have a question or wanna be the first to know about my posts?
Follow ✅ me on GitHub
Follow ✅ me on Twitter/X 𝕏
Follow ✅ me on LinkedIn 💼

Top comments (9)

Ankit Kumar Raj

In the last code snippet there is a typo: response is misspelled as responce.

Wilbert Misingo

😁😁😁 Thanks @ankit20cse45, I just changed that.

Is there anything else you would like me to help with?

Emily Rooney

Do you have a Colab with a full example of this?

Wilbert Misingo

No, I don't, because this code was part of a real project that I can't share for copyright reasons.

If there is a problem you are facing, I would be glad to help.

Also, I was wondering what difference a Colab would make, since its code would be the same as the code here.

Prahlad17

Can you please provide the code in JavaScript?

Lee David Painter

Is there a size limit on the data being modelled?

Wilbert Misingo

Hello @lee_davidpainter_2de683e, actually there is no size limit on the data being modelled (i.e. being transformed into vector indexes or embeddings); you may transform as much as you like.

NB:
With paid models, e.g. GPT-4, there is also no size limit, but since transforming the data into embeddings is billed for such models, a large amount of data would result in a large bill.

I hope I have answered your question; feel free to ask if anything is unclear.

Asif Iqbal Khan (LE008)

All the import statements are wrong and throw errors.

Wilbert Misingo

Have you considered the possibility that you just came here, copy and paste the code, expecting it to work while you are using a wrong python and dependencies versions.