Kevin Piacentini
Building your own ChatGPT with LangChain and AgentLabs

Conversational agents have emerged as a crucial interface between users and machines.

They facilitate natural interactions, making technology more accessible and user-friendly.

Among the forerunners of conversational AI, OpenAI's ChatGPT has made waves with its remarkable ability to understand and generate human-like text.

However, crafting a chatbot with similar capabilities from scratch can be a daunting endeavor, especially for those new to the realm of AI and machine learning.

But worry not: in this tutorial, we'll demystify the process of building your very own ChatGPT with the LangChain framework and the AgentLabs platform.

LangChain provides a robust framework designed to work with language models seamlessly, making it a powerful tool for building intelligent agents. AgentLabs, on the other hand, offers a frontend-as-a-service solution, letting you interact with your users without coding any interface.

Note: In this tutorial, we'll use Python as the primary language, but both LangChain and AgentLabs support TypeScript.

Getting started

To get started, you will need to install the openai, langchain, and agentlabs-sdk packages.

If you use pip:

pip install openai langchain agentlabs-sdk

If you use poetry:

poetry add openai langchain agentlabs-sdk

Preparing the frontend

We'll start by setting up the user interface with AgentLabs.
It's fairly easy to do:

  • Sign in to https://agentlabs.dev
  • Create a project
  • Create an agent and name it ChatGPT
  • Create a secret key for this agent


Init the AgentLabs project

Now, we'll initialize AgentLabs with the information shown in the dashboard.

from agentlabs.agent import Agent
from agentlabs.chat import IncomingChatMessage, MessageFormat
from agentlabs.project import Project
import os


alabs = Project(
  project_id="4e51941a-a76c-4593-8921-c60e984aaa4e",
  agentlabs_url="https://chatgpt-langchain.app.agentlabs.dev",
  secret=os.environ['AGENTLABS_SECRET']
)

alabs.connect()
alabs.wait()

Note: We read our secret from an environment variable for safety reasons. All of the values above can be found in your AgentLabs console.

React to user's messages

Your frontend is (almost) ready.

You can now open the URL of your AgentLabs project by clicking "Open my ChatUI".


However, if you test it now, it won't work.

What we want to do is handle every message a user sends and pass it to our LLM.

To handle users' messages, we'll use the on_chat_message method and pass our handler as an argument.

Let's create a dead simple message handler for now.

def handle_task(message: IncomingChatMessage):
  agent = alabs.agent("7d8ccee9-9316-4634-b8b5-e4628d438d6b")
  agent.typewrite(
    conversation_id=message.conversation_id,
    text="I got your message: " + message.text
  )

alabs.on_chat_message(handle_task)

Our handler is pretty straightforward at the moment.
When we get a message, we tell our frontend (our AgentLabs agent) to write it back to the user.

No worries, we'll make it smarter in a moment.

Now we can test our UI to see how it works.

Here's what you should get:


Here's the full code we got so far:

from agentlabs.agent import Agent
from agentlabs.chat import IncomingChatMessage, MessageFormat
from agentlabs.project import Project
import os


alabs = Project(
  project_id="4e51941a-a76c-4593-8921-c60e984aaa4e",
  agentlabs_url="https://chatgpt-langchain.app.agentlabs.dev",
  secret=os.environ['AGENTLABS_SECRET']
)

def handle_task(message: IncomingChatMessage):
  agent = alabs.agent("7d8ccee9-9316-4634-b8b5-e4628d438d6b")
  agent.typewrite(
    conversation_id=message.conversation_id,
    text="I got your message: " + message.text
  )

alabs.on_chat_message(handle_task)

alabs.connect()
alabs.wait()

Now, let's prepare LangChain and bring our ChatGPT to life.

Preparing LangChain

Now we have a frontend, and we need to implement some logic.

Let's import everything we'll need from the LangChain SDK.

from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import BaseMessage, HumanMessage, SystemMessage
from langchain.schema.output import LLMResult

Now we'll initialize our LLM with streaming set to True so we can stream every token to our users in real time.

llm = ChatOpenAI(streaming=True)

IMPORTANT: For this model to work, make sure the OPENAI_API_KEY environment variable is set.
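To fail fast with a clear error when a key is missing, you can check your environment up front. Here's a minimal sketch (the `require_env` helper is our own, not part of any SDK used here):

```python
import os

def require_env(*names: str) -> list:
    """Return the values of the given environment variables,
    raising a single error that lists every missing one."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return [os.environ[n] for n in names]

# e.g. openai_key, agentlabs_secret = require_env("OPENAI_API_KEY", "AGENTLABS_SECRET")
```

Calling this once at startup saves you from cryptic authentication failures later in the request flow.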

Now, let's update our handler so that every time a message comes in, we forward it to GPT.

We'll first create a message list whose first element is a system message giving some context to our model, followed by the user's message.

def handle_task(message: IncomingChatMessage):
    messages: List[BaseMessage] = [
        SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."),
        HumanMessage(content=message.text)
    ]
    llm(messages)

So far so good! But there is a problem with this code.
You probably noticed we lost our AgentLabs agent.

So, even if this code executes, how do we send the model's response back to our frontend?

Well, LangChain provides a feature named Callbacks.

You can extend the BaseCallbackHandler provided by LangChain to get real-time updates from your model.

We'll simply define our callback handler and make sure every time the model streams a token we forward it to the user using our AgentLabs agent.

Let's do it!

class AgentLabsStreamingCallback(BaseCallbackHandler):
    def __init__(self, agent: Agent, conversation_id: str):
        super().__init__()
        self.agent = agent
        self.conversation_id = conversation_id

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        self.stream = self.agent.create_stream(format=MessageFormat.MARKDOWN, conversation_id=self.conversation_id)

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        self.stream.write(token)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        self.stream.end()

Now that we've defined our callback handler, let's instantiate it and use it in our message handler:

def handle_task(message: IncomingChatMessage):
    agent = alabs.agent("7d8ccee9-9316-4634-b8b5-e4628d438d6b")
    messages: List[BaseMessage] = [
        SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."),
        HumanMessage(content=message.text)
    ]
    callback = AgentLabsStreamingCallback(agent=agent, conversation_id=message.conversation_id)
    llm(messages, callbacks=[callback])

Result

AgentLabs natively supports markdown and plaintext so you can ask ChatGPT to write code.

AgentLabs also supports file upload and many other features that will help you prototype quickly with LangChain.

Full code

The full code is hosted here on Replit.

Limitation

In this version, we only send two messages to our model, so it has no memory.
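If you need a stopgap before then, you can keep a per-conversation history yourself and prepend it to the message list on every turn. Below is a minimal, library-agnostic sketch of the idea (the `ConversationMemory` class is hypothetical, not part of LangChain or AgentLabs):

```python
from collections import defaultdict

class ConversationMemory:
    """Stores an ordered (role, text) history per conversation id."""

    def __init__(self):
        self._histories = defaultdict(list)

    def append(self, conversation_id: str, role: str, text: str) -> None:
        # role would typically be "system", "human", or "ai"
        self._histories[conversation_id].append((role, text))

    def get(self, conversation_id: str) -> list:
        # Return a copy so callers can't mutate the stored history
        return list(self._histories[conversation_id])

memory = ConversationMemory()
memory.append("conv-1", "human", "Hello!")
memory.append("conv-1", "ai", "Hi! How can I help?")
```

In the handler, you would convert each stored pair into the matching HumanMessage or AIMessage before calling the model, and append the model's reply after each turn.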

In the next tutorial, I'll show you how to leverage LangChain's memory feature to store every conversation and make your ChatGPT even smarter.

Conclusion

LangChain and AgentLabs are powerful tools for quickly prototyping conversational AI applications.

If you liked this tutorial, feel free to smash the like button and react in the comments below :)
