Kevin Piacentini
Adding Memory to your ChatGPT built with LangChain

In the previous article, I showed you how you can create your very own ChatGPT using LangChain and AgentLabs.

If you haven't read that article yet, I suggest you start there.

One limitation of our work was that ChatGPT did not have any memory.

Every time a user sent a message, we only provided two messages to our model (one as an initial prompt, one for the user's request).

In this version, I will show you how we can add memory to provide more context to our LLM for every request.

Getting started

We'll start from the code we already developed here.

Here it is:

from agentlabs.agent import Agent
from agentlabs.chat import IncomingChatMessage, MessageFormat
from agentlabs.project import Project
from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.memory import ChatMessageHistory
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage
from langchain.schema.output import LLMResult
import os


class AgentLabsStreamingCallback(BaseCallbackHandler):
  def __init__(self, agent: Agent, conversation_id: str):
      super().__init__()
      self.agent = agent
      self.conversation_id = conversation_id

  def on_llm_start(
      self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
  ) -> Any:
      self.stream = self.agent.create_stream(format=MessageFormat.MARKDOWN, conversation_id=self.conversation_id)

  def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
      self.stream.write(token)

  def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
      self.stream.end()

alabs = Project(
  project_id="4e51941a-a76c-4593-8921-c60e984aaa4e",
  agentlabs_url="https://chatgpt-langchain.app.agentlabs.dev",
  secret=os.environ['AGENTLABS_SECRET']
)

def handle_task(message: IncomingChatMessage):
  agent = alabs.agent("7d8ccee9-9316-4634-b8b5-e4628d438d6b")
  messages: List[BaseMessage] = [
          SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."),
          HumanMessage(content=message.text)
  ]
  callback = AgentLabsStreamingCallback(agent=agent, conversation_id=message.conversation_id)
  llm(messages, callbacks=[callback])

llm = ChatOpenAI(streaming=True)

alabs.on_chat_message(handle_task)

alabs.connect()
alabs.wait()

Preparing the Memory

To add some memory to our example, we'll leverage the ChatMessageHistory class provided by LangChain.

As the documentation says:

The ChatMessageHistory class is responsible for remembering all previous chat interactions. These can then be passed directly back into the model, summarized in some way, or some combination.
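To make the idea concrete, here's a minimal, runnable sketch of what such a history does. The classes here are simplified stand-ins (LangChain's real `HumanMessage`, `AIMessage`, and `ChatMessageHistory` carry more metadata), but the shape is the same: messages are appended in order and exposed as a list you can pass back to the model.

```python
from dataclasses import dataclass

# Simplified stand-ins for LangChain's message classes.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

class MiniChatMessageHistory:
    """A bare-bones sketch of ChatMessageHistory: an ordered message list."""
    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

history = MiniChatMessageHistory()
history.add_message(HumanMessage(content="What's 2 + 2?"))
history.add_message(AIMessage(content="4"))
# The full exchange is now available to replay on the next request.
```

On the next user turn, you would pass `history.messages` back to the model so it sees the whole exchange, not just the latest message.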

In AgentLabs, every message we receive is linked to a conversation because the AgentLabs UI manages this for us.

So what we'll do is store a ChatMessageHistory per conversation_id. To do so, let's create a class responsible for managing our conversations' memory.

class ConversationMemoryManager:
    _conversation_id_to_memory: Dict[str, ChatMessageHistory] = {}

    def get_memory(self, conversation_id: str) -> ChatMessageHistory:
        if conversation_id not in self._conversation_id_to_memory:
            self._conversation_id_to_memory[conversation_id] = ChatMessageHistory()
        return self._conversation_id_to_memory[conversation_id]

memory_manager = ConversationMemoryManager()

As you can see, this class just helps us organize and retrieve our ChatMessageHistory by conversation_id, nothing too complex.

Using our ConversationMemoryManager

Now that we have our ConversationMemoryManager, we can use it every time we receive a message to pass the history context to our ChatGPT.

Let's update our message handler:

def handle_task(message: IncomingChatMessage):
    agent = alabs.agent("7d8ccee9-9316-4634-b8b5-e4628d438d6b")

    memory = memory_manager.get_memory(message.conversation_id)

    if len(memory.messages) == 0:
        memory.add_message(SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."))

    memory.add_message(HumanMessage(content=message.text))

    callback = AgentLabsStreamingCallback(agent, message.conversation_id)
    output = llm(memory.messages, callbacks=[callback])

    memory.add_message(AIMessage(content=output.content))

This code should be easy to follow.

Every time we get a message from the user, we retrieve (or create) the corresponding conversation's ChatMessageHistory.

memory = memory_manager.get_memory(message.conversation_id)

If the history is empty, which means it's a new conversation, we add the initial context prompt:

if len(memory.messages) == 0:
    memory.add_message(SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."))

Then, we add the user's message to the history and pass the history when we call our llm.

memory.add_message(HumanMessage(content=message.text))

callback = AgentLabsStreamingCallback(agent, message.conversation_id)
output = llm(memory.messages, callbacks=[callback])

Finally, we capture the LLM's output so we can append its response to the ChatMessageHistory as well:

output = llm(memory.messages, callbacks=[callback])

memory.add_message(AIMessage(content=output.content))
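Putting the whole flow together, here's a self-contained sketch of the per-turn logic with a stubbed model. `FakeLLM` is a hypothetical stand-in for ChatOpenAI and `Message` simplifies LangChain's message types, but the sequence (seed system prompt, append user message, call the model with the full history, append the reply) is exactly what the handler above does:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    role: str
    content: str

class FakeLLM:
    """Hypothetical stand-in for ChatOpenAI: echoes the last user message."""
    def __call__(self, messages: List[Message]) -> Message:
        return Message("ai", f"You said: {messages[-1].content}")

history: List[Message] = []
llm = FakeLLM()

def handle_turn(user_text: str) -> str:
    # Seed the system prompt only on the first turn of the conversation.
    if not history:
        history.append(Message("system", "You are a helpful assistant."))
    history.append(Message("human", user_text))
    reply = llm(history)          # the model sees the full history
    history.append(reply)         # remember the model's answer too
    return reply.content

handle_turn("hello")
handle_turn("remember me?")
# After two turns: 1 system + 2 human + 2 ai = 5 messages
```

Because both the user's messages and the model's replies are appended, each subsequent call carries the whole conversation as context.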

Et voilà

Congrats, you just added some basic memory support to your own ChatGPT app.

Full code

Here's a replit containing the entire code we added so far.

Result in video

Here's the final result. As you can see, now our ChatGPT has conversation-scoped memory :)

Conclusion

In this series, we saw how to create a basic clone of ChatGPT with LangChain and AgentLabs.

In the next articles, I will show you more examples of what you can build with these tools!

If you liked this tutorial, feel free to smash the like button and react in the comments below :)

Top comments (2)

Kirill Dedeshin: Welcome GPT lovers^^

Kevin Piacentini: Thanks @kirilldedeshin!