In this Story, I have a super quick tutorial showing you how to build a multi-agent chatbot using LangGraph, a Knowledge Graph, and Long Term Memory, giving you a powerful agent chatbot for business or personal use.
If you’ve worked on a RAG project, you’ve likely run into the limits of a static knowledge base when handling new or changing information. RAG systems rely on knowledge bases that are fixed and don’t update based on new user interactions.
This is similar to how graph and relational databases have different data structures, making it hard to compare or translate queries between them. In RAG, the problem is that when the context or information changes, the knowledge base doesn’t adapt, so the system serves outdated or irrelevant answers.
This is especially problematic when users ask questions or give inputs the system hasn’t seen before. For example, if a user asks about a new topic, a RAG system might not have the right information, or it could fall back on old data that no longer applies.
Just as translating between Cypher and SQL can cause errors or mismatches due to differences in data handling, turning user input into relevant answers is tricky when the knowledge base is not dynamic.
It’s also worth noting that AI agents face memory bottlenecks in complex tasks. Agents built on traditional large language models (LLMs) are constrained by limited context windows and have difficulty integrating long-term conversation history and dynamic data, which limits performance and easily leads to hallucinations.
That’s where Graphiti comes in: it builds dynamic, temporally aware knowledge graphs that represent complex, evolving relationships between entities over time. It ingests both unstructured and structured data, and the resulting graph can be queried using a fusion of time-based, full-text, semantic, and graph-algorithm approaches.
So, let me give you a quick demo of a live chatbot to show you what I mean.
Check a Video
When a user asks, “What sizes do the TinyBirds Wool Runners in Natural Black come in?” the Agent loads product data from a JSON file and creates “episodes” in a knowledge graph. It also sets up a user profile (Jess) who is interested in buying shoes. The agent finds Jess’s unique node and the ManyBirds brand node for reference.
A tool function called get_shoe_data is defined to search the graph for product details and format them into a list of facts. A chatbot backed by GPT-4.1-mini is created with temperature 0 and instructed to act like a smart, helpful salesperson who gathers Jess’s preferences.
When the user asks a question, the chatbot searches Jess’s knowledge-graph connections for related facts, builds a facts string, and uses it to respond, always logging conversations for memory. If needed, it calls the get_shoe_data tool to fetch fresh information before replying.
So, by the end of this Story, you will understand what Graphiti is, how it works, what the difference between GraphRAG and Graphiti is, and how we are going to use LangGraph, a Knowledge Graph, and Long Term Memory to create a powerful agentic chatbot.
What is Graphiti?
Graphiti is an innovative tool that stands out for its ability to build dynamic, time-aware knowledge graphs, which are critical for applications that need to understand complex relationships between entities over time.
Unlike traditional knowledge graphs, Graphiti is uniquely designed to handle information fluidity, making it particularly suitable for applications in sales, customer service, health, and finance that need to adapt to data changes.
It leverages OpenAI’s LLMs for reasoning and embedding, ensuring state-of-the-art performance in agent memory applications.
How does it work?
1. User Input:
The user reported a problem with their Samsung Galaxy S23, purchased on February 1, 2024. They explain that the phone keeps overheating even when doing simple tasks. They mention that tech support had previously suggested clearing background apps on February 10, 2024, but the issue still happens.
2. Episode Node Ingestion:
Graphiti ingests the user’s latest message as a new episode node in the knowledge graph, recording the current timestamp. It also retrieves past conversations related to “phone overheating problems.”
3. Entity and Relation Extraction:
Graphiti extracts the key entities:
- Phone model: Samsung Galaxy S23
- Purchase date: February 1, 2024
- Problem description: Phone overheating
- Previous contact time: February 10, 2024
- Previous solution: Clear background apps
It also identifies relationships:
- [User] purchased [Phone]
- [Phone] has [Overheating Problem]
- [Clear Background Apps] was a [Tried Solution]
4. Community Detection:
The system organises all information about the Samsung Galaxy S23 into a community to make future retrieval faster and more accurate.
5. Dynamic Information Update:
Graphiti updates the solution status of [Clear Background Apps] from [Solution] to [Unresolved] since the problem remains.
6. Context Retrieval:
Graphiti finds related information by:
Full-text search: Searching for “phone overheating,” “Samsung Galaxy S23,” “heating issues.”
Cosine similarity search: Finding similar issues like “battery overheating” or “device running hot.”
Breadth-first search: Starting from the Samsung Galaxy S23 community to find known causes and fixes.
7. Response Generation:
Based on the information found, the agent suggests the user:
- Check for any system updates and install them.
- Avoid using the phone while charging.
- Turn off high-power features like 5G, Bluetooth, or high screen brightness when not needed.
It also asks the user:
- Does the overheating happen during specific apps or games?
- Has the phone ever shown a temperature warning?
8. Knowledge Update:
Suppose the user later confirms that turning off 5G helped fix the overheating. In that case, Graphiti will record [Turn off 5G] as a valid solution for [Overheating Problem] and update the timestamps of related entities and relationships for future cases.
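To make the temporal bookkeeping in steps 5 and 8 concrete, here is a minimal, self-contained sketch in plain Python (not the graphiti-core API) of how a fact edge can be invalidated and replaced while preserving history, in the spirit of Graphiti’s temporally aware edges:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FactEdge:
    """A relationship ('fact') with a validity interval, mimicking a temporal edge."""
    subject: str
    predicate: str
    obj: str
    valid_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    invalid_at: Optional[datetime] = None  # None means the fact is still current

def invalidate(edge: FactEdge) -> None:
    """Mark a fact as no longer valid, keeping it in the graph for history."""
    edge.invalid_at = datetime.now(timezone.utc)

# Step 3: the originally extracted 'tried solution' fact
clear_apps = FactEdge('Clear Background Apps', 'is_solution_for', 'Overheating Problem')
edges = [clear_apps]

# Step 5: the problem persists, so the old solution is invalidated, not deleted
invalidate(clear_apps)

# Step 8: the user confirms a new fix, added as a new, current fact
edges.append(FactEdge('Turn off 5G', 'is_solution_for', 'Overheating Problem'))

current = [e for e in edges if e.invalid_at is None]
print([e.subject for e in current])  # ['Turn off 5G']
```

The key design point is that the invalidated fact stays in the graph with its timestamps, so the agent can still answer questions like “what did we already try?” in future conversations.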
GraphRAG vs Graphiti
GraphRAG and Graphiti are both methods that use knowledge graphs to make large language models smarter, but they focus on different things. GraphRAG improves retrieval by connecting information better through a static knowledge graph, helping LLMs find and understand data more accurately and quickly, especially when the knowledge doesn’t change often.
In contrast, Graphiti acts like a dynamic memory system that constantly updates over time, handling both structured and unstructured data, tracking new information, and maintaining historical context. While GraphRAG is great for making searches smarter, Graphiti is designed to help LLMs remember past conversations and evolving information, making it ideal for real-world applications where things are always changing.
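To make the contrast concrete, here is a toy illustration (not either library’s real API): a static index answers from a frozen snapshot, while a dynamic memory ingests each new interaction before answering.

```python
# Toy illustration only. A static KB is built once and never changes;
# a dynamic memory appends new facts as they arrive.

static_kb = {'wool runners': 'available in sizes 8-12'}  # frozen at build time

class DynamicMemory:
    """Minimal stand-in for an ever-updating agent memory."""
    def __init__(self):
        self.facts: list[str] = []

    def ingest(self, fact: str) -> None:
        self.facts.append(fact)

    def recall(self, query: str) -> list[str]:
        return [f for f in self.facts if query in f]

mem = DynamicMemory()
mem.ingest('wool runners available in sizes 8-12')
mem.ingest('size 9 wool runners sold out today')  # new information, same day

# The static KB still claims full availability; the dynamic memory
# returns both the original fact and the update.
print(mem.recall('wool runners'))
```

In practice Graphiti handles this with timestamped graph edges rather than a flat list, but the difference in behaviour is the same: the answer reflects the latest ingested information.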
Let’s Start Coding :
Before we dive into our application, we will create an ideal environment for the code to work. For this, we need to install the required packages:
pip install graphiti-core
pip install langchain-openai
pip install langgraph
pip install langchain_core
pip install ipywidgets
The next step is the usual one: We will import the relevant libraries, the significance of which will become evident as we proceed.
Graphiti-core is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments.
import asyncio
import json
import logging
import os
import sys
import uuid
from contextlib import suppress
from datetime import datetime, timezone
from pathlib import Path
from typing import Annotated
import ipywidgets as widgets
from dotenv import load_dotenv
from IPython.display import Image, display
from typing_extensions import TypedDict
from graphiti_core import Graphiti
from graphiti_core.edges import EntityEdge
from graphiti_core.nodes import EpisodeType
from graphiti_core.utils.maintenance.graph_data_operations import clear_data
from graphiti_core.search.search_config_recipes import NODE_HYBRID_SEARCH_EPISODE_MENTIONS
from langchain_core.messages import AIMessage, SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, add_messages
from langgraph.prebuilt import ToolNode
load_dotenv()
We define a setup_logging function that creates a root logger and sets its level to ERROR, so it only processes error and critical messages. It then creates a console handler that outputs to standard output and sets its level to INFO, although only error messages will pass through due to the logger’s level. A formatter is set up to control how the log messages look, showing the logger’s name, log level, and the message itself.
def setup_logging():
logger = logging.getLogger()
logger.setLevel(logging.ERROR)
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
return logger
Next, we set up the Neo4j URI, username, and password and create a Graphiti client instance with these credentials. We then try to build the database indices and constraints, printing a success message if it works or an error message if something goes wrong.
Configure Graphiti
neo4j_uri = os.environ.get('NEO4J_URI', 'bolt://localhost:7687')
neo4j_user = os.environ.get('NEO4J_USER', 'neo4j')
neo4j_password = os.environ.get('NEO4J_PASSWORD', 'Password')
client = Graphiti(
neo4j_uri,
neo4j_user,
neo4j_password,
)
try:
await client.build_indices_and_constraints()
print("Successfully created indices")
except Exception as e:
print(f"Failed to create indices: {e}")
Next, we define an ingest_products_data function that reads a JSON file, manybirds_products.json, located inside a GRAPHTI folder one level above the current working directory. It loads the list of products from the file and loops through each product, using the Graphiti client to add each one as an episode.
For each episode, it uses the product’s title if available, includes the product’s data (excluding the images field), and attaches a source description, a source type, and the current UTC time as the reference time. After defining the function, we call it immediately with the client instance to start the ingestion process.
async def ingest_products_data(client: Graphiti):
script_dir = Path.cwd().parent
json_file_path = script_dir / 'GRAPHTI' / 'manybirds_products.json'
with open(json_file_path) as file:
products = json.load(file)['products']
for i, product in enumerate(products):
await client.add_episode(
name=product.get('title', f'Product {i}'),
episode_body=str({k: v for k, v in product.items() if k != 'images'}),
source_description='ManyBirds products',
source=EpisodeType.json,
reference_time=datetime.now(timezone.utc),
)
await ingest_products_data(client)
We then set the user’s name to 'jess' and create an episode stating that Jess is interested in buying shoes, using SalesBot as the source and recording the current UTC time. We search for Jess’s node using a hybrid search method and retrieve her node UUID, then do the same for the ManyBirds node for later use.
user_name = 'jess'
await client.add_episode(
name='User Creation',
episode_body=(f'{user_name} is interested in buying a pair of shoes'),
source=EpisodeType.text,
reference_time=datetime.now(timezone.utc),
source_description='SalesBot',
)
# let's get Jess's node uuid
nl = await client._search(user_name, NODE_HYBRID_SEARCH_EPISODE_MENTIONS)
user_node_uuid = nl.nodes[0].uuid
# and the ManyBirds node uuid
nl = await client._search('ManyBirds', NODE_HYBRID_SEARCH_EPISODE_MENTIONS)
manybirds_node_uuid = nl.nodes[0].uuid
The edges_to_facts_string function takes a list of EntityEdge objects, extracts the fact from each edge, and combines them into a single string in which each fact appears on its own line prefixed with a dash.
def edges_to_facts_string(entities: list[EntityEdge]):
return '-' + '\n- '.join([edge.fact for edge in entities])
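For instance, given two hypothetical edges whose fact attributes are short sentences, the helper produces a dash-prefixed list. The function is repeated here, with simple stand-in objects instead of real EntityEdge instances, so the snippet is self-contained:

```python
from types import SimpleNamespace

# Stand-ins for EntityEdge objects; only the .fact attribute is used
edges = [
    SimpleNamespace(fact='Jess is interested in buying shoes'),
    SimpleNamespace(fact='TinyBirds Wool Runners come in Natural Black'),
]

def edges_to_facts_string(entities):
    return '-' + '\n- '.join([edge.fact for edge in entities])

print(edges_to_facts_string(edges))
# -Jess is interested in buying shoes
# - TinyBirds Wool Runners come in Natural Black
```

Note that, as written, the first dash has no trailing space; the LLM is not sensitive to that detail, since the facts string is only injected into the system prompt as context.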
Then we define an asynchronous tool function called get_shoe_data, which searches the Graphiti graph, centred on the ManyBirds node, for information matching a query, formats the results into a list of facts, and returns it as a string.
The function is wrapped in a tools list, and a ToolNode is created to manage it.
Finally, a ChatOpenAI model (gpt-4.1-mini) is set up with zero randomness (temperature=0) and bound to the tool, allowing the model to call it during conversations.
@tool
async def get_shoe_data(query: str) -> str:
"""Search the graphiti graph for information about shoes"""
edge_results = await client.search(
query,
center_node_uuid=manybirds_node_uuid,
num_results=10,
)
return edges_to_facts_string(edge_results)
tools = [get_shoe_data]
tool_node = ToolNode(tools)
llm = ChatOpenAI(model='gpt-4.1-mini', temperature=0).bind_tools(tools)
await tool_node.ainvoke({'messages': [await llm.ainvoke('wool shoes')]})
After that, we define a State TypedDict to hold conversation details (messages, user_name, and user_node_uuid). The chatbot function takes the current state and checks whether there are previous messages; if so, it formats the last message into a graphiti_query string and searches the knowledge graph, centred on the user’s node UUID, for relevant facts, turning them into a facts_string.
A SystemMessage is then created instructing the AI to act like a skilled shoe salesperson for ManyBirds, always selling while being helpful, and prompting it to gather key information about the user’s shoe size, needs, style preferences, and budget. The system message plus the conversation history is then sent to the LLM for a response.
After the AI responds, we log the episode into the graph for future searches without blocking the chatbot’s flow. Finally, the chatbot returns the AI's response wrapped inside a messages list.
class State(TypedDict):
messages: Annotated[list, add_messages]
user_name: str
user_node_uuid: str
async def chatbot(state: State):
facts_string = None
if len(state['messages']) > 0:
last_message = state['messages'][-1]
graphiti_query = f'{"SalesBot" if isinstance(last_message, AIMessage) else state["user_name"]}: {last_message.content}'
# search graphiti using Jess's node uuid as the center node
# graph edges (facts) further from the Jess node will be ranked lower
edge_results = await client.search(
graphiti_query, center_node_uuid=state['user_node_uuid'], num_results=5
)
facts_string = edges_to_facts_string(edge_results)
system_message = SystemMessage(
content=f"""You are a skillfull shoe salesperson working for ManyBirds. Review information about the user and their prior conversation below and respond accordingly.
Keep responses short and concise. And remember, always be selling (and helpful!)
Things you'll need to know about the user in order to close a sale:
- the user's shoe size
- any other shoe needs? maybe for wide feet?
- the user's preferred colors and styles
- their budget
Ensure that you ask the user for the above if you don't already know.
Facts about the user and their conversation:
{facts_string or 'No facts about the user and their conversation'}"""
)
messages = [system_message] + state['messages']
response = await llm.ainvoke(messages)
# add the response to the graphiti graph.
# this will allow us to use the graphiti search later in the conversation
# we're doing async here to avoid blocking the graph execution
asyncio.create_task(
client.add_episode(
name='Chatbot Response',
episode_body=f'{state["user_name"]}: {state["messages"][-1]}\nSalesBot: {response.content}',
source=EpisodeType.message,
reference_time=datetime.now(timezone.utc),
source_description='Chatbot',
)
)
return {'messages': [response]}
We first create a StateGraph with graph_builder = StateGraph(State) to manage the chatbot flow, and a MemorySaver with memory = MemorySaver() to track conversation history. Then we define should_continue(state, config), which checks whether the chatbot's last message triggered a tool call: if not, the graph ends; if so, it continues. We add nodes with graph_builder.add_node('agent', chatbot) and graph_builder.add_node('tools', tool_node), connect them starting with graph_builder.add_edge(START, 'agent'), then use graph_builder.add_conditional_edges('agent', should_continue, {'continue': 'tools', 'end': END}), and loop back from tools to the agent with graph_builder.add_edge('tools', 'agent'). Finally, we compile the graph with graph_builder.compile(checkpointer=memory).
graph_builder = StateGraph(State)
memory = MemorySaver()
# Define the function that determines whether to continue or not
async def should_continue(state, config):
messages = state['messages']
last_message = messages[-1]
# If there is no function call, then we finish
if not last_message.tool_calls:
return 'end'
# Otherwise if there is, we continue
else:
return 'continue'
graph_builder.add_node('agent', chatbot)
graph_builder.add_node('tools', tool_node)
graph_builder.add_edge(START, 'agent')
graph_builder.add_conditional_edges('agent', should_continue, {'continue': 'tools', 'end': END})
graph_builder.add_edge('tools', 'agent')
graph = graph_builder.compile(checkpointer=memory)
with suppress(Exception):
display(Image(graph.get_graph().draw_mermaid_png()))
We prepare an initial state for the conversation: a dictionary whose 'messages' contains a user message asking, “What sizes do the TinyBirds Wool Runners in Natural Black come in?”, along with 'user_name' and 'user_node_uuid' for tracking the user. We also pass a config dictionary with a unique 'thread_id' generated by uuid.uuid4().hex to identify the conversation thread.
await graph.ainvoke(
{
'messages': [
{
'role': 'user',
'content': 'What sizes do the TinyBirds Wool Runners in Natural Black come in?',
}
],
'user_name': user_name,
'user_node_uuid': user_node_uuid,
},
config={'configurable': {'thread_id': uuid.uuid4().hex}},
)
We set up a conversation interface using widgets.Output() for displaying messages, along with a config containing a unique thread_id and a user_state holding the user’s name and node UUID.
The process_input function handles sending user input to the conversation graph: it appends the user’s message to the output, builds a graph_state containing the latest input, and streams events from the graph.
Each AI response is appended to the output in real time. If there's an error, it prints the error message. The on_submit function grabs input from the text box and triggers process_input when the submit button is clicked.
conversation_output = widgets.Output()
config = {'configurable': {'thread_id': uuid.uuid4().hex}}
user_state = {'user_name': user_name, 'user_node_uuid': user_node_uuid}
async def process_input(user_state: State, user_input: str):
conversation_output.append_stdout(f'\nUser: {user_input}\n')
conversation_output.append_stdout('\nAssistant: ')
graph_state = {
'messages': [{'role': 'user', 'content': user_input}],
'user_name': user_state['user_name'],
'user_node_uuid': user_state['user_node_uuid'],
}
try:
async for event in graph.astream(
graph_state,
config=config,
):
for value in event.values():
if 'messages' in value:
last_message = value['messages'][-1]
if isinstance(last_message, AIMessage) and isinstance(
last_message.content, str
):
conversation_output.append_stdout(last_message.content)
except Exception as e:
conversation_output.append_stdout(f'Error: {e}')
def on_submit(b):
user_input = input_box.value
input_box.value = ''
asyncio.create_task(process_input(user_state, user_input))
input_box = widgets.Text(placeholder='Type your message here...')
submit_button = widgets.Button(description='Send')
submit_button.on_click(on_submit)
conversation_output.append_stdout('Assistant: Hello, how can I help you find shoes today?')
display(widgets.VBox([input_box, submit_button, conversation_output]))
Conclusion :
Graphiti not only advances dynamic, temporally aware knowledge graphs for agent memory but also shows promise for practical applications in sales, customer service, health, and finance, highlighting the tool’s potential impact on real-world scenarios.
If you think this article could be helpful to your friends, please forward it to them.
I would highly appreciate it if you:
- ❣ Join my Patreon: https://www.patreon.com/GaoDalie_AI
- Book an Appointment with me: https://topmate.io/gaodalie_ai
- Support the Content (every Dollar goes back into the video): https://buymeacoffee.com/gaodalie98d
- Subscribe to the Newsletter for free: https://substack.com/@gaodalie