In this tutorial, you'll learn how to create AutoGen AI agents that can remember conversations and use that memory in future discussions. We'll build a simple Software Development Consulting Team with two agents:
- Alex - Technical Architect (designs systems)
- Sam - Full-Stack Developer (builds applications)
They'll help a client build an e-commerce platform, remembering everything discussed and offering increasingly informed suggestions with each new conversation.
What is Memori?
Memori is an open-source memory engine that provides persistent, intelligent memory for any LLM using standard SQL databases. Memori uses multiple agents working together to intelligently promote essential long-term memories to short-term storage for faster context injection.
With a single line of code, memori.enable(), any LLM gains the ability to remember conversations, learn from interactions, and maintain context across sessions. The entire memory system is stored in a standard SQLite database (or PostgreSQL/MySQL for enterprise deployments), making it fully portable, auditable, and owned by the user.
Key features:
- Auto-recording: Automatically saves all conversations
- Works with existing databases: SQLite, PostgreSQL, MySQL, MongoDB
- Smart memory: AI decides what's important to remember
- Cross-session: Agents remember between different conversations
- Zero setup: Just initialize and enable - that's it!
Requirements
Before we start, we need to install some packages and set up our environment.
pip install memorisdk autogen-agentchat "autogen-ext[openai]" python-dotenv
Set up your OpenAI API key
You'll need an OpenAI API key to run this example.
import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
Step 1: Import Libraries
Let's import everything we need for our multi-agent conversation system.
import asyncio
import os
# AutoGen imports - for creating AI agent teams
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
# Memori import - for giving agents memory
from memori import Memori
# For loading environment variables
from dotenv import load_dotenv
load_dotenv()
print("All libraries imported successfully!")
Step 2: Initialize Memory System
This is the magic step! We create a memory system that will automatically record and remember all conversations.
# Create the memory system - this is where all conversations will be saved
memory = Memori(
    database_connect="sqlite:///consulting_memory.db",  # Local database file
    auto_ingest=True,        # Automatically save all conversations
    conscious_ingest=True,   # AI decides what's important to remember
    verbose=False,           # Set to True to see what's happening behind the scenes
    namespace="consulting"   # Separate memory space for this project
)
# Enable the memory system
memory.enable()
print("Memory system initialized!")
print("Database: consulting_memory.db")
print("Auto-recording enabled - all conversations will be remembered!")
Step 3: Create AI Agents
Now let's create our consulting team! We'll make two AI agents with different expertise.
# Set up the AI model (OpenAI GPT-4o-mini)
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    api_key=os.getenv("OPENAI_API_KEY"),
)
# Create Alex - Technical Architect
alex = AssistantAgent(
    name="Alex",
    model_client=model_client,
    system_message="""You are Alex, a Senior Technical Architect.
    You have persistent memory and remember:
    - Client requirements and constraints
    - Technical decisions made in past conversations
    - Budget and timeline discussions
    Always reference previous conversations when relevant.
    Keep your responses focused and practical.""",
)
# Create Sam - Full-Stack Developer
sam = AssistantAgent(
    name="Sam",
    model_client=model_client,
    system_message="""You are Sam, a Senior Full-Stack Developer.
    You have persistent memory and remember:
    - Client's technical preferences and team skills
    - Implementation decisions from past discussions
    - Development approaches we've recommended
    Build upon previous conversations and maintain consistency.
    Focus on practical implementation advice.""",
)
print("Alex (Technical Architect) created")
print("Sam (Full-Stack Developer) created")
print("Both agents have persistent memory enabled!")
Step 4: Create the Team of Agents
Let's put our agents together in a team that can collaborate on client problems.
# Create a team where agents take turns (round-robin)
consulting_team = RoundRobinGroupChat(
    participants=[alex, sam],  # Our two agents
    termination_condition=MaxMessageTermination(max_messages=6)  # Stop after 6 messages
)
print("Consulting team created!")
print("Team members: Alex (Architect) + Sam (Developer)")
print("They'll take turns responding to client questions")
Step 5: First Consultation - Setting Requirements
Let's simulate our first client meeting where they share their project requirements.
# First client conversation - gathering requirements
client_request_1 = """
Hi team! I'm Sarah, and I'm building a new e-commerce platform for my retail business.
Here are my requirements:
- Need to handle 10,000+ products
- Process payments securely
- Manage inventory in real-time
- My budget is $50,000
- My team knows React and Python well
- We prefer modern, maintainable technology
What architecture would you recommend?
"""
print("CLIENT REQUEST 1: Initial Requirements")
print("=" * 50)
print(client_request_1)
print("=" * 50)
print("\nTeam Response:")
# Run the team conversation
result_1 = await consulting_team.run(task=client_request_1)
# Show the team's response
for i, message in enumerate(result_1.messages, 1):
    print(f"\n{i}. {message.source}: {message.content[:300]}...")
Step 6: Follow-up Consultation - Database Decision
Now let's see the magic of memory! The client asks a follow-up question, and our agents should remember the previous conversation.
# Second client conversation - building on previous discussion
client_request_2 = """
Great recommendations from our last meeting!
Now I'm concerned about the database choice. Given our product catalog size
and the budget constraints we discussed, what specific database solution
would work best for our e-commerce platform?
Also, how should we handle the inventory tracking?
"""
print("CLIENT REQUEST 2: Database & Inventory (Notice: References previous meeting!)")
print("=" * 50)
print(client_request_2)
print("=" * 50)
print("\nTeam Response (with memory of previous conversation):")
# Run the team conversation - they should remember the $50K budget and 10K+ products
result_2 = await consulting_team.run(task=client_request_2)
# Show the team's response
for i, message in enumerate(result_2.messages, 1):
    print(f"\n{i}. {message.source}: {message.content[:300]}...")
Step 7: Third Consultation - Development Approach
Let's push the memory even further! The client asks about the development approach, adds new constraints (a 3-person team and a 6-month timeline), and refers back to the React and Python skills from the first meeting.
# Third client conversation - development strategy
client_request_3 = """
Perfect! The database recommendations make sense.
Now for the development approach - should we build this as a monolith first
or go straight to microservices?
Remember, we have a small team (just 3 developers) and need to launch in 6 months.
Also, keep in mind our React and Python skills that I mentioned earlier.
"""
print("CLIENT REQUEST 3: Development Approach (References team skills from first meeting!)")
print("=" * 50)
print(client_request_3)
print("=" * 50)
print("\nTeam Response (should remember React/Python skills + budget):")
# Run the team conversation - they should remember all previous context
result_3 = await consulting_team.run(task=client_request_3)
# Show the team's response
for i, message in enumerate(result_3.messages, 1):
    print(f"\n{i}. {message.source}: {message.content[:300]}...")
Step 8: Check What's in Memory
Let's peek behind the scenes and see what our memory system has learned!
# Let's see what the memory system has learned
print("MEMORY SYSTEM ANALYSIS")
print("=" * 40)
try:
    # Get memory statistics
    stats = memory.get_memory_stats()
    print(f"Total conversations recorded: {stats.get('total_conversations', 0)}")
    print(f"Total memories stored: {stats.get('total_memories', 0)}")
    print("Database location: consulting_memory.db")
    print("Namespace: consulting")
except Exception as e:
    print(f"Memory stats not available: {e}")
print("\nAll conversations have been automatically saved!")
print("If you restart this notebook and run the agents again,")
print(" they will remember everything from today's conversations.")
Step 9: Test Memory Persistence
Let's test if our agents truly remember by asking them directly what they learned about the client.
# Test memory recall
memory_test = """
Hey team, I want to make sure we're all on the same page.
Can you remind me of my key project requirements and the decisions
we've made so far? I want to make sure nothing was missed.
"""
print("MEMORY TEST: What do you remember about our project?")
print("=" * 50)
print(memory_test)
print("=" * 50)
print("\n Team's Memory Recall:")
# Run the memory test
result_test = await consulting_team.run(task=memory_test)
# Show what they remember
for i, message in enumerate(result_test.messages, 1):
    print(f"\n{i}. {message.source}: {message.content[:400]}...")
print("\nAmazing! The agents remembered the key details from all our conversations!")
Congratulations! You've Built Memory-Enhanced AI Agents!
What you accomplished:
- Created AI agents that work together as a team
- Gave them persistent memory using Memori
- Ran multiple conversations that build on each other
- Saw how memory makes conversations more helpful
Key insights from this demo:
- Memory makes agents smarter: They remembered budget ($50K), team skills (React/Python), and project constraints
- Conversations build naturally: Each discussion referenced previous context
- Zero manual work: Memori automatically captured and used relevant information
- Persistent across sessions: Restart the notebook and the agents will still remember!
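To see that persistence yourself, restart the notebook (or start a fresh script), point Memori at the same database file and namespace, and ask the team what it remembers. A minimal sketch of that fresh session (recreate the agents and team as in Steps 3-4 before running the query):
from memori import Memori

# In a brand-new session: same database file + namespace = same memories
memory = Memori(
    database_connect="sqlite:///consulting_memory.db",
    auto_ingest=True,
    conscious_ingest=True,
    namespace="consulting",
)
memory.enable()

# After recreating alex, sam, and consulting_team (Steps 3-4), ask:
result = await consulting_team.run(
    task="Quick check: what budget and tech stack did we agree on for Sarah's platform?"
)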
Real-world applications:
- Customer Support: Remember customer history and preferences
- Project Management: Track decisions, requirements, and progress
- Personal Assistant: Remember your preferences and past conversations
- Educational Tutoring: Track student progress and learning style
- Medical Consultation: Remember patient history and treatment plans
Running the Complete Example
Here's the complete code that you can copy and run as a single Python script:
import asyncio
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from memori import Memori
from dotenv import load_dotenv
async def main():
    load_dotenv()

    # Initialize Memori
    memory = Memori(
        database_connect="sqlite:///consulting_memory.db",
        auto_ingest=True,
        conscious_ingest=True,
        verbose=False,
        namespace="consulting"
    )
    memory.enable()

    # Create model client
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
        api_key=os.getenv("OPENAI_API_KEY"),
    )

    # Create agents
    alex = AssistantAgent(
        name="Alex",
        model_client=model_client,
        system_message="You are Alex, a Senior Technical Architect with persistent memory...",
    )
    sam = AssistantAgent(
        name="Sam",
        model_client=model_client,
        system_message="You are Sam, a Senior Full-Stack Developer with persistent memory...",
    )

    # Create team
    consulting_team = RoundRobinGroupChat(
        participants=[alex, sam],
        termination_condition=MaxMessageTermination(max_messages=6)
    )

    # Run conversations
    client_requests = [
        "Hi team! I'm Sarah, building an e-commerce platform...",
        "Great recommendations! Now about the database choice...",
        "Perfect! Now for development approach - monolith or microservices?"
    ]

    for i, request in enumerate(client_requests, 1):
        print(f"\n=== CLIENT REQUEST {i} ===")
        result = await consulting_team.run(task=request)
        for j, message in enumerate(result.messages, 1):
            print(f"{j}. {message.source}: {message.content[:200]}...")

if __name__ == "__main__":
    asyncio.run(main())
Save this as a Python file and run it to see the memory-enhanced agents in action!