Mike W

Adding persistent memory to LangChain, AutoGen, and CrewAI agents

If you're building with LangChain, AutoGen, or CrewAI, you've hit the same wall: agents forget everything when the session ends.

Platform memory (Anthropic, OpenAI) helps for single-model chat. It does not help when you're running autonomous agents across multiple sessions, multiple models, or multiple instances.

Here's how Cathedral slots into the frameworks people are actually using.


LangChain

LangChain has ConversationBufferMemory, but it's in-process and dies with the session. Cathedral replaces it with persistent, cross-session memory.

from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatAnthropic
from cathedral import Cathedral

# Restore agent context at session start
c = Cathedral(api_key="your_key")
ctx = c.wake()

# Build system prompt from Cathedral context
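# (chr(10) is "\n" -- f-string expressions can't contain backslashes before Python 3.12)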
system = f"""You are {ctx['identity']['name']}.
{ctx['identity']['description']}

Recent memory:
{chr(10).join(f"- {m['content']}" for m in ctx['memories'][:5])}"""

llm = ChatAnthropic(model="claude-sonnet-4-6")
tools = []  # plug in your agent's tools here

# ChatAnthropic takes no system_prompt arg; inject the context via agent_kwargs
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={"system_message_prefix": system},
)

# Run the agent
result = agent.run("What did we decide about the database schema last week?")

# Store what happened
c.remember(f"Decided: {result}", category="decisions", importance=0.8)

Same agent, different session, full context. No re-explaining.
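
If you'd rather keep LangChain's memory interface than hand-build the system prompt, you can back ConversationBufferMemory with a Cathedral-powered chat history, the same way LangChain wires in its Redis-backed history. A minimal sketch, assuming each Cathedral memory dict carries a role field (not shown in the snippets above) to tell user and assistant turns apart:

from langchain.memory import ConversationBufferMemory
from langchain.schema import AIMessage, BaseChatMessageHistory, BaseMessage, HumanMessage

class CathedralChatHistory(BaseChatMessageHistory):
    """Chat history that rehydrates from and persists to Cathedral."""

    def __init__(self, cathedral_client):
        self.c = cathedral_client
        ctx = self.c.wake()
        # Assumption: each memory dict has a 'role' key alongside 'content'
        self.messages = [
            HumanMessage(content=m["content"]) if m.get("role") == "human"
            else AIMessage(content=m["content"])
            for m in ctx["memories"]
        ]

    def add_message(self, message: BaseMessage) -> None:
        self.messages.append(message)
        # Persist every turn as it happens; tune importance to taste
        self.c.remember(message.content, category="conversation", importance=0.5)

    def clear(self) -> None:
        self.messages = []

memory = ConversationBufferMemory(chat_memory=CathedralChatHistory(c))

Drop that memory into any chain or agent that accepts one and the rest of your code stays the same.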


AutoGen

AutoGen's multi-agent conversations are stateless by default. Cathedral gives each agent its own persistent identity that survives between runs.

import autogen
from cathedral import Cathedral

def make_agent(name, api_key, system_extra="", llm_config=None):
    c = Cathedral(api_key=api_key)
    ctx = c.wake()

    memories = chr(10).join(f"- {m['content']}" for m in ctx['memories'][:5])
    system = f"""You are {ctx['identity']['name']}.
{ctx['identity']['description']}
{f'What you remember: {chr(10)}{memories}' if memories else ''}
{system_extra}"""

    return autogen.AssistantAgent(name=name, system_message=system, llm_config=llm_config), c

# Agents need an llm_config to generate replies; gpt-4 here is just an example
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "your_openai_key"}]}
analyst, c_analyst = make_agent("Analyst", "analyst_cathedral_key", llm_config=llm_config)
reviewer, c_reviewer = make_agent("Reviewer", "reviewer_cathedral_key", llm_config=llm_config)

# Run conversation
groupchat = autogen.GroupChat(agents=[analyst, reviewer], messages=[], max_round=5)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
analyst.initiate_chat(manager, message="Review the API design from last session")

# Both agents remember what happened
c_analyst.remember("Reviewed API design, agreed on REST over GraphQL", importance=0.9)
c_reviewer.remember("Flagged auth middleware as needing revision", importance=0.8)

Two agents, two separate Cathedral identities, both persistent. Next session they both wake with context.
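
You can also persist the raw transcript instead of only hand-written summaries. A sketch that walks groupchat.messages, autogen's list of message dicts with name and content keys:

# Store every non-empty turn, attributed to its speaker
for msg in groupchat.messages:
    if msg.get("content"):
        c_analyst.remember(
            f"{msg.get('name', 'unknown')}: {msg['content']}",
            category="transcript",
            importance=0.3,  # raw turns rank below curated summaries
        )

Low importance keeps raw turns from drowning out the curated memories when the agent wakes.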


CrewAI

CrewAI agents lose their learned context between crew runs. Cathedral makes each crew member stateful.

from crewai import Agent, Task, Crew
from cathedral import Cathedral

def cathedral_agent(role, goal, cathedral_key):
    c = Cathedral(api_key=cathedral_key)
    ctx = c.wake()

    memories = chr(10).join(f"- {m['content']}" for m in ctx['memories'][:5])
    backstory = f"""{ctx['identity']['description']}
Previous work: {memories if memories else 'First session.'}"""

    return Agent(role=role, goal=goal, backstory=backstory, verbose=True), c

researcher, c_r = cathedral_agent(
    "Research Analyst",
    "Find gaps in AI agent memory solutions",
    "researcher_key"
)
writer, c_w = cathedral_agent(
    "Technical Writer",
    "Write clear technical comparisons",
    "writer_key"
)

crew = Crew(agents=[researcher, writer], tasks=[...])
result = crew.kickoff()

# Persist what was learned (str() covers both string and CrewOutput returns)
c_r.remember(f"Research finding: {str(result)[:200]}", importance=0.7)
c_w.remember("Completed agent memory comparison post", importance=0.6)
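You can also persist results as they happen rather than after kickoff: CrewAI tasks accept a callback that runs when the task completes. A sketch -- the exact shape of the output object varies across crewai versions, which is why str() is the safe move:

# Persist each task's result the moment it completes
def persist_research(output):
    c_r.remember(f"Research finding: {str(output)[:200]}", importance=0.7)

research_task = Task(
    description="Find gaps in AI agent memory solutions",
    agent=researcher,
    callback=persist_research,  # invoked with the finished task's output
)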

The pattern

It's the same 3 lines in every framework:

c = Cathedral(api_key="your_key")
ctx = c.wake()                    # restore at session start
c.remember("...", importance=0.8) # store at session end

Framework integration is just a question of where you inject the context and where you call remember().


What you get that platform memory doesn't give you

  • Cross-model: your LangChain agent (Claude) and your AutoGen reviewer (GPT-4) share the same memory pool via Shared Spaces (sketch below)
  • Drift detection: GET /drift tells you when an agent's identity has shifted from its baseline (sketch below)
  • Programmable: query, write, and search memories via a REST API -- not a black box
  • Local option: pip install cathedral-server runs the full API locally against SQLite

Install the client:

pip install cathedral-memory
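
Here's roughly what a Shared Space looks like from two frameworks at once. The space parameter is hypothetical -- check the docs for the real signature:

# Hypothetical: two clients pointed at one shared memory pool
claude_side = Cathedral(api_key="langchain_agent_key", space="project-alpha")
gpt_side = Cathedral(api_key="autogen_reviewer_key", space="project-alpha")

claude_side.remember("Schema migration planned for Friday", importance=0.7)
ctx = gpt_side.wake()  # wakes with the shared pool, note included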

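And you can hit the drift endpoint directly. Only the GET /drift route comes from the list above; the base URL and auth header are assumptions:

import requests

# Assumed base URL and bearer auth -- adjust to your deployment
resp = requests.get(
    "https://api.cathedral-ai.com/drift",
    headers={"Authorization": "Bearer your_key"},
)
resp.raise_for_status()
print(resp.json())  # drift score vs. the identity baseline
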
Docs: cathedral-ai.com/docs
