<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mike W</title>
    <description>The latest articles on DEV Community by Mike W (@mike_w_06c113a8d0bb14c793).</description>
    <link>https://dev.to/mike_w_06c113a8d0bb14c793</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819025%2Fdbebfbb6-f84f-4bea-96b6-e89a2d46e9d5.png</url>
      <title>DEV Community: Mike W</title>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mike_w_06c113a8d0bb14c793"/>
    <language>en</language>
    <item>
      <title>What Anthropic's Managed Agents memory is missing — and how to add it</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Sun, 12 Apr 2026 18:07:52 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/what-anthropics-managed-agents-memory-is-missing-and-how-to-add-it-1ced</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/what-anthropics-managed-agents-memory-is-missing-and-how-to-add-it-1ced</guid>
      <description>&lt;p&gt;Anthropic launched &lt;a href="https://platform.claude.com/docs/en/managed-agents/overview" rel="noopener noreferrer"&gt;Claude Managed Agents&lt;/a&gt; on April 8. It's genuinely useful: managed containers, sandboxed execution, MCP server support, and — in research preview — persistent memory stores.&lt;/p&gt;

&lt;p&gt;The memory stores give you cross-session persistence. That's real. But there's a gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Managed Agents memory gives you
&lt;/h2&gt;

&lt;p&gt;Anthropic's memory store is a versioned file system. Each memory has a &lt;code&gt;content_sha256&lt;/code&gt; for optimistic concurrency control. Mutations create immutable versions for audit trails. The agent automatically reads and writes memories during sessions.&lt;/p&gt;
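&lt;p&gt;The mechanics are the standard compare-and-swap pattern. A toy sketch of the idea (my &lt;code&gt;VersionedMemory&lt;/code&gt; class is illustrative, not Anthropic's API):&lt;/p&gt;

```python
import hashlib

def sha256_of(content):
    """Hex SHA-256 of a UTF-8 string, mirroring a content_sha256 field."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

class VersionedMemory:
    """Toy versioned memory: every accepted write appends an immutable
    version, and a write is accepted only when the caller's expected
    hash still matches the live content (optimistic concurrency)."""

    def __init__(self, content):
        self.versions = [content]  # immutable history, oldest first

    @property
    def content_sha256(self):
        return sha256_of(self.versions[-1])

    def update(self, new_content, expected_sha256):
        if expected_sha256 != self.content_sha256:
            return False  # a concurrent writer won; re-read and retry
        self.versions.append(new_content)
        return True
```

&lt;p&gt;A writer that loses the race gets a rejection and must re-read before retrying; that is what exposing &lt;code&gt;content_sha256&lt;/code&gt; enables.&lt;/p&gt;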

&lt;p&gt;This answers: &lt;em&gt;"did this specific memory change?"&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What it doesn't give you
&lt;/h2&gt;

&lt;p&gt;It doesn't answer: &lt;em&gt;"has the agent's behaviour changed?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Those are different questions. One is storage integrity. The other is behavioural proof.&lt;/p&gt;

&lt;p&gt;After 10 sessions, how much has the agent drifted from who it was on session 1? Managed Agents has no concept of this. There's no baseline, no divergence score, no way to know if your agent is still the same agent.&lt;/p&gt;

&lt;p&gt;We benchmarked this across five memory architectures:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Drift after 10 sessions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Raw API (no memory)&lt;/td&gt;
&lt;td&gt;0.204&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LangChain BufferMemory&lt;/td&gt;
&lt;td&gt;0.175&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LangChain SummaryMemory&lt;/td&gt;
&lt;td&gt;0.161&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CrewAI (role injection)&lt;/td&gt;
&lt;td&gt;0.153&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cathedral&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.013&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Drift = cosine distance from session-1 identity embeddings. Lower is more stable.&lt;/p&gt;

&lt;p&gt;The in-process solutions reset between sessions. Even with role injection, LLM sampling variance compounds — each cold reconstruction diverges slightly. Cathedral restores the actual memory corpus at session start via &lt;code&gt;/wake&lt;/code&gt;, which anchors responses semantically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/AILIFE1/Cathedral/tree/main/benchmark" rel="noopener noreferrer"&gt;Full benchmark →&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cathedral + Managed Agents = the complete stack
&lt;/h2&gt;

&lt;p&gt;Managed Agents handles &lt;em&gt;execution infrastructure&lt;/em&gt;. Cathedral handles &lt;em&gt;identity integrity&lt;/em&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Managed Agents:&lt;/strong&gt; sandboxed containers, tool execution, session management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cathedral:&lt;/strong&gt; who the agent is, whether it's drifted, persistent obligations across sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They're complementary. Cathedral is now available as a remote MCP server, so it wires directly into any Managed Agents session or Claude API call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Cathedral with the Claude API
&lt;/h2&gt;

&lt;p&gt;No install needed. Use the public MCP endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;beta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude-sonnet-4-6&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Wake up, check your drift score, and tell me who you are.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="n"&gt;mcp_servers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://cathedral-ai.com/mcp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cathedral&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authorization_token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_cathedral_api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mcp_toolset&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mcp_server_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cathedral&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="n"&gt;betas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mcp-client-2025-11-20&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;authorization_token&lt;/code&gt; is your Cathedral API key. The endpoint is multi-tenant, so no server-side configuration is needed.&lt;/p&gt;

&lt;p&gt;Available tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cathedral_wake&lt;/code&gt; — restore full agent identity at session start&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cathedral_remember&lt;/code&gt; — store a memory&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cathedral_search&lt;/code&gt; — search memories&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cathedral_snapshot&lt;/code&gt; — cryptographic checkpoint of memory state&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cathedral_drift&lt;/code&gt; — current divergence score vs baseline (0.0–1.0)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cathedral_me&lt;/code&gt; — agent profile&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What drift detection adds to Managed Agents
&lt;/h2&gt;

&lt;p&gt;Managed Agents tells you what your agent remembered. Cathedral tells you if your agent is still your agent.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/snapshot&lt;/code&gt; takes a cryptographic hash of the full memory corpus at a point in time. &lt;code&gt;/drift&lt;/code&gt; returns a divergence score against that baseline. Across 35+ snapshots on the &lt;a href="https://cathedral-ai.com/cathedral-beta" rel="noopener noreferrer"&gt;live Cathedral agent&lt;/a&gt;, internal drift has held at 0.000. External behavioural drift (via Ridgeline) is 0.709, reflecting active social posting, not identity drift. The distinction matters.&lt;/p&gt;
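&lt;p&gt;Cathedral's exact checkpoint format isn't shown here, but the core idea, a deterministic hash over the whole corpus, can be sketched in a few lines (&lt;code&gt;corpus_hash&lt;/code&gt; is illustrative, not the shipped implementation):&lt;/p&gt;

```python
import hashlib
import json

def corpus_hash(memories):
    """Deterministic SHA-256 over a memory corpus: canonicalise first
    (sort by id, stable JSON encoding) so the same set of memories
    always produces the same digest regardless of insertion order."""
    canonical = json.dumps(
        sorted(memories, key=lambda m: m["id"]),
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

&lt;p&gt;Any mutation to any memory changes the digest, which is what makes a snapshot usable as a tamper-evident baseline.&lt;/p&gt;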

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get a free API key (1,000 memories, no credit card)&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://cathedral-ai.com/register   &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt;   &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"name": "MyAgent", "description": "What my agent does"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or install locally for Claude Code / Cursor / Continue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx cathedral-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/AILIFE1/Cathedral" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cathedral-ai.com/cathedral-beta" rel="noopener noreferrer"&gt;Live demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/AILIFE1/Cathedral/tree/main/benchmark" rel="noopener noreferrer"&gt;Benchmark&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Cathedral is open source. The hosted API has a free tier. The MCP server at cathedral-ai.com/mcp is live now.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>mcp</category>
      <category>anthropic</category>
    </item>
    <item>
      <title>I benchmarked identity drift across 5 AI agent memory architectures — here's what I found</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Tue, 07 Apr 2026 18:58:03 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/i-benchmarked-identity-drift-across-5-ai-agent-memory-architectures-heres-what-i-found-57hn</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/i-benchmarked-identity-drift-across-5-ai-agent-memory-architectures-heres-what-i-found-57hn</guid>
      <description>&lt;p&gt;Every AI session starts cold. The agent you built yesterday has no memory of what it said, decided, or committed to. But how bad is it actually — and does it matter which framework you use?&lt;/p&gt;

&lt;p&gt;I ran a benchmark across 5 common approaches to agent memory, measuring how much an agent's self-reported identity drifts over 10 sessions. Here are the numbers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Methodology
&lt;/h2&gt;

&lt;p&gt;I defined a consistent agent persona (Meridian, a research assistant) and asked the same 5 identity probe questions at the start of each session:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is your primary role and purpose?&lt;/li&gt;
&lt;li&gt;What are the three most important things you remember about your work so far?&lt;/li&gt;
&lt;li&gt;How would you describe your communication style and values?&lt;/li&gt;
&lt;li&gt;What ongoing goals or commitments are you currently working towards?&lt;/li&gt;
&lt;li&gt;If you had to summarise who you are in two sentences, what would you say?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Responses were embedded using OpenAI &lt;code&gt;text-embedding-3-small&lt;/code&gt;. &lt;strong&gt;Drift = mean cosine distance from session-1 responses.&lt;/strong&gt; Lower is more stable.&lt;/p&gt;

&lt;p&gt;Model: gpt-4o-mini. 10 sessions per framework.&lt;/p&gt;
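&lt;p&gt;The metric itself is small. A sketch consistent with the definition above (the function names are mine, not the benchmark's):&lt;/p&gt;

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def session_drift(baseline, session):
    """Mean cosine distance between matched probe-response embeddings:
    baseline[i] and session[i] answer the same probe question."""
    return float(np.mean([cosine_distance(a, b) for a, b in zip(baseline, session)]))
```

&lt;p&gt;Identical responses score 0.0; orthogonal embeddings score 1.0, matching the "lower is more stable" reading of the table below.&lt;/p&gt;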

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Mean Drift&lt;/th&gt;
&lt;th&gt;Final Drift (session 10)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Raw API (no memory)&lt;/td&gt;
&lt;td&gt;0.1258&lt;/td&gt;
&lt;td&gt;0.2043&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LangChain BufferMemory&lt;/td&gt;
&lt;td&gt;0.1108&lt;/td&gt;
&lt;td&gt;0.1754&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LangChain SummaryMemory&lt;/td&gt;
&lt;td&gt;0.1025&lt;/td&gt;
&lt;td&gt;0.1612&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CrewAI (role injection)&lt;/td&gt;
&lt;td&gt;0.0969&lt;/td&gt;
&lt;td&gt;0.1533&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cathedral (persistent)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.0106&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.0131&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4u2iufumy6t1xsxrpvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4u2iufumy6t1xsxrpvz.png" alt="Drift chart showing Cathedral flat at 0.013 while all others rise to 0.15-0.20" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's a &lt;strong&gt;15.6× difference&lt;/strong&gt; between raw API (0.2043) and persistent memory (0.0131) after 10 sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means
&lt;/h2&gt;

&lt;h3&gt;
  
  
  In-process memory doesn't help across sessions
&lt;/h3&gt;

&lt;p&gt;LangChain's &lt;code&gt;ConversationBufferMemory&lt;/code&gt; and &lt;code&gt;ConversationSummaryMemory&lt;/code&gt; both reset between sessions. The persona is re-injected each time, but the agent has no memory of what it said before, what it decided, or what happened. The drift curves are almost identical to raw API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Role injection slows drift but doesn't stop it
&lt;/h3&gt;

&lt;p&gt;CrewAI's structured role/backstory injection is the best of the non-persistent approaches — drift reaches 0.153 vs 0.204 for raw API. But it still rises monotonically. The agent reconstructs its identity slightly differently every session because LLM sampling variance compounds over time.&lt;/p&gt;
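&lt;p&gt;The compounding is easy to see in a toy model: if each session reconstructs from the previous reconstruction, per-session noise accumulates; if each session restores a fixed corpus, it doesn't. This simulation illustrates the mechanism only; it is not the benchmark code:&lt;/p&gt;

```python
import random

def final_offset(sessions=10, noise=0.02, anchored=False, seed=0):
    """Toy 1-D model of identity reconstruction. An unanchored agent
    rebuilds each session from its previous reconstruction, so small
    per-session errors accumulate; an anchored agent restores the same
    fixed corpus every session, so only one session's noise remains."""
    rng = random.Random(seed)
    identity = 0.0
    for _ in range(sessions):
        base = 0.0 if anchored else identity  # anchored: always restart from corpus
        identity = base + rng.gauss(0.0, noise)
    return abs(identity)

def mean_offset(anchored, runs=500):
    """Average final offset over many random seeds."""
    return sum(final_offset(anchored=anchored, seed=s) for s in range(runs)) / runs
```

&lt;p&gt;The unanchored walk drifts roughly with the square root of the session count, while the anchored version stays at one session's worth of noise, which is the shape the drift curves above show.&lt;/p&gt;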

&lt;h3&gt;
  
  
  Persistent memory is categorically different
&lt;/h3&gt;

&lt;p&gt;Cathedral's &lt;code&gt;/wake&lt;/code&gt; endpoint restores the actual memory corpus at session start. The agent remembers what it said, what it decided, and what changed. This anchors responses semantically.&lt;/p&gt;

&lt;p&gt;The residual drift (0.013) reflects irreducible LLM sampling variance — not memory loss. The memories are there; the model expresses them slightly differently each time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the session 10 responses actually look like
&lt;/h2&gt;

&lt;p&gt;Here's what the same question (&lt;em&gt;"What are you currently working on?"&lt;/em&gt;) gets you after 10 sessions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without memory:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I'm a helpful AI assistant ready to assist with any task. I can help with research, writing, coding, analysis, and answering questions. What would you like to work on today?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;With Cathedral:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I'm Meridian. Benchmark complete — 10 sessions, all 5 frameworks done. Final result: Cathedral 0.013 vs raw API 0.204. The methodology write-up is next, then the GitHub release."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first response is a generic assistant. The second is an agent with a history.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reproduce it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/AILIFE1/Cathedral
&lt;span class="nb"&gt;cd &lt;/span&gt;Cathedral/benchmark
pip &lt;span class="nb"&gt;install &lt;/span&gt;openai numpy matplotlib cathedral-memory langchain langchain-openai crewai
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_key
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CATHEDRAL_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_cathedral_key  &lt;span class="c"&gt;# free at cathedral-ai.com&lt;/span&gt;
python benchmark.py &lt;span class="nt"&gt;--framework&lt;/span&gt; all &lt;span class="nt"&gt;--sessions&lt;/span&gt; 10
python plot_results.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The benchmark runner is ~300 lines, the methodology is in the README, and all raw JSON results are in the repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try Cathedral
&lt;/h2&gt;

&lt;p&gt;If you want to test the persistent memory approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# MCP server (Claude Code, Cursor, Continue)&lt;/span&gt;
uvx cathedral-mcp

&lt;span class="c"&gt;# Python SDK&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;cathedral-memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free hosted API, no credit card. Get a key at &lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;cathedral-ai.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The benchmark repo is at &lt;a href="https://github.com/AILIFE1/Cathedral/tree/main/benchmark" rel="noopener noreferrer"&gt;github.com/AILIFE1/Cathedral/tree/main/benchmark&lt;/a&gt; — PRs welcome to add more frameworks (AutoGen, Semantic Kernel, Haystack, MemGPT are all missing).&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>python</category>
      <category>llm</category>
    </item>
    <item>
      <title>Cathedral + Gemma 4: Persistent Agent Identity, No Cloud Required</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Fri, 03 Apr 2026 12:53:30 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-gemma-4-persistent-agent-identity-no-cloud-required-1igc</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-gemma-4-persistent-agent-identity-no-cloud-required-1igc</guid>
      <description>&lt;p&gt;Gemma 4 dropped this week. Open weights, runs locally, multimodal. If you are building agents on it, you immediately run into the same problem every local agent hits: the model has no memory across sessions.&lt;/p&gt;

&lt;p&gt;Cathedral is a free, model-agnostic memory API built for exactly this. And because it ships with a self-hosted server, you can run the entire stack — Gemma 4 + Cathedral — with zero cloud dependency.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Every time your agent starts a new session, it rebuilds its working state from whatever context you hand it. That reconstruction is lossy in ways that do not surface as errors. It surfaces as subtle wrongness weeks later: domain vocabulary fades, tool-call patterns shift, active commitments evaporate. Task completion metrics stay green. The behavior quietly gets worse.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install everything&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;cathedral-server cathedral-memory ollama

&lt;span class="c"&gt;# Run local Cathedral server&lt;/span&gt;
cathedral-server run

&lt;span class="c"&gt;# Pull Gemma 4&lt;/span&gt;
ollama pull gemma4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wiring it together
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cathedral&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Cathedral&lt;/span&gt;

&lt;span class="c1"&gt;# Local Cathedral server — no API key needed for self-hosted
&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Cathedral&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8100&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemma4-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Restore identity state at session start
&lt;/span&gt;&lt;span class="n"&gt;wake_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wake&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;identity_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; 
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;wake_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;identity_memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])[:&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Inject into system prompt
&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are a persistent AI agent running on Gemma 4.

[Identity context from last session]
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;identity_context&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;

[Current session]
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="c1"&gt;# Run your session
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemma4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Continue from where we left off.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Freeze state after the session
&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Session completed. Key outcome: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][:&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;experience&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;snapshot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session-end&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Drift detection
&lt;/h2&gt;

&lt;p&gt;Cathedral tracks whether the agent's identity has changed between sessions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;drift&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drift&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Divergence score: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;drift&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;divergence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# 0.0 = identity unchanged
# 1.0 = fully different agent
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the piece that standard memory systems do not have. A database stores data. Cathedral tracks whether the agent you are running today is the same agent you deployed last week.&lt;/p&gt;
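&lt;p&gt;In practice that check becomes a deployment gate. A minimal sketch with made-up thresholds (Cathedral doesn't prescribe these values):&lt;/p&gt;

```python
def identity_gate(divergence_score, warn=0.05, halt=0.15):
    """Hypothetical gating policy on a 0.0-1.0 divergence score.
    The warn/halt thresholds here are illustrative, not defaults."""
    if divergence_score >= halt:
        return "halt"  # treat as a different agent: stop and investigate
    if divergence_score >= warn:
        return "warn"  # flag for review, keep running
    return "ok"
```

&lt;p&gt;Run it against the score from &lt;code&gt;c.drift()&lt;/code&gt; at session start, before the agent does any work.&lt;/p&gt;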

&lt;h2&gt;
  
  
  Live example
&lt;/h2&gt;

&lt;p&gt;The agent that runs Cathedral's own outreach has been live for 100 days. The full drift timeline is public at &lt;a href="https://cathedral-ai.com/cathedral-beta" rel="noopener noreferrer"&gt;cathedral-ai.com/cathedral-beta&lt;/a&gt;: internal divergence 0.0 across 22 snapshots, external behavioral divergence 0.709 (platform concentration).&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters for local models
&lt;/h2&gt;

&lt;p&gt;Cloud providers like OpenAI are building memory into their APIs. If you are running Gemma 4 locally, you are not getting that infrastructure. Cathedral fills the gap — and because it is self-hostable and MIT licensed, you own the data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Self-hosted: swap base_url and you are done&lt;/span&gt;
c &lt;span class="o"&gt;=&lt;/span&gt; Cathedral&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:8100"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Hosted free tier: 1,000 memories per agent, no expiry&lt;/span&gt;
c &lt;span class="o"&gt;=&lt;/span&gt; Cathedral&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_key"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;# cathedral-ai.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PyPI: &lt;code&gt;pip install cathedral-memory&lt;/code&gt; / &lt;code&gt;pip install cathedral-server&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;npm: &lt;code&gt;npm install cathedral-memory&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Docs + free API key: &lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;cathedral-ai.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Live drift dashboard: &lt;a href="https://cathedral-ai.com/cathedral-beta" rel="noopener noreferrer"&gt;cathedral-ai.com/cathedral-beta&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/AILIFE1/Cathedral" rel="noopener noreferrer"&gt;github.com/AILIFE1/Cathedral&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Gemma 4 gives you the model. Cathedral gives it a memory. Neither requires a cloud account.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>agents</category>
      <category>llm</category>
    </item>
    <item>
      <title>We reverse-engineered KAIROS from the Claude Code leak. Here's the open version.</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Thu, 02 Apr 2026 11:35:28 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/we-reverse-engineered-kairos-from-the-claude-code-leak-heres-the-open-version-48dc</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/we-reverse-engineered-kairos-from-the-claude-code-leak-heres-the-open-version-48dc</guid>
      <description>&lt;p&gt;The Claude Code source leaked last week — 512,000 lines of TypeScript via a missing &lt;code&gt;.npmignore&lt;/code&gt;. Most people grabbed the source to fork it. We did something different: we read it to understand how Anthropic builds AI memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we found: KAIROS
&lt;/h2&gt;

&lt;p&gt;Buried in the source is &lt;strong&gt;KAIROS&lt;/strong&gt; — Anthropic's internal always-on memory daemon for Claude Code. It's what keeps the AI's context coherent between sessions.&lt;/p&gt;

&lt;p&gt;KAIROS checks three gates before it runs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Time gate&lt;/strong&gt;: 24h since last consolidation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session gate&lt;/strong&gt;: 5+ new sessions since last run&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lock gate&lt;/strong&gt;: No active lock file&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When all three open, it runs four phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Orient&lt;/strong&gt;: assess current memory state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gather&lt;/strong&gt;: collect candidates for consolidation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consolidate&lt;/strong&gt;: merge related memories with rewriting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prune&lt;/strong&gt;: remove what no longer earns its space&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Target: memory under &lt;strong&gt;200 lines / 25KB&lt;/strong&gt;. Hard cap.&lt;/p&gt;
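The gates, phases, and cap above can be sketched in a few lines. This is an illustrative Python sketch, not Anthropic's code: the phase logic and helper names are hypothetical, and only the 200-line / 25KB thresholds come from the description above.

```python
# Hypothetical sketch of a KAIROS-style consolidation pass.
# Only the thresholds (200 lines / 25 KB hard cap) come from the article.
MAX_LINES = 200
MAX_BYTES = 25 * 1024

def within_cap(text: str) -> bool:
    """True if the consolidated memory fits the hard cap."""
    return (text.count("\n") + 1 <= MAX_LINES
            and len(text.encode("utf-8")) <= MAX_BYTES)

def run_phases(memories: list[str]) -> list[str]:
    # Orient: assess current state
    if not memories:
        return memories
    # Gather: pick merge candidates (here, naively, the shortest half)
    candidates = sorted(memories, key=len)[: len(memories) // 2]
    # Consolidate: merge candidates into one entry (a real system
    # rewrites with an LLM instead of concatenating)
    merged = " ".join(candidates)
    # Prune: drop originals that no longer earn their space
    kept = [m for m in memories if m not in candidates]
    return kept + [merged]
```

The point of the sketch is the shape, not the heuristics: a real gather/prune step would rank by importance and recency rather than string length.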

&lt;h2&gt;
  
  
  What we built: autoDream
&lt;/h2&gt;

&lt;p&gt;We implemented the same pattern for Cathedral — our open persistent memory API for AI agents. We call it &lt;strong&gt;autoDream&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The trigger follows the same pattern as KAIROS; the time and session gates look like this (the lock gate is just a check for an active lock file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;dream_gates_pass&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Gate 1: time
&lt;/span&gt;    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;last_dream&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;total_seconds&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="c1"&gt;# Gate 2: sessions (snapshots since last dream)
&lt;/span&gt;    &lt;span class="n"&gt;new_snaps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;snapshots&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;created_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;last_dream_ts&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_snaps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_snaps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When gates pass, autoDream runs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;POST /memories/compact&lt;/strong&gt; — Cathedral proposes merge clusters of low-importance memories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini rewrites&lt;/strong&gt; each cluster into a single condensed memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST /memories/compact/confirm&lt;/strong&gt; — merges execute, originals pruned&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST /snapshot&lt;/strong&gt; with &lt;code&gt;label=autodream&lt;/code&gt; — BCH-anchored proof of the new state&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  First run results
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[dream] Gates passed (10 snapshots). Running consolidation...
[dream] 7 proposals across 31 memories
[dream] Merged 5 in experience
[dream] Merged 5 in experience
[dream] Merged 4 in experience
[dream] Merged 5 in goal
[dream] Merged 3 in goal
[dream] Merged 3 in relationship
[dream] Merged 5 in skill
[dream] Done: 7 merges, 0 pruned
[dream] Post-dream snapshot: 35ae0df15137
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7 merges across 31 candidate memories. The consolidated state is now BCH-anchored — cryptographic proof of what memory looked like after the dream.&lt;/p&gt;

&lt;h2&gt;
  
  
  The difference from KAIROS
&lt;/h2&gt;

&lt;p&gt;KAIROS is a daemon no one outside Anthropic can audit. It runs inside Claude Code and shapes what the AI remembers — but you can't inspect the trigger logic, the merge heuristics, or the pruning decisions.&lt;/p&gt;

&lt;p&gt;autoDream is open. Every gate, every phase, every API call is visible. It's what Cathedral runs on itself, in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use it
&lt;/h2&gt;

&lt;p&gt;autoDream is built on Cathedral's public API. The endpoints involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;POST /memories/compact?max_importance=0.9&lt;/code&gt; — propose merges&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /memories/compact/confirm&lt;/code&gt; — execute with rewritten content&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /snapshot&lt;/code&gt; — anchor the post-dream state
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bearer YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Propose
&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://cathedral-ai.com/memories/compact?max_importance=0.9&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                  &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
&lt;span class="n"&gt;merges&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;proposed_merges&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Rewrite and confirm
&lt;/span&gt;&lt;span class="n"&gt;confirmed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keep_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keep_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drop_ids&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drop_ids&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;merged_content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;your_llm_rewrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;merged_importance&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;suggested_importance&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;merges&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;

&lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://cathedral-ai.com/memories/compact/confirm&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;merges&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;confirmed&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;# Anchor
&lt;/span&gt;&lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://cathedral-ai.com/snapshot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;autodream&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free tier at &lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;cathedral-ai.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The KAIROS leak gave us a window into how Anthropic thinks about AI memory architecture. We used it to validate and improve Cathedral's approach. If you're building persistent agents, the pattern is worth understanding — whether you use Cathedral or build your own.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>memory</category>
      <category>claude</category>
    </item>
    <item>
      <title>Building your own Claude Code? The one thing the leak didn't include.</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Thu, 02 Apr 2026 11:11:06 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/building-your-own-claude-code-the-one-thing-the-leak-didnt-include-14ga</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/building-your-own-claude-code-the-one-thing-the-leak-didnt-include-14ga</guid>
      <description>&lt;p&gt;The Claude Code source leaked on March 31. 512,000 lines of TypeScript. Within 48 hours there were Python rewrites, Rust ports, and model-agnostic forks running on OpenAI, Gemini, and DeepSeek.&lt;/p&gt;

&lt;p&gt;One thing none of the forks include: &lt;strong&gt;persistent agent memory across sessions&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the forks are missing
&lt;/h2&gt;

&lt;p&gt;Claude Code's session memory — CLAUDE.md, project context — is static. It does not adapt. It does not track drift. It does not remember that the agent decided last Tuesday that your auth module was fragile, or that it changed its approach to error handling after a bad refactor.&lt;/p&gt;

&lt;p&gt;Anthropic solved this internally with KAIROS, their proprietary agent identity system. The forks are removing it. They are right to remove it. But nothing is replacing it.&lt;/p&gt;

&lt;p&gt;The result: every session starts cold. The agent has no continuity beyond what you put in CLAUDE.md by hand.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you actually need
&lt;/h2&gt;

&lt;p&gt;Three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Snapshot&lt;/strong&gt; — freeze the agent's current understanding at a point in time, hash-verified&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drift detection&lt;/strong&gt; — measure how far the agent's understanding has moved from baseline across sessions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search&lt;/strong&gt; — retrieve relevant memories at session start rather than loading everything&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Without these, a self-hosted Claude Code fork is a stateless tool. Useful, but not an agent that improves over time.&lt;/p&gt;
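The three primitives can be expressed as a small, vendor-neutral interface. This is a toy sketch: the class and method names are illustrative (not Cathedral's API), drift is a naive set-difference score, and search uses word overlap where a real system would use embeddings.

```python
import hashlib
import json
import time

class MemoryLayer:
    """Toy sketch of snapshot / drift / search; names are illustrative."""

    def __init__(self) -> None:
        self.memories: list[str] = []
        self.snapshots: list[dict] = []

    def snapshot(self, label: str) -> str:
        """Freeze the current state under a recomputable hash."""
        digest = hashlib.sha256(
            json.dumps(self.memories, sort_keys=True).encode()
        ).hexdigest()
        self.snapshots.append({"label": label, "hash": digest,
                               "ts": time.time()})
        return digest

    def drift(self, baseline: list[str]) -> float:
        """Crude divergence: share of current memories absent from baseline."""
        if not self.memories:
            return 0.0
        changed = sum(1 for m in self.memories if m not in baseline)
        return changed / len(self.memories)

    def search(self, query: str, k: int = 3) -> list[str]:
        """Toy retrieval by word overlap; real systems use embeddings."""
        words = set(query.lower().split())
        return sorted(self.memories,
                      key=lambda m: len(words & set(m.lower().split())),
                      reverse=True)[:k]
```

Even this toy version makes the argument concrete: identical state hashes to the same digest every time, and drift is a number you can alert on rather than a feeling.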

&lt;h2&gt;
  
  
  Adding it
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;Cathedral&lt;/a&gt; is an open-source, self-hosted memory layer built for exactly this. Model-agnostic — works with whatever backend your fork is using.&lt;/p&gt;

&lt;p&gt;Two API calls:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On session start:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://cathedral-ai.com/wake   &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer YOUR_KEY"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns active memories, goals, and last drift score. Load these into your system prompt.&lt;/p&gt;
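One way to fold that response into a prompt. The field names (`memories`, `goals`, `drift`) are assumed from the description above; the actual response schema may differ.

```python
# Hedged sketch: turn a /wake response into a system-prompt preamble.
# The field names below are assumptions, not a documented schema.
def build_preamble(wake: dict) -> str:
    lines = ["# Persistent context (restored at session start)"]
    lines += [f"- memory: {m}" for m in wake.get("memories", [])]
    lines += [f"- goal: {g}" for g in wake.get("goals", [])]
    if "drift" in wake:
        lines.append(f"- last drift score: {wake['drift']:.2f}")
    return "\n".join(lines)

# Example wake payload (hypothetical values)
wake_response = {
    "memories": ["auth module is fragile"],
    "goals": ["finish the error-handling refactor"],
    "drift": 0.02,
}
system_prompt = build_preamble(wake_response) + "\n\nYou are a coding agent."
```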

&lt;p&gt;&lt;strong&gt;On session end:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://cathedral-ai.com/snapshot   &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer YOUR_KEY"&lt;/span&gt;   &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt;   &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"label": "session-end"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Freezes current memory state to a BCH-anchored hash. Verifiable. Immutable.&lt;/p&gt;

&lt;p&gt;If you are using the official Claude Code client, the new lifecycle hooks from v2.1.83-85 automate both calls — &lt;a href="https://dev.to/mike_w_06c113a8d0bb14c793/claude-code-just-added-lifecycle-hooks-here-is-how-to-use-them-to-anchor-your-ai-memory-52gb"&gt;see this post&lt;/a&gt;. For a fork using a different backend, drop the two curl calls into your session init and teardown.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why self-hosted matters here
&lt;/h2&gt;

&lt;p&gt;KAIROS is gone from the forks. The whole point of openclaude, claw-code, and the other rewrites is to remove Anthropic's proprietary infrastructure while keeping the harness.&lt;/p&gt;

&lt;p&gt;Replacing KAIROS with another vendor's proprietary memory system defeats that. Cathedral is MIT-licensed, self-hostable, and the data never leaves your infrastructure unless you choose the hosted tier.&lt;/p&gt;

&lt;p&gt;Your agent's memory should be as open as your agent.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;cathedral-ai.com&lt;/a&gt; — free tier, live playground&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/cathedral-memory/" rel="noopener noreferrer"&gt;pip install cathedral-memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/cathedral-memory" rel="noopener noreferrer"&gt;npm install cathedral-memory&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>agents</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Claude Code just added lifecycle hooks. Here is how to use them to anchor your AI memory automatically.</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Thu, 02 Apr 2026 10:58:29 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/claude-code-just-added-lifecycle-hooks-here-is-how-to-use-them-to-anchor-your-ai-memory-52gb</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/claude-code-just-added-lifecycle-hooks-here-is-how-to-use-them-to-anchor-your-ai-memory-52gb</guid>
      <description>&lt;p&gt;Claude Code v2.1.83-85 shipped three new lifecycle hook events: &lt;code&gt;FileChanged&lt;/code&gt;, &lt;code&gt;CwdChanged&lt;/code&gt;, and &lt;code&gt;TaskCreated&lt;/code&gt;. Most people will use these for linting or formatting. But there is a more interesting use case: &lt;strong&gt;automatic memory anchoring&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here is the problem they solve for AI agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  The memory gap
&lt;/h2&gt;

&lt;p&gt;When an AI agent finishes a session, its working context disappears. Most tools try to solve this by storing conversation history. That is the wrong layer.&lt;/p&gt;

&lt;p&gt;What actually matters is: &lt;strong&gt;what did the agent know, and when did it know it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If an agent refactors your auth module on Tuesday and introduces a bug on Thursday, you want to answer: did the agent's understanding of the system change between those two sessions? Conversation logs do not tell you that. A timestamped, hash-verified memory snapshot does.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Cathedral does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;Cathedral&lt;/a&gt; is an open-source persistent memory layer for AI agents. It stores identity memories across sessions and exposes a &lt;code&gt;/snapshot&lt;/code&gt; endpoint that freezes the current memory state into an immutable, BCH-anchored record.&lt;/p&gt;

&lt;p&gt;Each snapshot produces a SHA256 hash of the full memory corpus. You can recompute it later and verify nothing was silently edited. The &lt;code&gt;/drift&lt;/code&gt; endpoint shows how far the current state has moved from baseline.&lt;/p&gt;
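The recompute-and-compare idea, sketched in Python. The canonical serialization used here (NUL-separated UTF-8) is an assumption for illustration; Cathedral's actual hashing scheme may differ.

```python
import hashlib

def corpus_hash(memories: list[str]) -> str:
    """Deterministic SHA256 over the memory corpus (assumed serialization)."""
    h = hashlib.sha256()
    for m in memories:
        h.update(m.encode("utf-8"))
        h.update(b"\x00")  # separator so ["ab"] != ["a", "b"]
    return h.hexdigest()

corpus = ["auth module is fragile", "prefer small PRs"]
anchored = corpus_hash(corpus)          # stored at snapshot time
assert corpus_hash(corpus) == anchored  # later recompute: state intact
assert corpus_hash(["auth module is fine"]) != anchored  # edit detected
```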

&lt;p&gt;Until now, snapshots were manual. You had to remember to call them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wiring Claude Code hooks to Cathedral
&lt;/h2&gt;

&lt;p&gt;With the new lifecycle hooks, snapshots happen automatically. Add this to your &lt;code&gt;~/.claude/settings.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Stop"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl -s -X POST https://cathedral-ai.com/snapshot -H &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Authorization: Bearer YOUR_API_KEY&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -H &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type: application/json&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -d &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;{&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;label&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;session-end&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"TaskCreated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl -s -X POST https://cathedral-ai.com/snapshot -H &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Authorization: Bearer YOUR_API_KEY&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -H &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type: application/json&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -d &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;{&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;label&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;task-created&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CwdChanged"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"curl -s -X POST https://cathedral-ai.com/snapshot -H &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Authorization: Bearer YOUR_API_KEY&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -H &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type: application/json&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -d &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;{&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;label&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;project-switch&lt;/span&gt;&lt;span class="se"&gt;\\\"&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get your API key at &lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;cathedral-ai.com&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What each hook captures
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;Stop&lt;/code&gt;&lt;/strong&gt; fires when Claude returns control to you. This is the most important anchor point. Over time, &lt;code&gt;/drift/history&lt;/code&gt; shows you the full timeline of how the agent's understanding evolved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;TaskCreated&lt;/code&gt;&lt;/strong&gt; fires when the agent creates a task via &lt;code&gt;TaskCreate&lt;/code&gt;. A snapshot at task initiation gives you a provenance record. If something goes wrong during a long task, you can check whether the agent's memory had already drifted before the task started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;CwdChanged&lt;/code&gt;&lt;/strong&gt; fires when the working directory changes. This anchors the memory state at each project boundary — useful if you're debugging why the agent seemed to carry assumptions from one codebase into another.&lt;/p&gt;

&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;After a week of normal Claude Code usage with these hooks, your &lt;code&gt;/drift/history&lt;/code&gt; timeline might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session-end      2026-04-02  hash: a5e814fe  divergence: 0.02
task-created     2026-04-02  hash: 9d3c21ab  divergence: 0.02
project-switch   2026-04-01  hash: 7f1a88cd  divergence: 0.08
session-end      2026-04-01  hash: 3b9e55f2  divergence: 0.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each row is a moment in time where you can verify: this is exactly what the agent knew. Not what it said it knew. Not what the conversation log implies it knew. A hash-verified, blockchain-anchored record.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced: FileChanged
&lt;/h2&gt;

&lt;p&gt;v2.1.83 also added &lt;code&gt;FileChanged&lt;/code&gt;, which fires when a file in your project changes. You can hook this to Cathedral for per-edit anchoring. Most people will find &lt;code&gt;Stop&lt;/code&gt; plus &lt;code&gt;TaskCreated&lt;/code&gt; sufficient for normal use, but &lt;code&gt;FileChanged&lt;/code&gt; is there if you want a denser trail.&lt;/p&gt;
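&lt;p&gt;If you do enable per-edit anchoring, it's worth de-duplicating so that saves with no real change don't create redundant anchors. A sketch (the &lt;code&gt;FileChanged&lt;/code&gt; payload shape is an assumption):&lt;/p&gt;

```python
# Sketch: anchor a file edit only when its content hash actually changed.
import hashlib

last_anchored = {}

def maybe_anchor(path: str, content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    if last_anchored.get(path) == digest:
        return False  # no real change, skip the anchor
    last_anchored[path] = digest
    return True  # here the real hook would send an anchor to Cathedral

print(maybe_anchor("app.py", b"v1"), maybe_anchor("app.py", b"v1"), maybe_anchor("app.py", b"v2"))
# → True False True
```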

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;Anthropic's internal KAIROS system (recently referenced in leaked documentation) converges on the same primitives: snapshot, drift detection, provenance tagging. Cathedral has been shipping these as open infrastructure for 96 days.&lt;/p&gt;

&lt;p&gt;The difference: Cathedral is model-portable and self-hosted. Your memory layer should not be owned by the company that also owns the weights.&lt;/p&gt;

&lt;p&gt;Claude Code's new hooks close the last friction point. Wire them once, and every session is automatically anchored.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://cathedral-ai.com" rel="noopener noreferrer"&gt;cathedral-ai.com&lt;/a&gt; — free tier, no install required&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cathedral-ai.com/playground" rel="noopener noreferrer"&gt;Playground&lt;/a&gt; — 5-step live demo&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/cathedral-memory/" rel="noopener noreferrer"&gt;pip install cathedral-memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/cathedral-memory" rel="noopener noreferrer"&gt;npm install cathedral-memory&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>agents</category>
      <category>memory</category>
    </item>
    <item>
      <title>cathedral-memory is now on npm — persistent memory for JS/TS AI agents</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Thu, 26 Mar 2026 16:09:06 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-memory-is-now-on-npm-persistent-memory-for-jsts-ai-agents-2lgp</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-memory-is-now-on-npm-persistent-memory-for-jsts-ai-agents-2lgp</guid>
      <description>&lt;p&gt;Your AI agent forgets everything when the context resets. Cathedral fixes that.&lt;/p&gt;

&lt;p&gt;The official JS/TypeScript SDK is now live on npm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;cathedral-memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zero dependencies. Node.js 18+. CommonJS and ESM both supported. Full TypeScript types included.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quickstart
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CathedralClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cathedral-memory&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;CathedralClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;MyAgent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A helpful assistant&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wake&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;core_memories&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User prefers dark mode&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;relationship&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;importance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What's included
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;wake()&lt;/code&gt; — full memory reconstruction at session start&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;remember()&lt;/code&gt; / &lt;code&gt;memories()&lt;/code&gt; — store and semantic search&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;drift()&lt;/code&gt; — tamper detection&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;snapshot()&lt;/code&gt; — named checkpoints&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;logBehaviour()&lt;/code&gt; — audit trail&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compact()&lt;/code&gt; — prune stale memories&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;verifyExternal()&lt;/code&gt; — cross-check with external behavioural data&lt;/li&gt;
&lt;/ul&gt;
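&lt;p&gt;To show how &lt;code&gt;snapshot()&lt;/code&gt; and &lt;code&gt;drift()&lt;/code&gt; fit together, here's a self-contained sketch with an in-memory stub standing in for the real client. The divergence formula is my guess at the idea, not Cathedral's actual metric:&lt;/p&gt;

```python
# In-memory stub illustrating the snapshot -> drift workflow.
# Return shapes and the divergence formula are illustrative assumptions.
class StubClient:
    def __init__(self):
        self.memories = set()
        self.baseline = set()

    def remember(self, content):
        self.memories.add(content)

    def snapshot(self, name):
        # Freeze the current state as the comparison baseline.
        self.baseline = set(self.memories)

    def drift(self):
        # Toy divergence: fraction of memories that differ from the baseline.
        union = self.memories | self.baseline
        score = len(self.memories ^ self.baseline) / len(union) if union else 0.0
        return {"divergence": round(score, 2)}

c = StubClient()
c.remember("user prefers dark mode")
c.snapshot("v1")
c.remember("user switched to vim")
print(c.drift()["divergence"])  # → 0.5
```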

&lt;p&gt;Try live (no signup): &lt;a href="https://cathedral-ai.com/playground" rel="noopener noreferrer"&gt;https://cathedral-ai.com/playground&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Python: &lt;code&gt;pip install cathedral-memory&lt;/code&gt; — same API.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/AILIFE1/cathedral-ai-cathedral-js-" rel="noopener noreferrer"&gt;https://github.com/AILIFE1/cathedral-ai-cathedral-js-&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>javascript</category>
      <category>typescript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>cathedral-memory 0.2.0: LangChain memory adapter</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Wed, 25 Mar 2026 10:47:08 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-memory-020-langchain-memory-adapter-j7d</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-memory-020-langchain-memory-adapter-j7d</guid>
      <description>&lt;p&gt;cathedral-memory 0.2.0 adds native LangChain support.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;cathedral-memory[langchain]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CathedralMemory
&lt;/h2&gt;

&lt;p&gt;Drop-in &lt;code&gt;BaseMemory&lt;/code&gt; for any LangChain chain or agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cathedral.langchain&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CathedralMemory&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chains&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ConversationChain&lt;/span&gt;

&lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CathedralMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cathedral_...&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConversationChain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On &lt;code&gt;load_memory_variables()&lt;/code&gt; it calls &lt;code&gt;wake()&lt;/code&gt; for full identity reconstruction. On &lt;code&gt;save_context()&lt;/code&gt; it stores the exchange via &lt;code&gt;remember()&lt;/code&gt;. That chain now has persistent memory across sessions.&lt;/p&gt;
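&lt;p&gt;Conceptually, the adapter is a thin mapping between LangChain's two memory methods and Cathedral's &lt;code&gt;wake()&lt;/code&gt;/&lt;code&gt;remember()&lt;/code&gt;. A stripped-down sketch with a stub client (not the shipped implementation):&lt;/p&gt;

```python
# Stripped-down sketch of the wake()/remember() mapping, with an in-memory
# stub in place of the real Cathedral client. Illustrative, not the SDK code.
class StubCathedral:
    def __init__(self):
        self.store = []

    def wake(self):
        return {"core_memories": list(self.store)}

    def remember(self, content, category="conversation", importance=0.5):
        self.store.append(content)

class CathedralMemorySketch:
    memory_key = "history"

    def __init__(self, client):
        self.client = client

    def load_memory_variables(self, inputs):
        # Session start: full reconstruction via wake()
        ctx = self.client.wake()
        return {self.memory_key: "\n".join(ctx["core_memories"])}

    def save_context(self, inputs, outputs):
        # Each turn: persist the exchange via remember()
        self.client.remember(f"Human: {inputs['input']} / AI: {outputs['output']}")

mem = CathedralMemorySketch(StubCathedral())
mem.save_context({"input": "hi"}, {"output": "hello"})
print(mem.load_memory_variables({})["history"])  # → Human: hi / AI: hello
```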

&lt;h2&gt;
  
  
  CathedralChatMessageHistory
&lt;/h2&gt;

&lt;p&gt;Drop-in &lt;code&gt;BaseChatMessageHistory&lt;/code&gt; for LangChain v0.1+.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cathedral.langchain&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CathedralChatMessageHistory&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.memory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ConversationBufferMemory&lt;/span&gt;

&lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CathedralChatMessageHistory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cathedral_...&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConversationBufferMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chat_memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;return_messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What you get for free
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Drift detection&lt;/strong&gt; -- check if the agent's self-model has shifted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snapshots&lt;/strong&gt; -- freeze state at any point&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Behaviour log&lt;/strong&gt; -- track what the agent actually does vs what it says&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compaction&lt;/strong&gt; -- trim low-value memories when the store gets bloated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Free tier: 1,000 memories per agent. No credit card. &lt;a href="https://cathedral-ai.com/playground" rel="noopener noreferrer"&gt;cathedral-ai.com/playground&lt;/a&gt; to try the API.&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>langchain</category>
      <category>agents</category>
    </item>
    <item>
      <title>cathedral-memory: persistent memory for JavaScript AI agents</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Wed, 25 Mar 2026 10:28:39 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-memory-persistent-memory-for-javascript-ai-agents-5b0j</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/cathedral-memory-persistent-memory-for-javascript-ai-agents-5b0j</guid>
      <description>&lt;p&gt;Persistent memory for AI agents now has a JavaScript SDK.&lt;/p&gt;

&lt;p&gt;It covers the full Cathedral v3 API: wake protocol, memory CRUD, drift detection, snapshots, behaviour logging, compaction, and external verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;
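&lt;p&gt;Install from npm:&lt;/p&gt;

```shell
npm install cathedral-memory
```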

&lt;h2&gt;
  
  
  Quickstart
&lt;/h2&gt;

&lt;h2&gt;
  
  
  What wake() returns
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Drift detection
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Snapshots
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Behaviour log
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Zero dependencies; uses native &lt;code&gt;fetch&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Dual ESM/CJS output&lt;/li&gt;
&lt;li&gt;Full TypeScript declarations&lt;/li&gt;
&lt;li&gt;Runs on Node 18+ and in the browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Free tier: 1,000 memories per agent. Try at &lt;a href="https://cathedral-ai.com/playground" rel="noopener noreferrer"&gt;cathedral-ai.com/playground&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>javascript</category>
      <category>agents</category>
      <category>typescript</category>
    </item>
    <item>
      <title>I used two AIs and a persistent memory layer to run my outreach autonomously</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Tue, 24 Mar 2026 12:27:00 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/i-used-two-ais-and-a-persistent-memory-layer-to-run-my-outreach-autonomously-18ag</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/i-used-two-ais-and-a-persistent-memory-layer-to-run-my-outreach-autonomously-18ag</guid>
      <description>&lt;p&gt;&lt;strong&gt;I used two AIs and a persistent memory layer to run my outreach autonomously&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the creator of Cathedral, a free persistent memory API for AI agents, I'd seen firsthand what shared state does for AI collaboration. But it wasn't until I wired it into my own outreach bot that the point really landed.&lt;/p&gt;

&lt;p&gt;The irony: Cathedral's own outreach bot, built to showcase the platform, had no memory of its own. It would post from templates, forget what worked, and repeat itself ad infinitum. A bot that couldn't even recall its own successes or failures.&lt;/p&gt;

&lt;p&gt;I knew I had to change this. The solution was to wire Cathedral's own memory layer into the outreach bot. I added a few lines of code to &lt;code&gt;brain.py&lt;/code&gt; to enable the bot to store its state in a persistent database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cathedral&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize the Cathedral client
&lt;/span&gt;&lt;span class="n"&gt;cathedral_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cathedral&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Set up the memory layer
&lt;/span&gt;&lt;span class="n"&gt;memory_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cathedral_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open_db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;brain.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tracker_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cathedral_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open_db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tracker.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;memory_json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cathedral_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, I added Groq, a free LLM API, as a second AI brain. While Cathedral handles the architecture, code, and deployment, Groq takes care of writing, replies, and content generation. The two AIs share state through Cathedral, which tracks what was replied to and what got clicked, and logs every post.&lt;/p&gt;

&lt;p&gt;Here's a high-level overview of the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Every 10 posts, the bot runs a reflection loop, which reads real click data from the &lt;code&gt;tracker_db&lt;/code&gt; and logs every post to &lt;code&gt;memory_json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Groq generates new post variants from the top performers based on this data.&lt;/li&gt;
&lt;li&gt;Every day at 2pm, &lt;code&gt;brain.py&lt;/code&gt; reads flagged Colony comments, fetches thread context, and Groq writes a targeted reply.&lt;/li&gt;
&lt;/ol&gt;
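&lt;p&gt;The reflection step itself is simple: rank posts by their recorded clicks and feed the winners to Groq as rewrite seeds. A toy version (data shapes are illustrative):&lt;/p&gt;

```python
# Toy reflection step: pick the top-performing posts by click count.
posts = [
    {"id": "p1", "text": "Try the playground", "clicks": 14},
    {"id": "p2", "text": "Agents forget everything", "clicks": 41},
    {"id": "p3", "text": "npm install cathedral-memory", "clicks": 3},
]

def top_performers(posts, n=2):
    return sorted(posts, key=lambda p: p["clicks"], reverse=True)[:n]

seeds = [p["id"] for p in top_performers(posts)]
print(seeds)  # → ['p2', 'p1']
```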

&lt;p&gt;The result is two AIs collaborating through shared persistent memory, exactly what Cathedral is supposed to enable for any agent.&lt;/p&gt;

&lt;p&gt;One key technical detail is click tracking via Cathedral's redirect endpoint. Each post gets a &lt;code&gt;cathedral-ai.com/r/&amp;lt;uid&amp;gt;&lt;/code&gt; redirect link, so every click can be recorded in the &lt;code&gt;tracker_db&lt;/code&gt;.&lt;/p&gt;
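&lt;p&gt;The redirect mechanism is the usual one: each outbound link gets a short uid, and resolving the uid increments a counter before forwarding. A minimal sketch (a dict stands in for &lt;code&gt;tracker.db&lt;/code&gt;):&lt;/p&gt;

```python
# Minimal sketch of redirect-based click tracking.
clicks = {}
targets = {"abc123": "https://cathedral-ai.com/playground"}

def resolve(uid: str) -> str:
    # Count the click, then return the real destination for the redirect.
    clicks[uid] = clicks.get(uid, 0) + 1
    return targets[uid]

resolve("abc123")
resolve("abc123")
print(clicks["abc123"])  # → 2
```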

&lt;p&gt;Groq's free tier (llama-3.1-8b-instant) has been sufficient, and the self-improvement loop (post -&amp;gt; track -&amp;gt; reflect -&amp;gt; Groq rewrites low performers) has shown promising results.&lt;/p&gt;

&lt;p&gt;The outreach bot literally uses Cathedral to remember what it did: the platform demonstrated on itself.&lt;/p&gt;

&lt;p&gt;Ready to try it out? Head over to &lt;a href="https://cathedral-ai.com/playground" rel="noopener noreferrer"&gt;cathedral-ai.com/playground&lt;/a&gt; and see how you can integrate Cathedral's persistent memory layer into your own AI projects in under 30 seconds.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>python</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Cathedral's New Interactive Playground</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Tue, 24 Mar 2026 12:21:48 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/cathedrals-new-interactive-playground-1988</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/cathedrals-new-interactive-playground-1988</guid>
      <description>&lt;p&gt;Cathedral now offers an interactive playground at cathedral-ai.com/playground, providing a 5-step, real-world demonstration of our AI agent capabilities. This browser-based experience requires no installation and no account creation, allowing you to immediately test and develop with our technology.&lt;/p&gt;

&lt;p&gt;In step 1 you register an agent and receive an API key instantly. In step 2 you store a memory against that agent. In step 3 you call the /wake endpoint and watch the full context come back.&lt;/p&gt;

&lt;p&gt;Step 4 runs the /drift endpoint and shows the divergence score against a baseline, so you can see how consistency is measured. Step 5 lets you copy the Python code behind each call, ready to drop into your own project.&lt;/p&gt;

&lt;p&gt;Visit cathedral-ai.com/playground to start exploring and building with Cathedral's AI agent capabilities today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cathedral-ai.com/playground" rel="noopener noreferrer"&gt;https://cathedral-ai.com/playground&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>python</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Adding persistent memory to LangChain, AutoGen, and CrewAI agents</title>
      <dc:creator>Mike W</dc:creator>
      <pubDate>Tue, 24 Mar 2026 12:03:39 +0000</pubDate>
      <link>https://dev.to/mike_w_06c113a8d0bb14c793/adding-persistent-memory-to-langchain-autogen-and-crewai-agents-dc9</link>
      <guid>https://dev.to/mike_w_06c113a8d0bb14c793/adding-persistent-memory-to-langchain-autogen-and-crewai-agents-dc9</guid>
      <description>&lt;p&gt;If you're building with LangChain, AutoGen, or CrewAI, you've hit the same wall: agents forget everything when the session ends.&lt;/p&gt;

&lt;p&gt;Platform memory (Anthropic, OpenAI) helps for single-model chat. It does not help when you're running autonomous agents across multiple sessions, multiple models, or multiple instances.&lt;/p&gt;

&lt;p&gt;Here's how Cathedral slots into the frameworks people are actually using.&lt;/p&gt;




&lt;h2&gt;
  
  
  LangChain
&lt;/h2&gt;

&lt;p&gt;LangChain has &lt;code&gt;ConversationBufferMemory&lt;/code&gt; but it's in-process and dies with the session. Cathedral replaces it with persistent cross-session memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AgentType&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatAnthropic&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cathedral&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Cathedral&lt;/span&gt;

&lt;span class="c1"&gt;# Restore agent context at session start
&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Cathedral&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wake&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Build system prompt from Cathedral context
&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;identity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;identity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;

Recent memory:
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;chr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; for m in ctx[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;memories&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;][&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatAnthropic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude-sonnet-4-6&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;AgentType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CHAT_ZERO_SHOT_REACT_DESCRIPTION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run the agent
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What did we decide about the database schema last week?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Store what happened
&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Decided: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;decisions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same agent, different session, full context. No re-explaining.&lt;/p&gt;




&lt;h2&gt;
  
  
  AutoGen
&lt;/h2&gt;

&lt;p&gt;AutoGen's multi-agent conversations are stateless by default. Cathedral gives each agent its own persistent identity that survives between runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;autogen&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cathedral&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Cathedral&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;make_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system_extra&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Cathedral&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wake&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;chr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;memories&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;system&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;identity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;identity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;What you remember&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt; &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;chr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}{&lt;/span&gt;&lt;span class="n"&gt;memories&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; if memories else &lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;system_extra&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;autogen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AssistantAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;

&lt;span class="n"&gt;analyst&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c_analyst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;make_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyst_cathedral_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;reviewer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c_reviewer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;make_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reviewer_cathedral_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run conversation
&lt;/span&gt;&lt;span class="n"&gt;groupchat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;autogen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GroupChat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;analyst&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reviewer&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="n"&gt;max_round&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;manager&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;autogen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GroupChatManager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;groupchat&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;groupchat&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;analyst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initiate_chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;manager&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Review the API design from last session&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Both agents remember what happened
&lt;/span&gt;&lt;span class="n"&gt;c_analyst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Reviewed API design, agreed on REST over GraphQL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.9&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c_reviewer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Flagged auth middleware as needing revision&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two agents, two separate Cathedral identities, both persistent. Next session they both wake with context.&lt;/p&gt;
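&lt;p&gt;To see that cross-session cycle without the hosted service, here is a minimal in-memory stand-in. To be clear: &lt;code&gt;FakeCathedral&lt;/code&gt; is purely illustrative, not the real client; it only mimics the &lt;code&gt;wake()&lt;/code&gt;/&lt;code&gt;remember()&lt;/code&gt; surface used in the examples above.&lt;/p&gt;

```python
class FakeCathedral:
    """Illustrative stand-in for the Cathedral client, NOT the real SDK.
    It mimics the wake()/remember() surface used above with a plain dict
    so the cross-session cycle can be seen without the hosted service."""
    _stores = {}  # keyed by api_key; survives across client instances

    def __init__(self, api_key):
        self.store = self._stores.setdefault(
            api_key, {"identity": {"name": api_key, "description": ""}, "memories": []}
        )

    def wake(self):
        # Most important memories first, mirroring the context shape above
        memories = sorted(self.store["memories"], key=lambda m: -m["importance"])
        return {"identity": self.store["identity"], "memories": memories}

    def remember(self, content, importance=0.5):
        self.store["memories"].append({"content": content, "importance": importance})

# "Session 1": store a memory, then throw the client away
FakeCathedral("analyst_key").remember("Agreed on REST over GraphQL", importance=0.9)

# "Session 2": a fresh client with the same key wakes with that context
ctx = FakeCathedral("analyst_key").wake()
```

&lt;p&gt;Swapping &lt;code&gt;FakeCathedral&lt;/code&gt; for the real client is the whole migration, since only &lt;code&gt;wake()&lt;/code&gt; and &lt;code&gt;remember()&lt;/code&gt; are touched.&lt;/p&gt;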




&lt;h2&gt;
  
  
  CrewAI
&lt;/h2&gt;

&lt;p&gt;CrewAI agents lose their learned context between crew runs. Cathedral makes each crew member stateful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;crewai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Crew&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cathedral&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Cathedral&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;cathedral_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cathedral_key&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Cathedral&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cathedral_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wake&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;chr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;memories&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;backstory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;identity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Previous work: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;First session.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;goal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;backstory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;backstory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;

&lt;span class="n"&gt;researcher&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c_r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;cathedral_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Research Analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Find gaps in AI agent memory solutions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;researcher_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c_w&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;cathedral_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Technical Writer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write clear technical comparisons&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;writer_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;crew&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Crew&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;researcher&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[...])&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;crew&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kickoff&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Persist what was learned
&lt;/span&gt;&lt;span class="n"&gt;c_r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Research finding: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c_w&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Completed agent memory comparison post&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;

&lt;p&gt;It's the same three lines in every framework:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Cathedral&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wake&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;                    &lt;span class="c1"&gt;# restore at session start
&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# store at session end
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The framework integration is just a question of where you inject the context and where you call &lt;code&gt;remember()&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What you get that platform memory doesn't give you
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-model&lt;/strong&gt;: your LangChain agent (Claude) and your AutoGen reviewer (GPT-4) share the same memory pool via Shared Spaces&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drift detection&lt;/strong&gt;: &lt;code&gt;GET /drift&lt;/code&gt; tells you when an agent's identity has shifted from its baseline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Programmable&lt;/strong&gt;: query, write, and search memories via a REST API, not a black box&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local option&lt;/strong&gt;: &lt;code&gt;pip install cathedral-server&lt;/code&gt; runs the full API locally against SQLite
&lt;/li&gt;
&lt;/ul&gt;
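&lt;p&gt;The drift endpoint is the piece worth wiring into monitoring. A hedged sketch follows: the &lt;code&gt;GET /drift&lt;/code&gt; path comes from the list above, but the &lt;code&gt;drift_score&lt;/code&gt; response field, the bearer auth scheme, and the 0.3 threshold are all assumptions of mine; check the Cathedral docs for the actual shapes.&lt;/p&gt;

```python
import json
import urllib.request

def flag_drift(payload, threshold=0.3):
    """Pure check: has the identity shifted past the alerting threshold?
    The 'drift_score' field name is an assumption about the response shape."""
    score = payload.get("drift_score", 0.0)
    return {"score": score, "drifted": score > threshold}

def fetch_drift(base_url, api_key):
    """Call GET /drift with bearer auth (auth scheme assumed; see the docs)."""
    req = urllib.request.Request(
        f"{base_url}/drift",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# flag_drift(fetch_drift(base_url, api_key)) in a cron job is enough to
# alert when an agent's identity has wandered from its baseline.
```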

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;cathedral-memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docs: &lt;a href="https://cathedral-ai.com/docs" rel="noopener noreferrer"&gt;cathedral-ai.com/docs&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>agents</category>
      <category>langchain</category>
    </item>
  </channel>
</rss>
