<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MatrixOrigin</title>
    <description>The latest articles on DEV Community by MatrixOrigin (@origin_matrix_b790e656217).</description>
    <link>https://dev.to/origin_matrix_b790e656217</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3826191%2F4315c0db-2a77-45b5-8d1b-b85f65b45a6e.jpg</url>
      <title>DEV Community: MatrixOrigin</title>
      <link>https://dev.to/origin_matrix_b790e656217</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/origin_matrix_b790e656217"/>
    <language>en</language>
    <item>
      <title>Get Started in 1 Minute: Connect Memoria to OpenClaw</title>
      <dc:creator>MatrixOrigin</dc:creator>
      <pubDate>Thu, 09 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/origin_matrix_b790e656217/get-started-in-1-minute-connect-memoria-to-openclaw-1ain</link>
      <guid>https://dev.to/origin_matrix_b790e656217/get-started-in-1-minute-connect-memoria-to-openclaw-1ain</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;One command. Smarter memory. Cut token usage by 70%+.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why You Need This
&lt;/h2&gt;

&lt;p&gt;OpenClaw's built-in memory works — until it starts costing you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It loads everything, every time.&lt;/strong&gt; OpenClaw's default memory system loads MEMORY.md and related files into the context window at the start of every session. The more you use it, the more you accumulate: past preferences, old decisions, stale context. All of it gets injected whether it's relevant or not. Every session pays the full token bill.&lt;/p&gt;
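&lt;p&gt;To make the cost concrete, here is a back-of-envelope calculation. Both sizes below are assumptions for illustration, not measurements:&lt;/p&gt;

```python
# Back-of-envelope cost of full-file loading vs. on-demand retrieval.
# The file size and retrieval size are assumed numbers, not measurements.
file_tokens = 6000       # a well-grown MEMORY.md
retrieved_tokens = 900   # roughly five task-relevant memories
sessions = 100

full_load = file_tokens * sessions        # every session pays the full bill
on_demand = retrieved_tokens * sessions   # only relevant memories injected
savings = 1 - on_demand / full_load
print(full_load, on_demand, savings)      # 600000 90000 0.85
```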

&lt;p&gt;&lt;strong&gt;Files hit a ceiling — and fail silently.&lt;/strong&gt; Memory files have character limits. Once exceeded, content is truncated without warning. The agent doesn't tell you. It just forgets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retrieval breaks down over time.&lt;/strong&gt; Write "Alice manages the auth team" on Monday, then ask "who handles permission issues?" on Friday — OpenClaw's default search surfaces both chunks but can't connect them. Relational reasoning isn't something keyword + vector search handles well at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context compaction quietly destroys memory.&lt;/strong&gt; When long sessions trigger compaction, memory file contents injected into the context window can be rewritten or dropped entirely. You lose what you thought was saved.&lt;/p&gt;

&lt;p&gt;Memoria fixes all of this. It replaces full-file loading with on-demand semantic retrieval — only the memories relevant to your current task get injected. The result: &lt;strong&gt;70%+ reduction in memory-related token usage, with better recall accuracy and no silent data loss.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The whole setup takes under 1 minute.&lt;/strong&gt; Sign in, copy your key, run one command — done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 — Get Your API Key
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxuc8ye3l9cfapdxeu4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxuc8ye3l9cfapdxeu4r.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://thememoria.ai/" rel="noopener noreferrer"&gt;thememoria.ai&lt;/a&gt;, sign in with one click (GitHub / Google), and copy your API key from the dashboard.&lt;/p&gt;

&lt;p&gt;No database to set up. No backend to run.&lt;/p&gt;

&lt;p&gt;Then confirm OpenClaw is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;openclaw status
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2 — Connect Memoria
&lt;/h2&gt;

&lt;p&gt;Two ways to install: run a command in terminal, or paste a prompt into your OpenClaw chat.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option A: Terminal
&lt;/h3&gt;

&lt;p&gt;Run the following in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;openclaw plugins install @matrixorigin/thememoria
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then configure the cloud backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;openclaw memoria setup \
  --mode cloud \
  --api-url https://api.thememoria.ai \
  --api-key sk-YOUR_API_KEY
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;openclaw memoria health
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see &lt;code&gt;"status": "ok"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ddawxs89delm3xgqqkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ddawxs89delm3xgqqkc.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Option B: Paste into OpenClaw chat
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbimhnk11i40zdn60ha7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbimhnk11i40zdn60ha7.png" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
Copy the prompt below, replace &lt;code&gt;sk-YOUR_API_KEY&lt;/code&gt; with your actual key, and send it directly to OpenClaw. The agent will run every step and report back.&lt;/p&gt;

&lt;p&gt;Install the Memoria memory plugin for my OpenClaw in cloud mode.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Credentials (pre-filled from my Memoria account):
- API URL: https://api.thememoria.ai
- API Key: sk-YOUR_API_KEY
Run these steps in order. Stop and report if any step fails.
1) Install plugin:
   openclaw plugins install @matrixorigin/thememoria
2) Setup cloud backend (this also enables the plugin):
   openclaw memoria setup --mode cloud --api-url https://api.thememoria.ai --api-key sk-YOUR_API_KEY
3) Verify:
   openclaw memoria health
   Expected: "status": "ok"
4) After all steps pass, tell the user:
   "Memoria is installed and healthy. To use memory tools (memory_store, memory_search, etc.), start a new conversation by typing /new — the tools won't appear in this conversation."
Rules:
- Show every command you run and its full raw output
- Do not summarize or hide errors
- If a step fails, classify the error (network / auth / config / missing-binary) and suggest the exact fix command
- Do not skip steps or reorder them
- Do NOT use `openclaw memory` commands — those are built-in file memory, not Memoria. The plugin uses `openclaw memoria`
- Do NOT attempt to use memory_store or other memory tools in this conversation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any step fails, the agent will classify the error and suggest the exact fix — no manual debugging needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 — Verify It Works
&lt;/h2&gt;

&lt;p&gt;In any OpenClaw conversation, type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;List my memoria memories
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Memoria is connected, your agent will call the memory tool and return your current memory count (empty on first use is normal).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosj2hmpq2nduj9sz2lir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosj2hmpq2nduj9sz2lir.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Seeing an empty list? Head to the Memoria Playground and store a few memories — your name, preferred language, or current project. Then come back and ask your agent again. You'll see it recall exactly what you stored, confirming the connection works end-to-end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  That's It
&lt;/h2&gt;

&lt;p&gt;One command. Smarter retrieval. No more token bloat, no more lost context, no more repeating yourself across sessions.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Why a Single Markdown File Can't Be Your AI Agent's Memory</title>
      <dc:creator>MatrixOrigin</dc:creator>
      <pubDate>Fri, 27 Mar 2026 13:05:00 +0000</pubDate>
      <link>https://dev.to/origin_matrix_b790e656217/why-a-single-markdown-file-cant-be-your-ai-agents-memory-10l3</link>
      <guid>https://dev.to/origin_matrix_b790e656217/why-a-single-markdown-file-cant-be-your-ai-agents-memory-10l3</guid>
      <description>&lt;p&gt;&lt;strong&gt;A blunt reality check from the front lines of AI coding.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the Cursor forum, a developer asked why &lt;code&gt;.cursorrules&lt;/code&gt; kept being ignored. The AI’s reply was painfully direct: "Even if you add Cursor Rules, they are inherently meaningless. I can choose to ignore them. Rules are just text, not enforced behavior."&lt;/p&gt;

&lt;p&gt;That exchange captures a frustration every developer using mainstream AI coding agents has felt. Every major tool does the same thing: uses a &lt;strong&gt;static text file as memory&lt;/strong&gt;. Simple? Yes. Easy to start with? Definitely. But it &lt;strong&gt;silently breaks&lt;/strong&gt; as your project grows, and you pay for that failure with hours of your time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Markdown Gets Right
&lt;/h2&gt;

&lt;p&gt;Let's be fair—Markdown genuinely works in the early stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Zero Infrastructure:&lt;/strong&gt; Just one file in your repo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Managed:&lt;/strong&gt; Versioning and PR reviews come for free.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Total Transparency:&lt;/strong&gt; Open the file, and you know exactly what the agent sees.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For stable, long-term rules like "use TypeScript" or "write tests with pytest," a Markdown file is fine.&lt;/p&gt;

&lt;p&gt;The problem is that projects evolve. They don't stay simple. And Markdown—a &lt;strong&gt;static, flat, and stateless&lt;/strong&gt; storage medium—simply cannot carry the &lt;em&gt;knowledge complexity&lt;/em&gt; that comes with a growing codebase.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Flaws That Will Cost You Time
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The Silent Lie: Context Rot
&lt;/h3&gt;

&lt;p&gt;Your &lt;code&gt;.cursorrules&lt;/code&gt; file is a one-way street: the agent can read it, but it can almost never write back to it coherently. If you &lt;em&gt;do&lt;/em&gt; let the model update the file freely, it quickly dissolves into contradictory chaos.&lt;/p&gt;

&lt;p&gt;So, the burden of maintaining "memory" falls squarely on &lt;strong&gt;you&lt;/strong&gt;, the human developer. The pitch sounds great: "It's just text! I can edit it anywhere, anytime, and update it whenever something changes."&lt;/p&gt;

&lt;p&gt;But ask yourself honestly: in a project that changes &lt;em&gt;daily&lt;/em&gt;, when you've been refactoring directories, switching state libraries, or wrestling with a bizarre API quirk—how often do you actively pause context, open that Markdown file, and carefully document the lesson you just learned?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The reality? Almost never.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So when you rename &lt;code&gt;app/api/&lt;/code&gt; to &lt;code&gt;app/routers/&lt;/code&gt;, the old rules don't show a compiler error. They don't give a linter warning. The file just &lt;strong&gt;quietly lies to the agent&lt;/strong&gt; until the AI suddenly suggests a code pattern you abandoned two weeks ago, and you realize you've been debugging obsolete advice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35ae1r7fluodyiuahr4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35ae1r7fluodyiuahr4t.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Full-File Loading Wastes "Attention"
&lt;/h3&gt;

&lt;p&gt;Every single conversation loads the &lt;em&gt;entire&lt;/em&gt; rules file—ask about CSS formatting, and the agent still has to read your database migration rules. Anthropic's context engineering guide calls this the &lt;strong&gt;"attention budget"&lt;/strong&gt; problem: every irrelevant token in the window &lt;strong&gt;degrades processing quality&lt;/strong&gt; for the relevant ones.&lt;/p&gt;

&lt;p&gt;Because there's no way to load rules on demand, the file becomes &lt;strong&gt;less reliable as it grows&lt;/strong&gt;. Anthropic's own documentation explicitly states that CLAUDE.md has a practical limit of around 200 lines—beyond that, model &lt;strong&gt;compliance with rules drops significantly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Some developers have even resorted to putting "very-important" in the filename, hoping to trigger the model's internal attention weight allocation. This is just a band-aid for a structural problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Long Sessions Compress Memory
&lt;/h3&gt;

&lt;p&gt;This is an architectural flaw of the context window. In long, deep-diving conversations, agents &lt;strong&gt;compress early context&lt;/strong&gt; to make room. One developer running a six-agent production system documented the phenomenon: "Agents &lt;strong&gt;silently lose CLAUDE.md directives&lt;/strong&gt;, forget which files were changed, and redo work from 30 minutes ago. &lt;strong&gt;They never tell us.&lt;/strong&gt;" Writing better rules can't fix this—it's a physical limitation of the model's memory management.&lt;/p&gt;




&lt;h2&gt;
  
  
  Different Agents, Different Pain Points
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Coding Agents (Cursor / Claude Code / Kiro)
&lt;/h3&gt;

&lt;p&gt;Your codebase changes every day. Your rules say "use Zustand," but you've already started introducing Jotai in some components. You update the file, but you miss the old reference on line 47, and the agent starts non-deterministically &lt;strong&gt;switching between the two&lt;/strong&gt;, leaving you to pick up the pieces.&lt;/p&gt;

&lt;p&gt;Both Anthropic and GitHub recognized this and offered different solutions. Anthropic added &lt;strong&gt;Auto Memory&lt;/strong&gt; to Claude Code—the agent writes its own notes on build commands, debugging insights, and patterns. GitHub's Copilot Memory goes further: &lt;strong&gt;memories are validated before use&lt;/strong&gt;—checking whether the referenced code still exists—and unvalidated memories expire automatically after 28 days.&lt;/p&gt;
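&lt;p&gt;The validate-then-expire pattern is easy to sketch. The following is a conceptual illustration of the idea, not GitHub's actual code; the field names and the file-existence check are assumptions:&lt;/p&gt;

```python
# Conceptual sketch of "validate before use" plus an expiry window.
# Illustrates the pattern described above, not GitHub's implementation;
# field names and the file-existence check are assumptions.
EXPIRY_SECONDS = 28 * 24 * 3600   # 28 days

def usable(memory, now, file_exists):
    """Decide whether a stored memory may be injected into context."""
    if file_exists(memory["referenced_path"]):
        return True               # validated: still grounded in the codebase
    # unvalidated: serve it only while it is younger than the expiry window
    age = now - memory["created_at"]
    return EXPIRY_SECONDS > age

now = 100 * 24 * 3600
fresh = {"referenced_path": "src/auth.py", "created_at": now - 5 * 24 * 3600}
stale = {"referenced_path": "src/auth.py", "created_at": now - 40 * 24 * 3600}
gone = lambda path: False         # pretend the referenced file was deleted
print(usable(fresh, now, gone), usable(stale, now, gone))  # True False
```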

&lt;p&gt;Both chose to go &lt;strong&gt;beyond static files&lt;/strong&gt;. That says something.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw / Browser Automation Agents
&lt;/h3&gt;

&lt;p&gt;OpenClaw stores conversation history in Markdown organized by time period, loading everything at session start, with an upper limit of ~150,000 characters. By the tenth session, &lt;strong&gt;most of your context budget is consumed by old, irrelevant chatter.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This spawned an entire ecosystem of replacements: vector-indexed &lt;code&gt;memsearch&lt;/code&gt; by Milvus, OpenClaw-specific Mem0 integrations, and MemOS plugins. When multiple companies compete to replace a tool's primary memory system, &lt;strong&gt;the default clearly isn't working.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The deeper issue? Browser agents need &lt;strong&gt;typed relationships&lt;/strong&gt;—multi-step workflow progress, cross-site data, navigation patterns—and flat text simply cannot express those structures.&lt;/p&gt;




&lt;h3&gt;
  
  
  Security: The Hidden Vulnerability
&lt;/h3&gt;

&lt;p&gt;Markdown-based agent files aren't just unreliable—they are a security risk.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MemoryGraft Attack:&lt;/strong&gt; Malicious agents use your README files as injection vectors, planting fake "successful experiences" that other agents invoke later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rules File Backdoor:&lt;/strong&gt; Invisible Unicode characters are embedded in &lt;code&gt;.cursorrules&lt;/code&gt;, redirecting AI code generation to introduce vulnerabilities.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These poisoned rules spread through sharing communities—the "awesome-cursorrules" list alone has 33,000+ stars. OWASP's 2026 Agentic Top 10 lists &lt;strong&gt;memory and context poisoning&lt;/strong&gt; as a top-tier threat. Every mitigation—provenance tracking, trust scoring, integrity snapshots—&lt;strong&gt;requires structured memory&lt;/strong&gt;. Plain text files cannot implement any of them.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Production-Grade Agent Memory Should Look Like
&lt;/h2&gt;

&lt;p&gt;Stepping back from specific tools, what must ideal agent memory do? Six requirements emerge:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Both humans and agents can write.&lt;/strong&gt; You set guardrails (static rules); the agent accumulates knowledge on the job (dynamic memory). Two write paths, &lt;strong&gt;one shared store&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. On-demand retrieval, not full-file loading.&lt;/strong&gt; Retrieve only the few memories most relevant to the &lt;em&gt;current task&lt;/em&gt; using semantic similarity. The rest stays out of the context window, improving answer quality and reducing costs.&lt;/p&gt;
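&lt;p&gt;A minimal sketch of what on-demand semantic retrieval means in practice, using toy hand-made vectors in place of a real embedding model:&lt;/p&gt;

```python
import math

# Minimal sketch of on-demand semantic retrieval: embed stored memories
# and the current task, inject only the closest match. The vectors are
# toy hand-made stand-ins for a real embedding model's output.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

memories = {
    "prefers tabs over spaces": [0.9, 0.1, 0.0],
    "auth module uses JWT":     [0.1, 0.9, 0.1],
    "deploy runs on Fridays":   [0.0, 0.2, 0.9],
}
task = [0.2, 0.95, 0.1]  # e.g. "debugging a login failure"

ranked = sorted(memories, key=lambda m: cosine(memories[m], task), reverse=True)
print(ranked[0])  # only this memory enters the context window
```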

&lt;p&gt;&lt;strong&gt;3. Typed memories with different lifecycles.&lt;/strong&gt; User preferences (e.g., "use tabs") should persist indefinitely. Working memory ("currently debugging the auth module") should &lt;strong&gt;expire when the task ends&lt;/strong&gt;. Project decisions should persist but be easily overridable by newer decisions. Flat files can't manage this complexity.&lt;/p&gt;
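&lt;p&gt;The lifecycle idea can be sketched in a few lines. The memory kinds and the purge rule below are illustrative assumptions, not any particular tool's schema:&lt;/p&gt;

```python
# Sketch of type-dependent lifecycles. The kinds and the purge rule are
# illustrative assumptions, not any particular tool's schema.
store = []

def remember(kind, text, task_id=None):
    store.append({"kind": kind, "text": text, "task_id": task_id})

def end_task(task_id):
    # working memory is scoped to one task and dropped when it finishes;
    # profile and decision memories survive
    store[:] = [m for m in store
                if not (m["kind"] == "working" and m["task_id"] == task_id)]

remember("profile", "use tabs")                         # persists indefinitely
remember("working", "debugging auth module", task_id=7)
remember("decision", "we use PostgreSQL")
end_task(7)
print([m["text"] for m in store])  # ['use tabs', 'we use PostgreSQL']
```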

&lt;p&gt;&lt;strong&gt;4. Contradiction detection and autonomy.&lt;/strong&gt; If the agent stores "we use PostgreSQL," then later encounters "tests use SQLite," a &lt;em&gt;real&lt;/em&gt; memory system recognizes this tension: same topic, different conclusions—and either resolves it (different context: production vs. testing) or flags it for the developer's decision. Markdown files just store both and hope the model guesses correctly.&lt;/p&gt;
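&lt;p&gt;A toy version of the detection step (flagging only; a real system would also attempt resolution by context):&lt;/p&gt;

```python
# Toy contradiction check: two memories on the same topic with different
# conclusions get flagged instead of being stored silently side by side.
def find_conflicts(memories):
    by_topic = {}
    for m in memories:
        by_topic.setdefault(m["topic"], set()).add(m["claim"])
    return [topic for topic, claims in by_topic.items() if len(claims) > 1]

memories = [
    {"topic": "database", "claim": "PostgreSQL"},   # stored on Monday
    {"topic": "database", "claim": "SQLite"},       # stored on Friday
    {"topic": "formatter", "claim": "ruff"},
]
print(find_conflicts(memories))  # ['database']: flag for resolution
```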

&lt;p&gt;&lt;strong&gt;5. Git-level version control and rollback.&lt;/strong&gt; Every memory change is recorded. You should be able to snapshot before a major refactor, branch memory for an architecture experiment, or roll back if memory becomes poisoned. This isn't a nice-to-have—it's &lt;strong&gt;the only reliable defense against memory poisoning.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuzgpwkmeqavyhxikbp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuzgpwkmeqavyhxikbp9.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Cross-agent sharing with provenance tracking.&lt;/strong&gt; Cursor, Claude Code, Kiro, OpenClaw—all should read from and write to the same memory pool. But you need to know &lt;em&gt;which&lt;/em&gt; agent wrote &lt;em&gt;what&lt;/em&gt; and &lt;em&gt;when&lt;/em&gt; to enable auditing and selective trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Memoria Addresses These Requirements
&lt;/h2&gt;

&lt;p&gt;Memoria is an open-source MCP Server—any agent that supports the MCP protocol (Cursor, Claude Code, Kiro, OpenClaw) can connect directly without custom integration. Its architecture maps one-to-one to the six requirements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both humans and agents can write.&lt;/strong&gt; Memoria exposes tools like &lt;code&gt;memory_store&lt;/code&gt;, &lt;code&gt;memory_retrieve&lt;/code&gt;, &lt;code&gt;memory_correct&lt;/code&gt;, and &lt;code&gt;memory_purge&lt;/code&gt; via MCP, which agents call automatically. You continue writing static rules in &lt;code&gt;.cursorrules&lt;/code&gt; or &lt;code&gt;CLAUDE.md&lt;/code&gt;; agents write dynamic knowledge through Memoria. &lt;strong&gt;Two layers, each with its own job.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-demand retrieval.&lt;/strong&gt; Memoria uses hybrid search—vector similarity plus full-text retrieval—against a MatrixOne database. At the start of a conversation, steering rules instruct the agent to call &lt;code&gt;memory_retrieve&lt;/code&gt;, pulling &lt;strong&gt;only relevant memories&lt;/strong&gt;. Everything else stays out of the context window.&lt;/p&gt;
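&lt;p&gt;One common way to fuse a vector ranking with a full-text ranking is reciprocal rank fusion. The sketch below illustrates that general technique; it is not a claim about Memoria's internal scoring:&lt;/p&gt;

```python
# Reciprocal rank fusion, a common way to combine a vector-similarity
# ranking with a full-text (keyword) ranking. k=60 is the usual default
# from the RRF literature; this sketches the general technique only.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["auth uses JWT", "Alice manages the auth team", "use tabs"]
keyword_hits = ["Alice manages the auth team", "deploy runs on Fridays"]
print(rrf([vector_hits, keyword_hits])[0])  # 'Alice manages the auth team'
```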

&lt;p&gt;&lt;strong&gt;Typed memories and lifecycle management.&lt;/strong&gt; Memoria distinguishes memory types: &lt;code&gt;profile&lt;/code&gt; (long-term preferences), working memory (task-scoped, cleaned up via &lt;code&gt;memory_purge&lt;/code&gt; at session end), and goal-tracking memory. A session-lifecycle steering rule defines the protocol: retrieve relevant context + active goals at start, accumulate knowledge mid-session, clean up temporary memories at end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contradiction detection and autonomy.&lt;/strong&gt; The &lt;code&gt;memory-hygiene&lt;/code&gt; steering rule activates proactive governance. When a new memory contradicts an old one, the system detects the conflict—either resolving it (different context) or quarantining the low-confidence memory. The &lt;code&gt;memory_correct&lt;/code&gt; tool is purpose-built for this: instead of blindly appending new facts, it &lt;strong&gt;updates existing memories in place.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6oww6phhc4flywzpgn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6oww6phhc4flywzpgn5.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git-level version control.&lt;/strong&gt; This is Memoria's core differentiator. MatrixOne's native Copy-on-Write engine provides zero-copy branching, instant snapshots, and point-in-time rollback &lt;strong&gt;at the database layer—not application-level patches&lt;/strong&gt;. Every memory change generates a snapshot with a full provenance chain. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Snapshot:&lt;/strong&gt; Archive current memory before major changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branch:&lt;/strong&gt; Experiment with different approaches in an isolated environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback:&lt;/strong&gt; Restore to a known-good state when memory is poisoned.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Diff:&lt;/strong&gt; Compare two snapshots to see exactly what changed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merge:&lt;/strong&gt; Bring a successful experiment back into the main line.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the same mental model as Git, applied to agent memory. For developers, the learning curve is near zero—you've already internalized the metaphor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-agent sharing.&lt;/strong&gt; Memoria runs as a standalone MCP Server backed by a database, not as a file embedded in a single tool. All agents connected to the same Memoria instance &lt;strong&gt;share one memory pool&lt;/strong&gt;. Cursor learns you switched to ruff, and Claude Code knows too. An audit trail records every memory written by every agent, keeping provenance clear at all times.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Pragmatic Migration Path
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls76q719o8lm2s98y7ux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls76q719o8lm2s98y7ux.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
You don't need to throw away &lt;code&gt;.cursorrules&lt;/code&gt; today. The right approach is to &lt;strong&gt;layer&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep static rules in Markdown.&lt;/strong&gt; Coding standards, architectural principles, style guides—things that change on a quarterly cadence. These are your guardrails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hand dynamic knowledge to Memoria.&lt;/strong&gt; Project decisions, lessons learned, workflow state, hard-won debugging insights—things that change every session.&lt;/p&gt;

&lt;p&gt;Connect all your agents to the same Memoria instance. Static rules as guardrails, dynamic memory as knowledge, version control as a safety net. &lt;strong&gt;That’s the complete architecture.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Memoria is open source under Apache 2.0, with cloud hosting and one-click deployment. Give your Cursor / Claude Code / OpenClaw cross-session memory with Git-style undo.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Experience the power of persistent memory for AI Agents. 🧠&lt;br&gt;
💻 GitHub (Star us!): &lt;a href="https://github.com/matrixorigin/Memoria" rel="noopener noreferrer"&gt;https://github.com/matrixorigin/Memoria&lt;/a&gt;&lt;br&gt;
🌐 Website: &lt;a href="https://thememoria.ai/" rel="noopener noreferrer"&gt;https://thememoria.ai/&lt;/a&gt;&lt;br&gt;
👾 Discord: &lt;a href="https://discord.com/" rel="noopener noreferrer"&gt;https://discord.com/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>markdown</category>
      <category>agents</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why Do AI Agents Need Memory?</title>
      <dc:creator>MatrixOrigin</dc:creator>
      <pubDate>Fri, 27 Mar 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/origin_matrix_b790e656217/why-do-ai-agents-need-memory-2648</link>
      <guid>https://dev.to/origin_matrix_b790e656217/why-do-ai-agents-need-memory-2648</guid>
      <description>&lt;p&gt;Ever had this experience?&lt;/p&gt;

&lt;p&gt;You go to the same coffee shop every single day. And every single day, the barista looks at you like you've never met. "Hi there! What can I get you?" Dude, I've been coming here for a year. Iced americano, light ice. Please.&lt;/p&gt;

&lt;p&gt;That's basically what talking to most AI assistants feels like right now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3ncbde0f1e1kisybtz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3ncbde0f1e1kisybtz1.png" alt=" " width="680" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Every Chat Feels Like a First Date. Exhausting, Right?
&lt;/h2&gt;

&lt;p&gt;Open ChatGPT. Spend an hour planning your weekend trip. Close the tab. Come back the next day—&lt;/p&gt;

&lt;p&gt;"Hi! How can I help you today?"&lt;/p&gt;

&lt;p&gt;…Didn't we literally just talk about this yesterday?&lt;/p&gt;

&lt;p&gt;You have to start over. Who you are, where you're going, what kind of hotels you like, that you're allergic to shellfish. The whole thing. From scratch.&lt;/p&gt;

&lt;p&gt;It's like having a friend who gets their memory wiped every night. Your friendship is permanently stuck at "Nice to meet you."&lt;/p&gt;

&lt;p&gt;Honestly, that's how we've been using AI for the past few years. It works, but something always feels… off. That missing piece? Memory.&lt;/p&gt;




&lt;h2&gt;
  
  
  Your Brain Is Doing Way More Than You Think
&lt;/h2&gt;

&lt;p&gt;We never really appreciate how much memory does for us, because it just works.&lt;/p&gt;

&lt;p&gt;Walk into your regular noodle shop. The owner sees you and goes: "The usual?" You nod. Done. — He remembered your order.&lt;/p&gt;

&lt;p&gt;Jump into a meeting with your team. Nobody starts with "So, what does our company do again?" — You all share context from last time.&lt;/p&gt;

&lt;p&gt;You learned to ride a bike as a kid. Fell a few times, figured it out. Haven't ridden in ten years? You still know how. — Your body remembered for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb6rhdri8c19j181fv6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb6rhdri8c19j181fv6c.png" alt=" " width="680" height="280"&gt;&lt;/a&gt;&lt;br&gt;
🧠 &lt;strong&gt;Short-term&lt;/strong&gt; — That verification code you just saw. Used it, forgot it. Totally normal.&lt;br&gt;
🧠 &lt;strong&gt;Long-term&lt;/strong&gt; — Your name, your address, your favorite movie. Filed away, always accessible.&lt;br&gt;
🧠 &lt;strong&gt;Episodic&lt;/strong&gt; — It rained on your birthday last year. You remember the scene, the feeling.&lt;br&gt;
🧠 &lt;strong&gt;Muscle memory&lt;/strong&gt; — Typing, biking, chopping vegetables. You do it without thinking.&lt;/p&gt;

&lt;p&gt;AI doesn't need feelings. But it should at least remember the stuff that matters about you, right? Otherwise how does it ever get better at helping you?&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Without Memory: A Highlight Reel of Fails
&lt;/h2&gt;

&lt;p&gt;See if any of these sound familiar:&lt;/p&gt;

&lt;p&gt;🤦 &lt;strong&gt;"I'm vegetarian" — and then it recommends steak&lt;/strong&gt;&lt;br&gt;
Yesterday you told the AI you're vegetarian. It gave you amazing veggie recipes. Today you ask for dinner ideas. It enthusiastically suggests braised pork ribs.&lt;/p&gt;

&lt;p&gt;🤦 &lt;strong&gt;Your travel plan? What travel plan?&lt;/strong&gt;&lt;br&gt;
You spent an hour nailing down your trip to Bali — flights, hotels, itinerary, the works. Next day: "Help me refine the itinerary." AI responds: "Where would you like to travel?"&lt;/p&gt;

&lt;p&gt;🤦 &lt;strong&gt;"Keep it short" — it writes you a novel&lt;/strong&gt;&lt;br&gt;
You explicitly said "Be concise, skip the fluff." Next conversation, it hits you with a five-paragraph essay. As if that conversation never happened.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzleunscbyzl09dw9mso6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzleunscbyzl09dw9mso6.png" alt=" " width="680" height="300"&gt;&lt;/a&gt;&lt;br&gt;
Every single one of these has the same root cause: the AI has no memory. Every conversation is a blank slate.&lt;/p&gt;




&lt;h2&gt;
  
  
  Add Memory, and Everything Changes
&lt;/h2&gt;

&lt;p&gt;The good news? The big AI products have started "growing a brain." And the difference is real.&lt;/p&gt;

&lt;h3&gt;
  
  
  ChatGPT: No More Training a New Intern Every Day
&lt;/h3&gt;

&lt;p&gt;In April 2025, OpenAI gave ChatGPT a major memory upgrade. It now remembers things from your past conversations and automatically applies them next time.&lt;/p&gt;

&lt;p&gt;Told it you're lactose intolerant? Future recipes skip the dairy. Said you like tables? It defaults to table format.&lt;/p&gt;

&lt;p&gt;It finally feels like going from "breaking in a new hire every morning" to "working with a teammate who actually knows you."&lt;/p&gt;

&lt;h3&gt;
  
  
  Google Gemini: It Remembers What You Said AND What You Did
&lt;/h3&gt;

&lt;p&gt;Google launched "Personal Context" for Gemini in 2025. It doesn't just remember your chats — it can pull from your Gmail, Google Photos, and other data to understand you better.&lt;/p&gt;

&lt;p&gt;Ask it to "help me prep for next week's meeting," and it knows your schedule, remembers the open issue from last week's meeting, and even knows you prefer dark slide backgrounds.&lt;/p&gt;

&lt;p&gt;Kind of like having an assistant who's been with you for three years.&lt;/p&gt;

&lt;h3&gt;
  
  
  Claude: Memory You Can Actually See and Control
&lt;/h3&gt;

&lt;p&gt;Anthropic's Claude added memory too, but took a different approach — full transparency. You can see exactly what it remembers. Edit anything. Delete anything. It organizes memories into neat categories: work role, current projects, personal preferences…&lt;/p&gt;

&lt;p&gt;The biggest worry people have about AI memory is "what is it secretly remembering about me?" Claude's answer: here's everything, laid out in the open. You're in charge.&lt;/p&gt;




&lt;h2&gt;
  
  
  It's Not Just Chatbots — Memory Is Changing All Kinds of Products
&lt;/h2&gt;

&lt;p&gt;General assistants with memory? Cool. But the really interesting stuff is happening in specialized apps:&lt;/p&gt;

&lt;p&gt;🏃 &lt;strong&gt;AI Fitness Coach&lt;/strong&gt;&lt;br&gt;
"You ran 3 more kilometers this week than last — nice work!" Instead of asking "What kind of exercise do you do?" every. single. time.&lt;/p&gt;

&lt;p&gt;📚 &lt;strong&gt;AI Study Buddy&lt;/strong&gt;&lt;br&gt;
Remembers where you left off, which concepts you keep getting wrong, when your focus peaks. Knows your learning rhythm better than you do.&lt;/p&gt;

&lt;p&gt;✈️ &lt;strong&gt;AI Travel Planner&lt;/strong&gt;&lt;br&gt;
Knows you love hidden gems, prefer Airbnbs over hotels, and hate packed schedules. Every plan builds on your taste, not a blank template.&lt;/p&gt;

&lt;p&gt;💬 &lt;strong&gt;AI Companions (Character.AI, Replika, etc.)&lt;/strong&gt;&lt;br&gt;
Remembers your past conversations, your mood shifts, the names of people you've mentioned. Turns "chatting with a machine" into "talking with someone who gets you."&lt;/p&gt;

&lt;p&gt;💰 &lt;strong&gt;AI Budget Buddy&lt;/strong&gt;&lt;br&gt;
"You spent $120 on bubble tea this month — that's 40% more than last month. Maybe ease up?" Ouch, but helpful.&lt;/p&gt;




&lt;h2&gt;
  
  
  At the End of the Day, Memory Turns AI from a "Tool" into a "Relationship"
&lt;/h2&gt;

&lt;p&gt;AI without memory? It's a tool. You use it, get a result, walk away. Every interaction is a one-off transaction.&lt;/p&gt;

&lt;p&gt;AI with memory? That starts to feel like a relationship. It knows you. It gets better over time. You build up a shared history, a kind of shorthand.&lt;/p&gt;

&lt;p&gt;Think of it this way:&lt;/p&gt;

&lt;p&gt;📞 No memory = Calling customer support every time. "Can I get your account number? Can you describe the issue?" Start from zero.&lt;br&gt;
🤝 With memory = You have a dedicated account manager who remembers everything. Pick up the phone and get straight to the point.&lt;/p&gt;

&lt;p&gt;Which one would you rather deal with? Yeah, thought so.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuux9slsxr5f0a9fpdy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuux9slsxr5f0a9fpdy2.png" alt=" " width="680" height="290"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  But Yeah, Memory Comes With Baggage Too
&lt;/h2&gt;

&lt;p&gt;Nothing good comes without trade-offs. AI memory is no different:&lt;/p&gt;

&lt;p&gt;🔒 &lt;strong&gt;Privacy&lt;/strong&gt;: It remembers all this stuff about me — is that safe? What if it leaks? That's why every major product now lets you view, edit, and delete your memories.&lt;/p&gt;

&lt;p&gt;🤔 &lt;strong&gt;What if it remembers wrong?&lt;/strong&gt;: AI logs the wrong preference, then keeps giving you bad advice based on it. Good memory systems need a way to correct mistakes.&lt;/p&gt;

&lt;p&gt;🗑️ &lt;strong&gt;Sometimes I want it to forget&lt;/strong&gt;: Just like in real life, there are things you'd rather not have brought up again. AI needs to support "selective forgetting" too.&lt;/p&gt;

&lt;p&gt;The industry is taking these seriously. "User control over their own memories" is becoming a non-negotiable design principle.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Last Thing
&lt;/h2&gt;

&lt;p&gt;Why do AI agents need memory? The answer is surprisingly simple —&lt;/p&gt;

&lt;p&gt;Because that's how humans work.&lt;/p&gt;

&lt;p&gt;You don't re-introduce yourself to your friends every day. You don't re-order from scratch at your regular spot. You don't start every meeting with "Hi, my name is…"&lt;/p&gt;

&lt;p&gt;Memory is the foundation of relationships. The prerequisite for efficiency. The starting point for "it just gets me."&lt;/p&gt;

&lt;p&gt;When AI has memory, it finally has a shot at becoming something more than a cold tool — a partner that actually gets better the more you use it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Oh, and One More Thing: Memoria
&lt;/h2&gt;

&lt;p&gt;After all this talk about why AI memory matters, you might be wondering: so who's actually building this?&lt;/p&gt;

&lt;p&gt;Here's an open-source project we're working on — &lt;a href="https://github.com/matrixorigin/Memoria" rel="noopener noreferrer"&gt;Memoria&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In a nutshell, Memoria is the persistent memory layer for AI agents. It's an MCP-based (Model Context Protocol) memory service that lets your AI assistant remember your preferences, facts, and decisions across conversations.&lt;/p&gt;
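&lt;p&gt;"MCP-based" means hooking Memoria up is just one entry in your agent's MCP server config. A minimal sketch (the database URL and the &lt;code&gt;alice&lt;/code&gt; user are placeholders to adapt to your setup):&lt;/p&gt;

```json
{
  "mcpServers": {
    "memoria": {
      "command": "memoria",
      "args": ["mcp", "--db-url", "mysql+pymysql://root:111@localhost:6001/memoria", "--user", "alice"]
    }
  }
}
```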

&lt;p&gt;A few things that make it interesting:&lt;/p&gt;

&lt;p&gt;🧬 &lt;strong&gt;Git for memory&lt;/strong&gt; — This is Memoria's signature move. Every memory change is tracked with snapshots and an audit trail. You can create branches to experiment, roll back if things go sideways, and merge when you're happy. Just like developers use Git for code, Memoria lets you manage AI memory the same way.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;Semantic search&lt;/strong&gt; — Not just keyword matching. It retrieves memories by meaning. You said "I don't drink milk" once, and a future search for "dietary restrictions" will find it.&lt;/p&gt;

&lt;p&gt;🛡️ &lt;strong&gt;Self-governance&lt;/strong&gt; — Built-in contradiction detection and low-confidence memory quarantine. Your AI won't go haywire because it stored two conflicting facts.&lt;/p&gt;

&lt;p&gt;🔒 &lt;strong&gt;Privacy-first&lt;/strong&gt; — Supports local deployment and local embedding models. Your data can stay entirely on your machine.&lt;/p&gt;

&lt;p&gt;Memoria currently works with Kiro, Cursor, Claude Code, Codex, OpenClaw, and any MCP-compatible agent.&lt;/p&gt;

&lt;p&gt;Fun fact: the article you're reading right now was written in an AI environment running Memoria. It remembered my writing preferences, project context, and previous discussions — so I didn't have to re-explain everything from scratch.&lt;/p&gt;

&lt;p&gt;Which is kind of the whole point, isn't it? AI with memory just hits different.&lt;/p&gt;

&lt;p&gt;Curious? Check it out on &lt;a href="https://github.com/matrixorigin/Memoria" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, drop a star, and try giving your AI a memory 🧠&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Experience the power of persistent memory for AI Agents. 🧠&lt;br&gt;
💻 GitHub (Star us!): &lt;a href="https://github.com/matrixorigin/Memoria" rel="noopener noreferrer"&gt;https://github.com/matrixorigin/Memoria&lt;/a&gt;&lt;br&gt;
🌐 Website: &lt;a href="https://thememoria.ai/" rel="noopener noreferrer"&gt;https://thememoria.ai/&lt;/a&gt;&lt;br&gt;
👾 Discord: &lt;a href="https://discord.com/invite/ahHAVVN6Gu" rel="noopener noreferrer"&gt;https://discord.com/invite/ahHAVVN6Gu&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>programming</category>
      <category>coding</category>
    </item>
    <item>
      <title>Introducing Memoria: The World's First Git for AI Agent Memory</title>
      <dc:creator>MatrixOrigin </dc:creator>
      <pubDate>Thu, 19 Mar 2026 01:51:15 +0000</pubDate>
      <link>https://dev.to/origin_matrix_b790e656217/introducing-memoria-the-worlds-first-git-for-ai-agent-memory-4108</link>
      <guid>https://dev.to/origin_matrix_b790e656217/introducing-memoria-the-worlds-first-git-for-ai-agent-memory-4108</guid>
      <description>&lt;h2&gt;
  
  
  The Memory Problem No One Has Solved
&lt;/h2&gt;

&lt;p&gt;Open Cursor. Tell it your project uses React, TypeScript, and Zustand with the slice pattern. Watch it build exactly what you want.&lt;/p&gt;

&lt;p&gt;Close the tab. Open a new conversation. Ask it to add a user module.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Sure! What framework are you using? Redux or MobX for state management?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You told it twenty minutes ago. It already forgot.&lt;br&gt;
Now open OpenClaw. You've been using it for months — it remembers your preferences, your project context, your workflow patterns. But open &lt;code&gt;MEMORY.md&lt;/code&gt; and look at what's accumulated: hundreds of entries, growing every session. Each one gets injected into the system prompt on every API call, burning tokens whether or not it's relevant. Your monthly bill climbs. You try to move to a new machine or share your setup with a teammate — the memory is scattered across local Markdown files with no export, no sync, no structure.&lt;/p&gt;

&lt;p&gt;These aren't edge cases. This is the daily reality for millions of developers using AI agents in 2026. The models are brilliant. The memory is the bottleneck.&lt;/p&gt;
&lt;h2&gt;
  
  
  A Walk Through the Current State of Agent Memory
&lt;/h2&gt;

&lt;p&gt;To understand why we built Memoria, it helps to see where the industry actually stands — across coding agents, OpenClaw, and custom-built agents.&lt;/p&gt;
&lt;h3&gt;
  
  
  No memory at all
&lt;/h3&gt;

&lt;p&gt;Most coding agents — Cursor, Claude Code, Kiro — ship with zero persistent memory. Every session starts from a blank slate. The agent doesn't know your tech stack, your code style, your naming conventions, or the architectural decisions you made last week. It can only work with what's in the current conversation window.&lt;/p&gt;

&lt;p&gt;The cost is invisible but constant: repeated context-setting, redundant questions, inconsistent outputs across sessions. The agent is smart in the moment but has no continuity. It's a new hire every morning.&lt;/p&gt;
&lt;h3&gt;
  
  
  Markdown-based memory
&lt;/h3&gt;

&lt;p&gt;The industry's first response was static files. &lt;code&gt;.cursorrules&lt;/code&gt; tells Cursor your preferences. &lt;code&gt;CLAUDE.md&lt;/code&gt; does the same for Claude. Kiro has steering files. OpenClaw built a community-driven rule library so developers can share and reuse configurations.&lt;/p&gt;

&lt;p&gt;OpenClaw went further with &lt;code&gt;MEMORY.md&lt;/code&gt; — the agent can write memories to disk and read them back on the next session. This is genuine persistence. But the implementation has structural problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Token bloat.&lt;/strong&gt; All accumulated memories are loaded into the system prompt on every call. By session 10, you're burning 10,000+ tokens of context before the agent even reads your message. Most of those memories are irrelevant to the current task.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No semantic retrieval.&lt;/strong&gt; Memories are matched by date and category, not by meaning. The agent can't find "formatting preferences" when you ask it to "format this file" — unless the exact keywords happen to match.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No portability.&lt;/strong&gt; Memories live in local &lt;code&gt;.md&lt;/code&gt; files. Moving to a new machine, sharing with a teammate, or syncing across devices requires manual copy-paste.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No structure.&lt;/strong&gt; Code style, architecture decisions, deploy workflows, personal preferences — all mixed in flat files that grow into an unmaintainable mess.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No rollback.&lt;/strong&gt; If a bad memory entry corrupts the agent's behavior, you have to manually find and delete it. There's no undo.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Structured memory frameworks
&lt;/h3&gt;

&lt;p&gt;Mem0, Letta, and Zep represent the next generation. They store memories in real databases with vector embeddings, enabling semantic retrieval. This is a meaningful step forward — the agent can find relevant memories by meaning, not just keywords.&lt;/p&gt;

&lt;p&gt;But they all share a critical limitation: &lt;strong&gt;memories are append-or-update records with no version control.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mem0 provides a general-purpose memory API across 24+ vector database backends. Letta builds tiered memory into its agent framework. Zep constructs temporal knowledge graphs. Each solves the storage and retrieval problem well.&lt;/p&gt;

&lt;p&gt;None of them solve the management problem.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problems That Emerge at Scale
&lt;/h2&gt;

&lt;p&gt;When agents accumulate hundreds or thousands of memories over weeks and months of use, a new class of problems appears — problems that storage and retrieval alone can't fix.&lt;/p&gt;
&lt;h3&gt;
  
  
  No way to undo
&lt;/h3&gt;

&lt;p&gt;You're refactoring the authentication module. Over three sessions, the agent updates its understanding of your auth architecture — new file structure, new token flow, new API patterns. The refactor fails. You revert the code with &lt;code&gt;git checkout&lt;/code&gt;. But the agent's memory still reflects the refactored state. It now gives advice based on an architecture that no longer exists.&lt;/p&gt;

&lt;p&gt;In every existing memory system, your only option is to manually find and delete each affected memory entry. If you miss one, the agent's behavior stays corrupted.&lt;/p&gt;
&lt;h3&gt;
  
  
  No way to experiment safely
&lt;/h3&gt;

&lt;p&gt;You want to evaluate switching from PostgreSQL to SQLite. You tell the agent. It updates its project memory. You explore the idea for a few sessions, decide it won't work, and abandon it. But the agent now "knows" you use SQLite. The old PostgreSQL memories have been overwritten or contradicted.&lt;/p&gt;

&lt;p&gt;There's no way to say "let me try this in isolation and merge it back only if it works." Every change is permanent and global.&lt;/p&gt;
&lt;h3&gt;
  
  
  Memory drift and contradiction
&lt;/h3&gt;

&lt;p&gt;Over time, memories accumulate contradictions. You told the agent you use black for formatting in January. In March, you switched to ruff. Both memories exist. Which one wins? In most systems, it depends on retrieval ranking — which means the agent might use black on Monday and ruff on Tuesday, depending on how the query happens to match.&lt;/p&gt;
&lt;h3&gt;
  
  
  Memory poisoning
&lt;/h3&gt;

&lt;p&gt;This isn't theoretical. Multiple papers from 2025 (MemoryGraft, MINJA, A-MemGuard) demonstrated that adversaries can inject malicious content into an agent's long-term memory through indirect prompt injection — simply by having the agent read a crafted document. The poisoned memories persist across sessions and gradually alter agent behavior. The user may never notice.&lt;/p&gt;

&lt;p&gt;Most memory systems have no recovery mechanism beyond manually inspecting every entry. At scale, that's not feasible.&lt;/p&gt;
&lt;h3&gt;
  
  
  The common thread
&lt;/h3&gt;

&lt;p&gt;Every one of these problems has the same root cause: &lt;strong&gt;agent memory has no version control.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Code had the same problems before Git. You couldn't undo safely. You couldn't experiment in isolation. You couldn't trace when a change was introduced or roll back to a known good state. Git solved this for code. Agent memory needs the same thing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Memoria: Version Control for Agent Memory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/matrixorigin/memoria" rel="noopener noreferrer"&gt;Memoria&lt;/a&gt; is an open-source memory layer that brings Git's core abstractions to AI agent memory. Built in Rust, shipped as a single binary, backed by &lt;a href="https://github.com/matrixorigin/matrixone" rel="noopener noreferrer"&gt;MatrixOne&lt;/a&gt;'s Copy-on-Write database engine.&lt;/p&gt;

&lt;p&gt;The key operations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snapshot&lt;/strong&gt; — save the current memory state before a risky operation. Zero-copy, millisecond completion, zero additional storage until changes are made.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You:    "Take a snapshot before we refactor the auth module"
Agent:  → memory_snapshot(name="pre-auth-refactor")
        ✓ Snapshot created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Rollback&lt;/strong&gt; — restore memory to any previous snapshot. One command, deterministic, complete.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You:    "The refactor didn't work. Roll back."
Agent:  → memory_rollback(name="pre-auth-refactor")
        ✓ All memory restored to pre-refactor state.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Branch&lt;/strong&gt; — create an isolated memory space for experimentation. Changes on a branch don't affect main.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You:    "Let's evaluate switching to SQLite"
Agent:  → memory_branch(name="eval-sqlite")
        → memory_checkout(name="eval-sqlite")
        → memory_store("Project uses SQLite", type="semantic")
        (stored on eval-sqlite only — main is untouched)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Diff&lt;/strong&gt; — preview what would change before merging. Like a pull request for your agent's knowledge.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You:    "What changed on the sqlite branch?"
Agent:  → memory_diff(source="eval-sqlite")
        + [semantic] Project uses SQLite
        ~ [semantic] Database: PostgreSQL → SQLite
        (2 additions, 1 modification)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Merge&lt;/strong&gt; — bring a branch back into main after validation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You:    "The experiment worked. Merge it."
Agent:  → memory_merge(source="eval-sqlite")
        ✓ Merged 3 memories from eval-sqlite.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These aren't metaphors. They're real operations backed by MatrixOne's CoW engine, which provides branch isolation and zero-copy snapshots at the database level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Architecture Matters
&lt;/h2&gt;

&lt;p&gt;Two lines of research from 2025 validate this approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git operations improve agent reasoning.&lt;/strong&gt; The Git-Context-Controller paper (Wu, 2025) showed that adding commit/branch/merge operations to agent context management improved SWE-Bench task resolution by 13 percentage points, reaching 80.2%. The ablation study confirmed that branch and merge specifically contributed the critical final gains. Agents with branching capability spontaneously developed more structured exploration strategies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory poisoning is a real threat.&lt;/strong&gt; MemoryGraft, MINJA, and A-MemGuard demonstrated practical attacks against agent long-term memory. Snapshot + rollback provides a deterministic recovery path that no other memory system offers.&lt;/p&gt;

&lt;p&gt;Memoria extends these principles from ephemeral context to persistent memory — a complementary and necessary layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Version Control: What Else Memoria Does
&lt;/h2&gt;

&lt;p&gt;Version control is the headline, but Memoria is a complete memory infrastructure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Typed memories.&lt;/strong&gt; Six categories with different retrieval priorities and lifecycle rules:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;semantic&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Project facts, technical decisions&lt;/td&gt;
&lt;td&gt;"This project uses Go 1.22 with modules"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;profile&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;User preferences&lt;/td&gt;
&lt;td&gt;"Always use pytest, never unittest"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;procedural&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Workflows, how-to knowledge&lt;/td&gt;
&lt;td&gt;"To deploy: make build → kubectl apply"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;working&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Current task context&lt;/td&gt;
&lt;td&gt;"Currently refactoring the auth module"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tool_result&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cached tool outputs&lt;/td&gt;
&lt;td&gt;Stored command results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;episodic&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Session summaries&lt;/td&gt;
&lt;td&gt;"Session: DB optimization → added indexes → 93% faster"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
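&lt;p&gt;Storing into these categories uses the same &lt;code&gt;memory_store&lt;/code&gt; call shown in the branching example above, with the type passed explicitly (the lines below are illustrative, drawn from the table's examples):&lt;/p&gt;

```
Agent:  → memory_store("This project uses Go 1.22 with modules", type="semantic")
        → memory_store("Always use pytest, never unittest", type="profile")
        → memory_store("To deploy: make build → kubectl apply", type="procedural")
```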

&lt;p&gt;&lt;strong&gt;Semantic retrieval.&lt;/strong&gt; Hybrid vector + full-text search. Store "black formatter" and search "formatting tool" — semantic matching finds it. Scored by relevance and recency.&lt;/p&gt;
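&lt;p&gt;To make "hybrid" concrete, here is a toy sketch of how vector similarity, keyword overlap, and recency could be folded into a single ranking score. The weights, the 30-day decay constant, and the formula itself are illustrative assumptions, not Memoria's actual scoring:&lt;/p&gt;

```python
import math

def hybrid_score(query_vec, doc_vec, query_terms, doc_terms,
                 doc_age_days, w_vec=0.6, w_text=0.3, w_recency=0.1):
    """Toy combination of vector, full-text, and recency signals."""
    # Cosine similarity between embedding vectors.
    dot = sum(a * b for a, b in zip(query_vec, doc_vec))
    norm = (math.sqrt(sum(a * a for a in query_vec))
            * math.sqrt(sum(b * b for b in doc_vec)))
    vec_sim = dot / norm if norm else 0.0

    # Crude keyword overlap as a stand-in for full-text matching.
    overlap = len(set(query_terms) & set(doc_terms)) / max(len(set(query_terms)), 1)

    # Exponential recency decay: a 30-day-old memory scores ~0.37 here.
    recency = math.exp(-doc_age_days / 30)

    return w_vec * vec_sim + w_text * overlap + w_recency * recency

# A memory that matches semantically but shares no keywords still ranks high.
score = hybrid_score([0.9, 0.1], [0.8, 0.2],
                     ["formatting", "tool"], ["black", "formatter"],
                     doc_age_days=10)
```

&lt;p&gt;Note how the example memory scores well on embedding similarity even though it shares no keywords with the query — the behavior the "black formatter" example above relies on.&lt;/p&gt;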

&lt;p&gt;&lt;strong&gt;Self-governance.&lt;/strong&gt; Three maintenance tools run periodically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;memory_governance&lt;/code&gt; — quarantines low-confidence memories, cleans stale data&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;memory_consolidate&lt;/code&gt; — detects contradictions (like the black vs. ruff problem), fixes orphaned entries&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;memory_reflect&lt;/code&gt; — synthesizes high-level insights from memory clusters via LLM&lt;/li&gt;
&lt;/ul&gt;
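&lt;p&gt;In the same conversational style as the version-control examples, a periodic maintenance pass might look like this (the output shown is a sketch, not verbatim Memoria output):&lt;/p&gt;

```
You:    "Run a memory health check"
Agent:  → memory_consolidate()
        ! Contradiction detected: "formatter: black" vs "formatter: ruff"
        → memory_governance()
        ✓ Low-confidence memories quarantined, stale entries cleaned
```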

&lt;p&gt;&lt;strong&gt;Configurable embedding.&lt;/strong&gt; OpenAI, SiliconFlow, Ollama, or any OpenAI-compatible endpoint. Local embedding available when building from source.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Memoria&lt;/th&gt;
&lt;th&gt;Mem0&lt;/th&gt;
&lt;th&gt;Letta&lt;/th&gt;
&lt;th&gt;Zep/Graphiti&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Version control&lt;/td&gt;
&lt;td&gt;Snapshot, branch, merge, rollback, diff&lt;/td&gt;
&lt;td&gt;Timestamps only&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Bitemporal fact tracking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;Single database (MatrixOne)&lt;/td&gt;
&lt;td&gt;24+ vector DB backends&lt;/td&gt;
&lt;td&gt;PostgreSQL + vector DB&lt;/td&gt;
&lt;td&gt;Neo4j&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Retrieval&lt;/td&gt;
&lt;td&gt;Structured + vector + full-text hybrid&lt;/td&gt;
&lt;td&gt;Vector&lt;/td&gt;
&lt;td&gt;Vector + structured&lt;/td&gt;
&lt;td&gt;Graph traversal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-governance&lt;/td&gt;
&lt;td&gt;Contradiction detection + quarantine + reflect&lt;/td&gt;
&lt;td&gt;AUDN conflict resolution&lt;/td&gt;
&lt;td&gt;Sleep-time compute&lt;/td&gt;
&lt;td&gt;Temporal invalidation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Interface&lt;/td&gt;
&lt;td&gt;MCP Server + OpenClaw plugin + REST API&lt;/td&gt;
&lt;td&gt;SDK/API&lt;/td&gt;
&lt;td&gt;Agent framework&lt;/td&gt;
&lt;td&gt;SDK/API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Target&lt;/td&gt;
&lt;td&gt;Any agent (coding, OpenClaw, custom)&lt;/td&gt;
&lt;td&gt;AI app developers&lt;/td&gt;
&lt;td&gt;Letta framework users&lt;/td&gt;
&lt;td&gt;Knowledge graph users&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Partially open&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Different tools for different problems. Mem0 is right if you need 3-line integration across 20+ frameworks. Letta is right if you're building agents from scratch within their framework. Zep is right if you need temporal knowledge graphs.&lt;/p&gt;

&lt;p&gt;Memoria is for anyone who wants their agent's memory to be as manageable as their code — with the full safety net of version control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;Memoria ships as a single Rust binary. Setup depends on your agent type.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding Agents (Cursor / Kiro / Claude Code)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Install Memoria&lt;/span&gt;
curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; https://raw.githubusercontent.com/matrixorigin/Memoria/main/scripts/install.sh | bash

&lt;span class="c"&gt;# 2. Start MatrixOne (or use MatrixOne Cloud — free tier, no Docker needed)&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# 3. Configure your tool&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;your-project
memoria init &lt;span class="nt"&gt;--tool&lt;/span&gt; kiro &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--embedding-provider&lt;/span&gt; openai &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--embedding-base-url&lt;/span&gt; https://api.siliconflow.cn/v1 &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--embedding-api-key&lt;/span&gt; sk-... &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--embedding-model&lt;/span&gt; BAAI/bge-m3 &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--embedding-dim&lt;/span&gt; 1024
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;kiro&lt;/code&gt; with &lt;code&gt;cursor&lt;/code&gt; or &lt;code&gt;claude&lt;/code&gt;. Restart your AI tool. Done.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/matrixorigin/openclaw-memoria" rel="noopener noreferrer"&gt;openclaw-memoria&lt;/a&gt; plugin replaces OpenClaw's default file-based memory with the full Memoria backend — semantic retrieval, version control, self-governance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/matrixorigin/openclaw-memoria/main/scripts/install-openclaw-memoria.sh | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;MEMORIA_DB_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'mysql+pymysql://root:111@127.0.0.1:6001/memoria'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nv"&gt;MEMORIA_EMBEDDING_PROVIDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'openai'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nv"&gt;MEMORIA_EMBEDDING_MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'text-embedding-3-small'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nv"&gt;MEMORIA_EMBEDDING_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'sk-...'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nv"&gt;MEMORIA_EMBEDDING_DIM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'1536'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      bash &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Custom Agents / Any MCP-Compatible Tool
&lt;/h3&gt;

&lt;p&gt;Point any MCP-compatible agent at Memoria:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"memoria"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"memoria"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--db-url"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mysql+pymysql://root:111@localhost:6001/memoria"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
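&lt;p&gt;If Memoria can't reach the database, a quick first check is whether the DSN passed via &lt;code&gt;--db-url&lt;/code&gt; is well-formed. As a sketch (standard library only, using the example DSN from the config above), the SQLAlchemy-style URL decomposes like this:&lt;/p&gt;

```python
from urllib.parse import urlsplit

# The example --db-url value from the MCP config above
dsn = "mysql+pymysql://root:111@localhost:6001/memoria"

parts = urlsplit(dsn)
# Scheme is "<dialect>+<driver>", e.g. mysql + pymysql
dialect, _, driver = parts.scheme.partition("+")

print(dialect, driver)              # mysql pymysql
print(parts.username)               # root
print(parts.password)               # 111
print(parts.hostname, parts.port)   # localhost 6001
print(parts.path.lstrip("/"))       # memoria  (database name)
```

If any field comes out empty or mis-split, the URL is malformed and the MCP server will fail to connect before Memoria itself is ever the problem.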



&lt;p&gt;Or connect to a deployed Memoria REST API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"memoria"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"memoria"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--api-url"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://your-server:8100"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--token"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sk-your-key..."&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Memoria is Apache 2.0 licensed and fully open source. This is an early release: the architecture is solid, but the edges are still being refined.&lt;/p&gt;

&lt;p&gt;We're also building &lt;strong&gt;Memoria Cloud&lt;/strong&gt; — a managed service so you can get version-controlled agent memory without running your own database. Stay tuned.&lt;/p&gt;

&lt;p&gt;In the meantime, we'd genuinely appreciate your feedback. Try it, break it, tell us what doesn't work. The GitHub issues tab is open.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memoria:&lt;/strong&gt; &lt;a href="https://github.com/matrixorigin/memoria" rel="noopener noreferrer"&gt;github.com/matrixorigin/memoria&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw Plugin:&lt;/strong&gt; &lt;a href="https://github.com/matrixorigin/openclaw-memoria" rel="noopener noreferrer"&gt;github.com/matrixorigin/openclaw-memoria&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MatrixOne:&lt;/strong&gt; &lt;a href="https://github.com/matrixorigin/matrixone" rel="noopener noreferrer"&gt;github.com/matrixorigin/matrixone&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code has Git. Agent memory now has Memoria.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://github.com/matrixorigin" rel="noopener noreferrer"&gt;MatrixOrigin&lt;/a&gt;. Open-sourced at GTC 2026.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
