<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sachit mishra</title>
    <description>The latest articles on DEV Community by sachit mishra (@sachit_mishra_686a94d1bb5).</description>
    <link>https://dev.to/sachit_mishra_686a94d1bb5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3825049%2F3521a777-1735-4830-b205-0d8c2156afdb.jpg</url>
      <title>DEV Community: sachit mishra</title>
      <link>https://dev.to/sachit_mishra_686a94d1bb5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sachit_mishra_686a94d1bb5"/>
    <language>en</language>
    <item>
      <title>I built memory decay for AI agents using the Ebbinghaus forgetting curve</title>
      <dc:creator>sachit mishra</dc:creator>
      <pubDate>Sun, 15 Mar 2026 08:35:25 +0000</pubDate>
      <link>https://dev.to/sachit_mishra_686a94d1bb5/i-built-memory-decay-for-ai-agents-using-the-ebbinghaus-forgetting-curve-1b0e</link>
      <guid>https://dev.to/sachit_mishra_686a94d1bb5/i-built-memory-decay-for-ai-agents-using-the-ebbinghaus-forgetting-curve-1b0e</guid>
      <description>&lt;p&gt;Most AI memory systems treat every fact equally, forever. That felt wrong to me.&lt;/p&gt;

&lt;p&gt;If you tell Claude you use React, then six weeks later say you switched to Vue, both facts exist in memory with the same weight. The system has no way to know which one is current. You either manually delete the old one or the model gets confused.&lt;/p&gt;

&lt;p&gt;That's the problem I was trying to solve.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;YourMemory is an MCP memory server that applies the Ebbinghaus forgetting curve to retrieval. Memories decay over time based on importance and how often they're recalled. Frequently accessed memories stay strong. Memories you never revisit fade out and get pruned automatically.&lt;/p&gt;

&lt;p&gt;The retrieval score is:&lt;/p&gt;

&lt;p&gt;score = cosine_similarity × Ebbinghaus_strength&lt;/p&gt;

&lt;p&gt;strength = importance × e^(−λ_eff × days) × (1 + recall_count × 0.2)&lt;br&gt;
λ_eff = 0.16 × (1 − importance × 0.8)&lt;/p&gt;

&lt;p&gt;So results rank by both relevance and recency, not just one.&lt;/p&gt;
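&lt;p&gt;As a minimal sketch, the scoring above looks like this in plain Python (the constants are the ones quoted; the function names are mine):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import math

def effective_decay_rate(importance):
    # Higher-importance memories decay more slowly.
    return 0.16 * (1 - importance * 0.8)

def ebbinghaus_strength(importance, days, recall_count):
    # Exponential forgetting, reinforced by how often the memory is recalled.
    decay = math.exp(-effective_decay_rate(importance) * days)
    return importance * decay * (1 + recall_count * 0.2)

def retrieval_score(cosine_similarity, importance, days, recall_count):
    # Relevance and recency both matter: a slightly less similar but
    # fresher memory can outrank a stale near-duplicate.
    return cosine_similarity * ebbinghaus_strength(importance, days, recall_count)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For example, a memory with importance 0.5 that hasn't been recalled in 30 days retains only about 6% of its day-zero strength, so even a strong cosine match for it ranks below fresher facts.&lt;/p&gt;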
&lt;h2&gt;Benchmark results&lt;/h2&gt;

&lt;p&gt;I ran it against Mem0 on the LoCoMo dataset from Snap Research: 200 QA pairs across 10 multi-month conversation samples.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;YourMemory&lt;/th&gt;
&lt;th&gt;Mem0&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LoCoMo Recall@5&lt;/td&gt;
&lt;td&gt;34%&lt;/td&gt;
&lt;td&gt;18%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stale memory precision&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The stale memory result is the one I keep thinking about. Both systems had the same importance scores. The only difference was time. Decay handled it automatically; no manual deletion needed.&lt;/p&gt;
&lt;h2&gt;How it works in practice&lt;/h2&gt;

&lt;p&gt;Three MCP tools: &lt;code&gt;recall_memory&lt;/code&gt;, &lt;code&gt;store_memory&lt;/code&gt;, and &lt;code&gt;update_memory&lt;/code&gt;. Add it to your Claude &lt;code&gt;settings.json&lt;/code&gt; and it persists context across sessions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"yourmemory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"yourmemory"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude then follows a recall → store → update workflow on every task. Memories it surfaces frequently get reinforced. Memories it never touches decay.&lt;/p&gt;

&lt;h2&gt;Stack&lt;/h2&gt;

&lt;p&gt;PostgreSQL + pgvector for vector storage&lt;br&gt;
Ollama for local embeddings (nomic-embed-text), no API costs&lt;br&gt;
FastAPI for the REST layer&lt;br&gt;
APScheduler for the automatic 24h decay job&lt;/p&gt;
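
&lt;p&gt;The 24h decay job can be sketched as a pure-Python pruning pass (an illustration, not the server's actual code; the 0.05 threshold and the record fields are my assumptions — in the real stack this would run against the PostgreSQL table on the APScheduler interval):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import math

PRUNE_THRESHOLD = 0.05  # assumed cutoff; the real value may differ

def strength(mem, days_since_access):
    # Same Ebbinghaus formula as retrieval, recomputed at decay time.
    lam = 0.16 * (1 - mem["importance"] * 0.8)
    boost = 1 + mem["recall_count"] * 0.2
    return mem["importance"] * math.exp(-lam * days_since_access) * boost

def decay_pass(memories, today):
    # Keep memories still above the prune threshold; the rest have
    # faded out and would be deleted from the store.
    return [
        m for m in memories
        if strength(m, today - m["last_access_day"]) &gt;= PRUNE_THRESHOLD
    ]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A frequently recalled, high-importance memory survives this pass indefinitely, while a low-importance one that hasn't been touched for months drops below the threshold and gets pruned.&lt;/p&gt;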

&lt;h2&gt;Where it's at&lt;/h2&gt;

&lt;p&gt;Early stage. The core decay model is solid. Setup is still a bit manual (Docker + Ollama required). Would love feedback on the decay parameters and whether this approach holds up at scale.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/sachitrafa/cognitive-ai-memory" rel="noopener noreferrer"&gt;https://github.com/sachitrafa/cognitive-ai-memory&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full benchmark methodology: &lt;a href="https://github.com/sachitrafa/cognitive-ai-memory/blob/main/BENCHMARKS.md" rel="noopener noreferrer"&gt;https://github.com/sachitrafa/cognitive-ai-memory/blob/main/BENCHMARKS.md&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>opensource</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
