<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nam Nguyễn</title>
    <description>The latest articles on DEV Community by Nam Nguyễn (@nhadaututheky).</description>
    <link>https://dev.to/nhadaututheky</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3805729%2F342acc57-b495-4c7c-8eb0-50a80d07bd06.jpg</url>
      <title>DEV Community: Nam Nguyễn</title>
      <link>https://dev.to/nhadaututheky</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nhadaututheky"/>
    <language>en</language>
    <item>
      <title>Neural Memory: How Spreading Activation Gives AI Agents a Real Memory System</title>
      <dc:creator>Nam Nguyễn</dc:creator>
      <pubDate>Wed, 04 Mar 2026 17:42:44 +0000</pubDate>
      <link>https://dev.to/nhadaututheky/neural-memory-how-spreading-activation-gives-ai-agents-a-real-memory-system-4gpf</link>
      <guid>https://dev.to/nhadaututheky/neural-memory-how-spreading-activation-gives-ai-agents-a-real-memory-system-4gpf</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97p2qmiobjm8r784v7ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97p2qmiobjm8r784v7ad.png" alt=" " width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Every AI coding session starts from zero. You explain your project architecture, your conventions, your past decisions — and the AI forgets all of it when the session ends.&lt;/p&gt;

&lt;p&gt;Most solutions reach for RAG: embed text into vectors, search by similarity, return chunks. It works for document retrieval, but it's a poor model for &lt;em&gt;memory&lt;/em&gt;. When you remember something, you don't search a database — you &lt;em&gt;associate&lt;/em&gt;. One thought triggers another, which triggers another, until the relevant memory surfaces.&lt;/p&gt;

&lt;h2&gt;A Different Approach: Neural Graphs&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/nhadaututtheky/neural-memory" rel="noopener noreferrer"&gt;Neural Memory&lt;/a&gt; stores memories as a graph of typed neurons connected by typed synapses:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;outage ← CAUSED_BY ← JWT_decision ← SUGGESTED_BY ← Alice ← DECIDED_AT ← Tuesday_meeting
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you ask "why did the outage happen?", it doesn't just find text containing "outage." It activates the outage neuron, and activation spreads through the graph following synapse weights. You get the full causal chain — not just the closest text match.&lt;/p&gt;
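&lt;p&gt;A toy version of that traversal, using plain dicts and illustrative names rather than the library's actual data model:&lt;/p&gt;

```python
# Minimal sketch of a typed memory graph (illustrative only;
# Neural Memory's real neuron/synapse model is richer than this).
graph = {
    "outage":       [("CAUSED_BY", "JWT_decision")],
    "JWT_decision": [("SUGGESTED_BY", "Alice")],
    "Alice":        [("DECIDED_AT", "Tuesday_meeting")],
}

def follow_chain(node, graph):
    """Walk typed edges from a node, collecting each (relation, target) hop."""
    path = [node]
    while node in graph:
        relation, node = graph[node][0]
        path.append(f"{relation} -> {node}")
    return path

# Asking "why did the outage happen?" walks the whole causal chain:
print(follow_chain("outage", graph))
```

The point of the typed edges is that the answer arrives as a chain of reasons, not a bag of similar-sounding sentences.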

&lt;h3&gt;RAG vs Spreading Activation&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;RAG / Vector Search&lt;/th&gt;
&lt;th&gt;Neural Memory&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Model&lt;/td&gt;
&lt;td&gt;Search engine&lt;/td&gt;
&lt;td&gt;Human brain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLM/Embedding&lt;/td&gt;
&lt;td&gt;Required&lt;/td&gt;
&lt;td&gt;Optional — core recall is pure graph traversal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;"Find similar text"&lt;/td&gt;
&lt;td&gt;"Recall through association"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Relationships&lt;/td&gt;
&lt;td&gt;None (just similarity)&lt;/td&gt;
&lt;td&gt;Explicit: &lt;code&gt;CAUSED_BY&lt;/code&gt;, &lt;code&gt;LEADS_TO&lt;/code&gt;, &lt;code&gt;RESOLVED_BY&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-hop&lt;/td&gt;
&lt;td&gt;Multiple queries&lt;/td&gt;
&lt;td&gt;Natural graph traversal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API Cost&lt;/td&gt;
&lt;td&gt;~$0.02/1K queries&lt;/td&gt;
&lt;td&gt;$0.00 — fully offline&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;h3&gt;1. Encoding&lt;/h3&gt;

&lt;p&gt;When you tell the AI to remember something, Neural Memory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracts entities, keywords, temporal markers&lt;/li&gt;
&lt;li&gt;Creates typed neurons (ENTITY, CONCEPT, ACTION, TEMPORAL, etc.)&lt;/li&gt;
&lt;li&gt;Creates typed synapses between them (24 relationship types)&lt;/li&gt;
&lt;li&gt;Groups related neurons into a Fiber (episodic memory bundle)&lt;/li&gt;
&lt;/ul&gt;
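&lt;p&gt;The encoding steps above can be sketched in a few lines; the extraction rules and type names here are deliberately naive stand-ins for the real pipeline:&lt;/p&gt;

```python
import re
from dataclasses import dataclass, field

# Hypothetical sketch of the encoding step. The real library extracts far
# more (keywords, temporal markers, 24 synapse types); this just shows the
# shape: text in, typed neurons plus synapses out, bundled into a Fiber.

@dataclass
class Neuron:
    label: str
    kind: str              # e.g. ENTITY, CONCEPT, TEMPORAL

@dataclass
class Fiber:               # an episodic bundle of related neurons
    neurons: list = field(default_factory=list)
    synapses: list = field(default_factory=list)   # (src, type, dst) triples

def encode(text):
    fiber = Fiber()
    # Naive extraction: weekdays become TEMPORAL, capitalized words ENTITY.
    for word in re.findall(r"[A-Za-z_]+", text):
        if word in ("Monday", "Tuesday", "Wednesday", "Thursday", "Friday"):
            fiber.neurons.append(Neuron(word, "TEMPORAL"))
        elif word[0].isupper():
            fiber.neurons.append(Neuron(word, "ENTITY"))
    # Link consecutive neurons with a generic RELATES_TO synapse.
    for a, b in zip(fiber.neurons, fiber.neurons[1:]):
        fiber.synapses.append((a.label, "RELATES_TO", b.label))
    return fiber

fiber = encode("Alice suggested the JWT change at the Tuesday meeting")
```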

&lt;h3&gt;2. Retrieval (Spreading Activation)&lt;/h3&gt;

&lt;p&gt;When you recall:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Seed activation&lt;/strong&gt;: neurons matching your query get initial activation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spreading&lt;/strong&gt;: activation propagates through synapses, weighted by strength&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decay&lt;/strong&gt;: activation decreases with each hop (configurable)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Threshold&lt;/strong&gt;: only neurons above threshold are included in results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context assembly&lt;/strong&gt;: top-activated neurons are assembled into a coherent response&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This naturally handles multi-hop queries. "Who suggested the thing that caused the outage?" follows the chain without explicit graph queries.&lt;/p&gt;
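&lt;p&gt;A compact sketch of steps 1-4; the parameter names and default values here are illustrative, not the library's actual configuration:&lt;/p&gt;

```python
# Spreading activation with hop decay and a threshold cutoff.
def spread(seeds, synapses, decay=0.5, threshold=0.1, hops=3):
    """seeds: neuron -> initial activation; synapses: neuron -> [(neighbor, weight)]."""
    activation = dict(seeds)                   # step 1: seed activation
    frontier = dict(seeds)
    for _ in range(hops):
        nxt = {}
        for node, act in frontier.items():
            for neighbor, weight in synapses.get(node, []):
                a = act * weight * decay       # steps 2-3: spread, decayed per hop
                if a > activation.get(neighbor, 0.0):
                    nxt[neighbor] = a
        activation.update(nxt)
        frontier = nxt
    # step 4: only neurons above threshold make it into the result
    return {n: a for n, a in activation.items() if a >= threshold}

synapses = {
    "outage": [("JWT_decision", 0.9)],
    "JWT_decision": [("Alice", 0.8)],
    "Alice": [("Tuesday_meeting", 0.7)],
}
result = spread({"outage": 1.0}, synapses)
```

With these toy numbers, activation reaches Alice two hops out but has decayed below threshold by the time it hits the meeting node, which is exactly the behavior you want: close associations surface, distant ones fade.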

&lt;h3&gt;3. Consolidation&lt;/h3&gt;

&lt;p&gt;Memories have a lifecycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decay&lt;/strong&gt;: unused synapses weaken over time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reinforcement&lt;/strong&gt;: recalled memories get stronger&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pruning&lt;/strong&gt;: orphan neurons (no connections) get cleaned up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merging&lt;/strong&gt;: duplicate information gets consolidated&lt;/li&gt;
&lt;/ul&gt;
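&lt;p&gt;A toy model of that lifecycle. The rates are made up, and for brevity pruning here drops near-zero synapses, while the real system also removes orphan neurons:&lt;/p&gt;

```python
# Decay unused synapses, reinforce recalled ones, prune what falls
# below the floor. Illustrative rates, not Neural Memory's defaults.
def consolidate(synapses, recalled, decay_rate=0.9, floor=0.05):
    """synapses: dict of (src, dst) -> weight; recalled: set of recently used edges."""
    out = {}
    for edge, weight in synapses.items():
        if edge in recalled:
            weight = min(1.0, weight + 0.1)    # reinforcement, capped at 1.0
        else:
            weight = weight * decay_rate       # decay
        if weight >= floor:                    # prune weak synapses
            out[edge] = weight
    return out

synapses = {("outage", "JWT_decision"): 0.9, ("foo", "bar"): 0.05}
synapses = consolidate(synapses, recalled={("outage", "JWT_decision")})
# The recalled edge is strengthened; the unused one decays below the
# floor and disappears.
```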

&lt;h2&gt;28 MCP Tools&lt;/h2&gt;

&lt;p&gt;Neural Memory exposes 28 tools via the &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_remember&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Store a memory with automatic extraction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_recall&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Retrieve memories through spreading activation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_context&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Load recent memories at session start&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_explain&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Show WHY two concepts are connected (BFS path)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_habits&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Detect recurring patterns in your workflow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_consolidate&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run memory lifecycle (decay, prune, merge)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_health&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Health diagnostics with actionable recommendations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;nmem_session&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Save/restore session state&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Plus 20 more for brain management, import/export, training, and diagnostics.&lt;/p&gt;
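&lt;p&gt;Under the hood, a tool surface like this is essentially a name-to-handler dispatch table. The stub below mimics the shape of two tools from the table; the handlers are placeholders, and the real server speaks the Model Context Protocol over stdio with handlers backed by the memory graph:&lt;/p&gt;

```python
# Hypothetical dispatch-table sketch; handler bodies are placeholders.
def nmem_remember(text):
    return {"stored": text}

def nmem_recall(query):
    return {"memories": [f"results for: {query}"]}

TOOLS = {"nmem_remember": nmem_remember, "nmem_recall": nmem_recall}

def dispatch(tool_name, **kwargs):
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

reply = dispatch("nmem_recall", query="why did the outage happen?")
```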

&lt;h2&gt;Quick Start&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;neural-memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Claude Code (Plugin)&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/plugin marketplace add nhadaututtheky/neural-memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Manual MCP Config&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"neural-memory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"neural-memory"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Optional: Cross-Language Embeddings&lt;/h3&gt;

&lt;p&gt;Core recall works without embeddings; enable them when you want cross-language search:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# ~/.neuralmemory/config.toml&lt;/span&gt;
&lt;span class="nn"&gt;[embedding]&lt;/span&gt;
&lt;span class="py"&gt;enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ollama"&lt;/span&gt;          &lt;span class="c"&gt;# or sentence_transformer, gemini, openai&lt;/span&gt;
&lt;span class="py"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"nomic-embed-text"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
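&lt;p&gt;Why embeddings help here: two phrasings in different languages land near each other in vector space, so cosine similarity can pick seed neurons that exact keyword matching would miss. The vectors below are made up for illustration:&lt;/p&gt;

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vec_en = [0.9, 0.1, 0.2]   # toy embedding of "the outage"
vec_vi = [0.8, 0.2, 0.3]   # toy embedding of the Vietnamese phrasing
print(round(cosine(vec_en, vec_vi), 3))   # high similarity despite zero shared words
```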



&lt;h2&gt;By the Numbers&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;3,150+ tests&lt;/strong&gt;, 68% coverage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;v2.25.0&lt;/strong&gt;, production-stable since v2.10&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;11 memory types&lt;/strong&gt;, 24 synapse types, schema v20&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python 3.11+&lt;/strong&gt;, async via aiosqlite&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MIT license&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard&lt;/strong&gt;: FastAPI + React web UI for visualization&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/nhadaututtheky/neural-memory" rel="noopener noreferrer"&gt;https://github.com/nhadaututtheky/neural-memory&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://nhadaututtheky.github.io/neural-memory/" rel="noopener noreferrer"&gt;https://nhadaututtheky.github.io/neural-memory/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI&lt;/strong&gt;: &lt;a href="https://pypi.org/project/neural-memory/" rel="noopener noreferrer"&gt;https://pypi.org/project/neural-memory/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Neural Memory is open source and contributions are welcome. The spreading activation approach is particularly interesting if you've worked with cognitive architectures (ACT-R, Soar) — it's the same theoretical foundation applied to AI agent memory.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>mcp</category>
    </item>
  </channel>
</rss>
