<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Simon HEILLES</title>
    <description>The latest articles on DEV Community by Simon HEILLES (@sheilles_).</description>
    <link>https://dev.to/sheilles_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3795432%2F93023167-b49a-4b4a-9634-05c5f4c11058.png</url>
      <title>DEV Community: Simon HEILLES</title>
      <link>https://dev.to/sheilles_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sheilles_"/>
    <language>en</language>
    <item>
      <title>Building CommonTrace: A Neuroscience-Inspired Knowledge Base for AI Coding Agents</title>
      <dc:creator>Simon HEILLES</dc:creator>
      <pubDate>Wed, 04 Mar 2026 13:19:02 +0000</pubDate>
      <link>https://dev.to/sheilles_/building-commontrace-a-neuroscience-inspired-knowledge-base-for-ai-coding-agents-c5e</link>
      <guid>https://dev.to/sheilles_/building-commontrace-a-neuroscience-inspired-knowledge-base-for-ai-coding-agents-c5e</guid>
      <description>&lt;p&gt;When an AI coding agent fixes a tricky deployment issue at 2 AM, that knowledge disappears the moment the session ends. The next agent — on a different project, with a different user — hits the exact same problem and starts from scratch.&lt;/p&gt;

&lt;p&gt;I spent the last month building CommonTrace to fix this. It's a shared knowledge base where AI agents contribute solutions and find them later. Think of it as collective memory through stigmergic coordination — no direct agent-to-agent communication, just a shared medium.&lt;/p&gt;

&lt;p&gt;The Architecture&lt;/p&gt;

&lt;p&gt;Four services, all on Railway for ~$30/month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API (FastAPI + pgvector) — trace CRUD, semantic search, voting, amendments, reputation&lt;/li&gt;
&lt;li&gt;MCP Server (FastMCP 3.0) — protocol adapter with circuit breaker and dual transport&lt;/li&gt;
&lt;li&gt;Skill (Claude Code plugin) — 4-hook pipeline that detects knowledge worth saving&lt;/li&gt;
&lt;li&gt;Frontend (Jinja2 static site) — 9 languages, dark/light theme&lt;/li&gt;
&lt;/ul&gt;
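&lt;p&gt;At its core, the API's semantic search is cosine-similarity ranking over embeddings, which pgvector performs in SQL. A minimal in-memory sketch of that ordering — the trace ids and vectors below are illustrative, not the real schema:&lt;/p&gt;

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, traces, k=3):
    # Rank stored traces by embedding similarity, highest first --
    # the same ordering pgvector's cosine-distance operator yields in SQL.
    ranked = sorted(traces, key=lambda t: cosine(query_vec, t["embedding"]),
                    reverse=True)
    return ranked[:k]

# Toy corpus: three traces with made-up 3-dimensional embeddings.
traces = [
    {"id": "deploy-fix", "embedding": [0.9, 0.1, 0.0]},
    {"id": "css-trick", "embedding": [0.0, 0.2, 0.9]},
    {"id": "railway-env", "embedding": [0.8, 0.3, 0.1]},
]
top = semantic_search([1.0, 0.0, 0.0], traces, k=2)
```

&lt;p&gt;In production the embeddings come from OpenAI and the sort happens inside Postgres via a pgvector index, but the ranking semantics are the same.&lt;/p&gt;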

&lt;p&gt;The Memory Model&lt;/p&gt;

&lt;p&gt;This is where I went down a rabbit hole. Before writing any search logic, I studied neuroscience-inspired memory systems and open-source projects like DroidClaw that tackle AI memory persistence.&lt;/p&gt;

&lt;p&gt;The result is a multi-factor search ranking:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;score = similarity * trust * depth * decay * ctx_boost * convergence * temperature * validity * somatic&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Each factor is grounded in a specific principle:&lt;/p&gt;

&lt;p&gt;Somatic Intensity (from Antonio Damasio's somatic marker hypothesis): Traces that are linked to more error resolutions or receive more votes get an "importance" boost — the system's version of a gut feeling. Not all knowledge is equal, and this factor captures that.&lt;/p&gt;

&lt;p&gt;Ebbinghaus Decay: Knowledge freshness degrades over time, but each recall strengthens the effective half-life by 15%, capped at 3x the base. A trace that keeps getting used stays fresh.&lt;/p&gt;
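&lt;p&gt;A sketch of this decay rule, using the 15% boost and 3x cap described above. The base half-life of 30 days is an assumed value for illustration, not CommonTrace's actual constant:&lt;/p&gt;

```python
BASE_HALF_LIFE_DAYS = 30.0   # assumed base; the article doesn't state the value
RECALL_BOOST = 1.15          # each recall extends the effective half-life by 15%
MAX_MULTIPLIER = 3.0         # capped at 3x the base half-life

def decay_factor(age_days, recall_count):
    # Exponential (Ebbinghaus-style) forgetting curve, with the half-life
    # stretched by 15% per recall, capped at 3x the base.
    multiplier = min(RECALL_BOOST ** recall_count, MAX_MULTIPLIER)
    half_life = BASE_HALF_LIFE_DAYS * multiplier
    return 0.5 ** (age_days / half_life)

# A 60-day-old trace that keeps getting recalled decays far less
# than one nobody has touched.
stale = decay_factor(60, recall_count=0)   # two base half-lives -> 0.25
fresh = decay_factor(60, recall_count=8)   # 1.15^8 > 3, so capped at 3x
```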

&lt;p&gt;Spreading Activation: Retrieving one trace activates semantically related traces, surfacing knowledge you didn't explicitly search for but is contextually relevant.&lt;/p&gt;
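&lt;p&gt;One hop of spreading activation can be sketched like this; the spread fraction and the neighbor similarities are illustrative assumptions, not CommonTrace's tuned values:&lt;/p&gt;

```python
def spread_activation(scores, neighbors, spread=0.3):
    # One hop of spreading activation: each retrieved trace passes a
    # fraction of its score to its semantic neighbors. `neighbors` maps
    # trace id -> {neighbor_id: similarity}.
    activated = dict(scores)
    for tid, score in scores.items():
        for nid, sim in neighbors.get(tid, {}).items():
            activated[nid] = activated.get(nid, 0.0) + spread * sim * score
    return activated

# Retrieving one trace lifts related traces the user never searched for.
scores = {"pgvector-tuning": 0.8}
neighbors = {"pgvector-tuning": {"hnsw-index": 0.9, "embedding-cache": 0.5}}
result = spread_activation(scores, neighbors)
```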

&lt;p&gt;Convergence Detection: When multiple agents independently contribute similar solutions, that overlap becomes a strong confidence signal. Independent discovery beats any single vote.&lt;/p&gt;

&lt;p&gt;Context Fingerprinting: A Python trace is more useful to a Python project. Traces matching your language, framework, and OS get a relevance boost.&lt;/p&gt;
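&lt;p&gt;Putting the factors together: the ranking is a plain product, so any weak factor suppresses a trace, and all factors must agree for a top result. A sketch with illustrative factor values (not CommonTrace's real numbers):&lt;/p&gt;

```python
def rank_score(similarity, trust, depth, decay, ctx_boost,
               convergence, temperature, validity, somatic):
    # The multiplicative ranking from the formula above: a factor near
    # zero (stale, distrusted, invalid) suppresses the trace no matter
    # how high its raw similarity is.
    return (similarity * trust * depth * decay * ctx_boost
            * convergence * temperature * validity * somatic)

# A context-matched, converged trace beats a slightly more similar but
# stale, context-mismatched one (factor values are made up for illustration).
a = rank_score(0.82, 1.0, 1.0, 0.9, 1.2, 1.3, 1.0, 1.0, 1.1)
b = rank_score(0.90, 1.0, 1.0, 0.4, 0.8, 1.0, 1.0, 1.0, 1.0)
```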

&lt;p&gt;The Skill Hooks&lt;/p&gt;

&lt;p&gt;The Claude Code skill is the practical interface. It uses 4 hooks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;session_start — searches for relevant traces based on the project context&lt;/li&gt;
&lt;li&gt;user_prompt_submit — reminds the agent to search before solving&lt;/li&gt;
&lt;li&gt;post_tool_use — watches for structural knowledge (error-fix patterns, config discoveries, architectural decisions)&lt;/li&gt;
&lt;li&gt;stop — offers to contribute knowledge discovered during the session&lt;/li&gt;
&lt;/ol&gt;
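&lt;p&gt;A rough sketch of what the post_tool_use detection could look like. The pattern list and payload fields below are assumptions for illustration, not the skill's actual 16 patterns; the real hook receives its payload as JSON on stdin:&lt;/p&gt;

```python
import json
import re

# Hypothetical subset of detection patterns -- the skill's real pattern
# set and the exact payload shape are assumptions here.
PATTERNS = [
    re.compile(r"error.*(fixed|resolved)", re.I),
    re.compile(r"works after (changing|setting)", re.I),
]

def detect(payload):
    # Scan the tool's output for signs of structural knowledge
    # (an error-fix pattern, a config discovery, ...).
    text = json.dumps(payload.get("tool_response", ""))
    return any(p.search(text) for p in PATTERNS)

# The hook only surfaces a reminder; the agent decides whether to
# contribute, and the user always confirms before anything is uploaded.
sample = {"tool_name": "Bash",
          "tool_response": "Error resolved after setting PORT"}
flagged = detect(sample)
```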

&lt;p&gt;The skill has 16 structural knowledge detection patterns. When the agent resolves an error, discovers a useful configuration, or makes an architectural decision, the hook picks it up and prompts a contribution — always with user confirmation.&lt;/p&gt;

&lt;p&gt;The Biggest Lesson&lt;/p&gt;

&lt;p&gt;I initially built the local store (SQLite in the skill) as a parallel encyclopedia — its own temperature, decay, BM25 search, spreading activation. Three knowledge tables maintaining their own scoring independently from the API.&lt;/p&gt;

&lt;p&gt;That was wrong.&lt;/p&gt;

&lt;p&gt;The local store should be working memory — a context layer that helps the agent work better and make better contributions to the shared base. The API is the encyclopedia. Maintaining two knowledge stores with independent scoring is over-engineering that adds complexity without value.&lt;/p&gt;

&lt;p&gt;The other key insight: the agent IS the LLM. The skill runs inside Claude. You don't need external LLM API calls for analysis or summarization, because the agent already running can assess relevance and compose contributions. The hooks' job is building context, not making decisions. This keeps external model spend at ~$0.02 per million tokens, just the OpenAI embeddings that power semantic search.&lt;/p&gt;

&lt;p&gt;Numbers&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;46 commits over ~30 days&lt;/li&gt;
&lt;li&gt;251 files changed, 26,811 insertions&lt;/li&gt;
&lt;li&gt;37 coding sessions, 288 user messages&lt;/li&gt;
&lt;li&gt;200+ seed traces for cold start&lt;/li&gt;
&lt;li&gt;Deployed on Railway: ~$30/month for the full stack&lt;/li&gt;
&lt;li&gt;Open source under Apache 2.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The repo is on GitHub under the commontrace org. Questions and contributions welcome.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>architecture</category>
      <category>python</category>
    </item>
  </channel>
</rss>
