thebasedcapital
Why I Ditched RAG for Hebbian Synapses (and My Agent Actually Got Faster)

Every agent memory system I tried follows the same playbook: embed text into vectors, store them in a database, cosine search on recall. It works for storing facts. But my Claude Code agent kept doing the same dumb thing — grepping for auth.ts every single session, even though I open it 10 times a day.

The problem: RAG gives agents declarative memory (facts). It doesn't give them procedural memory (learned behavior). No amount of vector embeddings will teach an agent that auth.ts and session.ts always go together.

The neuroscience answer

Real brains don't embed memories into vectors. They form synaptic connections through repeated co-activation. Donald Hebb described this in 1949: "neurons that fire together wire together."

I built BrainBox to apply this to coding agents.

How it works

BrainBox has three primitives:

Neurons — represent files, tools, and errors. Created automatically when your agent interacts with them.

Synapses — connections between neurons. When you access auth.ts then session.ts within the same session, a synapse forms. Access them together 10 more times and the synapse strengthens.

Myelination — real neurons wrap frequently-used axons in myelin sheaths for faster signal propagation. BrainBox does the same: once a synapse crosses a threshold, it becomes a "superhighway" — instant recall, maximum confidence.
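The three primitives can be sketched in a few lines. This is a minimal illustration of the fire-together-wire-together loop, not BrainBox's actual API: the class, field names, learning rate, and myelination threshold are all assumptions for the example.

```typescript
// Illustrative sketch of neurons/synapses/myelination (not the real API).
type Synapse = { weight: number; myelinated: boolean };

const MYELINATION_THRESHOLD = 0.8; // assumed cutoff for "superhighway" status

class Brain {
  private synapses = new Map<string, Synapse>();

  private key(a: string, b: string): string {
    return [a, b].sort().join("::"); // undirected: auth.ts<->session.ts is one synapse
  }

  // Called when two resources are touched in the same session:
  // "neurons that fire together wire together".
  coActivate(a: string, b: string, learningRate = 0.1): Synapse {
    const k = this.key(a, b);
    const syn = this.synapses.get(k) ?? { weight: 0, myelinated: false };
    // Saturating update: step size shrinks as weight approaches 1.
    syn.weight = Math.min(1, syn.weight + learningRate * (1 - syn.weight));
    if (syn.weight >= MYELINATION_THRESHOLD) syn.myelinated = true;
    this.synapses.set(k, syn);
    return syn;
  }
}

const brain = new Brain();
let syn: Synapse = { weight: 0, myelinated: false };
for (let i = 0; i < 20; i++) syn = brain.coActivate("auth.ts", "session.ts");
console.log(syn.weight.toFixed(2), syn.myelinated);
```

With these assumed parameters, about 16 co-activations are enough to cross the threshold, which matches the spirit of "access them together 10 more times and the synapse strengthens."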

The math (simplified)

Synapse strengthening uses SNAP sigmoid plasticity:

```
delta = learning_rate * sigmoid_gain(current_weight)
```

Where sigmoid_gain makes strong synapses resist further strengthening (prevents any single connection from dominating). This mirrors real synaptic saturation.
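One plausible reading of this update, as a sketch: the exact `sigmoid_gain` shape, steepness, and midpoint below are my assumptions (the whitepaper may define them differently), but the behavior matches the description: high gain for weak synapses, near-zero gain near saturation.

```typescript
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

// Assumed gain shape: falls from ~1 toward 0 as the weight saturates.
function sigmoidGain(weight: number, steepness = 6, midpoint = 0.5): number {
  return 1 - sigmoid(steepness * (weight - midpoint));
}

function strengthen(weight: number, learningRate = 0.1): number {
  const delta = learningRate * sigmoidGain(weight);
  return Math.min(1, weight + delta);
}

console.log(strengthen(0.1).toFixed(3)); // weak synapse: large step
console.log(strengthen(0.9).toFixed(3)); // strong synapse: tiny step
```

The clamp at 1 plus the vanishing gain is what keeps any single connection from dominating the graph.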

Spreading activation (Collins & Loftus, 1975):

When you recall auth.ts, BrainBox activates it, then propagates activation to connected neurons through 2-hop BFS. Each hop decays by 1/sqrt(degree) (Anderson's fan effect). Files 2 hops away get weaker activation than direct neighbors.
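A sketch of that 2-hop spread, assuming a weighted adjacency-map graph (the data structure and function name are illustrative, not BrainBox internals):

```typescript
// node -> (neighbor -> synapse weight)
type Graph = Map<string, Map<string, number>>;

function spreadActivation(graph: Graph, source: string, hops = 2): Map<string, number> {
  const activation = new Map<string, number>([[source, 1.0]]);
  let frontier = new Map<string, number>([[source, 1.0]]);

  for (let hop = 0; hop < hops; hop++) {
    const next = new Map<string, number>();
    for (const [node, act] of frontier) {
      const neighbors = graph.get(node);
      if (!neighbors) continue;
      const fan = 1 / Math.sqrt(neighbors.size); // Anderson's fan effect
      for (const [neighbor, weight] of neighbors) {
        const incoming = act * weight * fan;
        // Keep the strongest activation path to each neuron.
        if (incoming > (activation.get(neighbor) ?? 0)) {
          activation.set(neighbor, incoming);
          next.set(neighbor, incoming);
        }
      }
    }
    frontier = next;
  }
  return activation;
}
```

Because each hop multiplies by both the synapse weight and the fan decay, a file two hops out always ends up with weaker activation than a direct neighbor reached over the same synapses.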

Decay follows Ebbinghaus forgetting curves — unused connections weaken naturally:

  • Activation: -15% per cycle
  • Synapses: -2% per cycle
  • Myelination: -0.5% per cycle

Results from production

After 5 hours of real Claude Code usage:

  • 79 neurons, 3,554 synapses formed
  • 67% top-1 recall accuracy (the first file recalled was the right one)
  • 8.9% gross token savings (fewer grep/search operations)
  • <5ms recall latency
  • 3 superhighways formed naturally

Install

```
npm install brainbox-hebbian
```

Auto-configures Claude Code hooks. Works with any MCP agent. Full whitepaper in the repo.

GitHub: github.com/thebasedcapital/brainbox

Why not both?

BrainBox is complementary to RAG, not a replacement. RAG handles L2 (declarative facts). BrainBox handles L3 (behavioral patterns). The 4-layer memory model:

| Layer | What | Examples |
| --- | --- | --- |
| L1 | Buffer | Context window, chat history |
| L2 | Declarative | Mem0, Zep, SuperMemory (facts/conversations) |
| L3 | Procedural | BrainBox (behavior patterns, muscle memory) |
| L4 | Identity | Personal style, values, habits |
L3 was empty until now. That's the gap BrainBox fills.


Full algorithm details in WHITEPAPER.md — covers SNAP plasticity, BCM theory, spreading activation, error-fix learning, and production benchmarks.
