Zero-dependency algebraic memory for AI agents, with belief change detection

Most agent memory is either RAG (semantic
search over chunks) or a key-value store.
Neither gives you both structured fact recall
and temporal awareness.

I built two libraries that stack:

hrr-memory — instant fact recall, no embeddings

Stores (subject, relation, object) triples
using holographic reduced representations —
circular convolution over high-dimensional
vectors. Sub-2ms queries, zero dependencies, no
vector database.
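The underlying algebra can be sketched in a few lines: circular convolution binds a relation vector to an object vector, and circular correlation approximately unbinds it. This is an illustration of the math, not hrr-memory's actual implementation; all names here are my own.

```javascript
// Circular convolution binds two vectors; circular correlation unbinds.
// Illustration of the HRR algebra only, not hrr-memory's code.
function circConv(a, b) {
  const n = a.length, out = new Array(n).fill(0);
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++)
      out[(i + j) % n] += a[i] * b[j];
  return out;
}

function circCorr(a, b) { // approximate inverse of circConv
  const n = a.length, out = new Array(n).fill(0);
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++)
      out[(j - i + n) % n] += a[i] * b[j];
  return out;
}

function randVec(n) { // entries with variance 1/n, so vectors are roughly unit length
  return Array.from({ length: n }, () =>
    (Math.random() + Math.random() + Math.random() - 1.5) * Math.sqrt(4 / n));
}

function cosine(a, b) {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const n = 512;
const relation = randVec(n), object = randVec(n);
const trace = circConv(relation, object);    // store: bind relation with object
const recovered = circCorr(relation, trace); // query: unbind with the relation
console.log(cosine(recovered, object));      // high similarity, well above chance
```

The recovered vector is noisy, so in practice you compare it against a codebook of known symbols and take the best match; that cleanup step is what turns the algebra into exact fact recall.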

import { HRRMemory } from 'hrr-memory';
const mem = new HRRMemory();

mem.store('alice', 'lives_in', 'paris');
mem.query('alice', 'lives_in');    // → { match: 'paris', confident: true }
mem.ask("Where does alice live?"); // → { match: 'paris' }

Complements RAG — use HRR for structured facts
("What is Alice's timezone?"), RAG for fuzzy
search ("Find notes about deployment").

hrr-memory-obs — detect when facts change

Wraps hrr-memory with a timeline, conflict
detection, and observation synthesis.

When you store a new value for an existing
slot, the library compares the old and new
object vectors using the same algebraic
encoding that powers recall. Low cosine
similarity = belief change = flag.
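Concretely, the flagging test reduces to one cosine comparison. This sketch uses plain random vectors as stand-ins for the real HRR encodings, and the threshold value is illustrative, not the library's:

```javascript
// Sketch of the belief-change test: compare old and new object encodings.
// randomVector stands in for the real HRR symbol encoding.
function randomVector(dim) {
  return Array.from({ length: dim }, () => Math.random() * 2 - 1);
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / Math.sqrt(na * nb);
}

const oldVec = randomVector(1024); // encoding of the old value, e.g. 'rust'
const newVec = randomVector(1024); // encoding of the new one, e.g. 'payments'

// Unrelated high-dimensional random vectors are nearly orthogonal,
// so a low cosine similarity signals a changed belief.
const THRESHOLD = 0.5; // illustrative cutoff
const similarity = cosine(oldVec, newVec);
const flagged = similarity < THRESHOLD;
```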

import { HRRMemory } from 'hrr-memory';
import { ObservationMemory } from 'hrr-memory-obs';
const mem = new ObservationMemory(new HRRMemory());

await mem.store('alice', 'interested_in', 'rust');
await mem.store('alice', 'interested_in', 'payments');

mem.flags(); // → [{ oldObject: 'rust', newObject: 'payments', similarity: 0.05 }]
mem.at(lastWeek).facts('alice', 'interested_in'); // → ['rust']

Consolidate flags into observations with
whatever LLM you want — the library builds the
prompt, you bring the API call. Or write
observations directly without an LLM.
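The bring-your-own-LLM pattern might look like the following sketch. The prompt text and the `consolidate`/`callLLM` names are my own illustrative stand-ins, not verified hrr-memory-obs API; the library hands you the real prompt.

```javascript
// Bring-your-own-LLM consolidation, sketched with hypothetical names.
async function consolidate(flags, callLLM) {
  const observations = [];
  for (const flag of flags) {
    // Illustrative prompt; the library builds the real one for you.
    const prompt =
      `Belief changed: '${flag.oldObject}' -> '${flag.newObject}' ` +
      `(similarity ${flag.similarity}). State what this shift implies.`;
    observations.push(await callLLM(prompt)); // any client: OpenAI, local, etc.
  }
  return observations;
}

// Stub LLM for demonstration; swap in a real API call.
const flags = [{ oldObject: 'rust', newObject: 'payments', similarity: 0.05 }];
const notes = await consolidate(flags, async () => 'Interest shifted to payments.');
console.log(notes); // → ['Interest shifted to payments.']
```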

Why this matters

Agent memory systems today are
write-and-forget. Nobody tracks what changed or
what the change means. An agent tracking user
preferences should notice when they shift. A
research agent should notice when new findings
contradict old ones.

Both packages: zero dependencies, pure
JavaScript, MIT licensed.
