
Lax


Why we ditched the knowledge graph approach for agent memory

Every other week someone drops a new memory layer for AI agents. Most of them do the same thing: take conversation history, extract entities and relationships, and compress it all into a knowledge graph.

The problem is that this is lossy compression. You are making irreversible decisions about what matters at ingestion time, before you know what the agent will actually need. Information that doesn't fit the graph schema gets dropped. Nuance gets flattened into edges.
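A toy illustration of what that loss looks like in practice (the sentence and the extracted triple are made up for this example, not output from any real extractor):

```python
# A typical entity/relation extractor reduces a sentence to a triple.
sentence = "I might switch us to Postgres next quarter, if the billing rewrite lands."

# What the graph keeps:
triple = ("user", "plans_to_adopt", "Postgres")

# What the graph drops: "might", "next quarter", and the condition
# "if the billing rewrite lands". Once only the triple is stored,
# none of that hedging or context can be reconstructed.
print(triple)
```

The agent later asking "was that migration confirmed or tentative?" has no way to recover the answer from the edge alone.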

We ran into this building Vektori and ended up going a different direction.

Instead of compressing conversations into a graph, we keep three layers:

L0: extracted facts - high signal, quality filtered, your fast search surface

L1: episodes - auto-discovered across conversations, not hand-written schemas

L2: raw sentences - never loaded by default, only fetched when you need to trace something back

The raw sentence layer is the key difference. Nothing gets thrown away at ingestion. If the agent needs to reconstruct exactly what was said in session 47, it can. The graph structure lives above the raw layer, not instead of it.
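The three layers can be sketched in a few lines. This is a minimal illustration of the idea, not Vektori's actual API; every class name, method, and the keyword filter standing in for quality filtering are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Toy three-layer memory store (hypothetical, for illustration only)."""
    facts: list[str] = field(default_factory=list)            # L0: quality-filtered facts
    episodes: list[dict] = field(default_factory=list)        # L1: auto-discovered episodes
    raw: dict[int, list[str]] = field(default_factory=dict)   # L2: raw sentences by session

    def ingest(self, session_id: int, sentences: list[str]) -> None:
        # L2 keeps everything: nothing is dropped at ingestion time.
        self.raw.setdefault(session_id, []).extend(sentences)
        # L0 keeps only high-signal sentences (a toy keyword filter
        # stands in for a real quality model here).
        self.facts.extend(s for s in sentences if "prefers" in s or "uses" in s)

    def search(self, query: str) -> list[str]:
        # The default search surface is L0 only; L2 is never loaded here.
        return [f for f in self.facts if query.lower() in f.lower()]

    def trace_back(self, session_id: int) -> list[str]:
        # Explicit fetch from L2 to reconstruct exactly what was said.
        return self.raw.get(session_id, [])


store = MemoryStore()
store.ingest(47, ["The user prefers dark mode.", "Some weather small talk."])
print(store.search("dark"))     # fast path hits L0 only
print(store.trace_back(47))     # both sentences survive, filler included
```

The design point is the asymmetry: `search` never touches L2, so the hot path stays small, while `trace_back` can always recover the full transcript because ingestion never discarded it.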

Early benchmarks: 73% on LongMemEval-S.

Free and open source: github.com/vektori-ai/vektori (star it if you find it useful :)
