steelponymike
Dryft: What if AI memory worked like an ecosystem instead of a filing cabinet?

I'm a vegetable farmer in Western Canada. I run a regional food hub. I'm not a developer. But I spend a lot of time thinking about how systems work, how pieces interact, and what happens when they don't.

I've been watching the AI memory conversation, especially in the agent space. The pattern I keep seeing is the same one I see in a lot of tech: things built in isolation that don't interact well. Memory right now is basically a static field. You write things to files. You search for similar text. When the context window fills up, you compress. Important stuff disappears. Old decisions sit at equal weight to current ones. Nothing ages. Nothing connects. Nothing dies.

In nature, that's not how memory works. Things that matter get reinforced. Things that don't, fade. Related things bond together. And there's a predator that removes what the system can no longer afford to carry. It's not storage. It's ecology.

What I built

Dryft is a working system for ecological AI memory. Memories aren't static entries. They're a living population, a herd, that self-regulates.

Fitness emergence. Memories that get used become stronger. Memories that don't, weaken over time. No manual curation. No one decides what to delete. The herd figures it out based on what's actually useful.
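
This isn't Dryft's actual code, but the use-reinforcement idea can be sketched in a few lines. The names (`Memory`, `activate`, `cycle`) and the constants are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    fitness: float = 1.0

REINFORCE = 0.5   # invented: fitness boost per activation
DECAY = 0.9       # invented: multiplicative decay per cycle

def activate(mem: Memory) -> None:
    """A retrieved memory gets stronger."""
    mem.fitness += REINFORCE

def cycle(herd: list[Memory]) -> None:
    """Every cycle, all memories lose a little fitness."""
    for mem in herd:
        mem.fitness *= DECAY

egg = Memory("boil an egg for 7 minutes")
arch = Memory("we chose event sourcing for the core")

for _ in range(10):      # ten cycles; only `arch` keeps being used
    activate(arch)
    cycle([egg, arch])

assert arch.fitness > egg.fitness   # the used memory stays strong
```

The point is that nothing ever decides to delete the egg memory; it just stops being touched, and the arithmetic does the rest.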

Relational bonding. Memories that get activated together develop bonds. The more they co-occur, the stronger the bond. You don't need to extract entities and build a knowledge graph. The relationships emerge from use patterns. "Alice" and "auth team" bond because they keep coming up together, not because someone mapped them.
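
A minimal sketch of co-activation bonding (again, my own illustrative names and step size, not Dryft's implementation): every pair of memories retrieved together gets its shared bond nudged up.

```python
from collections import defaultdict
from itertools import combinations

# bond strength keyed by unordered pair of memory ids
bonds: dict[frozenset, float] = defaultdict(float)

def co_activate(memory_ids: list[str], step: float = 0.1) -> None:
    """Every pair retrieved together bonds a little more."""
    for a, b in combinations(memory_ids, 2):
        bonds[frozenset((a, b))] += step

co_activate(["alice", "auth-team"])
co_activate(["alice", "auth-team", "sso-rollout"])

# "alice" and "auth-team" have co-occurred twice, so their bond is strongest
assert bonds[frozenset({"alice", "auth-team"})] > bonds[frozenset({"alice", "sso-rollout"})]
```

No entity extraction, no graph schema: the edges are just a side effect of retrieval patterns.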

The predator. Memories that stay at zero fitness for consecutive cycles get culled. Automatically. Every other memory system I've seen only adds. Dryft subtracts. That's the part nobody is building, and it might be the most important part. Without something removing irrelevant memories, retrieval quality degrades as the store grows. More memories means more noise competing for the same similarity search. The predator keeps the herd lean enough that what comes back is actually useful.
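
Here's roughly what a predator pass could look like. The threshold and field names are assumptions for the sketch, not values from Dryft:

```python
CULL_AFTER = 3  # invented: consecutive zero-fitness cycles before culling

def predator_pass(herd):
    """Return the survivors; track how long each memory has sat at zero."""
    survivors = []
    for mem in herd:
        if mem["fitness"] <= 0:
            mem["zero_cycles"] += 1
        else:
            mem["zero_cycles"] = 0
        if mem["zero_cycles"] < CULL_AFTER:
            survivors.append(mem)
    return survivors

herd = [{"text": "stale note", "fitness": 0.0, "zero_cycles": 0},
        {"text": "live decision", "fitness": 2.0, "zero_cycles": 0}]
for _ in range(3):
    herd = predator_pass(herd)

assert [m["text"] for m in herd] == ["live decision"]
```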

Memory types with different lifespans. Episodic memories (what happened Tuesday) decay faster than semantic memories (what's always true). This prevents old events from sitting at the same weight as foundational facts.
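
Type-dependent lifespans are easy to express as per-type half-lives. These particular half-life values are made up for the example; the real system presumably tunes them:

```python
# invented half-lives, in cycles
HALF_LIFE = {"episodic": 7.0, "semantic": 90.0}

def decayed_fitness(fitness: float, mem_type: str, cycles: float) -> float:
    """Exponential decay with a half-life that depends on memory type."""
    return fitness * 0.5 ** (cycles / HALF_LIFE[mem_type])

# after 30 cycles, an episodic memory has faded far more than a semantic one
assert decayed_fitness(1.0, "episodic", 30) < decayed_fitness(1.0, "semantic", 30)
```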

Conflict detection. When the system finds two memories that contradict each other, it flags the conflict and surfaces it for resolution. Wrong memories don't just sit there forever.
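
Real conflict detection needs semantic comparison, but the flow can be shown with a toy keyed version: two memories about the same subject with different values get flagged rather than silently coexisting. Everything here is illustrative.

```python
def find_conflicts(memories):
    """Flag pairs of memories that disagree about the same subject."""
    by_subject = {}
    conflicts = []
    for mem in memories:
        prev = by_subject.get(mem["subject"])
        if prev is not None and prev["value"] != mem["value"]:
            conflicts.append((prev, mem))
        by_subject[mem["subject"]] = mem
    return conflicts

mems = [{"subject": "deploy day", "value": "Friday"},
        {"subject": "deploy day", "value": "Tuesday"}]
assert len(find_conflicts(mems)) == 1   # surfaced, not ignored
```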

Temporal awareness. The system infers when memories were created from ecological metadata, tracks generational lineage, and can identify when a newer memory supersedes an older one.
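
The supersession idea, sketched with invented fields: when several memories cover the same subject, the most recently created one wins.

```python
def current_memory(memories, subject):
    """Pick the most recently created memory on a subject, or None."""
    candidates = [m for m in memories if m["subject"] == subject]
    return max(candidates, key=lambda m: m["created"], default=None)

mems = [{"subject": "db", "value": "Postgres", "created": 1},
        {"subject": "db", "value": "SQLite", "created": 5}]
assert current_memory(mems, "db")["value"] == "SQLite"
```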

Decomposition. When a memory dies, it doesn't just vanish. Its substance feeds back into a substrate layer (the "grass"), where patterns can be synthesized into new general knowledge. Death feeds the system.
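
One way to picture the substrate layer (purely a sketch; the counter-based synthesis and the threshold are my guesses): dead memories feed fragments into the grass, and fragments that recur become candidate general knowledge.

```python
from collections import Counter

grass: Counter = Counter()

def decompose(memory_text: str) -> None:
    """A dead memory feeds its fragments back into the substrate."""
    grass.update(memory_text.lower().split())

def synthesize(min_count: int = 2) -> set[str]:
    """Fragments that recur in the substrate become candidate knowledge."""
    return {word for word, n in grass.items() if n >= min_count}

decompose("alice leads the auth team")
decompose("auth review moved to alice")
assert {"alice", "auth"} <= synthesize()
```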

Why this matters for agents

If you're using agents to manage your work and your life, memory bloat is inevitable. You ask about a project deadline one week, a recipe the next, a code architecture the week after. Over time, hundreds of entries accumulate. A memory about how long to boil an egg sits at the same weight as the architecture decision that defines your whole project. Nothing distinguishes them. Nothing removes the egg.

In Dryft, the egg memory fades on its own because it stops being referenced. The architecture decision stays strong because it keeps getting activated. You don't have to decide what to keep. The ecology decides based on what you actually use.

The agent memory problem breaks down into a few core issues:

Context loss during compaction. Long sessions fill up, the system summarizes, details vanish. Fitness-based memory means the important stuff has been reinforced long before compaction happens. The herd already knows what matters.

No relationships between memories. Flat text search can find similar words but can't connect related concepts across conversations. Dryft's bonding model creates these connections organically through co-activation. No graph extraction needed.

Memory bloat. Entries accumulate and nothing ever leaves. The predator solves it. Things die. The herd stays at a manageable size. The system gets sharper over time instead of noisier.

Stale memories at equal weight to current ones. Decisions from three months ago sitting alongside decisions from yesterday with no way to tell which is current. Fitness decay handles this naturally. Recent, frequently-used memories are strong. Old, unused ones are weak or dead.

What exists right now

This isn't a paper or a slide deck. It's a working system I've been using daily through a Telegram bot for weeks.

The core ecology engine: fitness scoring, decay by memory type, proximity bonding, predator culling, decomposition into a grass substrate layer, dormancy staging for new signals, and rehydration (a culled memory can come back if it becomes relevant again).

A six-layer architecture: foundational layer (permanent core knowledge), grass layer (substrate that feeds new memories), main herd (operational memories, the cattle), evaluation herd (a separate flock of sheep for evaluative/opinion memories), dormancy staging (incubator for new signals), and a temporal layer for time-aware reasoning.

Conflict detection and resolution. The system identifies contradictory memories, surfaces them conversationally, and can cull the wrong one through what I call "humane dispatch" (user-confirmed removal, no decomposition, because wrong memories have zero substrate value).

An 83% weighted score on a 56-query benchmark I built to test it against my own usage patterns. Five categories: single-hop recall (100%), multi-hop reasoning (73%), temporal queries (70%), open-domain (58%), and adversarial hallucination traps (100%). Scored by an independent judge model. This is a custom benchmark, not an industry standard, but it gave me a way to measure whether the ecology was actually working. It is.

A 50% score on LOCOMO, a standardized research benchmark for long-term conversational memory. For context, Mem0 published 66.9% and OpenAI Memory published 53% on the same benchmark. Dryft's gap is mostly temporal extraction failures (relative dates not being resolved) and the predator culling memories the benchmark later asks about. Both are fixable. The predator tension is actually the interesting part: what's good for real daily use (culling stale memories) costs you points on a benchmark that quizzes you on everything equally. That's a design tradeoff I'm willing to make.

Emergent behaviors that weren't explicitly coded. A memory that hit zero fitness spontaneously recovered when it became relevant again. Memories form clusters through bonding that mirror how I actually think about topics, without anyone mapping those relationships.

What I'm not pretending this is

It's not production-ready software. The architecture works. The benchmarks show it. But turning this into something that plugs into an agent framework and scales is a build I can't do alone and wouldn't pretend to lead alone.

I'm a systems thinker, not a developer. I have no experience managing development projects. What I do have is a design methodology that keeps producing results on this project, and 15 years of watching complex living systems self-organize that I can draw from.

The ecological framing isn't decorative. It's where every feature came from. The architecture didn't come from studying other memory systems. It came from watching how populations self-regulate.

What I'm looking for

I'm open-sourcing this because I want it to work somewhere useful. AI memory is going to be important for everyone, and right now the solutions are filing cabinets pretending to be brains.

Take it and run with it if you want. MIT license, no strings. But if you're interested in building on the ecological approach, I'd love to stay involved. The design methodology is the thing I bring to the table. The biological intuition isn't something I can hand off in a README. It's a way of thinking about systems that keeps generating the next useful idea, and I'd like to keep doing that.

If you look at this and see something wrong, I want to hear that too. And if you build something with it, I'd appreciate the credit, but mostly I just want to see it land somewhere real.

Repo: https://github.com/steelponymike/dryft

The core files:

  • herd_engine.py — the ecology engine: fitness, decay, bonds, predator, decomposition
  • proxy.py — the integration layer: context injection, signal detection, conflict handling, multi-turn conversation
  • signal_detector.py — extracts memory-worthy signals from conversation
  • conflict_detector.py — identifies contradictory memories
  • temporal_utils.py — time-aware reasoning and supersession detection
  • benchmark_full.json — the 56-query benchmark and results

MIT License. Do what you want with it.
