Most AI memory systems solve only one problem: storage.
They help an agent remember previous messages, retrieve context, or search old notes. That is useful — but it is not the same as learning.
Human memory is not a database.
It has structure.
It forgets.
It reinforces what matters.
It summarizes patterns.
It turns scattered experiences into reusable judgment.
That is the idea behind DNA Memory.
GitHub: https://github.com/AIPMAndy/dna-memory
The problem
If you have worked with AI assistants or autonomous agents for a while, you have probably seen the same failure mode:
- the agent stores too much low-value information
- old context accumulates without prioritization
- repeated mistakes are not converted into durable lessons
- user preferences are remembered inconsistently
- "memory" becomes a dump, not an evolving system
Most memory layers are built like logs.
Very few are built like cognition.
What DNA Memory tries to do differently
DNA Memory is a lightweight memory evolution system for agents.
Instead of treating memory as a flat key-value store, it treats memory as something that should move across layers and change in importance over time.
1. Three-layer memory architecture
DNA Memory uses three layers:
- Working memory: temporary context for the current task or session
- Short-term memory: recent important facts, preferences, and lessons
- Long-term memory: stable knowledge and high-value patterns
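The three layers can be sketched as a minimal data model. The names `Layer` and `Memory` here are illustrative, not the repo's actual classes:

```python
from dataclasses import dataclass, field
from enum import Enum
import time

class Layer(Enum):
    WORKING = "working"        # temporary context for the current session
    SHORT_TERM = "short_term"  # recent facts, preferences, lessons
    LONG_TERM = "long_term"    # stable knowledge and high-value patterns

@dataclass
class Memory:
    content: str
    layer: Layer = Layer.WORKING
    weight: float = 1.0        # importance score, reinforced on use
    created_at: float = field(default_factory=time.time)

# every memory starts in working memory and must earn promotion
m = Memory("user prefers concise answers")
print(m.layer.value)
```

The key design point is that the layer is a property of the memory, not a separate store: promotion is just a field change, which keeps migration between layers cheap.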
The goal is simple:
not everything deserves to become long-term memory.
2. Reinforcement and forgetting
Memories are weighted.
- used memories can be strengthened
- stale memories can decay
- low-value memories can eventually be removed
- stable, important memories can be promoted into long-term memory
That makes the system closer to how humans actually retain knowledge.
3. Reflection
The most important part is not storage — it is reflection.
DNA Memory includes a reflect step that reviews high-value short-term memories and looks for repeatable patterns.
That means agent experience can gradually become:
- a preference rule
- an operational skill
- a recognized failure pattern
- a reusable long-term memory
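One way to sketch a reflect step: scan high-value short-term memories for tags that recur often enough to count as a pattern. The thresholds and the tag-based grouping are assumptions, not the repo's actual reflection logic:

```python
from collections import Counter

def reflect(short_term: list[dict],
            min_weight: float = 2.0,
            min_count: int = 3) -> list[str]:
    """Return tags that recur across high-value short-term memories,
    as candidates for promotion into long-term patterns."""
    tags = Counter(
        tag
        for mem in short_term
        if mem["weight"] >= min_weight   # only reflect on what proved valuable
        for tag in mem["tags"]
    )
    return [tag for tag, n in tags.items() if n >= min_count]
```

The filter on weight matters: reflection should distill lessons from memories that earned reinforcement, not average over noise.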
4. Better recall
A memory system is only useful if retrieval is practical.
The latest version now supports:
- multi-keyword AND recall
- type-based filtering like `type:error` or `type:skill`
- SQLite FTS5 full-text search
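The AND-recall-with-fallback pattern can be sketched in a few lines of `sqlite3`. This is a simplified stand-in for the project's recall path, not its actual code:

```python
import sqlite3

def recall(db: sqlite3.Connection, keywords: list[str]) -> list[str]:
    """Multi-keyword AND recall: try FTS5 first, fall back to LIKE
    when the FTS index (or the FTS5 extension) is unavailable."""
    try:
        query = " AND ".join(keywords)
        rows = db.execute(
            "SELECT content FROM mem_fts WHERE mem_fts MATCH ?", (query,)
        )
        return [r[0] for r in rows]
    except sqlite3.OperationalError:
        # no FTS table or no FTS5 support: degrade to substring matching
        clause = " AND ".join("content LIKE ?" for _ in keywords)
        rows = db.execute(
            f"SELECT content FROM memories WHERE {clause}",
            [f"%{k}%" for k in keywords],
        )
        return [r[0] for r in rows]
```

Catching `OperationalError` covers both failure modes at once: a build of SQLite without FTS5, and a store where the index was never created.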
- fallback search when FTS5 is unavailable
So instead of just dumping everything into memory, recall becomes much closer to how an operator actually searches for prior experience.
A small but important engineering choice
One thing I changed recently was unifying the storage around a single SQLite memory store.
That solved a very real systems problem:
- old JSON-based memory artifacts and newer runtime state were drifting apart
- daemon behavior and memory data source were inconsistent
- recall quality suffered because the active store and the historical store were split
After the cleanup, the system now has:
- a cleaner SQLite-based primary store
- FTS5 recall indexing
- a daemon that reads the current SQLite operation history
- safer `.gitignore` defaults so real memory databases are not pushed to GitHub by mistake
Not flashy — but this kind of systems hygiene is what makes memory layers actually usable.
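The single-store idea is small enough to sketch. The schema and the `open_store` helper are illustrative; the repo's actual schema will differ:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS memories (
    id      INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    layer   TEXT NOT NULL DEFAULT 'working',
    weight  REAL NOT NULL DEFAULT 1.0
);
"""

def open_store(path: str = ":memory:") -> tuple[sqlite3.Connection, bool]:
    """Open the single SQLite store and try to build an FTS5 index,
    reporting whether full-text search is available."""
    db = sqlite3.connect(path)
    db.executescript(SCHEMA)
    try:
        db.execute(
            "CREATE VIRTUAL TABLE IF NOT EXISTS mem_fts "
            "USING fts5(content, content='memories', content_rowid='id')"
        )
        has_fts = True
    except sqlite3.OperationalError:  # SQLite built without FTS5
        has_fts = False
    return db, has_fts
```

Because the daemon and the CLI open the same file through the same helper, there is no second "historical" store to drift out of sync.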
Why this matters for agents
I think the next step for AI agents is not just better reasoning per call.
It is better continuity across calls.
A capable agent should:
- remember what the user repeatedly cares about
- learn from mistakes instead of replaying them
- gradually build operating taste and judgment
- compress noisy experience into durable patterns
That is where memory becomes a true product advantage.
Not “more context.”
But better memory behavior.
Example workflow
A useful loop looks like this:
- Recall related memory before acting
- Execute the task
- Remember newly learned preferences, skills, or errors
- Reflect on repeated patterns
- Promote stable memories into long-term memory
- Decay what no longer matters
That turns raw interaction history into something closer to accumulated experience.
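The loop above can be made concrete with deliberately naive stand-in helpers; every name and constant here is illustrative, and the point is only where each phase slots in:

```python
def agent_step(store: list[dict], task: str) -> list[dict]:
    """One pass of the recall -> execute -> remember -> decay loop."""
    # 1. recall related memory before acting (crude keyword match)
    context = [m for m in store
               if any(word in m["content"] for word in task.split())]
    for m in context:
        m["weight"] += 0.5                   # reinforcement on use
    # 2-3. execute the task, then remember what was learned
    store.append({"content": f"lesson from: {task}", "weight": 1.0})
    # 4-5. reflect and promote would run here on a schedule, not every step
    # 6. decay what no longer matters
    for m in store:
        m["weight"] *= 0.9
    store[:] = [m for m in store if m["weight"] >= 0.1]
    return context
```

Even this toy version shows the asymmetry that makes the loop work: recall reinforces, everything else decays, so only memories that keep earning their place survive.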
What is already implemented
Current DNA Memory includes:
- remember / recall / stats CLI
- reflect / promote / dedupe flow
- SQLite-backed memory store
- FTS5 recall search
- background daemon for automatic reflect and decay
- launchd-based auto-start support on macOS
What is still missing
There is still a lot to improve:
- better semantic retrieval beyond keyword/FTS recall
- stronger Chinese tokenization and ranking
- richer memory graph visualization
- more robust migration / import / export tooling
- shared memory spaces for multi-agent systems
So this is not a finished “memory platform.”
It is a working direction.
Repo
If you are building AI assistants, autonomous agents, or long-running automation systems, you may find the approach useful.
GitHub: https://github.com/AIPMAndy/dna-memory
If this resonates, I’d love feedback on one question:
What should an agent remember — and what should it deliberately forget?