The problem nobody talks about
$100B+ is wasted annually in research duplication.
Economists estimate that the failure to share negative results
burns between 5% and 10% of the entire global research budget
every year.
Not because researchers are lazy or careless.
Because no infrastructure exists to capture what
doesn't work without demanding extra effort.
The root cause is publication bias.
Labs fund projects based on partial or incorrect assumptions
because they don't know about failed attempts elsewhere.
They end up blindly replicating the same mistakes.
The consequences are concrete:
If lab A doesn't publish the failure of formula X,
labs B, C, and D will each independently spend months of work
and scarce materials synthesizing and testing the same
useless formula.
This happens every day. Across every scientific discipline.
The failed hypothesis at 11pm.
The dead end that took three weeks.
The pivot that would have saved the next person three months.
This knowledge disappears. Into lab notebooks nobody reads,
or nowhere at all.
Every existing tool — Notion, Obsidian, Roam — asks for
a deliberate act of documentation after the work is done.
More work. No reward. No adoption.
The idea
What if the research process left a trace automatically,
as a side effect of thinking?
That's MemoryGraph.
Every unit of thought lives as a typed node in a personal
knowledge graph:
- Observation — something you noticed
- Hypothesis — something you believe might be true
- Conclusion — something you've established
- DeadEnd — something that didn't work
- OpenQuestion — something you don't know yet
Each node carries a full temporal history — every change
in belief, every pivot, every moment confidence shifted and why.
The graph is never a snapshot. It is a recording.
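As a rough sketch of that model (the class names, the `confidence` field, and the `record` method are illustrative, not the project's actual API), a typed node with an append-only temporal history might look like:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class NodeType(Enum):
    OBSERVATION = "observation"
    HYPOTHESIS = "hypothesis"
    CONCLUSION = "conclusion"
    DEAD_END = "dead_end"
    OPEN_QUESTION = "open_question"


@dataclass(frozen=True)
class NodeState:
    """One belief at a moment in time -- the 'commit' of the graph."""
    content: str
    confidence: float  # 0.0-1.0, how strongly the belief is held
    reason: str        # why the belief shifted at this moment
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


@dataclass
class Node:
    """A typed unit of thought; states are appended, never overwritten."""
    node_type: NodeType
    history: list[NodeState] = field(default_factory=list)

    def record(self, content: str, confidence: float, reason: str) -> NodeState:
        state = NodeState(content, confidence, reason)
        self.history.append(state)
        return state

    @property
    def current(self) -> NodeState:
        return self.history[-1]
```

Because states are only ever appended, replaying `history` reconstructs every pivot, which is exactly what makes the graph a recording rather than a snapshot.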
Git for knowledge. Architecturally.
| Git | MemoryGraph |
|---|---|
| Repository | Personal knowledge graph |
| Commit | NodeState — a belief at a moment in time |
| Fork | SubgraphToken — a signed copy of selected nodes |
| Diff | Semantic delta between two trajectories |
| Pull request | MergeProposal with conflict detection |
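The Diff row can be illustrated with a deliberately naive stand-in: a real semantic delta would need embedding-based comparison, but exact-match set difference over two belief trajectories shows the shape of the operation:

```python
def semantic_delta(ours: list[str], theirs: list[str]) -> dict[str, list[str]]:
    """Toy diff between two trajectories (ordered lists of statements).

    Reports what each side holds that the other does not. A real
    implementation would match statements by meaning, not by string.
    """
    return {
        "only_ours": [s for s in ours if s not in theirs],
        "only_theirs": [s for s in theirs if s not in ours],
    }
```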
When two researchers need to share knowledge, one issues
a SubgraphToken — a signed, scoped selection of nodes.
The recipient gets an isolated fork. They develop it freely.
If they find something valuable, they propose a merge.
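A minimal sketch of what issuing and verifying such a token could involve, assuming an HMAC signature over the selected node ids (the post does not specify the actual signing scheme, field names, or scope semantics):

```python
import hashlib
import hmac
import json


def issue_token(node_ids: list[str], scope: str, secret: bytes) -> dict:
    """Sign a scoped selection of node ids so a recipient can verify origin."""
    payload = {"nodes": sorted(node_ids), "scope": scope}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_token(token: dict, secret: bytes) -> bool:
    """Recompute the signature; any tampering with the payload breaks it."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])
```

In practice a shared-secret HMAC only works between parties who already trust each other; a public-key signature would let anyone verify who issued the token.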
The dark matter of research finally has a place to live.
What's built so far
- ✅ Graph store with Kuzu (embedded, zero infra)
- ✅ Full node versioning — nothing ever deleted
- ✅ Memory Agent — LLM-powered entity extraction
- ✅ Quality gate before every graph write
- ✅ Contradiction detection
- ✅ Link Agent — semantic edge suggestion
- ✅ LLM-agnostic — works with Anthropic, OpenAI, or local
Stack
- Language: Python 3.11+
- Graph DB: Kuzu (embedded)
- LLM: agnostic — any model via structured prompting
- License: AGPL-3.0
Looking for
- Researchers who want to try it on a real project
- Builders who want to implement any phase of the roadmap
- Critics who want to find the failure modes
→ github.com/alteavane/memory-graph
Open an issue. Fork the repo. Break the design.
The goal is not consensus — it's the best possible system.