LLM agents are powerful reasoners but poor navigators.
They can analyze code, write solutions, and debug problems. But ask them "what does this change affect?" or "what am I missing?" and they're stuck grepping through flat files.
I built m1nd to fix this.
## The idea
Instead of searching your data, m1nd activates it. You query a concept, and the graph lights up: connected nodes fire with decaying signal, noise gets cancelled, and the system learns from your feedback.
You ask: "What's related to authentication?"
```
m1nd: Activates auth -> propagates to session, middleware, JWT
      -> detects structural hole (no rate limiter connected)
      -> predicts co-change with password reset module
      -> all in one query, ranked by multi-dimensional relevance
```
## How it works
1. Ingest - Build a property graph from your data (code, JSON, any domain)
2. Activate - Spreading activation across 4 dimensions:
   - Structural: graph topology, PageRank, community structure
   - Semantic: label similarity via char n-grams and co-occurrence
   - Temporal: recency decay, change velocity, co-change history
   - Causal: directed dependency flow along import/call/contain edges
3. Cancel noise - XLR differential processing maintains parallel signal and noise graphs, gating activation through an adaptive sigmoid. Borrowed from balanced audio engineering.
4. Learn - Hebbian plasticity: tell m1nd "this result was correct" and it strengthens the connections (LTP). "This was wrong" weakens them (LTD). Over time, the graph gets smarter.
5. Persist - Graph + plasticity state saves to disk. Next session starts where this one left off.
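Steps 2 and 3 can be sketched together as a breadth-first wavefront whose signal decays per hop and passes through a sigmoid gate that squelches weak activation. Everything below (the decay factor, gate threshold, steepness, and function names) is an illustrative assumption, not m1nd's actual implementation:

```rust
use std::collections::HashMap;

/// Suppress weak signal: multiply by a logistic gate centred on `threshold`.
/// (Illustrative stand-in for the adaptive sigmoid described above.)
fn gate(signal: f64, threshold: f64, steepness: f64) -> f64 {
    signal / (1.0 + (-steepness * (signal - threshold)).exp())
}

/// Propagate energy outward from `seed`, keeping the strongest gated
/// signal that reaches each node.
fn spread(
    edges: &HashMap<&str, Vec<&str>>,
    seed: &str,
    decay: f64,
    max_hops: usize,
) -> HashMap<String, f64> {
    let mut energy = HashMap::new();
    energy.insert(seed.to_string(), 1.0);
    let mut frontier = vec![seed.to_string()];
    for _ in 0..max_hops {
        let mut next = Vec::new();
        for node in frontier {
            let e = energy[&node];
            for &nbr in edges.get(node.as_str()).into_iter().flatten() {
                // Decay per hop, then gate: weak signal dies off quickly.
                let incoming = gate(e * decay, 0.2, 12.0);
                let slot = energy.entry(nbr.to_string()).or_insert(0.0);
                if incoming > *slot {
                    *slot = incoming;
                    next.push(nbr.to_string());
                }
            }
        }
        frontier = next;
    }
    energy
}
```

With a steep gate, activation a few hops out drops below the threshold and is mostly cancelled, which is the point: distant, weakly connected nodes stop contributing noise.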
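The learn step (4) reduces to a saturating per-edge weight update on each feedback signal. A minimal sketch, with made-up LTP/LTD rates rather than m1nd's real values:

```rust
/// Hebbian feedback: "correct" potentiates the weight toward 1.0 (LTP),
/// "wrong" depresses it toward 0.0 (LTD). Rates are illustrative.
fn hebbian_update(weight: f64, correct: bool) -> f64 {
    const LTP_RATE: f64 = 0.1;  // long-term potentiation
    const LTD_RATE: f64 = 0.05; // long-term depression
    let w = if correct {
        weight + LTP_RATE * (1.0 - weight) // saturating move toward 1.0
    } else {
        weight - LTD_RATE * weight // proportional shrink toward 0.0
    };
    w.clamp(0.0, 1.0)
}
```

Because the update only touches the edges involved in a result, learning is online and incremental: no batch retraining pass is needed.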
## The 13 tools
m1nd exposes 13 tools over MCP (Model Context Protocol), callable by any LLM agent:
| Category | Tools |
|---|---|
| Discovery | activate, why, missing, fingerprint, resonate |
| Change Analysis | impact, predict, counterfactual |
| Learning | learn, drift, warmup |
| System | ingest, health |
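Over MCP, each of these is an ordinary JSON-RPC `tools/call` request. A client invoking `activate` might send something like the following (the `query` argument matches the examples in this post; the exact parameter schemas are whatever the server advertises via `tools/list`):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "activate",
    "arguments": { "query": "authentication" }
  }
}
```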
## Real example
m1nd analyzing its own codebase (693 nodes, 2007 edges):
```
> m1nd.activate query="spreading activation"
0.94  fn::propagate_wavefront  (structural 0.97)
0.91  fn::score_candidates     (semantic 0.93)
0.87  fn::xlr_gate             (causal 0.89)
XLR cancelled 12 noise edges

> m1nd.counterfactual remove=["graph.rs"]
CASCADE: 31 nodes lose connectivity
ORPHANED: 8 functions become unreachable
```
`graph.rs` is load-bearing.
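Under the hood, a counterfactual like this is essentially reachability with a hole punched in the graph: compute what a root can reach, delete the node, recompute, and diff. A minimal sketch, assuming a simple adjacency map (the traversal and names are illustrative, not m1nd's internals):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// BFS from `root`, optionally pretending one node has been removed.
fn reachable(
    edges: &HashMap<&str, Vec<&str>>,
    root: &str,
    removed: Option<&str>,
) -> HashSet<String> {
    let mut seen = HashSet::new();
    if Some(root) == removed {
        return seen;
    }
    let mut queue = VecDeque::from([root]);
    seen.insert(root.to_string());
    while let Some(node) = queue.pop_front() {
        for &n in edges.get(node).into_iter().flatten() {
            // Skip the counterfactually removed node and anything seen.
            if Some(n) == removed || !seen.insert(n.to_string()) {
                continue;
            }
            queue.push_back(n);
        }
    }
    seen
}

/// Nodes that become unreachable from `root` once `victim` is removed.
fn orphaned(edges: &HashMap<&str, Vec<&str>>, root: &str, victim: &str) -> Vec<String> {
    let before = reachable(edges, root, None);
    let after = reachable(edges, root, Some(victim));
    let mut lost: Vec<String> = before.difference(&after).cloned().collect();
    lost.sort();
    lost
}
```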
## Why not vector search?
Vector search finds things that look similar. Spreading activation finds things that are structurally connected. The difference matters:
- Blast-radius analysis requires graph topology, not cosine similarity
- Structural holes are invisible to embeddings
- Co-change prediction needs temporal graph history
- Counterfactual simulation is impossible without structure
- Hebbian learning is online and incremental - no retraining
## Try it
Built in Rust. 15.5K LOC. 159 tests. 3.8MB binary.
```
cargo build --release
./target/release/m1nd-mcp
```
GitHub: github.com/maxkle1nz/m1nd
Landing: maxkle1nz.github.io/m1nd
Integration guide: INTEGRATION-GUIDE.md
*Max Kleinschmidt*