You accumulate knowledge constantly — notes, docs, project decisions, things you'll need to remember later. AI agents could help you work with all of this. But how do you give them access to what you know?
There's a growing industry around "agent memory" — vector databases, embedding pipelines, retrieval systems. But for personal and project knowledge, the answer might be simpler: plain Markdown files.
## The Problem with Agent Memory
The amount of knowledge and context we need to work with keeps growing. Codebases expand. Documentation multiplies. Every project accumulates decisions, patterns, and tribal knowledge that's hard to keep in your head — or fit in a context window.
AI agents are supposed to help. Every framework now ships with some form of memory management. LangChain has memory modules. CrewAI has knowledge sources. AutoGPT writes to files. The common pattern: agents need persistent, structured storage that survives beyond a single conversation.
The dominant approach uses vector embeddings. Store memories as embeddings, retrieve via semantic similarity, inject into context. It works, but it creates a problem: the agent's knowledge becomes opaque.
When your agent "remembers" something, where does that memory live? In a vector database you can't easily read. In embeddings you can't edit by hand. The agent has knowledge, but you can't see it, verify it, or share it.
## A Different Approach
What if your notes and your agent shared the same knowledge base?
This is the idea behind IWE — a tool that treats Markdown as a knowledge graph accessible to both you and your AI agents. You edit in your preferred text editor with full LSP support. Your agent queries the same files through a CLI. Same source of truth, no sync.
## How It Works
IWE consists of two components:
- An LSP server (`iwes`) that integrates with VS Code, Neovim, Zed, and Helix
- A CLI (`iwe`) for programmatic access — the part AI agents use
The core insight: your text editor already has a protocol for structured document access. The Language Server Protocol gives you completions, go-to-definition, references, and code actions. IWE implements LSP for Markdown knowledge bases.
## The CLI as Agent Interface
The iwe CLI exposes the same knowledge graph to command-line tools:
```shell
iwe find "authentication"                  # search the knowledge base
iwe retrieve -k docs/auth-flow             # fetch a single document
iwe retrieve -k docs/auth-flow --depth 2   # follow inclusion links two levels
iwe retrieve -k docs/auth-flow --dry-run   # check size before fetching
```
An AI agent using Claude Code, Cursor, or any tool that can execute shell commands gets structured access to your knowledge base. No embeddings. No vector database. Just Markdown files with a query interface.
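From the agent's perspective, this is plain command execution. A hypothetical tool wrapper (the function name and the injectable `runner` are illustrative, not part of any framework) might shell out to the CLI like this:

```python
import subprocess

def retrieve(key, depth=None, dry_run=False, runner=subprocess.run):
    """Fetch a document from the knowledge base via the `iwe` CLI.

    `runner` is injectable so the invocation can be exercised without
    the iwe binary installed (e.g. in tests or a sandboxed agent).
    """
    cmd = ["iwe", "retrieve", "-k", key]
    if depth is not None:
        cmd += ["--depth", str(depth)]
    if dry_run:
        cmd.append("--dry-run")
    return runner(cmd, capture_output=True, text=True).stdout

# Typical agent flow: size the retrieval first, then fetch for real.
#   retrieve("docs/auth-flow", depth=2, dry_run=True)
#   retrieve("docs/auth-flow", depth=2)
```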
Key flags:
| Flag | Description |
|---|---|
| `--depth N` | Follow inclusion links N levels deep |
| `-c N` | Include N levels of parent context |
| `-e KEY` | Exclude already-loaded documents |
| `--dry-run` | Check document count and size before fetching |
The --depth flag is particularly useful. It follows inclusion links and inlines child documents, giving the agent transitive context in a single retrieval call.
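The behavior can be sketched with a toy traversal (assumed semantics for illustration, not IWE's actual implementation): each inclusion link on its own line is replaced by the linked document's content, up to N levels deep.

```python
import re

# A line that contains nothing but a single markdown link.
LINK = re.compile(r"^\[([^\]]+)\]\(([^)]+)\)$")

def inline(docs, key, depth):
    """Return docs[key] with inclusion links expanded `depth` levels deep."""
    lines = []
    for line in docs[key].splitlines():
        m = LINK.match(line.strip())
        if m and depth > 0:
            # Replace the link line with the linked document, one level down.
            lines.append(inline(docs, m.group(2), depth - 1))
        else:
            lines.append(line)
    return "\n".join(lines)

docs = {
    "auth.md":   "# Auth\n[Flow](flow.md)",
    "flow.md":   "# Auth Flow\n[Tokens](tokens.md)",
    "tokens.md": "# Tokens",
}
print(inline(docs, "auth.md", 2))  # all three documents inlined into one
```

With `depth=1` the traversal stops earlier: the `tokens.md` link survives as a plain link instead of being expanded.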
## Inclusion Links: Structure Without Folders
What makes graph traversal work is a simple concept: inclusion links.
An inclusion link is a markdown link placed on its own line:
```markdown
# Photography

[Composition](composition.md)
[Lighting](lighting.md)
[Post-Processing](post-processing.md)
```
When a link appears on its own line, it defines structure: "Photography" becomes the parent of the linked documents. Unlike folder hierarchies, a document can have multiple parents:
```text
Frontend Development
├── React Fundamentals
├── Vue.js Guide
└── Performance Optimization

Backend Topics
├── Database Design
└── Performance Optimization   ← same document, multiple parents
```
This is polyhierarchy — structure without the limitations of folders. Context flows from parent to child. When you retrieve a document with depth, IWE follows these links to pull in child content.
Why this matters vs alternatives:
- Folders: Force single placement. "Performance Optimization" can't live in both frontend and backend directories.
- Tags: No structure, no ordering, no hierarchy within categories.
- Inclusion links: Multiple parents, explicit ordering, annotations alongside links.
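The rule is simple enough to sketch. A minimal, illustrative parser (not IWE's actual code) treats any line that is nothing but a markdown link as an inclusion link, and from that builds a multi-parent graph:

```python
import re
from collections import defaultdict

# A line that contains nothing but a single markdown link.
INCLUSION = re.compile(r"^\[([^\]]+)\]\(([^)]+)\)$")

def parents_of(docs):
    """Map each document key to the set of documents that include it.

    `docs` maps a document key to its markdown source text.
    """
    parents = defaultdict(set)
    for key, text in docs.items():
        for line in text.splitlines():
            m = INCLUSION.match(line.strip())
            if m:
                parents[m.group(2)].add(key)
    return parents

docs = {
    "frontend.md":    "# Frontend Development\n\n[Perf](performance.md)\n",
    "backend.md":     "# Backend Topics\n\n[Perf](performance.md)\n",
    "performance.md": "# Performance Optimization\n",
}
print(sorted(parents_of(docs)["performance.md"]))  # ['backend.md', 'frontend.md']
```

One document, two parents — something a folder hierarchy cannot express.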
## What This Enables
This approach gives you context engineering — control over exactly what enters the context window.
When an agent needs to understand your authentication system:
```shell
iwe retrieve -k docs/auth --depth 2
```
It gets back structured Markdown containing:
- The auth document itself
- Child documents expanded inline
- Parent context and backlinks
This is deterministic retrieval. No embedding similarity thresholds. No "maybe relevant" results. The agent gets exactly the documents in your knowledge graph that connect to the topic.
Additional benefits:
- Version-controlled knowledge — Git tracks every change
- Transitive context in one command — no recursive API calls
- Readable, editable, portable — it's just Markdown
## The Tradeoff
IWE isn't a replacement for every memory approach:
Best for:
- Structured knowledge (technical docs, project specs, reference material, task management)
- Developer workflows with text editor/CLI comfort
- Knowledge you want to read, edit, and version control
The key insight: this isn't "agent memory" bolted onto your workflow. It's your knowledge base — the one you already maintain for yourself — made accessible to agents when you want their help.
You remain in control. The files are yours, readable and editable. Agents become collaborators that can navigate your knowledge, not black boxes that store it.
## Getting Started
IWE is open source and available on GitHub. See the Get Started guide for installation and setup instructions.