AI assistants are useful, but they often forget important details between sessions. That makes it hard to keep track of decisions, project notes, bugs, and tasks.
devmcp-context solves that by giving your agent a simple memory layer that lives in your project folder. It is built as a Model Context Protocol (MCP) server, so once you connect it to an agent, the agent can use the MCP tools automatically when it needs to save, read, search, edit, or delete memory.
This post is a quick, easy overview of what it does, how it works, and how you can try it.
## Why I Built It
When working with AI tools, I kept running into the same problem:
- The agent would forget previous decisions.
- Important context would get buried in chat history.
- I needed a way to edit memory manually when something changed.
- I wanted a solution that was visible, simple, and file-based.
So I built devmcp-context as a lightweight memory system for AI agents.
## What It Does

devmcp-context stores memory in plain text files inside an `ai-context/` folder.
That means:
- You can see exactly what the agent remembers.
- You can edit, change, or delete entries directly in the project folder.
- You can search across saved context.
- No database is required.
- The memory survives across sessions.
## Memory Categories
The project organizes memory into five categories:
- `project`: long-term project notes and conventions
- `decisions`: architecture choices and reasoning
- `errors`: bugs, failures, and fixes
- `tasks`: work in progress
- `ephemeral`: short-lived scratchpad notes
Each category helps keep the memory easy to understand instead of turning into one giant text dump.
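To make the idea concrete, here is a small sketch of what a per-category, plain-file layout could look like. The file names and layout below are my own illustration, not necessarily what devmcp-context writes to disk:

```python
from pathlib import Path

# Hypothetical layout sketch: one markdown file per category inside ai-context/.
# The real devmcp-context on-disk layout may differ; this only illustrates the
# idea of per-category, plain-file memory.
CATEGORIES = ["project", "decisions", "errors", "tasks", "ephemeral"]

def init_context(root: str = "ai-context") -> Path:
    base = Path(root)
    base.mkdir(exist_ok=True)
    for name in CATEGORIES:
        f = base / f"{name}.md"
        if not f.exists():
            # Each category starts as an empty markdown section.
            f.write_text(f"# {name}\n\n")
    return base

print(sorted(p.name for p in init_context().glob("*.md")))
```

Because everything is just files, browsing the memory is as simple as opening the folder.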
## How It Works
You do not normally tell the agent to remember every single thing by hand.
Instead, you connect devmcp-context as an MCP server in your agent setup, and then the agent can call the tools when needed:
- `context_save`: create or update memory
- `context_load`: read memory from a category
- `context_search`: find matching entries
- `context_delete`: remove an entry
- `context_status`: see category summaries
- `context_purge_expired`: clean up expired entries
That is the main idea: the agent uses the MCP tools, and the memory stays stored in your project folder where you can inspect it anytime.
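Conceptually, the save and search tools boil down to appending entries to per-category files and scanning across them. The sketch below is NOT devmcp-context's actual implementation (the function bodies and on-disk format are assumptions); it only shows the shape of the idea:

```python
from pathlib import Path

# Minimal sketch of what save/search conceptually do: append an entry to a
# per-category markdown file, then scan every category file for a match.
# This is an illustration, not devmcp-context's real code.
def context_save(category: str, key: str, value: str, root: str = "ai-context") -> Path:
    path = Path(root) / f"{category}.md"
    path.parent.mkdir(exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(f"## {key}\n{value}\n\n")
    return path

def context_search(query: str, root: str = "ai-context") -> list[str]:
    # Naive case-insensitive full-text scan across all category files.
    hits = []
    for path in sorted(Path(root).glob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append(f"{path.name}: {line}")
    return hits

context_save("decisions", "auth-strategy", "Use JWT with refresh tokens")
print(context_search("jwt"))
```

The point of the file-based design is that nothing here is opaque: every "memory" is a line of markdown you could have typed yourself.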
## How It Looks
The memory is stored as markdown files, so it is easy to inspect and edit.
## How to Use It
First, install the package:

```shell
pip install devmcp-context
```

If you use uv:

```shell
uv add devmcp-context
```
After that, connect the server in your agent's MCP config. Once it is connected, the agent can use the tools automatically when it needs them.
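The exact configuration varies by agent, but many MCP clients use a JSON file with an `mcpServers` section. The server name and command below are assumptions on my part; check the project docs for the real invocation:

```json
{
  "mcpServers": {
    "devmcp-context": {
      "command": "devmcp-context",
      "args": []
    }
  }
}
```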
## Example Workflow
Here is the simple idea:
- You connect `devmcp-context` to your agent.
- The agent calls the MCP tools automatically when it needs to save, read, search, edit, or delete memory.
- The memory is stored in plain markdown files inside your project folder.
- If you want, you can still open those files and edit them manually yourself.
That gives you both automation and manual control.
## Why This Is Useful
This setup is helpful when you want:
- Better continuity across sessions
- Less repetition in agent conversations
- Clear project memory you can audit
- A simple workflow that fits into Git-based projects
For me, the biggest win is that memory is no longer a black box.
## Small Demo

In a real setup, the agent calls the MCP tools through the server, and you still keep full control of the files in the project folder. A typical sequence of tool calls:

```python
context_save(category="decisions", key="auth-strategy", value="Use JWT with refresh tokens", tags=["security"])
context_load(category="decisions")
context_search(query="JWT")
```
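After a save like the one above, the entry ends up as human-readable markdown on disk. The exact on-disk format below is an assumption (open your own `ai-context/` folder to see the real one), but it will be something along these lines:

```markdown
## auth-strategy
Use JWT with refresh tokens

Tags: security
```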
## Final Thoughts
devmcp-context is meant to be simple:
- file-based
- human-readable
- searchable
- editable
- persistent
That is what makes it useful: the agent can use the MCP tools automatically, but the memory still lives in plain files you can open, edit, or delete whenever you want.
If you are building AI-assisted workflows and want memory you can trust, check out the project docs and give it a spin.



Top comments (1)
File-based markdown memory is the right call for the agent-loop scale. We run something similar for Claude Code sessions: a MEMORY.md index file plus per-topic .md files split by type (user, feedback, project, reference). The agent loads the index every turn and pulls topic files on demand.
What we hit: five categories is a clean shape, but the boundary between decisions and errors gets fuzzy in practice. Does your design nudge the agent toward one category over the other, or is it free choice?