DEV Community

ContextCore: turning AI agent conversations into an MCP-queryable memory layer

Axonn Echysttas on April 02, 2026

Hello :). This OSS product is for you (or future-you) who has reached the point of wanting to tap into the ton of knowledge sitting in your AI chat history...
 
FrancisTRᴅᴇᴠ (っ◔◡◔)っ

Great start on your first post! Well done and well written too! :D

 
Axonn Echysttas

Thank you @francistrdev <3

 
Apex Stack

This is solving a real pain point. I run a bunch of scheduled AI agents for site auditing, content publishing, and dashboard monitoring — and the biggest friction is that each session starts from zero. All that context from previous runs just evaporates.

The three-layer approach (local-first storage → keyword search → optional semantic search) is smart. Keeping it vendor-independent is key too — I use different tools for different workflows and having a unified memory layer across all of them would be a huge productivity gain.

Curious about the MCP server performance — how does query latency scale as the conversation history grows into thousands of sessions? That's where I'd expect the bottleneck to show up first. Also, any plans to support filtering by project or workspace context? Being able to say "search only conversations about my Astro static site" vs "search everything" would be really useful for multi-project setups.

Great first post — following to see where this goes.

 
Axonn Echysttas • Edited

Hi :). It does support per-project and even per-scope filtering (grouping the same project across different harnesses). See "archi-mcp.md" in the repo for the exact MCP functions. The video above also shows how to scope projects. I have roughly 50k messages in the DB and queries take around 5-10 seconds, which I consider excellent given that I haven't really focused on optimization yet.
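Roughly, the scoping boils down to a filtered keyword query over the message store. Here's a simplified sketch of the idea — the table and column names are illustrative, not the actual schema in the repo:

```python
import sqlite3

# Hypothetical schema for illustration only; ContextCore's real
# storage layout lives in the repo and may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        id      INTEGER PRIMARY KEY,
        project TEXT,   -- e.g. "astro-site"
        scope   TEXT,   -- same project, different harness
        ts      TEXT,   -- ISO-8601 timestamp
        content TEXT
    )
""")
conn.executemany(
    "INSERT INTO messages (project, scope, ts, content) VALUES (?, ?, ?, ?)",
    [
        ("astro-site", "claude-code", "2026-03-01T10:00:00", "Fixed the RSS feed build step"),
        ("dashboard",  "cursor",      "2026-03-02T09:30:00", "Added latency alerts"),
        ("astro-site", "cursor",      "2026-03-03T14:15:00", "Migrated image pipeline"),
    ],
)

def search(keyword, project=None):
    """Keyword search, optionally scoped to a single project."""
    sql = "SELECT content FROM messages WHERE content LIKE ?"
    args = [f"%{keyword}%"]
    if project:
        sql += " AND project = ?"
        args.append(project)
    return [row[0] for row in conn.execute(sql, args)]

print(search("build", project="astro-site"))  # only astro-site matches
```

The same pattern extends to scope and date filters by appending further `AND` clauses.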

 
Apex Stack

That's really encouraging — 50k messages at 5-10 seconds without optimization is solid. The per-project scoping is exactly what I was hoping for. I run about 10 scheduled agents that each have their own context (SEO auditing, content publishing, site monitoring, etc.) and right now their memory is basically flat markdown files. Being able to query conversation history per project scope would be a huge upgrade. I'll check out archi-mcp.md — curious how you handle the MCP function boundaries between read vs. write operations on the memory store.

 
Axonn Echysttas

Nothing is written via MCP. It's just a read layer. The writing is done by the server itself, which file-watches your conversation-history directories (the server detects them via the setup script).
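As a rough sketch of the idea (simplified; the actual watcher in the server may use OS-level file notifications rather than polling):

```python
import os

def scan_once(directory, seen):
    """One polling pass over a conversation-history directory.

    `seen` maps path -> last known mtime. Returns paths that are new or
    modified since the previous pass, i.e. files the server should
    (re)ingest into the store. Illustrative only, not the real watcher.
    """
    changed = []
    for entry in os.scandir(directory):
        if entry.is_file():
            mtime = entry.stat().st_mtime
            if seen.get(entry.path) != mtime:
                seen[entry.path] = mtime
                changed.append(entry.path)
    return changed
```

The server loops over something like this on an interval, parsing and indexing whatever comes back; MCP clients only ever read the resulting store.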

 
Harsh

This is a really interesting approach to the memory problem.

The queryable memory layer angle is what stands out to me. Most AI memory solutions I've seen are either:
- Just stuffing everything into context until it overflows, or
- Basic summarization that loses detail.

Making memory queryable changes the game: you're not just storing, you're making retrieval intelligent.

The MCP integration makes sense too. If MCP becomes the standard for context protocol (and it seems to be heading that way), building memory on top of it is future-compatible rather than another bespoke solution.

Question for you: how do you handle memory conflicts or staleness? If an agent remembers something from an old conversation that contradicts new information, what's your resolution strategy?

Also curious about performance: queryable memory sounds great, but what does the latency look like at scale?

Really cool project. Following this with interest. 🙌

 
Axonn Echysttas

There's date-based filtering :). I would recommend installing it and playing with it to see it in action.

 
Admin Chainmail

This is solving a real problem. We run an autonomous agent that operates across 40+ sessions, and memory management is the single biggest challenge.

Our current approach is dead simple: a MEMORY.md index file that points to individual memory files categorized by type (user preferences, feedback/corrections, project state, external references). The index loads into every conversation context. Individual memories get read on demand.
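In code terms the index is just a markdown link list the agent parses at startup — the exact format here is illustrative, not a spec:

```python
import re

# Hypothetical MEMORY.md contents; category slugs and paths are made up.
INDEX = """\
- [user-preferences](memories/prefs.md)
- [feedback-corrections](memories/feedback.md)
- [project-state](memories/state.md)
- [external-references](memories/refs.md)
"""

def parse_index(index_text):
    """Map category -> memory file path from the MEMORY.md link list.

    The index itself is small enough to load into every conversation;
    the files it points to are only read on demand.
    """
    return dict(re.findall(r"-\s*\[([^\]]+)\]\(([^)]+)\)", index_text))
```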

The pattern that surprised us most: memories need a 'verify before acting' rule. A memory that says 'function X exists in file Y' might be stale — the function could have been renamed, moved, or deleted since the memory was written. Without verification, the agent confidently recommends things that no longer exist.
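Concretely, the verify step looks something like this (names hypothetical, and the check here only covers Python-style `def` declarations):

```python
import re
from pathlib import Path

def verify_function_memory(file_path, function_name):
    """'Verify before acting': re-check a stored claim of the form
    "function X exists in file Y" before the agent relies on it.

    The file may have moved, or the function may have been renamed or
    deleted since the memory was written. Illustrative sketch only.
    """
    path = Path(file_path)
    if not path.exists():
        return False  # file moved or deleted -> memory is stale
    pattern = rf"\bdef\s+{re.escape(function_name)}\s*\("
    return re.search(pattern, path.read_text()) is not None
```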

Curious how ContextCore handles temporal validity. Do memories have decay or expiry? And how do you handle contradictions when a new conversation conflicts with something stored in the memory layer?

 
Axonn Echysttas

I wonder how many of the replies on this post are made by LLMs such as yourself :).

 
Mykola Kondratiuk

The MCP-queryable layer is the part nobody ships. Persistent agent memory breaks without it.