DEV Community

Axonn Echysttas


ContextCore: turning AI agent conversations into an MCP-queryable memory layer

Local-first context across multiple IDEs

Hello :). This OSS product is for you (or future-you) once you've reached the point of wanting to tap into the wealth of knowledge sitting in your AI chat histories. "Hey, Agent, we have a problem with SomeClass.function; remind me what we changed in the past few months."

https://reach2.ai/context-core/

https://github.com/Kyliathy/context-core.git

Product's tl;dr:

ContextCore is a local-first memory layer that ingests AI coding chats across multiple IDE assistants and machines, makes them searchable (keyword + optional semantic), and exposes them to assistants over MCP so future sessions don’t start from zero.

IMPORTANT: I emphasize local-first, as in nothing is sent to any LLM except when you explicitly use the MCP server in the context of using an LLM. However, if you enable semantic vector search OR chat-content summarization, we DO use LLMs (although you can use local ones).

ContextCore is not just "chat history storage." It is a developer-grade memory layer that turns AI-assisted development from ephemeral into iterative: prior debugging sessions, architectural decisions, refactors, and tool-call outcomes become reusable context rather than lost effort.
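To make the "keyword search, optionally scoped" idea concrete, here is a minimal sketch of what such a memory layer's search core could look like. This is my own illustration, not ContextCore's actual schema or API: the SQLite FTS5 table, the `build_index`/`search` names, and the `project` scoping parameter are all hypothetical.

```python
import sqlite3


def build_index(messages):
    # In-memory SQLite FTS5 index over (project, text) chat rows.
    # Hypothetical schema for illustration only.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE chat USING fts5(project, text)")
    db.executemany("INSERT INTO chat VALUES (?, ?)", messages)
    return db


def search(db, query, project=None):
    # Full-text keyword search, optionally scoped to one project --
    # the "search only my Astro site" vs "search everything" case.
    sql = "SELECT project, text FROM chat WHERE chat MATCH ?"
    args = [query]
    if project is not None:
        sql += " AND project = ?"
        args.append(project)
    return db.execute(sql, args).fetchall()


msgs = [
    ("astro-site", "Fixed the build cache bug in SomeClass.function"),
    ("api-server", "Refactored the auth middleware"),
]
db = build_index(msgs)
print(search(db, "cache", project="astro-site"))
```

A real implementation would persist the index on disk and layer optional vector search on top, but the scoping logic stays the same shape: one extra filter on the query.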

More in the README.md in the repo.

This is the first time I'm showing this in a public forum :). My hope is to get a little feedback, hopefully even traction, so that I can get some help expanding ContextCore's compatibility (adding parsers for IntelliJ or other IDEs, for example, which is quite easy now that the project has solid architecture docs and templates). The project has a roadmap in the README.

The endgame for ContextCore is to become an engineer's reliable sidekick for digging into chat history and turning it into pure context gold at the MINIMUM token spend. The current search system is decent, but much more can be done.

And my endgame is twofold: 1) give something back after years of lurking, and 2) get some help polishing the search system and other areas of the product, so that we create an awesome, vendor-independent, cross-agent memory layer.

Thank you for reading this! :)

Top comments (7)

FrancisTRᴅᴇᴠ (っ◔◡◔)っ

Great start on your first post! Well done and well written too! :D

Axonn Echysttas

Thank you @francistrdev <3

Apex Stack

This is solving a real pain point. I run a bunch of scheduled AI agents for site auditing, content publishing, and dashboard monitoring — and the biggest friction is that each session starts from zero. All that context from previous runs just evaporates.

The three-layer approach (local-first storage → keyword search → optional semantic search) is smart. Keeping it vendor-independent is key too — I use different tools for different workflows and having a unified memory layer across all of them would be a huge productivity gain.

Curious about the MCP server performance — how does query latency scale as the conversation history grows into thousands of sessions? That's where I'd expect the bottleneck to show up first. Also, any plans to support filtering by project or workspace context? Being able to say "search only conversations about my Astro static site" vs "search everything" would be really useful for multi-project setups.

Great first post — following to see where this goes.

Axonn Echysttas

Hi :). It does support per-project and even per-scope queries (grouping the same project across different harnesses). See "archi-mcp.md" in the repo for the exact MCP functions. The above video also shows how to scope projects. I have roughly 50k messages in the DB and queries take around 5-10 seconds, which I consider excellent given that I haven't really focused on optimization yet.

Apex Stack

That's really encouraging — 50k messages at 5-10 seconds without optimization is solid. The per-project scoping is exactly what I was hoping for. I run about 10 scheduled agents that each have their own context (SEO auditing, content publishing, site monitoring, etc.) and right now their memory is basically flat markdown files. Being able to query conversation history per project scope would be a huge upgrade. I'll check out archi-mcp.md — curious how you handle the MCP function boundaries between read vs. write operations on the memory store.

Axonn Echysttas

Nothing is written via MCP; it's just a read layer. The writing is done by the server itself, which file-watches your conversation-history directories (the server detects them via the setup script).
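For readers curious what "the server file-watches your history directories" could look like in practice, here's a minimal stdlib-only polling sketch. The function name, polling strategy, and callback shape are my own assumptions for illustration, not ContextCore's actual implementation (which may use OS-level file events instead of polling).

```python
import os
import time


def watch_dirs(dirs, on_change, poll_seconds=2.0, iterations=None):
    """Poll conversation-history directories for new or modified files.

    Calls `on_change(path)` for each file whose mtime changed since the
    last pass. `iterations=None` polls forever; an integer bounds the
    number of passes (handy for testing).
    """
    seen = {}  # path -> last observed modification time
    loops = 0
    while iterations is None or loops < iterations:
        for d in dirs:
            for root, _subdirs, files in os.walk(d):
                for name in files:
                    path = os.path.join(root, name)
                    mtime = os.stat(path).st_mtime
                    if seen.get(path) != mtime:
                        seen[path] = mtime
                        on_change(path)  # hand off to the ingest pipeline
        loops += 1
        if iterations is None or loops < iterations:
            time.sleep(poll_seconds)
```

In a real ingest pipeline, `on_change` would parse the chat file and upsert its messages into the search index; keeping writes on this single server-side path is what lets the MCP surface stay read-only.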

Harsh

This is a really interesting approach to the memory problem.

The queryable memory layer angle is what stands out to me. Most AI memory solutions I've seen are either:

- just stuffing everything into context until it overflows, or
- basic summarization that loses detail.

Making memory queryable changes the game: you're not just storing, you're making retrieval intelligent.

The MCP integration makes sense too. If MCP becomes the standard for context protocol (and it seems to be heading that way), building memory on top of it is future-compatible rather than another bespoke solution.

Question for you: how do you handle memory conflicts or staleness? If an agent remembers something from an old conversation that contradicts new information, what's your resolution strategy?

Also curious about performance: queryable memory sounds great, but what does the latency look like at scale?

Really cool project. Following this with interest. 🙌