<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Axonn Echysttas</title>
    <description>The latest articles on DEV Community by Axonn Echysttas (@kyliathy).</description>
    <link>https://dev.to/kyliathy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3856353%2F81a5f00f-1322-4904-8efb-88c09b90bfc8.png</url>
      <title>DEV Community: Axonn Echysttas</title>
      <link>https://dev.to/kyliathy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kyliathy"/>
    <language>en</language>
    <item>
      <title>ContextCore: AI Agents conversations to an MCP-queryable memory layer</title>
      <dc:creator>Axonn Echysttas</dc:creator>
      <pubDate>Thu, 02 Apr 2026 13:31:35 +0000</pubDate>
      <link>https://dev.to/kyliathy/contextcore-ai-agents-conversations-to-an-mcp-queryable-memory-layer-4h1p</link>
      <guid>https://dev.to/kyliathy/contextcore-ai-agents-conversations-to-an-mcp-queryable-memory-layer-4h1p</guid>
      <description>&lt;p&gt;Hello :). This OSS product is for you (or future-you) who reached the point of wanting to tap into the ton of knowledge you have in your AI chat histories. "Hey, Agent, we have a problem with SomeClass.function, remind me what we changed in the past few months".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://reach2.ai/context-core/" rel="noopener noreferrer"&gt;https://reach2.ai/context-core/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Kyliathy/context-core.git" rel="noopener noreferrer"&gt;https://github.com/Kyliathy/context-core.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Product's tl;dr:&lt;/p&gt;

&lt;p&gt;ContextCore is a local-first memory layer that ingests AI coding chats across multiple IDE assistants and machines, makes them searchable (keyword + optional semantic), and exposes them to assistants over MCP so future sessions don’t start from zero.&lt;/p&gt;
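
&lt;p&gt;To make the "keyword search" half of that concrete, here is a minimal sketch. This is not ContextCore's actual code; the record fields and function names are illustrative only:&lt;/p&gt;

```python
# Hypothetical sketch of a keyword lookup over ingested chat records.
# The ChatRecord fields (session, text) are illustrative, not ContextCore's real schema.
from dataclasses import dataclass

@dataclass
class ChatRecord:
    session: str
    text: str

def keyword_search(records, query):
    """Return records whose text contains every query term (case-insensitive)."""
    terms = [t.lower() for t in query.split()]
    return [r for r in records if all(t in r.text.lower() for t in terms)]

records = [
    ChatRecord("2025-11-03", "We renamed SomeClass.function to handle retries"),
    ChatRecord("2025-12-10", "Discussed CI pipeline caching"),
]
hits = keyword_search(records, "SomeClass.function")
print([r.session for r in hits])  # → ['2025-11-03']
```

&lt;p&gt;An MCP server then only has to expose a search tool like this one, so the assistant can pull in matching sessions instead of starting from zero.&lt;/p&gt;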

&lt;p&gt;IMPORTANT: I emphasize local-first: nothing is sent to any LLM except when you explicitly use the MCP server in the course of an LLM session. However, once you enable semantic vector search OR chat content summarization, we DO use LLMs (although you can use local ones).&lt;/p&gt;
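
&lt;p&gt;The optional semantic side boils down to standard embedding similarity. A toy sketch, with hand-made vectors standing in for the output of a local embedding model (the index layout and vectors are illustrative, not ContextCore's API):&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings"; a real setup would get these from a local model.
index = {
    "refactored retry logic in SomeClass.function": [0.9, 0.1, 0.0],
    "set up CI pipeline caching": [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # embedding of "what changed in SomeClass.function?"
best = max(index, key=lambda text: cosine(index[text], query_vec))
print(best)  # → 'refactored retry logic in SomeClass.function'
```

&lt;p&gt;Embedding the chats is the only step that needs a model at all, which is why it stays optional and can run fully locally.&lt;/p&gt;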

&lt;p&gt;ContextCore is not just “chat history storage.” It is &lt;em&gt;a developer-grade memory layer&lt;/em&gt; that turns AI-assisted development from ephemeral to iterative—where prior debugging sessions, architectural decisions, refactors, and tool-call outcomes become reusable context rather than lost effort.&lt;/p&gt;

&lt;p&gt;More in the README.md in the repo.&lt;/p&gt;

&lt;p&gt;This is the first time I've shown this in a public forum :). My hope is to get a little feedback, hopefully even traction, so that I can get some help expanding ContextCore's compatibility (adding parsers for IntelliJ or other IDEs, for example, which is quite easy now that the project has solid architecture docs and templates). The project has a roadmap in the README.&lt;/p&gt;

&lt;p&gt;The endgame for ContextCore is to become an engineer's reliable sidekick for digging into chat history and turning it into pure context gold at the MINIMUM number of tokens spent. The current search system is decent, but much more can be done.&lt;/p&gt;

&lt;p&gt;And my endgame is twofold: 1) give something back after lurking for years, and 2) get some help polishing the search system and other areas of the product, so that we create an awesome, vendor-independent, cross-agent memory layer.&lt;/p&gt;

&lt;p&gt;Thank you for reading this! :)&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>ai</category>
      <category>mcp</category>
      <category>coding</category>
    </item>
  </channel>
</rss>
