<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: wilhurley</title>
    <description>The latest articles on DEV Community by wilhurley (@wilhurley).</description>
    <link>https://dev.to/wilhurley</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2672240%2Fd84ac2c5-f09d-456f-82a7-df4f0c16ad93.png</url>
      <title>DEV Community: wilhurley</title>
      <link>https://dev.to/wilhurley</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wilhurley"/>
    <language>en</language>
    <item>
      <title>Last Tuesday I switched from Cursor to Claude mid-session and Claude already knew exactly what I was building. Here's how.</title>
      <dc:creator>wilhurley</dc:creator>
      <pubDate>Mon, 13 Apr 2026 19:55:24 +0000</pubDate>
      <link>https://dev.to/wilhurley/last-tuesday-i-switched-from-cursor-to-claude-mid-session-and-claude-already-knew-exactly-what-i-1cof</link>
      <guid>https://dev.to/wilhurley/last-tuesday-i-switched-from-cursor-to-claude-mid-session-and-claude-already-knew-exactly-what-i-1cof</guid>
      <description>&lt;p&gt;That used to cost me ten minutes minimum. Every single time.&lt;/p&gt;

&lt;p&gt;I'd be an hour deep in Cursor, finally at that point where the AI and I were in sync — it knew the auth module layout, the JWT decisions, the middleware path I'd landed on, the approach I'd abandoned and why. Then I'd open Claude Desktop to write the migration plan and it was just gone. Complete blank slate. Re-explain everything from scratch.&lt;/p&gt;

&lt;p&gt;I tried Giga. I tried a few others. Everything stored my memories on their servers. That's a hard no for me — I'm building commercial products in these sessions. I'm not handing two years of architectural decisions and client work to someone else's cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5adxixn49roaoehbhvov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5adxixn49roaoehbhvov.png" alt=" " width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I built ArcBrain. It runs entirely on your machine. Your data never leaves. And after using it daily for six months, I genuinely notice when it's off — it's like losing a second brain.&lt;/p&gt;

&lt;p&gt;What the problem actually is&lt;br&gt;
This isn't an AI capability problem. The models are excellent. It's structural: every AI tool runs its own isolated memory system, its own schema, its own storage. VS Code Copilot, Cursor, Claude Desktop, Windsurf — none of them were designed to share knowledge. Your context doesn't carry because there's no shared layer to carry it through.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4loiaoknsuzk7lo9njtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4loiaoknsuzk7lo9njtv.png" alt=" " width="800" height="923"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're switching between two or three AI tools a few times a day — which most serious developers are at this point — you're losing somewhere around 100–200 minutes a week to re-establishing context. I kept a rough log for a couple of weeks when I got frustrated enough to actually measure it. It was worse than I expected.&lt;/p&gt;

&lt;p&gt;How it works — the full picture&lt;br&gt;
Step one: import everything you already have. When you first set up ArcBrain, it finds and imports your existing Cursor workspace databases, ChatGPT exports, Claude conversation history, VS Code Copilot logs, and any markdown docs you point it at. Two years of sessions, all of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2krmxlchvsqqmbyqv36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2krmxlchvsqqmbyqv36.png" alt=" " width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But it doesn't just dump that history in flat. It runs a temporal replay — processing sessions day by day, in chronological order, the same way you originally lived them. Each day's sessions are analyzed, facts are extracted, and then compared against the next day. Memories that were reinforced across days gain confidence. Decisions that got reversed decay. By the time the import finishes, your history is already a living knowledge graph — not an archive.&lt;/p&gt;
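&lt;p&gt;A minimal sketch of that replay loop, assuming facts are keyed strings and using made-up confidence constants — the real pipeline extracts facts with an LLM, and none of these names are ArcBrain's actual code:&lt;/p&gt;

```python
# Hypothetical temporal replay: process history in chronological order,
# reinforcing facts that recur across days and decaying ones that stop appearing.
from collections import defaultdict

def temporal_replay(sessions_by_day):
    """sessions_by_day: list of (date, [fact_key, ...]) in chronological order.
    Returns {fact_key: confidence} after replaying history day by day."""
    confidence = defaultdict(lambda: 0.5)  # new facts start at neutral confidence
    for date, facts in sessions_by_day:
        seen = set(facts)
        for key in seen:
            # a fact restated on a later day is reinforced
            confidence[key] = min(1.0, confidence[key] + 0.1)
        for key in list(confidence):
            if key not in seen:
                # facts that stop appearing slowly lose confidence
                confidence[key] = max(0.0, confidence[key] - 0.02)
    return dict(confidence)
```

&lt;p&gt;The point of replaying in order rather than bulk-loading: a decision reversed on day 40 ends up weaker than one restated through day 700.&lt;/p&gt;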

&lt;p&gt;Step two: stay current automatically. Every five minutes, the Live Watcher reads what's happening across all your active sessions — Cursor, Copilot, Claude Desktop, Windsurf, any MCP client — and extracts structured facts. Not raw transcript. Actual knowledge: what decision was made, what file was involved, what error got resolved, what approach was tried and abandoned. Those facts go into a shared local SQLite database every connected tool reads from via MCP.&lt;/p&gt;
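&lt;p&gt;The write path can be sketched in a few lines — table and column names here are illustrative, not ArcBrain's actual schema:&lt;/p&gt;

```python
# Sketch of a watcher cycle appending structured facts to one shared local
# SQLite file that every connected MCP client reads from.
import sqlite3
import time

def write_facts(db_path, source, facts):
    """facts: list of dicts with key/value/category extracted from a session."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS facts ("
        "key TEXT, value TEXT, category TEXT, source TEXT, ts REAL)"
    )
    con.executemany(
        "INSERT INTO facts VALUES (?, ?, ?, ?, ?)",
        [(f["key"], f["value"], f["category"], source, time.time()) for f in facts],
    )
    con.commit()
    con.close()
```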

&lt;p&gt;Switch from Cursor to Claude and Claude's startup context already has what you just figured out. Five-minute lag, not infinite lag.&lt;/p&gt;

&lt;p&gt;The memory schema — why it matters&lt;br&gt;
My first version stored strings. It worked okay for retrieval but fell apart fast — you couldn't tell which facts were still true, which were stale, where conflicting information came from.&lt;/p&gt;

&lt;p&gt;Every memory is now a proper first-class object:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5xx6k45koq6rm0yckms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5xx6k45koq6rm0yckms.png" alt=" " width="800" height="429"&gt;&lt;/a&gt;&lt;br&gt;
key / value — the fact itself&lt;br&gt;
category — decision, error, config, snippet, rule, dependency&lt;br&gt;
confidence_score — 0.0 to 1.0, updated over time based on actual usage&lt;br&gt;
source — which tool, which model, which session&lt;br&gt;
provenance_chain — where the fact has been and how it changed&lt;br&gt;
vector_embedding — semantic search via ChromaDB (local Ollama embeddings)&lt;br&gt;
relationship_edges — connections to related facts in the graph&lt;br&gt;
A fact from a live Cursor session and one extracted from a two-year-old ChatGPT export have identical structure. A code snippet, an architectural decision, and a resolved bug all live in the same graph, speak the same schema, age the same way. No second-class memories. No silos.&lt;/p&gt;

&lt;p&gt;Memory metabolism — the system stays sharp&lt;br&gt;
This is the part that makes ArcBrain different from anything I found when I was looking for alternatives.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ubvns50ousxhvov6iew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ubvns50ousxhvov6iew.png" alt=" " width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every retrieved memory gets a confidence boost. Every unused memory decays on a configurable half-life curve — faster for things like calendar notes and emails, slower for architectural decisions and people. When a new memory contradicts an old one, a scoring algorithm runs: which source is more reliable, which is more recent, which has been referenced more often. Winner gains confidence. Loser fades.&lt;/p&gt;
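&lt;p&gt;The decay side of that metabolism is just an exponential half-life curve. A sketch, with invented constants — not ArcBrain's defaults:&lt;/p&gt;

```python
# Exponential half-life decay: after one half-life unused, confidence halves.
def decayed_confidence(confidence, days_unused, half_life_days):
    return confidence * 0.5 ** (days_unused / half_life_days)

# A fast-decaying category (say, a calendar note on a 7-day half-life) versus
# a slow one (an architecture decision on a 90-day half-life), both unused 14 days:
note = decayed_confidence(0.8, days_unused=14, half_life_days=7)
decision = decayed_confidence(0.8, days_unused=14, half_life_days=90)
```

&lt;p&gt;Same starting confidence, same idle time — the note has fallen through two half-lives while the decision has barely moved.&lt;/p&gt;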

&lt;p&gt;Once a week, a composite memory pass runs — scanning for facts that frequently co-occur across sessions, pulling them together into synthesized canonical memories. The kind of thing your brain does automatically when you sleep.&lt;/p&gt;
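&lt;p&gt;The co-occurrence scan at the heart of that weekly pass could be sketched like this — the threshold is illustrative:&lt;/p&gt;

```python
# Find fact pairs that keep showing up in the same sessions; those pairs are
# candidates for being merged into one synthesized composite memory.
from collections import Counter
from itertools import combinations

def composite_candidates(sessions, min_cooccur=3):
    """sessions: list of sets of fact keys. Returns pairs seen together often."""
    pairs = Counter()
    for facts in sessions:
        for a, b in combinations(sorted(facts), 2):
            pairs[(a, b)] += 1
    return [pair for pair, n in pairs.items() if n >= min_cooccur]
```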

&lt;p&gt;Three tiers:&lt;/p&gt;

&lt;p&gt;Working Memory — everything starts here. Live Watcher extractions, imports, manual notes, session summaries. Held lightly, immediately searchable.&lt;/p&gt;

&lt;p&gt;Compiled Memory — facts confirmed across multiple sessions, contradictions resolved, cross-referenced into the graph with explicit edges.&lt;/p&gt;

&lt;p&gt;Canonical Memory — the long-term layer. Earned through repeated use and high confidence. Retrieved first. The stuff the system is most sure about. A decision from 18 months ago you still reference constantly will outrank something from last week you never touched again.&lt;/p&gt;
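&lt;p&gt;The tier gating described above can be sketched as a simple function — the cutoffs are invented for illustration, not ArcBrain's real thresholds:&lt;/p&gt;

```python
# Promotion logic for the three tiers: working, compiled, canonical.
def tier(confidence, sessions_confirmed):
    if confidence >= 0.9 and sessions_confirmed >= 5:
        return "canonical"  # earned through repeated use and high confidence
    if sessions_confirmed >= 2:
        return "compiled"   # confirmed across multiple sessions
    return "working"        # everything starts here
```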

&lt;p&gt;The constellation — seeing your own knowledge&lt;br&gt;
Once your memories are a graph instead of a list, you can actually look at what you know.&lt;/p&gt;

&lt;p&gt;The Constellation Memory Map renders your knowledge graph as a live visual network. Every fact is a node. Edges show how facts relate — semantic proximity, co-occurrence in sessions, shared entities like file paths and services. The more connections a memory has, the brighter it glows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffin8lseyqiu29qvhik46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffin8lseyqiu29qvhik46.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Red nodes are your project anchors — the things everything else points to. Core architectural decisions. Hardest-won debugging insights. Blue nodes are new arrivals, still light on connections.&lt;/p&gt;

&lt;p&gt;You can spot gaps. You can see clusters. You can trace exactly why ArcBrain surfaced a particular answer by following the edges to the retrieved memory.&lt;/p&gt;

&lt;p&gt;Composite retrieval&lt;br&gt;
When you ask about your auth system, ArcBrain doesn't just find facts tagged "auth." It walks the graph — pulling in connected facts: the bug inside the auth guard, the config decision shaping the middleware, the snapshot from the week you rewrote the whole thing after discovering the original approach had a race condition in refresh token handling.&lt;/p&gt;

&lt;p&gt;The AI gets a connected subgraph, not a ranked list of isolated notes. That's why responses feel different — it isn't more facts, it's the relationships between facts.&lt;/p&gt;
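&lt;p&gt;The graph walk behind composite retrieval is essentially a bounded breadth-first traversal out from the query's seed facts. A sketch, with an arbitrary hop limit:&lt;/p&gt;

```python
# Walk relationship edges a bounded number of hops from the matched facts,
# so retrieval returns a connected subgraph rather than isolated notes.
from collections import deque

def retrieve_subgraph(seed_keys, edges, max_hops=2):
    """edges: {key: [neighbor_key, ...]}. Returns every key reachable from the
    seeds within max_hops relationship hops."""
    found = set(seed_keys)
    frontier = deque((k, 0) for k in seed_keys)
    while frontier:
        key, hops = frontier.popleft()
        if hops == max_hops:
            continue  # hop budget spent along this path
        for neighbor in edges.get(key, []):
            if neighbor not in found:
                found.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return found
```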

&lt;p&gt;Everything runs on your machine. SQLite for structured memory, ChromaDB for vector search, Ollama for local embeddings and summarization. No third-party cloud storing your codebase. No vendor with access to two years of your architectural decisions. Your memory graph is a file on your drive — and it stays there.&lt;/p&gt;

&lt;p&gt;The MCP server starts locally, connects to whatever AI clients you have open, and that's it.&lt;/p&gt;

&lt;p&gt;arcbrain.dev — Windows, 14-day free trial, no credit card. Mac/Linux in progress.&lt;/p&gt;

&lt;p&gt;If the architecture interests you — the temporal replay on import, the decay model, composite memory generation, or how the constellation graph traversal works — drop a comment. Happy to get into the details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezewdblr2qawrb9aj520.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezewdblr2qawrb9aj520.png" alt=" " width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/wil-hurley" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/wil-hurley&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/wilhurley" rel="noopener noreferrer"&gt;https://github.com/wilhurley&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:Wil@arcbrain.dev"&gt;Wil@arcbrain.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>mcp</category>
      <category>devtools</category>
    </item>
  </channel>
</rss>
