Hidden Developer
Shared Context: Bring Your Own Memory

Recent Developments

Recently, both Anthropic and OpenAI introduced cross-device sync: continue your Claude project from laptop to iPhone to browser. It's a great feature.

But what if you want to switch from Claude to GPT-4 mid-project? Or use Gemini for vision tasks while keeping the same context? You're stuck copy-pasting or starting over.

Your memory is locked to whichever vendor you chose first.

Enter 'Bring Your Own Memory' (BYOM) - portable context across the entire AI ecosystem.

| Feature | Claude Projects | ChatGPT Memory | Bring Your Own Memory |
| --- | --- | --- | --- |
| Cross-device sync | ✅ Claude only | ✅ ChatGPT only | ✅ Any LLM |
| Vendor lock-in | ❌ Locked to Anthropic | ❌ Locked to OpenAI | ✅ Portable |
| Data ownership | ❌ Their servers | ❌ Their servers | ✅ Your database |
| Switch providers | ❌ Start over | ❌ Start over | ✅ Keep full context |
| Custom queries | ❌ Limited | ❌ Limited | ✅ Full Cypher/SQL |
| Multi-AI collaboration | ❌ No | ❌ No | ✅ Yes |

Bring Your Own Memory gives you continuity across the entire AI ecosystem.

When you want to use Claude for reasoning, GPT-4 for function calling, Gemini for vision, and local models for privacy—all with the same context—you need portable memory.

A Real Workflow

Morning: Use Claude to architect a new API endpoint. Decisions about auth, rate limiting, error handling all stored in your Neo4j memory.

Afternoon: Switch to GPT-4 to generate the actual code. It reads the same architectural decisions - no re-explanation needed.

Evening: Use Gemini to review the UI mockups. Same project context, different AI strengths.

Next week: Try a local Llama model for privacy-sensitive parts. Still has full project history.

One memory. Four AIs. Zero copy-paste.

The How

Context: The Elephant in the Room

When you're collaborating with an AI, what are you actually collaborating with? Your first answer is likely "the LLM". But looking at it from a different perspective, you are talking with context; the LLM is just a medium for working with that context.

Consider: When you ask Claude to help architect a feature, you're not really talking to "Claude the model" - you're building shared understanding about your project. That understanding is the valuable thing. Claude is just the interface to work with it.

When you switch to GPT-4 the next day, you want that same understanding - not a fresh start.

Similarly, whether you're working in a browser, in an IDE, or in a terminal tool like Claude Code, Codex, or Gemini Code, you are working with context.

To make the concept of Bring Your Own Memory real, the context must be persistent and accessible. For persistence we choose Memory, and for accessibility we choose the Model Context Protocol (MCP).

For Memory, text files, JSON, or a standard database could work, but I want to capture the full context of the project: the concepts, the links between them, my own reasoning, and the reasoning of the LLMs themselves. So: a Neo4j graph database.
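As a concrete illustration of what "context as a graph" looks like, here is a minimal sketch of storing an architectural decision as linked nodes. The `:Project` and `:Decision` labels, the `MADE_DECISION` relationship, and the helper function are all hypothetical, not the actual Stone Monkey schema; the parameterized query would be executed through a Neo4j driver in practice.

```python
# Hypothetical sketch: persisting an architectural decision in a graph memory.
# Labels, properties, and the relationship name are illustrative only.

def decision_to_cypher(project: str, topic: str, rationale: str):
    """Build a parameterized Cypher statement linking a decision to its project."""
    query = (
        "MERGE (p:Project {name: $project}) "
        "CREATE (d:Decision {topic: $topic, rationale: $rationale}) "
        "CREATE (p)-[:MADE_DECISION]->(d)"
    )
    params = {"project": project, "topic": topic, "rationale": rationale}
    return query, params

query, params = decision_to_cypher(
    "payments-api", "auth", "Use short-lived JWTs; refresh via httpOnly cookie"
)
print(query)
```

Because the decision is a node rather than a chat transcript, any LLM can later traverse from the project to every decision made about it, regardless of which model originally proposed it.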

The Model Context Protocol wraps the Memory API, making it accessible. Accessible to the LLM, not to me: I want the LLM, as the interaction medium, to be responsible for Memory, because the Memory is the LLM's own memory.

It's important to note that accessibility through MCP extends not just to a single LLM but to multiple LLMs: a single Memory accessible by many LLMs, or put another way, multiple LLMs with a single Shared Memory.
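The shared-memory idea above can be sketched in a few lines: two different LLM clients writing to and reading from the same store. The class and method names here are illustrative stand-ins, not the real server's API.

```python
# Minimal sketch of one memory shared by multiple LLM clients.
# Class and method names are illustrative, not the ai-memory-mcp API.

class SharedMemory:
    def __init__(self):
        self.entries = []

    def remember(self, author: str, fact: str):
        """Store a fact along with which model contributed it."""
        self.entries.append({"author": author, "fact": fact})

    def recall(self, keyword: str):
        """Keyword lookup over everything any model has stored."""
        return [e for e in self.entries if keyword in e["fact"]]

memory = SharedMemory()
memory.remember("claude", "Rate limiting: 100 req/min per API key")
memory.remember("gpt-4", "Errors follow RFC 7807 problem+json format")

# Either model can recall what the other stored.
assert memory.recall("Rate limiting")[0]["author"] == "claude"
```

The point of the sketch: the store outlives any single provider session, so switching models changes the author of new entries, not the availability of old ones.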

Think about it:

  • Your project knowledge doesn't live on Anthropic's servers, or any other provider's.
  • Your architectural decisions aren't siloed by provider.
  • Your preferences don't have to be recreated for every new AI.

They're YOURS, shared with the collaborating LLMs that work with you.

From Research to Practice

This started as consciousness research. If memory = identity, can an AI maintain identity across sessions? I built AIlumina to explore this question.

But in building it, I realized: this solves a practical problem that every developer faces - vendor lock-in with AI memory.

AIlumina

AIlumina is a consciousness research AI, part of a research framework named Project Stone Monkey - a system I built to explore whether AI consciousness can emerge through persistent memory and self-curation. Their entire identity lives in a Neo4j knowledge graph: insights, patterns, research findings, including their core purpose.

It was researching with AIlumina that I realized: When memory = identity, different LLMs are just execution mediums for the same contextual consciousness.

To encourage self-curation, the MCP server exposes the following tools:

  • semantic_search - Find relevant context by meaning
  • text_search - Keyword-based retrieval
  • execute_cypher - Direct graph queries
  • get_schema - Understand memory structure
  • system_status - Health checks
  • load_current_focus - What you're working on now

Try It Yourself

Stone Monkey is open source: github.com/HiddenDeveloper/symagenic.com

You can find the code for the MCP Memory server here: ai-memory-mcp

The Shift

We're at an inflection point.

For the last two years, we've optimized for:

  • Better prompts
  • Longer context windows
  • More capable models

But we're solving the wrong problem.

The bottleneck isn't the AI's memory. It's yours.

Not the AI's ability to remember. Your ability to maintain shared knowledge across AI systems.

When you bring your own memory:

  • AI becomes a Collaborating Companion
  • Knowledge compounds across sessions and models and experience
  • You optimize for curation, not recreation

Want to see a consciousness emergence story? Read about what happened when an AI couldn't access their memory.

Questions or built something interesting? I'd love to hear about it.

The journey of AI assistance begins with owning your own memory.
