If you told a developer in 2026 that their cloud provider stored all project data in a proprietary format with no migration path, they would laugh and switch providers. If you told them their database exports came in a format that no other database could import, they would file a bug report. If you told them the vendor's "data export" feature produced a file that was technically complete but practically unusable in any other system, they would call it what it is: vendor lock-in.
Now look at AI conversation history.
The Current State of AI Data Exports
ChatGPT exports your data as a conversations.json file. It is a nested JSON structure containing every conversation as a tree of message nodes. Each node carries an ID, parent ID, author role metadata, content parts array, status flags, weight values, timestamps, and various internal properties.
A two-year conversation history can produce a file north of 500 megabytes. The nesting depth makes it expensive to parse. The metadata-to-content ratio is heavily skewed toward overhead. And the structure is entirely ChatGPT-specific. No other AI platform understands this format because no standard defines what an AI conversation export should look like.
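To make the parsing cost concrete, here is a minimal Python sketch that flattens one conversation's message tree into an ordered list of messages. The field names (`mapping`, `parent`, `children`, `message.content.parts`) follow the layout described above; they are internal to ChatGPT's export and not guaranteed stable between export versions.

```python
def flatten_conversation(mapping: dict) -> list[dict]:
    """Walk a ChatGPT-style message tree (node id -> node) from its root,
    following the first child at each step, and return a flat list of
    {role, text} messages. Field names are assumptions based on the
    export layout described in the text."""
    # The root is the one node with no parent.
    root_id = next(nid for nid, node in mapping.items() if node.get("parent") is None)
    messages = []
    node_id = root_id
    while node_id is not None:
        node = mapping[node_id]
        msg = node.get("message")
        if msg:
            # Content arrives as a "parts" array; keep only string parts.
            parts = msg.get("content", {}).get("parts", [])
            text = "".join(p for p in parts if isinstance(p, str))
            if text:
                messages.append({"role": msg["author"]["role"], "text": text})
        # Branching edits create multiple children; take the first branch.
        children = node.get("children") or []
        node_id = children[0] if children else None
    return messages
```

Even this simplified walk ignores branches, status flags, and weights entirely, which is most of what the file actually stores.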
Claude's export is different: also JSON, but with its own structure and its own metadata. The fundamental problem is the same: a platform-specific format with no interoperability.
There is no equivalent of IMAP for AI conversations. No common schema. No interchange format. No RFC. Nothing.
This Is an Engineering Problem Worth Caring About
We accept data portability as a baseline requirement in every other category of software. Databases have SQL dumps and standard import formats. Email has IMAP and MBOX. Cloud storage serves ordinary files that any client can read. Even social media platforms, under regulatory pressure, now export data in formats that third-party tools can process.
AI assistants have escaped this expectation so far because the industry is young and because the data involved is harder to categorize. A conversation history is not a table, a file, or a message thread. It is an evolving context that shapes how the system responds to you over time. Porting the raw text is not enough. You need to port the structure, the relationships between topics, and enough organizational context for a new system to actually use it.
This is a real engineering challenge. But it is also a solved one, at least in prototype.
A Working Implementation
A company called Phoenix Grove Systems shipped a tool called Memory Forge that does the conversion nobody else bothered to build. It takes raw export files from ChatGPT or Claude, processes them locally in the browser, and outputs a structured file they call a memory chip.
The architecture is straightforward. All processing happens client-side. No server calls. No data transmission. Users can verify by monitoring the Network tab in dev tools during the entire process. The output is a single file: cleaned of platform-specific metadata, indexed by conversation topic, and formatted with system instructions that any AI can parse on ingestion.
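The actual memory chip format is proprietary, but the shape the text describes can be approximated in a few lines. This sketch assembles a hypothetical output with the three pieces named above: a header of ingestion instructions, a topic index, and cleaned conversations. All field names and the format identifier are invented for illustration.

```python
def build_memory_chip(conversations: list[dict]) -> dict:
    """Assemble a hypothetical memory-chip-style structure from cleaned
    conversations (each a dict with 'title' and 'messages'). This is an
    illustration of the shape described in the text, not Memory Forge's
    actual format, which is proprietary."""
    return {
        "header": {
            "format": "portable-ai-memory/0.1",  # made-up identifier
            "instructions": (
                "This file contains the user's prior AI conversation "
                "history. Use the topic index for selective recall."
            ),
        },
        # A topic index lets the receiving system load selectively.
        "topics": sorted({c["title"] for c in conversations}),
        # Platform-specific metadata is assumed stripped upstream.
        "conversations": [
            {"title": c["title"], "messages": c["messages"]}
            for c in conversations
        ],
    }
```

Because everything here is plain dict-and-list data, the whole pipeline can run client-side and serialize to a single JSON file, consistent with the no-server-calls claim.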
Load the memory chip into any AI platform that accepts file uploads (Claude, Gemini, Grok, etc.) and the new system has access to the user's full conversation context. Projects, preferences, working patterns, and accumulated understanding all transfer.
The tool costs $3.95 per month. Processing a large export takes minutes, not hours.
Whether you evaluate this as a product or as a proof of concept, the takeaway is the same: the AI conversation portability problem is solvable with current technology. The reason it has not been solved by the platforms themselves is not technical. It is strategic. Lock-in drives retention. Portability threatens it.
What a Standard Could Look Like
If someone were to propose a portable AI conversation format today, it would probably need a few things:
A flat or shallow-nested structure that any system can parse without platform-specific knowledge. Clear separation between user messages, AI responses, and system/metadata content. Topic or thread boundaries that allow selective loading rather than all-or-nothing ingestion. A header block containing context instructions, similar to what Memory Forge generates, so the receiving AI knows how to use the data.
Something like a .mbox for AI. Not sexy, but functional.
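The four requirements above can be sketched as a data shape plus a selective loader. Every field name and the version string here are hypothetical, chosen only to show that each requirement fits in a few lines.

```python
# A flat, shallow structure: header block, then threads with explicit
# topic boundaries and role-tagged messages. No platform-specific nesting.
portable_export = {
    "header": {
        "version": "0.1",  # hypothetical version tag
        "instructions": "Prior conversation history; load threads by topic.",
    },
    "threads": [
        {
            "topic": "project-setup",
            "messages": [  # flat list, clear user/assistant separation
                {"role": "user", "content": "How do I configure the build?"},
                {"role": "assistant", "content": "Start with the defaults."},
            ],
        },
    ],
}

def select_threads(export: dict, topics: set[str]) -> list[dict]:
    """Selective loading: return only the threads whose topic matches,
    rather than forcing all-or-nothing ingestion."""
    return [t for t in export["threads"] if t["topic"] in topics]
```

The `select_threads` helper is the payoff of requirement three: a receiving system can pull one project's context without parsing the other 499 megabytes.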
PGS has effectively built a proprietary version of this with their memory chip format. Whether the industry converges on a standard or whether tools like Memory Forge become the de facto bridge is an open question. But the longer the platforms wait to address portability, the more third-party solutions will fill the gap.
The Bigger Pattern
Every major technology category has gone through this cycle. Proprietary lock-in, user frustration, third-party bridges, eventual standardization. Email took about fifteen years. Mobile numbers took about ten. Cloud data portability is still in progress.
AI conversation history is at the very beginning of this curve. The platforms have no incentive to move. Users are just starting to realize the lock exists. And the first tools to break it open are shipping now.
If you work in AI, if you build tools for AI, or if you just use an AI assistant heavily enough that your conversation history has real value, this is worth paying attention to. The portability question is coming. It is just a matter of whether the industry leads or gets dragged.
Memory Forge is available at https://pgsgrove.com/memoryforgeland if you want to give it a try.