I told Ollama "I just moved to Austin." Then I opened Claude Desktop and asked "where do I live?" Claude said Austin. I never told Claude anything.
Both apps were reading and writing to the same memory daemon on my machine. That's BubbleFish Nexus. This post is about what it does, why I built it, and what I learned shipping it solo over the past few months.
The problem
Every AI app keeps its own memory silo. ChatGPT doesn't know what you told Claude. Claude doesn't know what you told Ollama. OpenClaw doesn't know what any of them know. Switch tools and you re-explain everything.
There are hosted memory services that solve this, but they all want your data on their servers. I wanted something different: one daemon you run yourself that any AI client can connect to, with the same memory shared across all of them.
What Nexus is
A single Go binary. AI clients connect over HTTP or MCP and authenticate with API keys or OAuth 2.1. Every write goes through the same pipeline — auth, policy check, durable write, queue dispatch, destination commit. Every read goes through a multi-stage retrieval pipeline that combines metadata filtering, semantic search, and time-aware reranking so newer facts outrank older contradictions.
Seven AI clients are verified working today: Claude Desktop, ChatGPT (through a real OAuth 2.1 authorization code flow), Perplexity Comet, Ollama, Open WebUI, OpenClaw, and anything that speaks HTTP.
The crash safety story
This is the part I cared about most. If a memory daemon loses data on a crash, it's worthless. I wanted to be able to kill the process mid-write and have zero data loss.
I verified this by hand before I put the claim in the README. Wrote real memories, force-killed the process, restarted. Every memory came back with full content. There's a built-in bubblefish demo command that does this automatically with 50 memories — writes them, kills the daemon, restarts, queries, and asserts all 50 are present with zero duplicates. It's also the demo I use when people ask "but does it actually work?"
The architecture is documented at a high level in the README. I'm keeping the deeper internals out of public writeups for now, but the externally observable guarantee is simple: if Nexus returns 200 on a write, that memory is durable. If it returns 429 because the queue is full, the memory is still durable — the destination just hasn't caught up yet. There is no window where data is at risk.
What I learned shipping solo
The hardest part wasn't any single feature. It was the integration work. Connecting seven different AI clients meant seven different transport protocols, seven different auth models, seven different payload shapes. The real value of Nexus isn't any one of those integrations — it's that they all share one memory backend, so a fact written by one client is immediately visible to the others.
The second hardest part was getting the testing right. 607 tests across 31 packages, all green under the Go race detector, on Windows. CGO_ENABLED=1 for race testing, pure Go for normal builds. I shipped nothing until every test passed three times in a row.
Try it:
```bash
git clone https://github.com/bubblefish-tech/nexus.git
cd nexus
go build -o bubblefish ./cmd/bubblefish/
./bubblefish install --mode simple
./bubblefish start
```
That's the setup. SQLite backend, single source, auto-generated API key, bound to localhost. The installer prints your key and an example curl command. Or grab a pre-built binary from GitHub Releases — no Go installation required.
Repo
AGPL-3.0. Solo dev. Built at my desk in Prescott, Arizona over the past few months with Claude Code as my pair programmer.
https://github.com/bubblefish-tech/nexus
Happy to answer questions about the architecture at the level the README covers, or about the integration work for any of the seven AI clients. If you've built anything in this space, I'd love to hear how you approached durability — it's the design constraint I spent the most time on.