I kept running into the same thing with AI tools:
- Great context disappears
- I repeat myself constantly
- Every tool remembers different stuff (or nothing)
- When I move between tools, my context doesn't follow me
So I built Empirical.
It started in a pretty common place: I was iterating on a Philly-style hoagie roll recipe.
I wanted the AI to remember what I liked, what failed, and what I wanted to try next without re-explaining it every time.
I originally thought Empirical would be its own chatbot. I started down that path, then realized I was solving the wrong problem: I was reinventing the wheel.
I didn’t need another chat interface.
I needed a memory layer I could use everywhere.
So I changed lanes and focused on MCP tools.
Now I use Empirical memory across:
- Coding CLIs
- ChatGPT
- Claude Web
- Claw Agents
Same memory, different interfaces. Now if ChatGPT is no longer _cool_ or Claude leaks its entire codebase, I can switch to the latest hot thing, and all my context and memories move with me.
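To make the idea concrete, here's a minimal sketch of what a tool-agnostic memory layer looks like: one persistent store with a save/recall API that any interface can call. This is illustrative only, with a JSON file standing in for the real backend; the names (`MemoryStore`, `save`, `recall`) are hypothetical, not Empirical's actual implementation.

```python
import json
import tempfile
import time
from pathlib import Path

class MemoryStore:
    """Hypothetical shared memory layer: one store, many interfaces."""

    def __init__(self, path):
        self.path = Path(path)
        # Load whatever earlier sessions (from any tool) have saved.
        self.memories = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def save(self, text, tags=()):
        """Persist a memory with tags so any interface can recall it later."""
        self.memories.append(
            {"text": text, "tags": list(tags), "ts": time.time()}
        )
        self.path.write_text(json.dumps(self.memories, indent=2))

    def recall(self, tag):
        """Return every memory carrying the given tag, newest first."""
        hits = [m for m in self.memories if tag in m["tags"]]
        return sorted(hits, key=lambda m: m["ts"], reverse=True)

# One session saves; a later session (or a different tool) recalls.
store = MemoryStore(Path(tempfile.gettempdir()) / "empirical_demo.json")
store.save("Prefers a softer crumb in hoagie rolls", tags=["recipes"])
store.save("Liked the wheated bourbon from the photo", tags=["bourbon"])
print(store.recall("bourbon")[0]["text"])
```

Exposing `save` and `recall` as MCP tools is what lets every MCP-capable client share the same store instead of each app keeping its own silo.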
Real examples that made this click for me
I can take a pic of a bourbon, say “I like this,” and that preference is saved as persistent memory.

I can send health data and query/chat over it later to help spot patterns.

I can write a PRD while going on a walk with ChatGPT, then pull it back up in a CLI session at my desk.
What’s next
I’m now working on connecting Empirical to more sources so memory reflects more of my actual life/workflow.
Current focus:
- better pattern recognition over time
- stronger multimodal memory (text + image + structured data)
- cleaner memory workflows for agents
