## The problem
Like a lot of developers, I live in several surfaces at once: an editor, a terminal, notes, task lists, and an AI assistant. Each one is fine on its own. Together, they’re awful at one thing: keeping a single, trustworthy slice of project context in sync.
I was tired of:
- Re-explaining the same background to Claude
- Copy-pasting chunks of notes or repo context
- “Knowing” something lived in some doc or ticket, but not being able to pull the exact passage when I needed it
I wanted one local place for workspace + knowledge, but I also wanted that data everywhere on my Mac—not trapped in a single app.
## What I built
DeepThink is a local-first workspace for macOS: projects, notes, tasks, reminders, and a personal knowledge base, with a native SwiftUI interface so you can actually browse and edit your data like a real app.
On top of that, it ships:
- An MCP server so MCP-capable clients can read and update what the app manages
- A `deepthink` CLI for terminal workflows
- Hybrid RAG (keyword + semantic retrieval) so assistants can attach tight excerpts instead of whole directories
Persistence is on your machine (SwiftData plus on-disk markdown and embeddings under `~/DeepThink/`). The goal isn't "another notes app"; it's one workspace that other tools can see and reuse through MCP and CLI, with a GUI when you want to think in screens instead of prompts.
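Because the notes live as plain markdown on disk, any outside tool can read them without going through the app. Here's a minimal sketch of that idea in Python; `load_notes` is a hypothetical helper (not part of DeepThink), and the only assumption about the layout is "markdown files somewhere under the workspace root":

```python
from pathlib import Path

def load_notes(root: Path) -> dict[str, str]:
    """Collect every markdown file under the workspace root.

    Returns a mapping of relative path -> file contents, so other
    tools (scripts, indexers, assistants) can reuse the same data.
    """
    notes = {}
    for md in sorted(root.rglob("*.md")):
        notes[str(md.relative_to(root))] = md.read_text(encoding="utf-8")
    return notes
```

Pointing this at `Path.home() / "DeepThink"` would surface the same files the app manages, which is the whole "data everywhere on my Mac" point.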
## Why MCP + CLI matter
MCP is the difference between “my data lives in an app” and “my data is a capability my editor and agents can call.” If your assistant can query your workspace the same way your UI does, you stop treating AI as a separate silo and start treating it as another client over the same source of truth.
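Concretely, MCP is JSON-RPC 2.0 under the hood: a client invokes a server-side tool with a `tools/call` request. Here's a sketch of what such a request could look like; the tool name `search_notes` and its arguments are hypothetical, not DeepThink's actual tool surface:

```python
import json

# JSON-RPC 2.0 envelope used by MCP for tool invocation.
# The tool name and arguments below are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_notes",  # hypothetical tool name
        "arguments": {"query": "retrieval design notes", "limit": 3},
    },
}
print(json.dumps(request, indent=2))
```

The useful part is that the UI, the CLI, and any MCP-aware assistant can all issue the same kind of call against the same workspace.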
The CLI covers the cases where you’re not in an MCP-aware client at all: scripts, SSH-ish flows, quick captures, or “just give me context for this query” from the shell.
Together, they’re how I get local data onto the rest of my machine without turning copy-paste into a second job.
## Hybrid RAG (and why “semantic only” wasn’t enough)
Dumping large folders into Claude “works” until it doesn’t: you burn tokens, dilute signal, and the model still might miss the one paragraph that mattered.
DeepThink uses hybrid retrieval: BM25-style keyword search plus semantic search (on Apple’s embedding stack via `NLEmbedding`). In practice that means:
- Exact symbols, filenames, and rare terms still surface via lexical matching
- Paraphrases and “I can’t remember the wording” queries still surface via semantic matching
- You can ground the assistant on the passages you mean, not the whole haystack
That’s the same underlying idea as good RAG in production systems: retrieve narrow, generate broad.
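A common way to combine the two ranked lists is reciprocal rank fusion (RRF); whether DeepThink uses RRF specifically or some other merge is an assumption on my part, but this sketch shows the shape of the idea:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each retriever's list contributes
    1/(k + rank) per document, so documents ranked well by either
    the keyword or the semantic side rise to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the two retrievers.
keyword_hits = ["notes/mcp.md", "notes/cli.md", "notes/rag.md"]
semantic_hits = ["notes/rag.md", "notes/mcp.md", "notes/ideas.md"]
print(rrf([keyword_hits, semantic_hits]))
```

Documents that appear in both lists (`notes/mcp.md`, `notes/rag.md`) outrank documents that only one retriever found, which is exactly the "exact terms *and* paraphrases" behavior described above.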
## Claude in the loop
Conversational AI routes through Anthropic’s Claude CLI (authenticated via `claude login`) and sits alongside that retrieval layer. The point isn’t to replace your editor; it’s to make workspace-aware conversations cheap enough to use all day because you’re not re-sending the world on every turn.
There’s also a `/deepthink` command path in the workflow so common actions stay fast once you’re set up.
## Who this is for
You might care if you:
- Want local-first control over notes, tasks, and knowledge
- Use Claude and want MCP as the bridge into a real workspace
- Prefer native macOS UI for browsing and editing, not only chat
You probably don’t need this if you’re happy with a single SaaS silo and don’t care about local storage or MCP integration.
## Get DeepThink
- Try: try-deepthink.vercel.app
- Repo: deepthink
- Mac app: v1.0.0 release (full install and troubleshooting steps are on that release page)
Feedback and PRs are welcome, especially on MCP, retrieval, and the CLI.