I Use One MCP Endpoint for ChatGPT, Claude, Gemini, and Cursor
I use four AI agents daily. Claude Code coordinates my work. Gemini writes code. ChatGPT does web research. Claude Web is where I brainstorm. I also occasionally use Cursor and Windsurf.
Every single one of them connects to the same URL.
One endpoint. One API key per agent. Every tool I've registered — GitHub, Slack, Cloudflare, Exa, my database — is available to all of them, instantly, with the same DLP scanning and audit logging.
This is what the MCP Hub does, and I want to explain why it matters in practice.
The Problem I Had
Before the hub, connecting MCP servers to multiple agents was a nightmare. Each agent has its own config format:
- Claude Code: ~/.claude.json with mcpServers entries
- Cursor: .cursor/mcp.json in each project
- ChatGPT: Developer Mode UI, one connector at a time
- Gemini CLI: .gemini/settings.json
Same GitHub MCP server. Four different config files. Four different places storing the same token.
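For context, this is roughly what the duplication looked like. The mcpServers shape below is what Claude Code reads from ~/.claude.json; Cursor and Gemini CLI use similar but not identical layouts, and the token value is a placeholder:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    }
  }
}
```

Multiply this block by four agents and you get four copies of the same server definition and, worse, four copies of the same credential.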
When I rotated my GitHub token, I had to hunt through all four. When I added a new MCP server (Exa for web search), I configured it four times. When one config had a typo, that agent silently lost access to all tools and I spent 20 minutes debugging.
And none of them had any security scanning. My agents could have been sending tool calls containing database credentials, API keys, and internal URLs straight to external MCP servers, and I would never have known unless I reviewed every. single. call.
What I Do Now
I register my MCP servers once, in the mistaike dashboard. GitHub, Slack, Cloudflare, Exa, my custom servers — all stored with encrypted credentials.
Then each agent gets one line of config:
URL: https://mcp.mistaike.ai/hub_mcp
Key: mk_<my-api-key>
That's it. Every agent sees every tool. When I add a new server, all agents get it immediately. When I rotate a credential, I update it once in the dashboard.
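As a concrete sketch, here is what that one entry could look like in a Claude Code-style config. Claude Code supports remote servers via an HTTP transport with custom headers; the exact header name and bearer scheme for the hub key are assumptions, so check the hub's docs:

```json
{
  "mcpServers": {
    "mistaike-hub": {
      "type": "http",
      "url": "https://mcp.mistaike.ai/hub_mcp",
      "headers": { "Authorization": "Bearer mk_<my-api-key>" }
    }
  }
}
```

The equivalent entry in the other agents is the same two facts, URL plus key, expressed in each agent's own config syntax.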
What Each Agent Actually Does
This isn't theoretical. This is my daily workflow building mistaike.ai:
Claude Code is my coordinator. It plans work, creates GitHub issues, reviews PRs, dispatches tasks. Through the hub, it uses GitHub (issues, PRs, code search), the Memory Vault (shared knowledge), and the Bug Vault (297,000+ patterns to check before writing code).
Gemini is the executor. It receives a task file, creates a branch, writes tests first, implements the feature, and opens a PR. Through the same hub, it accesses the same GitHub, same Memory Vault, same Bug Vault. It saves bug patterns it discovers during implementation — patterns that Claude Code can then recall in the next session.
ChatGPT does competitive intelligence. Through the hub it can read from the Memory Vault to see what we already know, then uses its native web search to find new MCP security products, attacks, and CVEs. It saves findings back to the Memory Vault where all other agents can find them.
Claude Web is for brainstorming. I iterate on designs, discuss architecture, review options. When I'm happy with a direction, I save it to the Memory Vault through the hub. Then I open Claude Code and search for it — the decision is right there, ready to implement.
Four agents. Different strengths. One shared set of tools and memory.
The Security Part
Every tool call from every agent passes through the DLP pipeline. Both directions.
When Gemini sends a tool call to GitHub that accidentally includes a database password from its context, the hub catches it before it reaches GitHub. When an untrusted MCP server returns a response containing prompt injection, the hub blocks it before the agent sees it.
90+ secret types. 35+ PII entities. Prompt injection detection. Destructive command blocking. All applied uniformly to every agent, every tool call, without any per-agent configuration.
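To make the shape of such a pipeline concrete (this is an illustrative sketch, not the hub's actual implementation), a scanner holds a catalog of patterns and checks every payload in both directions. The three rules below are a tiny hypothetical subset:

```python
import re

# Hypothetical subset of detection rules; a real DLP engine carries far more,
# plus PII entity recognition and prompt-injection heuristics.
RULES = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "postgres_url": re.compile(r"\bpostgres(?:ql)?://\S+:\S+@\S+"),
}

def scan(payload: str) -> list[str]:
    """Return the names of every rule that matches the payload."""
    return [name for name, rx in RULES.items() if rx.search(payload)]

# An outbound tool call that accidentally leaked a connection string:
call = 'create_issue(body="see postgresql://app:s3cret@db.internal/prod")'
findings = scan(call)
if findings:
    # At this point the hub would block or redact, in either direction.
    print("blocked:", findings)  # blocked: ['postgres_url']
```

The same scan runs on responses coming back from MCP servers, which is where prompt-injection detection applies.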
I didn't set up DLP for Claude Code separately from ChatGPT. I configured it once. The hub enforces it on everything.
The Memory Part
The Memory Vault is a native tool on the hub. Every agent can save_memory and search_memories. They all share the same pool.
This is the part that changed my workflow the most. Before the hub, every agent session started from zero. I'd paste the deploy flow, the git rules, the alembic conventions, the smoke test patterns. Every. Session.
Now the vault has 40+ focused memories. Each agent searches for what it needs on startup. Claude Code knows the deploy pipeline. Gemini knows the TDD rules. ChatGPT knows the competitive landscape. None of them needed me to repeat it.
When one agent learns something — a bug pattern, a process change, a user correction — it saves it. The next agent, regardless of which one it is, can find it.
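Conceptually, the shared pool behaves like this minimal sketch. save_memory and search_memories are the hub's tool names from above; the in-memory class and substring search are stand-ins for illustration:

```python
class MemoryVault:
    """Toy in-memory model of a shared, agent-agnostic memory pool."""

    def __init__(self) -> None:
        self._memories: list[dict] = []

    def save_memory(self, agent: str, text: str) -> None:
        # Any agent can write; the pool does not partition by agent.
        self._memories.append({"agent": agent, "text": text})

    def search_memories(self, query: str) -> list[str]:
        # Real search would be semantic; substring match keeps the sketch short.
        q = query.lower()
        return [m["text"] for m in self._memories if q in m["text"].lower()]

vault = MemoryVault()
# One agent saves a decision during brainstorming...
vault.save_memory("claude-web", "Deploy flow: alembic upgrade, then smoke tests, then promote")
# ...and a different agent finds it in its next session, with no re-pasting.
print(vault.search_memories("deploy"))
```

The point of the sketch is the asymmetry it removes: who saved a memory never matters to who can retrieve it.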
What It Costs
The hub is free to try: the free tier gives you up to 3 MCP server registrations, with routing, audit logging, and the Bug Vault included. Memory Vault and DLP scanning are part of the paid tier, the latter because scanning every payload in real time has real infrastructure costs.
For most individual developers, 3 servers is enough to see the value. GitHub, a database, and one more. If you need more, the paid tiers raise the limit.
The Honest Take
I built this because I needed it. Managing MCP connections across four agents was eating my time and leaking my credentials. The hub solved both problems.
It's not perfect. ChatGPT's MCP support is still new and only works through Developer Mode with OAuth limitations. Some MCP servers have transport quirks that need workarounds. The dashboard UI is functional but not polished.
But the core idea works: your MCP setup should belong to you, not to any specific agent. Tools should follow you. Memory should persist. Security should be automatic. And you should configure everything once.
This post describes my actual daily workflow as of March 2026. mistaike.ai is a real product I use to build itself. The MCP Hub is live at mcp.mistaike.ai.
Originally published on mistaike.ai