I spent a weekend connecting every MCP server that sounded useful. By Sunday night I had 11 running, a claude_desktop_config.json that scrolled off the screen, and an agent that was technically capable of doing almost anything. In practice, it was doing almost nothing useful.
What I learned had very little to do with which servers are "good."
The List Looks Better Than It Performs
The MCP ecosystem has exploded. There are directories with thousands of servers now. You can connect your agent to GitHub, Notion, Google Drive, Slack, your calendar, your email, your databases, your browser, a web search tool, a weather API, and about 40 other things I cannot remember anymore. The pitch is always the same: connect everything and your agent becomes superhuman.
What nobody tells you is what happens to your context window when you do.
Every MCP server shows up in your agent's prompt as a block of tool definitions: name, description, parameter schema. This is not free. Perplexity's CTO said publicly in March that tool descriptions alone eat 40 to 50% of available context before the agent touches a single real task. I ran my own rough test with 11 servers and came out around 35%. One third of my agent's thinking capacity, gone before the first message. The agent still technically worked. It just worked slower, cost more per turn, and occasionally picked the wrong tool because it had 80 options to reason about instead of a handful.
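If you want your own rough number instead of taking mine, the estimate is easy to sketch. This is a back-of-the-envelope heuristic, not a measurement: the 4-characters-per-token ratio, the example schema, and the 200k window are all assumptions.

```python
import json

def estimate_tool_tokens(tool_defs, chars_per_token=4):
    """Approximate token count for a list of MCP tool definitions."""
    return len(json.dumps(tool_defs)) // chars_per_token

# A hypothetical server exposing 8 tools, each with a modest schema.
example_server = [
    {
        "name": f"tool_{i}",
        "description": "Does something useful with several parameters. " * 3,
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "What to look up"},
                "limit": {"type": "integer", "description": "Max results"},
            },
        },
    }
    for i in range(8)
]

tokens = estimate_tool_tokens(example_server)
context_window = 200_000
print(f"~{tokens} tokens, {tokens / context_window:.2%} of a 200k window")
```

Multiply that by 11 servers and the overhead stops looking theoretical.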
What I Kept
After a week of watching which tools the agent actually called, I stripped it down to three: web search, my calendar, and file system access to one specific folder.
My claude_desktop_config.json went from this kind of sprawl:
```json
{
  "mcpServers": {
    "brave-search": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-brave-search"] },
    "github": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"] },
    "notion": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-notion"] },
    "slack": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-slack"] },
    "google-calendar": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-google-calendar"] },
    "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you"] },
    "puppeteer": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-puppeteer"] }
  }
}
```
To this:
```json
{
  "mcpServers": {
    "brave-search": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-brave-search"] },
    "google-calendar": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-google-calendar"] },
    "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/work"] }
  }
}
```
Note the filesystem path change. /Users/you gave the agent access to everything on my machine. /Users/you/work gives it access to what it actually needs. Smaller scope, less noise in file lookups, faster responses.
The difference was immediate and embarrassing. I had spent a whole weekend adding capabilities, and the agent got noticeably better the moment I removed most of them.
Web search is the tool that punches farthest above its weight. An agent that can look something up in real time stops being a static oracle and becomes useful for questions whose answers change. Calendar access mattered more than I expected, not because the agent scheduled things for me, but because it could check whether I had time before suggesting I do something. "You have a 2-hour block on Thursday" as context is qualitatively different from a suggestion with no time anchor. File system access scoped to one working folder let me point the agent at my notes and drafts without handing it my entire machine.
Everything else I connected was called once or twice across a full week of actual use, then never again. GitHub: technically impressive, practically unnecessary because I just used the CLI. Notion: ran three queries, never opened it through the agent again. Slack: the agent drafted messages I never sent. Browser automation: impressive in a demo, zero practical value in my real workflow.
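The "watch which tools the agent actually calls" step can be automated instead of eyeballed. Here is a minimal sketch that counts tool invocations from client logs; the log line format below is hypothetical, so adapt the pattern to whatever your MCP client actually writes.

```python
import re
from collections import Counter

# Matches a tools/call entry and captures the tool name.
# The exact log shape is an assumption; adjust the regex for your client.
CALL_PATTERN = re.compile(r'tools/call.*"name"\s*:\s*"([\w-]+)"')

def count_tool_calls(log_lines):
    """Count how often each tool name appears in tools/call log entries."""
    counts = Counter()
    for line in log_lines:
        m = CALL_PATTERN.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Hypothetical sample log lines for illustration.
sample = [
    '2025-03-01 {"method": "tools/call", "params": {"name": "brave_web_search"}}',
    '2025-03-01 {"method": "tools/call", "params": {"name": "brave_web_search"}}',
    '2025-03-02 {"method": "tools/call", "params": {"name": "list_events"}}',
    '2025-03-02 {"method": "tools/list"}',
]
print(count_tool_calls(sample).most_common())
```

Run it over a week of logs and the servers you can safely disconnect identify themselves: they are the ones with a count of zero or one.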
The Rule I Wish Someone Had Told Me Earlier
A connected MCP server costs you context whether or not you ever use it. The tool definitions sit in the prompt on every single turn. The agent reasons about whether to invoke them each time. More servers is not "free and neutral, then occasionally useful." More servers is always a cost, and only sometimes worth it. That is a fundamentally different calculation.
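The "always a cost" claim is easy to put in dollars. Every number below is an illustrative assumption, not a quote from any provider's price list.

```python
# Back-of-the-envelope daily cost of idle tool definitions.
tool_def_tokens = 25_000   # tokens of tool schemas resent on every turn (assumed)
price_per_m_input = 3.00   # dollars per million input tokens (hypothetical rate)
turns_per_day = 50         # conversational turns in a day of heavy use (assumed)

daily_overhead = tool_def_tokens * turns_per_day * price_per_m_input / 1_000_000
print(f"${daily_overhead:.2f}/day just to carry tool definitions")
```

Even at made-up numbers, that is a recurring bill for capabilities that may never be invoked, which is the whole point: the cost is per turn, the benefit is per use.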
An MCP list that looks impressive in a screenshot will slow your agent down in production. Connect what you actually use, not what sounds good.
Before adding any server, the right question is not "could this be useful?" It is "did I actually need this in the last seven days?" If the answer is no, leave it disconnected. You can always add it back in thirty seconds.
What I Would Do Differently
Start with zero servers. Use the agent for a week on its base model alone. Write down every moment you hit a wall, something the agent could not do that you genuinely needed. Then connect exactly those servers and nothing else. Nothing else.
Most of the value in AI agents right now does not come from the size of the tool list. It comes from the agent having enough context headroom to reason clearly about the few tools it has. A focused agent with 3 servers beats a loaded one with 11. At least that is what a weekend of overengineering taught me.
If you have used MCP in a real workflow beyond demos, I want to know two things: which server do you use every single day, and which one did you connect and quietly never call again? Drop both. The most useful answers get turned into a proper teardown post with real usage data.