A field report from CuratedMCP — what we've learned shipping infrastructure for 10,000+ MCP servers, and what the protocol still can't do at production scale.
Reading time: 9 minutes
Audience: Engineers, security leaders, and AI infrastructure builders
The Model Context Protocol turned one year old in November 2025. In that year it went from a niche Anthropic experiment to the integration layer underneath Claude Desktop, Cursor, Windsurf, GitHub Copilot, OpenAI Agents, and Gemini CLI. There are now over 10,000 active MCP servers and 97 million monthly SDK downloads. The protocol won.
What no one writes about is what happened to the people who tried to build serious infrastructure on top of it.
We've spent the last 90 days running CuratedMCP — a curated marketplace, security auditor, and governance layer for MCP servers. We've shipped npm packages that thousands of developers have installed. We've seen what works at the protocol level and what falls apart the moment a real enterprise tries to deploy it. This is not a marketing post. This is a field report on five problems the MCP community is going to have to solve in 2026, and what we've built so far to address each one.
If you're a developer wondering why your MCP setup keeps breaking, a CISO trying to figure out how to govern this, or an investor evaluating the space — read on. The protocol is not the moat anymore. What's built on top of it is.
Problem 1: The discovery layer is broken, and it's getting worse
In early 2025 there were a few hundred MCP servers, scattered across GitHub repos. Today there are more than 10,000. The Anthropic-blessed registry alone passed 2,000 entries in three months. Most of these servers are unmaintained forks, abandoned proofs-of-concept, or duplicate implementations of the same integration.
The problem this creates is invisible until you try to use it. A developer wanting to "connect Claude to Postgres" today has to evaluate roughly seven different Postgres MCP servers, three of which have working install instructions, two of which support the latest spec, and exactly one of which has been updated in the last 60 days. The signal-to-noise ratio is collapsing in real time.
Auto-scraping directories made this worse, not better. They optimize for breadth — "we have the most servers" — and the result is a search experience that surfaces junk alongside production-grade tools, with no way for a developer or a security team to tell the difference.
What we've learned: Curation is the only fix. Every server in our catalog passes a manual review: does it install cleanly, does it have working documentation, does its security posture make sense, is it maintained? That review takes about 20 minutes per server. We list around 70 servers, not 10,000. Our customers have asked us to do less, not more — they wanted "the five Postgres MCPs that actually work," not "every Postgres MCP on GitHub."
This is the same insight that got Hugging Face to a $4.5B valuation: Hugging Face hosts roughly a million models, but its product-market fit came from the Transformers library — a curated set of 30 pre-trained models with consistent APIs that actually worked. Breadth is not the moat. Trust is.
The harder problem: curation does not scale linearly with team size. We are working on a hybrid model — community-submitted servers with automated verification (install test, dependency scan, behavior analysis) plus a "Verified by CuratedMCP" badge for human-reviewed entries. The badge is the trust signal; the rest is searchable but unranked.
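To make the automated half of that hybrid model concrete, here is a minimal sketch of what an install test plus dependency scan could look like using plain npm tooling. The function names, check order, and pass criteria are illustrative assumptions, not our actual pipeline, and the behavior-analysis step is omitted entirely.

```typescript
// Illustrative sketch only: check names and thresholds are assumptions,
// not CuratedMCP's actual verification pipeline.
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { mkdtemp } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

const run = promisify(execFile);

interface CheckResult {
  resolvable: boolean; // package exists on npm and exposes a version
  auditClean: boolean; // no high/critical advisories in its dependency tree
}

async function verifyCandidate(pkg: string): Promise<CheckResult> {
  // Install test: can the package even be resolved from the registry?
  let resolvable = true;
  try {
    await run("npm", ["view", pkg, "version"]);
  } catch {
    resolvable = false;
  }

  // Dependency scan: install into a throwaway directory and run npm audit.
  let auditClean = false;
  if (resolvable) {
    const dir = await mkdtemp(join(tmpdir(), "mcp-verify-"));
    try {
      await run("npm", ["init", "-y"], { cwd: dir });
      await run("npm", ["install", pkg, "--ignore-scripts"], { cwd: dir });
      // npm audit exits non-zero when advisories at or above the level exist.
      await run("npm", ["audit", "--audit-level=high"], { cwd: dir });
      auditClean = true;
    } catch {
      auditClean = false;
    }
  }

  return { resolvable, auditClean };
}
```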
Problem 2: Distribution lives inside the AI client, not on a website
Here is something the MCP marketplace category got fundamentally wrong, including us, for our first month: developers do not browse marketplaces.
We can prove this with our own analytics. Our marketplace pages get a fraction of the traffic our developer-tool downloads do. The MCP Auditor — a free CLI that scans developer machines for vulnerable MCP configurations — has hundreds of organic npm installs. Nobody promoted it. The traffic to its marketplace landing page is a tenth of those installs.
The lesson: developers find tools by searching npm, by typing into Claude, by reading what their tech lead pasted into Slack. They don't open a tab and go marketplace-shopping. The distribution channel for MCP infrastructure isn't the marketplace website — it's the AI client itself.
That's why we shipped the Launcher: an MCP server whose only job is to find and install other MCP servers. You add one line to your Claude config, restart, and then ask Claude in plain English: "Find me an MCP server for Postgres." The launcher returns a curated config snippet. You paste it. Done.
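For reference, the Launcher entry sits in the standard `mcpServers` block of the Claude Desktop config and looks roughly like the snippet below. Treat it as a sketch: the server key name is arbitrary and the exact `args` may differ from what the package README specifies.

```json
{
  "mcpServers": {
    "curatedmcp-launcher": {
      "command": "npx",
      "args": ["-y", "@curatedmcp/launcher"]
    }
  }
}
```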
This pattern — meta-MCP, an MCP server that helps you install MCP servers — is the right shape of distribution for this category. It puts our catalog directly inside the workflow of every Claude, Cursor, and Windsurf user, where they actually live. The marketplace website becomes a dashboard for something that lives in their environment, the same way the GitHub website is a dashboard for something that mostly happens in git.
Problem 3: Shadow MCP is the new shadow IT, and security teams are about to find out
Every developer in your organization is now adding MCP servers to their Claude or Cursor config. Most of those servers are pulled from GitHub or npm with zero security review. They run as local processes with full access to whatever credentials the developer has on their machine — production database keys, AWS secrets, Stripe tokens.
Three things make this uniquely difficult to govern:
- It's invisible to network monitoring. MCP servers communicate with the AI client over stdio, not HTTP. There is nothing for a firewall or DLP to inspect.
- It's invisible to procurement. A developer doesn't fill out a form to install an npm package. There is no contract, no DPA, no terms-of-service review. The "vendor" is an individual on the internet.
- It bypasses endpoint posture. EDR sees Node.js running and moves on.
We've talked to half a dozen platform engineering leads at mid-sized companies in the last month. None of them know how many MCP servers their engineers have installed. All of them suspect the answer is "more than fifty, probably some are bad." None have a remediation plan. This is the procurement crisis of mid-2026, and it will be the dominant security conversation for the second half of the year.
What we've built: the MCP Auditor — a free, open-source CLI that scans Claude Desktop, Cursor, and Claude Code configurations for credential leaks, filesystem exposure, and unverified servers. It runs locally. It generates a markdown report. It is the smallest possible useful tool in this space, and it's the one that has had real organic traction.
What we're building: a private MCP registry that gives security teams a single approved catalog their developers can install from. RBAC, scoped API keys, audit logs, instant revocation. The same architectural pattern GitHub Enterprise applied to source code, Okta applied to SaaS auth, and Snyk applied to open-source dependencies. The MCP equivalent does not yet exist at scale, and the demand is about to compound.
Problem 4: The protocol has no answer for production-scale identity
This is the technical problem that gets discussed least and matters most. The MCP team's own 2026 roadmap calls out three protocol-level gaps: identity propagation, adaptive tool budgeting, and structured error semantics.
In practice, here's what that means: when a Claude agent on a developer's machine calls an MCP server that calls a downstream API, who is the request actually being made as? Today, the typical answer is "whatever long-lived API key the developer pasted into the config file when they installed the server six months ago." That key has the developer's full permission scope. It is rarely rotated. It often outlives the project.
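Concretely, the anti-pattern looks something like this config entry. Every identifier here is invented for illustration, but the shape — a full-privilege, never-expiring credential sitting in plaintext on a developer machine — is what we see in the wild:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "some-postgres-mcp-server"],
      "env": {
        "DATABASE_URL": "postgres://admin:long-lived-password@prod-db.internal:5432/app"
      }
    }
  }
}
```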
A 2025 survey found that 53% of community-built MCP servers rely on static API keys with no rotation discipline. That's not a problem in a hobby setup. In a 500-person engineering org, it's the next major credential-leak incident waiting to happen.
The fix the protocol should support — and is finally moving toward — is OAuth-style authorization with scoped, short-lived tokens, resource indicators that limit token reuse, and explicit consent flows when an MCP server is asked to do something privileged. The 2025-06-18 spec revision adds the framework. Most servers still don't implement it. The migration will take 12-18 months across the ecosystem, and during that window, the surface area for credential abuse grows every week.
What we've built: a configuration scanner that flags every static API key in a user's config and recommends OAuth alternatives where they exist. We're also working on a hosted credential broker — short-lived, scoped tokens issued just-in-time when an MCP server needs them, rather than long-lived secrets sitting in plain text on developer machines. This is the kind of infrastructure that exists for SaaS apps (Vault, Doppler) and doesn't exist yet for MCP. It will.
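To show the shape of the broker flow, here is a minimal TypeScript sketch of issuing a short-lived, scoped token per tool call instead of reading a static key from the config file. The broker endpoint, request fields, and scope names are hypothetical; this is the pattern, not a shipping API.

```typescript
// Hypothetical just-in-time credential flow. The broker URL, request shape,
// and scope names are illustrative assumptions, not a real CuratedMCP API.
interface ScopedToken {
  token: string;      // short-lived bearer token
  expiresAt: string;  // ISO timestamp; minutes, not months
  scope: string[];    // the narrowest scopes the tool call actually needs
}

async function getScopedToken(
  brokerUrl: string,
  resource: string,   // the downstream API this MCP server talks to
  scopes: string[],
): Promise<ScopedToken> {
  // The MCP server authenticates to the broker and asks for the minimum
  // scope needed for this one operation.
  const res = await fetch(`${brokerUrl}/tokens`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ resource, scopes }),
  });
  if (!res.ok) {
    throw new Error(`Broker refused token request: ${res.status}`);
  }
  return (await res.json()) as ScopedToken;
}

// Usage inside a tool handler: fetch a token per call instead of reading a
// long-lived key from the config file, and let it expire on its own.
async function listInvoices(brokerUrl: string): Promise<unknown> {
  const { token } = await getScopedToken(brokerUrl, "billing-api", ["invoices:read"]);
  const res = await fetch("https://billing.example.com/invoices", {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```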
Problem 5: There is no monetization story for the people building MCP servers
The dirty secret of the MCP ecosystem is that almost no one building MCP servers is making money from them.
Let's actually count. There are 10,000+ active MCP servers. Of those, maybe 200 are built by companies (Stripe, Cloudflare, GitHub, Hugging Face, Notion, etc.) where the MCP server is a free distribution channel for their core SaaS product. The other 9,800 are individuals, side projects, and small teams who shipped something useful and have no path to revenue from it.
This is bad for the ecosystem in two ways. First, the people doing the most interesting niche work — a Datadog MCP, a Splunk MCP, a Salesforce MCP — burn out within six months because nobody is paying them. Second, no monetization means no commitment, which means abandoned servers, broken installs, and the discovery problem from the top of this article.
The Apify model — a marketplace that takes a 20% platform fee and gives developers 80% of every subscription — is what should exist for MCP. It is what we have built. But the harder problem is not the platform mechanics; it's that buyers don't yet understand they should pay for an MCP server, and sellers don't yet understand they can charge.
This is a category-creation problem, not a product problem. Hugging Face had to teach the world that researchers should pay $9/month to host private datasets. Stripe had to teach the world that developers should pay 2.9% to accept payments instead of doing it themselves. Someone in the MCP ecosystem has to teach the world that an MCP server which saves an engineering team 10 hours a week is worth $29/month.
We're trying. Our seller program offers an 80/20 split with automatic Stripe Connect payouts. The number of paid subscriptions is small but growing: the first came three weeks ago, the second last week. This is the slow part of building infrastructure.
What this means for the next 12 months
The MCP ecosystem is in a strange phase. The protocol has reached escape velocity. The clients have all adopted it. The number of servers is exploding. And almost everything around the protocol — discovery, security, governance, monetization — is still being built by hand by a small number of teams, in real time, in public.
If you are a developer reading this: install the Auditor, scan your config, see what's actually running on your machine. We give it away because the security data we get back makes the catalog better.
If you are a security leader reading this: assume your engineering org has more MCP servers than you think it does, and that none of them have been reviewed. The cheapest first action is the same Auditor scan, run across your developer machines.
If you are a builder reading this: the MCP server you've published on GitHub is reaching almost no one through that channel. List it on a curated marketplace. Try charging for it. Find out whether anyone will pay. The information from that experiment is worth more than the GitHub stars.
If you are an investor or strategic partner reading this: the layer between the AI client and the actual tools is the next great infrastructure category. MCP is the protocol layer. What gets built on top of it is the thing that compounds.
We are CuratedMCP. We're building the system of record for MCP. We're early. Most of the work above is still in progress. But these are the five problems we wake up thinking about every day, and we'd rather write about them honestly than pretend we have everything figured out.
Find us at curatedmcp.com. The Auditor is at npx @curatedmcp/auditor. The Launcher is at npx @curatedmcp/launcher. The enterprise pilot is open. The conversations we're not having yet are the ones we'd most like to have — drop a line at founders@curatedmcp.com if you've hit any of the problems above.
The protocol is built. The infrastructure on top of it is what we're racing to figure out. Same as 1995. Same as every important technology shift since.
About the author: The CuratedMCP team builds the system of record for the Model Context Protocol ecosystem — a curated marketplace of 70+ verified MCP servers, free open-source security tooling (@curatedmcp/auditor), and an enterprise governance layer for organizations standardizing AI agent tools at scale.
Try it:
- npx @curatedmcp/auditor — free MCP security scan
- npx @curatedmcp/launcher — install MCP servers from inside Claude
- curatedmcp.com/marketplace — verified catalog
- curatedmcp.com/enterprise/pilot — 30-day enterprise pilot
Published 2026 · MIT-style content license — feel free to repost with attribution.