MCP Hit 97 Million Installs. Here's What That Means for Your Agent Stack.
Model Context Protocol crossed 97 million installs in March 2026. In April, Cursor, Claude Code, and OpenAI Codex all shipped MCP v2.1 support. The experiment is over — MCP is infrastructure.
Here's what changed, what it unlocks, and the one pattern that makes it click.
What MCP v2.1 Actually Shipped
The headline additions in v2.1:
Structured tool discovery — Agents can now query an MCP server for its full tool manifest at runtime, including descriptions, input schemas, and capability tags. No more hardcoded tool lists in system prompts.
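Discovery means an agent can select tools by capability at runtime instead of baking names into a prompt. Here's a minimal sketch of what consuming a manifest might look like — the entry shape and field names (`capabilityTags`, `inputSchema`) are illustrative assumptions, not language from the spec:

```typescript
// Hypothetical shape of a discovered tool manifest entry.
// Field names are assumptions for illustration, not the spec's.
interface ToolManifestEntry {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
  capabilityTags: string[];
}

// Pick tools by capability tag rather than by hardcoded name.
function toolsWithTag(manifest: ToolManifestEntry[], tag: string): string[] {
  return manifest
    .filter((entry) => entry.capabilityTags.includes(tag))
    .map((entry) => entry.name);
}

// Example manifest as a server might report it at runtime.
const manifest: ToolManifestEntry[] = [
  {
    name: "search_codebase",
    description: "Search the codebase for a pattern",
    inputSchema: { query: "string" },
    capabilityTags: ["read", "search"],
  },
  {
    name: "run_tests",
    description: "Run the test suite",
    inputSchema: { pattern: "string?" },
    capabilityTags: ["execute"],
  },
];

console.log(toolsWithTag(manifest, "read")); // ["search_codebase"]
```

The payoff is that the system prompt can say "you have read-only tools available" and the client fills in the specifics per server.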
Streaming tool results — Long-running tools can stream partial results back to the client. This is huge for search, code execution, and database queries.
Transport agnostic — v2.1 standardizes HTTP, stdio, and WebSocket transports. Your MCP server works the same whether the client is Claude Code, Cursor, or your own custom agent.
Auth delegation — OAuth 2.0 flows are now part of the spec. An MCP server can request user authorization without the calling agent needing to know your OAuth provider.
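The practical upshot of delegation is that the agent only has to surface an authorization URL to the user; it never holds your OAuth client secret. A sketch of the kind of handoff a server might emit — every field name here is an assumption made for illustration, not spec language:

```typescript
// Illustrative only: the message shape and field names are assumptions,
// not taken from the v2.1 spec. The point is the handoff, not the keys.
interface AuthDelegationRequest {
  type: "auth/request";
  provider: "oauth2";
  authorizationUrl: string; // the user completes login here
  requiredScopes: string[];
}

// A client can check which scopes are still missing before
// interrupting the user with a login prompt.
function missingScopes(granted: string[], required: string[]): string[] {
  return required.filter((scope) => !granted.includes(scope));
}

const request: AuthDelegationRequest = {
  type: "auth/request",
  provider: "oauth2",
  authorizationUrl: "https://example.com/oauth/authorize",
  requiredScopes: ["repo:read", "repo:write"],
};

console.log(missingScopes(["repo:read"], request.requiredScopes)); // ["repo:write"]
```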
The Convergence That Matters
Before April 2026, the three major AI coding environments each had their own tool integration story:
- Claude Code had skills and hooks
- Cursor had extensions
- Codex had custom functions
Now all three are MCP-native. A tool you build once as an MCP server works across all three. This is the compound win:
Your MCP Server
├── Claude Code (full MCP v2.1)
├── Cursor (full MCP v2.1)
├── Codex (full MCP v2.1)
└── Your own agents (via SDK)
One implementation, every client.
Building a v2.1 MCP Server in 50 Lines
import { MCPServer, tool, z } from "@modelcontextprotocol/sdk";

// ripgrep() and spawnTests() are project-local helpers that shell out
// to rg and your test runner; they are not part of the SDK.
import { ripgrep, spawnTests } from "./helpers";

const server = new MCPServer({
  name: "my-tools",
  version: "1.0.0",
});

server.addTool(
  tool({
    name: "search_codebase",
    description: "Search the codebase for a pattern",
    input: z.object({
      query: z.string().describe("Search query or regex"),
      path: z.string().optional().describe("Limit to this directory"),
    }),
    handler: async ({ query, path }) => {
      const results = await ripgrep(query, path);
      return { content: results.slice(0, 20) };
    },
  })
);

server.addTool(
  tool({
    name: "run_tests",
    description: "Run the test suite and stream results",
    input: z.object({
      pattern: z.string().optional(),
    }),
    streaming: true, // v2.1 streaming
    handler: async ({ pattern }, { emit }) => {
      const proc = spawnTests(pattern);
      for await (const line of proc.stdout) {
        await emit({ type: "progress", content: line });
      }
      return { summary: await proc.result() };
    },
  })
);

// Works via HTTP, stdio, or WebSocket
server.listen({ transport: "http", port: 3001 });
Point Claude Code, Cursor, or Codex at that endpoint and the same implementation serves all three: no per-client code, no per-client builds.
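The run_tests tool above leans on v2.1's streaming contract: progress events while the command runs, a final summary when it exits. Stripped of the SDK, that contract is just a callback plus a return value. A self-contained simulation, with `spawnTests` replaced by a canned line source:

```typescript
// Simulation of the streaming contract: the handler emits progress
// lines as they arrive, then returns a final summary. No SDK involved.
type ProgressEvent = { type: "progress"; content: string };

async function runTestsHandler(
  lines: string[],
  emit: (event: ProgressEvent) => Promise<void>,
): Promise<{ summary: string }> {
  let passed = 0;
  for (const line of lines) {
    await emit({ type: "progress", content: line }); // partial result
    if (line.startsWith("PASS")) passed++;
  }
  return { summary: `${passed} passed` }; // final result
}

// Drain it the way a client would: render progress, keep the summary.
async function demo() {
  const seen: string[] = [];
  const result = await runTestsHandler(
    ["PASS auth.test.ts", "PASS db.test.ts"],
    async (event) => {
      seen.push(event.content);
    },
  );
  console.log(`${seen.length} events, ${result.summary}`); // 2 events, 2 passed
}
demo();
```

The design choice worth noticing: progress and summary are separate channels, so a client can show a live log without having to parse the final result out of it.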
The Pattern That Makes It Click
The mistake most teams make is building MCP servers as thin wrappers around existing APIs. The right pattern is building capability servers — tools that understand your domain and surface high-level operations, not low-level API calls.
Instead of:
- create_github_issue(title, body, labels)
- update_github_issue(id, body)
- close_github_issue(id)
Build:
- manage_issue(intent: "create" | "update" | "close", context: string)
Let the tool figure out the right API calls. The agent's job is intent, not API choreography.
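What that looks like inside the tool: a single handler that maps intent to a plan of low-level calls. In this sketch the call names are stand-ins, not real GitHub API bindings; the routing logic is the point.

```typescript
// Capability-server sketch: the tool, not the agent, decides which
// low-level calls to make. Call names are illustrative stand-ins.
type Intent = "create" | "update" | "close";

function planIssueCalls(intent: Intent, context: string): string[] {
  switch (intent) {
    case "create":
      // derive title, body, and default labels from context
      return ["create_github_issue"];
    case "update":
      // the agent supplies context, not an issue id: locate it first
      return ["search_issues", "update_github_issue"];
    case "close":
      // close with a resolution comment as one high-level operation
      return ["search_issues", "add_comment", "close_github_issue"];
  }
}

console.log(planIssueCalls("close", "flaky login test, fixed in #412"));
// → ["search_issues", "add_comment", "close_github_issue"]
```

Three agent-facing operations become one, and the choreography lives where it can be tested and versioned.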
What 97M Installs Actually Means
At 97M installs, MCP has crossed the network effect threshold. Every MCP server someone else publishes is now a tool your agents can use. Every AI client your users already have supports your tools natively.
The parallel is npm. Not every package is good, but the ecosystem compounds. Start thinking about MCP servers the way you think about npm packages — small, composable, versioned, publishable.
The MCP registry is at mcp.run. The v2.1 spec is at modelcontextprotocol.io. Both are worth a read before you build your next integration.