Remember when every AI company was shipping its own proprietary tool-calling format? OpenAI had function calling. Anthropic had tool use. Google had function declarations. Every integration you built was locked to one provider, and switching meant rewriting everything.
That era is over. The Model Context Protocol crossed 97 million monthly SDK downloads in March 2026. For context, it launched in November 2024 with roughly 2 million downloads. That's a growth curve that makes Kubernetes look sluggish -- and Kubernetes took nearly four years to reach comparable deployment density.
Why MCP Won
The simplest explanation is usually the right one: MCP won because it solved a real problem with a clean abstraction.
Before MCP, connecting an AI agent to a database meant writing custom code for each model provider. Want Claude to query your Postgres? Build an Anthropic tool. Want GPT-4 to query the same Postgres? Build a different OpenAI function. Want Gemini to do it? Yet another integration. Multiply that across every tool, database, API, and SaaS product your team uses.
MCP flipped this around. You build one server that exposes your tool's capabilities, and any MCP-compatible client can use it. The analogy everyone reaches for is "USB for AI agents" -- one plug, every device. It's a bit reductive, but it captures the core value proposition.
// An MCP server is just a description of capabilities
// plus handlers for when an agent invokes them
// Build once, works with Claude, GPT, Gemini, Llama, anything
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
  name: "inventory-server",
  version: "1.0.0",
});

// Define a tool that any AI agent can discover and invoke
server.tool(
  "check_stock",
  "Check current inventory levels for a product",
  {
    sku: z.string().describe("Product SKU to check"),
    warehouse: z.string().optional().describe("Specific warehouse ID"),
  },
  async ({ sku, warehouse }) => {
    // "db" is your database client (e.g. a pg Pool), initialized elsewhere
    const stock = await db.query(
      "SELECT quantity, location FROM inventory WHERE sku = $1",
      [sku]
    );
    return {
      content: [{
        type: "text",
        text: JSON.stringify(stock.rows, null, 2),
      }],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
That server works with Claude Code, ChatGPT, Gemini, Copilot, Cursor, Windsurf, and every other MCP-compatible client. You wrote it once. It's done.
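That portability comes from a shared wire format: MCP messages are JSON-RPC 2.0 over the transport. As a rough sketch (the `ToolCallRequest` type and `makeToolCall` helper below are illustrative, not part of the SDK), a client invoking the `check_stock` tool sends a message shaped like this:

```typescript
// Illustrative JSON-RPC 2.0 shape for an MCP tool invocation.
// Any client that emits this format can drive any MCP server.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const req = makeToolCall(1, "check_stock", { sku: "SKU-123" });
// Over the stdio transport this is serialized as a single JSON message
console.log(JSON.stringify(req));
```

Clients also call `tools/list` first to discover what a server offers, which is how agents find `check_stock` without any provider-specific glue.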
97 Million Means the Network Effect Kicked In
Numbers like 97 million monthly SDK downloads don't happen because the spec is elegant. They happen because of network effects. Every MCP server that gets published makes the ecosystem more valuable for client developers. Every client that adds MCP support makes the ecosystem more valuable for server developers. At some point, the flywheel becomes self-sustaining.
We're way past that point. The MCP server ecosystem now includes over 5,800 community and enterprise servers covering databases, CRMs, cloud providers, productivity tools, dev tools, e-commerce platforms, analytics services -- basically every category of software that an AI agent might need to interact with.
And the backing is unanimous. OpenAI, Google, Microsoft, AWS, and Cloudflare all support MCP through the Linux Foundation's Agentic AI Foundation. When every major player endorses the same standard, the protocol war isn't a war anymore. It's a consensus.
What This Means Practically
If you're building AI-powered features today and you're not using MCP, you're creating technical debt. Here's why.
Without MCP, your AI integrations are tightly coupled to specific model providers. If OpenAI changes their function-calling format (which they've done multiple times), you rewrite. If you want to switch from Claude to Gemini for cost reasons, you rewrite. If you want to run a local model for development, you rewrite.
With MCP, your tools are defined independently of any model. Switching providers is a client-side configuration change, not a tool-side code change.
# MCP client configuration -- switch providers without touching tool code
# claude_desktop_config.json, cursor settings, or any MCP-compatible client
{
  "mcpServers": {
    "inventory": {
      "command": "node",
      "args": ["./mcp-servers/inventory/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://localhost:5432/inventory"
      }
    },
    "analytics": {
      "command": "npx",
      "args": ["-y", "@your-org/analytics-mcp-server"],
      "env": {
        "API_KEY": "${ANALYTICS_API_KEY}"
      }
    },
    "deployment": {
      "command": "python",
      "args": ["-m", "deployment_mcp_server"],
      "env": {
        "CLOUD_PROVIDER": "aws",
        "REGION": "us-east-1"
      }
    }
  }
}
Three servers. Three completely different tools. Any MCP client can discover and use all of them. The agent decides which tool to call based on the user's intent. The tool implementations know nothing about which model is driving the requests.
What Developers Should Build With MCP Now
The obvious targets are already covered -- Postgres, Slack, GitHub, Jira, AWS, etc. But the 5,800 number, while large, still leaves enormous gaps. Here's where the opportunity is.
Internal tools. Every company has bespoke internal systems that AI agents can't reach. Your deployment pipeline, your feature flag system, your internal metrics dashboard, your customer support ticketing system. Wrapping these in MCP servers turns them into capabilities that any AI coding agent or assistant can use. This is where MCP delivers the most immediate ROI -- not in building the next Postgres MCP server, but in exposing your proprietary systems to AI.
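As a sketch of what wrapping an internal system looks like, here is a hypothetical handler for a `get_feature_flag` tool backed by an internal flag store (the `FlagStore` shape and `getFlagHandler` are invented for illustration). You would register it with `server.tool(...)` exactly like the inventory example above:

```typescript
// Hypothetical internal flag store -- in a real server this would be
// a call to your feature flag service, not an in-memory record
type FlagStore = Record<string, { enabled: boolean; rollout: number }>;

// Handler logic for a "get_feature_flag" MCP tool. It returns the
// standard MCP tool result: a content array with a text payload.
function getFlagHandler(store: FlagStore, name: string) {
  const flag = store[name];
  const text = flag
    ? JSON.stringify({ name, ...flag })
    : JSON.stringify({ error: `unknown flag: ${name}` });
  return { content: [{ type: "text", text }] };
}
```

The point isn't the twelve lines of logic; it's that once this handler is registered, every MCP-compatible agent in your company can check feature flags without anyone writing provider-specific integrations.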
Domain-specific tools. Medical record lookups, financial compliance checks, manufacturing QC data, legal document retrieval. These verticals have barely been touched by the MCP ecosystem. If you have domain expertise and can build a well-designed MCP server for your industry, the demand is there.
Composite servers. Instead of one server per tool, build servers that coordinate across multiple data sources. An "order management" server that combines inventory, shipping, and payment APIs into a coherent set of tools is more useful to an AI agent than three separate servers that each expose raw API calls.
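A minimal sketch of that composite idea, with the three backends stubbed as injected fetchers (all names and payload shapes here are hypothetical):

```typescript
// A fetcher resolves one backend's view of an order. In a real
// composite server these would call your inventory, shipping, and
// payment APIs.
type Fetcher = (orderId: string) => Promise<Record<string, unknown>>;

// Handler for a single "order_status" tool that fans out to three
// backends concurrently and merges the results into one payload,
// so the agent gets a coherent answer from one tool call.
async function orderStatusHandler(
  orderId: string,
  fetchers: { inventory: Fetcher; shipping: Fetcher; payment: Fetcher }
) {
  const [inventory, shipping, payment] = await Promise.all([
    fetchers.inventory(orderId),
    fetchers.shipping(orderId),
    fetchers.payment(orderId),
  ]);
  return {
    content: [{
      type: "text",
      text: JSON.stringify({ orderId, inventory, shipping, payment }),
    }],
  };
}
```

One tool call instead of three means less orchestration for the agent to get wrong, which is exactly the kind of design decision that makes a composite server more valuable than the raw APIs it wraps.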
The Kubernetes Comparison Is Instructive
When Kubernetes hit critical mass, the companies that benefited most weren't the ones who built Kubernetes. They were the ones who built on top of it -- Datadog, Istio, Helm, ArgoCD. The infrastructure layer became a given, and the value moved up the stack.
MCP is following the same trajectory. The protocol layer is settled. The value is now in what you build on top of it. The developers and companies that move quickly to expose their unique capabilities through MCP servers will have a structural advantage as AI agents become the primary interface for software interaction.
97 million monthly downloads isn't the finish line. It's the starting gun for everything that gets built on top of a universal agent protocol. The protocol war is over. The platform war just started.