Masaaki Hirano

MCP Wins, But the War Just Started: Why Standardization Won't Save Us from Lock-in

December 9, 2025 marked a historic turning point.

Anthropic announced that it had donated its Model Context Protocol (MCP) to the Linux Foundation.
At the same time, a new stewardship organization, the Agentic AI Foundation (AAIF), was established under the Linux Foundation umbrella.

This is not merely the transfer of a single technology.
The founding members of the new foundation include not only Anthropic but also OpenAI, Block, Google, Microsoft, and AWS: tech giants that are, on paper, direct competitors.

The chaotic AI industry, in which each company had been locking users in with proprietary standards, has finally moved in unison toward the standardization of Agentic AI.

The Current State of MCP by the Numbers: Why Is It the De Facto Standard?

MCP's momentum has become unstoppable.
Its track record in the year since its introduction makes it clear why so many companies decided to join.

  • 10,000+: The number of active public MCP servers.
  • 97M+: Monthly downloads of the official Python and TypeScript SDKs.
  • Full Adoption by Major Platforms: ChatGPT, Cursor, Gemini, Microsoft Copilot, VS Code—virtually all tools used by developers now support MCP.
  • Immediate Infrastructure Support: AWS, Cloudflare, Google Cloud, and Microsoft Azure officially provide deployment environments for MCP servers.

It is safe to say that MCP is no longer "Anthropic's convenient standard"; it has thoroughly permeated the industry as the plumbing that AI agents run on.
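
To get a feel for how thin this plumbing layer is, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper (`pip install mcp`). The server name, tool, and stubbed logic are all illustrative:

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The tool below is a stand-in; a real server would wrap an actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a (stubbed) weather forecast for the given city."""
    return f"Forecast for {city}: sunny, 22°C"

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-capable client
    # (Claude, Cursor, VS Code, ...) can launch and call it.
    mcp.run()
```

Any MCP-capable client can discover `get_forecast` and call it; the server neither knows nor cares which model sits on the other end.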

Why the Linux Foundation?

The greatest significance of this donation lies in establishing vendor neutrality.

Until now, however open it was, MCP was still "Anthropic's standard." For other companies, betting fully on a competitor's technology, whose specs could change at any time, was a risk.
However, by donating it to the Linux Foundation, MCP has become a "public good" rather than the property of a specific company.

Anthropic itself explicitly stated the reason for the donation: "to ensure that MCP remains open source, community-led, and vendor-neutral."
Future development and maintenance will be determined not by the intentions of a specific company but by community input and a transparent governance model. This is why OpenAI and Google were able to participate with confidence.

AAIF: The Movement for "Agent Standardization" Beyond MCP

In addition to MCP, other key projects were simultaneously donated to the newly established AAIF.

  1. AGENTS.md (donated by OpenAI): An open format for giving agents project-level instructions. The "instructions for agents," previously scattered across company-specific conventions, can now be unified.
  2. goose (donated by Block): An open-source framework for building robust AI agents.

With these projects managed under the same foundation, interoperability between tools is expected to improve dramatically. A future where "an agent built with goose reads the specs defined in AGENTS.md and executes tools via MCP" is becoming a reality, even across a mix of vendors.
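
The client side is just as vendor-agnostic. Here is a rough sketch, again with the official Python SDK; the server command and tool name are hypothetical, matching the sketch above:

```python
# Sketch of a generic MCP client; the server command is hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["weather_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover whatever tools this server happens to expose...
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # ...and call one, regardless of which vendor built either side.
            result = await session.call_tool(
                "get_forecast", arguments={"city": "Tokyo"}
            )
            print(result.content)

asyncio.run(main())
```

The same few lines work whether the host agent is goose, Claude, or something homegrown; that is the whole point of a common protocol.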

Technical Updates: Evolving Toward Practical Use

Beyond standardization, technical progress has not stopped. The spec update released on November 25 added the following features:

  • Asynchronous operations: Agents no longer have to block while waiting for long-running tasks.
  • Statelessness: Reduces the burden of session management on the server side, improving scalability (see the sketch after this list).
  • Server identity: Stronger verification that a connection is legitimate.
  • Official extensions: A mechanism for safely extending the standard feature set.
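
On the statelessness point, the official Python SDK already exposes a knob for this. A minimal sketch, assuming the SDK's `stateless_http` flag and the Streamable HTTP transport (server name and tool are illustrative):

```python
# Sketch of a stateless MCP server over Streamable HTTP, assuming the
# Python SDK's stateless_http flag. With no per-session state to pin
# requests to, any replica behind a load balancer can answer any call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("stateless-demo", stateless_http=True)

@mcp.tool()
def ping() -> str:
    """Trivial health-check tool."""
    return "pong"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

That one flag is what turns an MCP server from a pet process into something you can scale horizontally like any other web service.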

Also, the Claude directory already lists over 75 connectors, and API features like "Tool Search" and "Programmatic Tool Calling" are maturing. A world where "you just plug it in and it works" is right before our eyes.

Conclusion: Will Anything Change?

We are all too familiar with the current state of AI development—a miserable loop where standards proliferate and code becomes obsolete in six months.
Seeing this news, did you think, "This is the end of hell"?

Let me give you my conclusion up front: nothing will change. At least, not for now.

MCP has won. The standardization war is over.
However, this merely means the "shape of the power outlet" has been decided. Just because the outlet is unified doesn't mean the power supply becomes stable or the appliances become high-performance.

As Ilya Sutskever put it in a recent podcast appearance, "We're moving from the age of scaling to the age of research." We are still squarely in the age of research.
Models are still "jagged": geniuses at some tasks, yet prone to incredibly stupid mistakes at others. This fundamental problem will not be solved just because the protocol has been unified.

The attitude of the vendors won't change either.
They will standardize the troublesome "connection" layer, then pour all their resources into locking you in with the "intelligence" and "ecosystems" beyond it.

The evidence is clear.
Almost simultaneously with the MCP donation, Anthropic has been pushing Agent Skills hard. Skills package procedural knowledge ("how the agent should behave"), but their execution environment depends deeply on the Claude ecosystem.
On the OpenAI side, Codex has begun experimenting with similar "skills" support.

Even more serious is the "siloization" of troublesome operations.
Data, runtimes, long-term memory: managing these is a massive cost for humans. That is why vendors whisper sweetly, "We'll handle all the messy stuff on the platform side," and actively push for siloization.
Where does the data the agent generates live? The execution logs? The state?
Before you know it, all of it is imprisoned inside a specific vendor's black box, and we are not even handed the key. That future is being built, steadily.

In other words, even if "anything can connect via MCP," the brain (Skills) that uses those connected tools wisely will be written in each company's proprietary format and locked in.
The structure where "the plumbing is common, but the water (intelligence) flowing through it only comes from our faucet" is, if anything, being reinforced.

But we can't blame them.
Fundamentally, the "Agent-first" paradigm shift itself was a nuclear bomb that filled in the vendors' moats.

In the "Human-first" world, the cost of learning a new tool (learning cost) was the biggest barrier. We couldn't switch OSs or leave Office because relearning was a hassle for humans.
But in the "Agent-first" world, it's the agent, not the human, that uses the tool. The agent adapts to the new environment without complaint, hits the API, and completes the task.

With the human removed from the loop, switching costs have effectively dropped to zero.
If "today's strongest model" changes tomorrow, you can move your entire workflow to the new intelligence by rewriting a single line in a config file. Now that the connection layer is standardized on MCP, that fluidity is at its peak.

That is precisely why vendors have no choice but to be desperate. If they don't forcibly tie users down with value-adds like "Skills" and "Ecosystems," they will become mere plumbing providers.

Their desperation is the flip side of their fear of becoming "replaceable commodities."

So, the reality in front of us—the pain of prompt engineering, model hallucinations, the battle against non-deterministic behavior—will still be there tomorrow.
