In November 2024, Anthropic open-sourced a protocol called MCP. OpenAI officially adopted it in March 2025, and within fifteen months of launch the ecosystem had over 6,400 registered servers. Google DeepMind CEO Demis Hassabis called it "rapidly becoming an open standard for the AI agentic era." In December 2025, Anthropic donated it to the Agentic AI Foundation, a Linux Foundation directed fund co-founded by Anthropic, Block, and OpenAI.
I want to tell you why it matters — not from a press release, but from the perspective of someone who uses MCP in production every day to run our content operations.
At Innovatrix, MCP connects our AI systems to Directus CMS, ClickUp, and Gmail. Right now, the AI at the core of our automation stack can publish blog posts directly to our CMS, update project tasks, and manage email workflows, all through standardised MCP connections. Before MCP, each of those integrations required custom API code. With MCP, they're plug-and-play. That's not a marketing claim. That's what my workday actually looks like.
Here's what MCP is, how it works, and why the "HTTP of the agentic web" analogy is both correct and incomplete.
Before MCP: Every Integration Was a Custom Build
If you've built AI applications that connect to external tools — a calendar, a CRM, a database, a code editor — you've felt this pain directly.
Every integration was bespoke. Want your AI to read from Notion and write to Slack? You write a custom connector for Notion, a custom connector for Slack, and wire them together with glue code specific to your application. Switch from GPT-4 to Claude? Rewrite the tool-calling layer. Add a new data source? Another custom integration.
This is how AI tool integrations worked from 2022 through most of 2024. There were vendor-specific tool-calling standards — OpenAI's function calling, Anthropic's tool use — but they weren't interoperable. An integration built for one AI model didn't work with another.
The result was what Anthropic called "the M×N problem": M AI models × N tools = M×N custom integrations. At scale, this was unsustainable. MCP collapses it to M+N.
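The collapse from M×N to M+N is easiest to see with concrete numbers. A quick sketch (the counts are illustrative, not real survey data):

```python
# The M×N problem: every model-tool pair needs its own connector.
models = 5   # e.g. Claude, GPT-4o, Gemini, plus two open-weight models
tools = 20   # CRMs, databases, calendars, ticketing systems, ...

bespoke_integrations = models * tools  # one custom build per pair
mcp_integrations = models + tools      # one MCP client per model, one server per tool

print(bespoke_integrations)  # 100
print(mcp_integrations)      # 25
```

Every new tool added to a bespoke setup costs M more integrations; under MCP it costs exactly one server.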
What MCP Actually Is
MCP (Model Context Protocol) is an open standard for connecting AI systems to external tools, data sources, and services. It defines a client-server architecture where:
- MCP Hosts are AI applications that manage connections (Claude Desktop, Cursor, your custom agent)
- MCP Clients are components that maintain connections to MCP servers on behalf of the host
- MCP Servers are programs that expose specific tools, resources, and capabilities to AI clients
The key architectural insight: a single MCP server can work with any MCP-compatible AI application. A single AI can connect to any number of MCP servers without bespoke integrations.
MCP standardises three types of primitives that servers can expose:
Tools — Functions the AI can call to take actions. Examples: create_task, send_email, query_database. Tools represent executable operations and are the primary way AI agents interact with the world.
Resources — Data the AI can read for context. File contents, database records, API responses. Resources are read-oriented by design.
Prompts — Reusable prompt templates that the server provides to guide specific interactions for its domain.
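To make the Tools primitive concrete, here is roughly what a tool definition looks like when a server advertises it: a name, a natural-language description the model reads, and a JSON Schema for the parameters. The field names follow the MCP spec's tool listing; the `create_task` tool itself is a made-up example:

```python
import json

# Sketch of an MCP tool definition as a server would advertise it.
# The model relies on "description" to decide when to call the tool,
# and on "inputSchema" (JSON Schema) to construct valid arguments.
create_task_tool = {
    "name": "create_task",
    "description": "Create a task in the project tracker. "
                   "Returns the new task's ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Short task title"},
            "due_date": {"type": "string", "description": "ISO 8601 due date"},
        },
        "required": ["title"],
    },
}

print(json.dumps(create_task_tool, indent=2))
```

Note how much of the burden sits in the description text: as discussed below, a vague description here is the difference between a reliable tool and a confusing one.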
The protocol also defines a capability handshake: when an AI connects to an MCP server, they negotiate what capabilities each side supports. This is how an AI agent automatically discovers what a new server can do without hardcoded knowledge.
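A simplified sketch of that handshake, which runs over JSON-RPC 2.0. The message shape follows the MCP spec's initialize exchange; the capability fields and server names shown here are illustrative, not exhaustive:

```python
# MCP capability handshake, sketched as the JSON-RPC 2.0 messages
# exchanged when a client first connects to a server.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",
        "capabilities": {"sampling": {}},  # what the client supports
        "clientInfo": {"name": "my-agent", "version": "0.1.0"},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-11-25",
        # The server declares which primitives it exposes.
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "1.2.0"},
    },
}

# After this exchange, the client can call methods like tools/list
# to discover the server's tools without any hardcoded knowledge.
```

The negotiated `protocolVersion` is also why pinning server versions matters in production: both sides must agree on a spec revision they understand.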
The HTTP Analogy — Where It's Right and Where It Falls Short
The "HTTP of the agentic web" comparison is directionally correct. HTTP standardised how clients and servers communicate on the web — any browser could talk to any web server. Before HTTP, networked information systems spoke a patchwork of incompatible protocols and clients. After HTTP, the web became interoperable.
MCP is attempting the same thing for AI-tool communication. Before MCP, every AI application had proprietary tool integration formats. After MCP (if adoption continues at current trajectory), any AI agent should be able to connect to any tool that has an MCP server — regardless of which AI company built the agent.
The OpenAPI comparison is actually more technically accurate than the HTTP one. HTTP is a transport protocol. MCP is more like a description and communication format for AI-to-tool interactions — closer to what OpenAPI does for HTTP APIs, but designed specifically for LLM agents rather than for human developers reading documentation.
The most honest analogy is USB-C: same port, same physical standard, works with anything. Before USB-C, your laptop charger didn't work with your phone, which didn't work with your monitor. The value isn't any individual component — it's universal connectivity. Your AI model is the device, MCP is the port, the tools are the accessories.
One important distinction that most articles miss: MCP is not an agent framework. It's a standardised integration layer. MCP does not decide when a tool is called or for what purpose — the LLM does that. MCP simply standardises the connection. It complements frameworks like LangChain, LangGraph, and crewAI; it doesn't replace them.
Real Production Experience: What Simplified and What Didn't
I want to be direct about this, because most MCP articles read like documentation rewrites.
What genuinely simplified:
Setting up a new integration is dramatically faster. Adding our Gmail MCP server took approximately 20 minutes. The equivalent custom API integration would have taken the better part of a day — OAuth flow, rate limit handling, error management, endpoint mapping. With MCP, the server handles all of that.
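For context on what that 20 minutes actually involves: for a desktop MCP host, the whole integration is often a single config entry. A sketch in the style of Claude Desktop's `claude_desktop_config.json` — the package name and environment variable here are hypothetical placeholders, not a real Gmail server package:

```json
{
  "mcpServers": {
    "gmail": {
      "command": "npx",
      "args": ["-y", "some-gmail-mcp-server@1.4.2"],
      "env": { "GMAIL_OAUTH_TOKEN_PATH": "/secure/path/token.json" }
    }
  }
}
```

The server package handles OAuth, rate limits, and endpoint mapping internally; note the pinned `@1.4.2` version, which matters for the spec-maturity point below.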
Context switching between tools in multi-step workflows is clean. An AI agent that reads a ClickUp task, drafts content, publishes it to Directus, then marks the task complete does all of that through the same MCP client interface. No data marshalling between different API clients.
The ecosystem velocity is real. Over 6,400 MCP servers registered as of February 2026. If a tool matters to someone building AI systems, there's likely an MCP server for it.
What surprised us and what we had to work around:
Early MCP had session identifiers in URLs — a well-known security anti-pattern. This has been addressed in spec updates, but many older MCP server implementations you find in the wild still haven't updated. Always check version and security posture before deploying any third-party MCP server. See our LLM security guide → for the full picture on MCP security.
Tool description quality varies wildly. An MCP server is only as useful as the quality of its tool descriptions. Vague parameter names and unclear return values confuse the AI model and produce unreliable results. We've had to fork and improve descriptions on several MCP servers we use.
Giving an AI too many tools at once hurts performance. Researchers found this; we confirmed it independently. An agent with 100+ available tools burns context and attention on tool selection rather than the actual task. We now compose MCP servers carefully, limiting active tool context to what's needed for the current workflow.
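That workaround is simple in code terms: filter the advertised tool list before it reaches the model. A minimal sketch — the workflow-to-servers mapping is our own convention layered on top of MCP, not part of the protocol, and the tool names are illustrative:

```python
# Scope the active tool context per workflow instead of exposing
# every connected server's tools to the model at once.
ALL_TOOLS = {
    "clickup": ["get_task", "update_task", "add_comment"],
    "directus": ["create_item", "update_item", "publish_item"],
    "gmail": ["send_email", "search_threads"],
}

WORKFLOWS = {
    # The publish flow needs ClickUp and Directus; Gmail stays out.
    "publish_post": ["clickup", "directus"],
    "inbox_triage": ["gmail", "clickup"],
}

def tools_for(workflow: str) -> list[str]:
    """Return only the tools relevant to the current workflow."""
    servers = WORKFLOWS.get(workflow, [])
    return [tool for server in servers for tool in ALL_TOOLS[server]]

print(tools_for("publish_post"))
# Gmail's tools never enter the model's context during a publish run.
```

The same idea applies at the MCP-client level: connect only the servers the current workflow needs, rather than everything in your registry.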
The spec is still maturing. MCP is at spec version 2025-11-25 as of this writing. The ecosystem moves fast, which means breaking changes happen. Pin your MCP server versions in production.
Which Tools Have MCP Servers (The Ones That Matter)
In 15 months, MCP servers have come to cover most of the development and business tool stack:
Development: GitHub, GitLab, Linear, Jira (via Atlassian), VS Code extensions, Cursor
Content & CMS: Directus, WordPress, Notion, Confluence
Productivity: Google Workspace (Gmail, Calendar, Drive), Slack, ClickUp, Asana, Monday.com
Data: PostgreSQL, MySQL, MongoDB, Supabase, BigQuery
Commerce: Shopify, Stripe, WooCommerce
Infrastructure: AWS, Cloudflare, Vercel
For businesses building agentic AI workflows — exactly what we build for clients across India and the Gulf — MCP is now foundational infrastructure.
Why This Changes the Economics of AI Integration
For D2C brands and enterprises in India and the GCC, MCP matters because it changes what AI integration costs.
Before MCP, adding AI to your existing tech stack meant custom API integrations for every system the AI would touch. Expensive engineering time, and it breaks every time the underlying API changes.
With MCP, the integration layer is standardised. If your Shopify store, CRM, support ticketing system, and ERP all have MCP servers (increasingly, they do), a single AI agent can orchestrate across all of them without bespoke glue code.
This is what multi-agent systems actually look like in practice: specialised agents for different tasks, all communicating through standardised tool interfaces. The practical implication: AI automation is getting significantly cheaper to build and maintain. The bespoke integration tax that made many AI projects cost-prohibitive is declining fast.
For a deeper look at how AI web applications connect all of this together, see our web development services →.
The Governance Story: Why This Isn't Vendor Lock-In
Anthropic created MCP, but Anthropic no longer controls it. The December 2025 donation to the Agentic AI Foundation — under Linux Foundation, co-founded with OpenAI and Block — means MCP governance is now distributed and vendor-neutral.
For enterprises evaluating whether to build on MCP: the risk of one company changing the standard for competitive advantage is now structurally limited. This is a genuine open infrastructure play, not a Trojan horse for Anthropic's ecosystem.
What Comes Next
MCP's trajectory over the next 12 months will be defined by two things: enterprise security hardening (the current limitations around tool poisoning and permissions need production-grade solutions) and server composition patterns (orchestrating many MCP servers into coherent agent workflows at scale).
The agentic web isn't coming. It's here. The question for businesses and developers is whether you're building on standards that compound or on proprietary integrations that fragment.
If you're evaluating AI automation for your business and want to understand how MCP fits into a practical production architecture — talk to us →.
Frequently Asked Questions
What is MCP (Model Context Protocol)?
MCP is an open standard introduced by Anthropic in November 2024 for connecting AI systems to external tools, databases, and services. It defines a client-server architecture where any MCP-compatible AI can connect to any MCP server without custom integration code.
Who created MCP and who controls it now?
MCP was created by Anthropic and open-sourced in November 2024. In December 2025, Anthropic donated it to the Agentic AI Foundation — a Linux Foundation directed fund co-founded by Anthropic, Block, and OpenAI. It is now vendor-neutral open infrastructure, similar to how HTTP is governed by the IETF.
Is MCP the same as OpenAI's function calling or tool use?
No. OpenAI function calling and Anthropic tool use are model-specific APIs enabling a single AI model to use tools. MCP is a protocol standardising communication between any AI model and any tool server. It's one level of abstraction above individual vendor tool APIs.
How many MCP servers exist?
As of February 2026, over 6,400 MCP servers are registered in the official MCP registry. The ecosystem grew from zero to that figure in under 15 months, faster than most comparable developer ecosystems have expanded.
Is MCP secure to use in production?
MCP has had security issues, including early session token exposure in URLs (since patched). Use only trusted MCP servers, pin server versions, review tool descriptions for anomalies, and apply least-privilege permissions to all tool grants. Read our full LLM security and prompt injection guide →.
Can any AI model use MCP?
Any AI application that implements an MCP client can use any MCP server. Claude, GPT-4o (as of March 2025), and Gemini (mid-2025) all support MCP. The adoption by OpenAI and Google validated MCP as the de facto cross-vendor standard.
What is the difference between MCP and REST APIs?
REST APIs are designed for developer consumption — humans read documentation and write code to call endpoints. MCP servers are designed for AI model consumption — the AI reads natural language tool descriptions and decides which tools to call. MCP typically wraps underlying REST APIs in an AI-readable interface.
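A hypothetical sketch of that difference: the same underlying REST endpoint, first called directly by developer-written code, then wrapped as an MCP-style tool whose description is written for the model to read. The endpoint URL and field names here are made up for illustration:

```python
import json
import urllib.request

# 1) REST consumption: a developer reads the API docs and writes this.
def create_task_via_rest(title: str) -> dict:
    req = urllib.request.Request(
        "https://api.example.com/v2/tasks",  # hypothetical endpoint
        data=json.dumps({"title": title}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 2) MCP consumption: a server wraps the same endpoint and describes it
#    in natural language; the AI model decides when and how to call it.
TOOL = {
    "name": "create_task",
    "description": "Create a task in the tracker. Use this when the user "
                   "asks to add, log, or schedule a piece of work.",
    "inputSchema": {
        "type": "object",
        "properties": {"title": {"type": "string"}},
        "required": ["title"],
    },
}
```

The REST function encodes a human's understanding of the API; the tool definition moves that understanding into text the model consumes at runtime.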
How is Innovatrix using MCP right now?
We use MCP in production for our content operations: connecting AI to Directus CMS, ClickUp, and Gmail for autonomous content publishing, task management, and workflow orchestration. It's the backbone of our internal AI automation stack and allows us to deliver faster, more consistent output at scale.
Rishabh Sethia, Founder & CEO of Innovatrix Infotech. Former Senior Software Engineer and Head of Engineering. DPIIT Recognised Startup. Shopify Partner, AWS Partner, Google Partner.
Originally published at Innovatrix Infotech
Top comments (1)
The HTTP analogy is spot on — MCP is essentially doing for AI tool access what HTTP did for web content access. The standardization aspect is what makes it so powerful. Before MCP, every AI tool integration was a custom one-off. Now you build an MCP server once and every compatible client can use it.

I've been building MCP servers for client automation workflows and the developer experience has improved dramatically with v2.1. The Server Cards feature (.well-known endpoint for metadata) is especially useful — it means you can build registries that auto-discover what capabilities are available without manually cataloging everything.

The biggest gap I still see is around authentication and multi-tenant access control. Most MCP servers right now are either fully open or rely on API keys, but enterprise deployments need finer-grained permission models. Curious to see how the spec evolves there.