If you've seen "MCP" appear three times this week — in a job description, a Slack thread, and a GitHub repo — and nodded along without being entirely sure what it is, this article is for you.
Model Context Protocol is not complicated. It solves a specific problem, it does it cleanly, and once you understand what that problem was, the solution makes immediate sense. Here's everything you need to know.
The Problem MCP Solves
AI models are good at reasoning, but by themselves they are entirely isolated. A language model trained on text knows a lot of things. It doesn't know what's in your database, what's in your Slack channels, or what tasks are currently open in Jira. It can't send an email, query your CRM, or trigger a deployment.
For AI agents to do useful work — not just answer questions but actually act — they need to connect to external tools and data sources. Before MCP, every one of those connections was custom-built. A team building an AI assistant for their engineering workflow would write a custom integration for GitHub, a different one for Jira, another one for their internal deployment system. None of those integrations transferred to another team. None of them were reusable across different LLMs. If they wanted to switch from OpenAI to Claude, they rewrote the integrations. If another team wanted similar functionality, they built it from scratch.
BCG put a number on this problem: without a standard protocol, integration complexity grows quadratically as AI agents multiply across an organisation. Every new agent needs its own connection to every tool it uses, and the cost compounds quickly.
MCP solves this by standardising the connection. Instead of each team building custom integrations, tools expose themselves as MCP servers using one standard interface. Any MCP-compatible agent can connect to any MCP server without custom code. The integration is built once and works everywhere.
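The arithmetic behind that quadratic growth is easy to sketch. With custom integrations, every agent-tool pair is its own build; with a shared protocol, each agent and each tool implements the standard once. The numbers below are illustrative, not from BCG's analysis:

```python
def custom_integrations(agents: int, tools: int) -> int:
    """Point-to-point wiring: every agent needs its own build for every tool."""
    return agents * tools

def mcp_integrations(agents: int, tools: int) -> int:
    """Shared protocol: each agent and each tool implements MCP once."""
    return agents + tools

# A hypothetical organisation with 10 agents and 40 tools:
print(custom_integrations(10, 40))  # 400 bespoke integrations to build and maintain
print(mcp_integrations(10, 40))     # 50 protocol implementations, each reusable
```

Doubling both counts quadruples the custom-integration workload but only doubles the protocol workload, which is the quadratic-versus-linear difference the BCG figure points at.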
What MCP Actually Is
Model Context Protocol is an open standard — originally released by Anthropic in November 2024, donated to the Linux Foundation in December 2025 as part of the newly formed Agentic AI Foundation — that defines how AI agents discover and call external tools.
At its core, MCP is a communication protocol. It specifies:
How tools are described. An MCP server exposes a list of tools with structured definitions: name, description, input schema, output schema. The LLM reads these definitions to understand what tools are available and how to use them.
How tools are called. When an agent wants to use a tool, it sends a structured request to the MCP server. The server executes the tool and returns a structured response. Everything flows over a standard message format based on JSON-RPC 2.0.
How discovery works. Agents query an MCP server to find out what tools it offers. This means agents can adapt to the tools available to them rather than requiring hard-coded tool definitions.
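Concretely, the messages are ordinary JSON-RPC 2.0. Here is a sketch of the two core exchanges, `tools/list` for discovery and `tools/call` for invocation. The tool name and arguments are invented for illustration; only the envelope and method names come from the protocol:

```python
import json

# Discovery: the agent asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: the agent calls an advertised tool by name, with arguments
# matching the tool's input schema. "query_db" and its arguments are
# hypothetical examples, not part of the spec.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_db",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

print(json.dumps(call_request, indent=2))
```

The server's reply carries the same `id` and either a `result` or an `error` object, exactly as JSON-RPC 2.0 prescribes.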
The analogy that makes the most sense: MCP is to AI agents what USB-C is to devices. Before USB-C, every device used a different connector. Charging cables, data cables, display cables — all different, all incompatible. USB-C standardised the connector. You plug in and it works, regardless of which device or which cable.
MCP standardised the connector between AI agents and tools. An agent that speaks MCP can connect to any tool that speaks MCP, regardless of which LLM powers the agent or which system the tool connects to.
How It Works in Three Steps
Step 1: A tool owner creates an MCP server. This is a lightweight service that exposes one or more tools — a database query function, a Slack messaging capability, a code execution environment — using the MCP interface. The server describes what tools it offers and how to call them.
Step 2: An agent discovers available tools. When an agent initialises, it queries the MCP server and receives a structured list of available tools with their schemas. The agent now knows what it can do.
Step 3: The agent calls a tool. When the LLM decides it needs to use a tool — based on the user's request and the tools it knows are available — it sends a structured tool call to the MCP server. The server executes the tool and returns the result. The LLM incorporates the result into its reasoning and continues.
That's the complete loop. The LLM doesn't need to know the implementation details of the tool. The tool doesn't need to know anything about the LLM. The protocol handles the conversation between them.
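The three steps above can be sketched end to end in a few lines. Everything here is a hypothetical stand-in: the tool registry, the stub handler, and the in-process `handle` function illustrate the shape of the exchange, while a real deployment uses an MCP SDK and a transport such as stdio or HTTP:

```python
# Server side: a registry of tools with schema-style descriptions (Step 1).
TOOLS = {
    "get_open_tickets": {
        "description": "List open tickets for a project (stub data).",
        "inputSchema": {
            "type": "object",
            "properties": {"project": {"type": "string"}},
            "required": ["project"],
        },
        "handler": lambda args: [args["project"] + "-101", args["project"] + "-104"],
    },
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request to an MCP-style method."""
    rid, method = request["id"], request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"],
                             "inputSchema": t["inputSchema"]}
                            for n, t in TOOLS.items()]}
    elif method == "tools/call":
        p = request["params"]
        result = {"content": TOOLS[p["name"]]["handler"](p["arguments"])}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

# Agent side: discover (Step 2), then call (Step 3).
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
tool_name = listing["result"]["tools"][0]["name"]
reply = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": tool_name,
                           "arguments": {"project": "PAY"}}})
print(reply["result"]["content"])  # ['PAY-101', 'PAY-104']
```

Note that the agent never hard-codes the tool name: it takes whatever the discovery step returns, which is what lets one agent work against many different servers.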
Why the Ecosystem Grew So Fast
MCP launched in November 2024. By April 2025, MCP server downloads had grown from roughly 100,000 to over 8 million per month. By late 2025, more than 5,800 MCP servers were publicly available, covering everything from Slack, Confluence, and Sentry to databases, code execution environments, and internal enterprise systems. SDK downloads crossed 97 million per month.
Three things drove adoption that quickly.
First, the major LLM providers endorsed it immediately. Anthropic built it, but OpenAI, Google, and Microsoft adopted it within months. That cross-vendor support meant developers could build MCP integrations once and use them with any LLM.
Second, the integration cost dropped to near zero for tool owners. Exposing an existing API as an MCP server is a small amount of wrapper code. Companies like Slack, Datadog, and Sentry added MCP support quickly because the incremental effort was minimal.
Third, developers were hungry for exactly this. The alternative — building and maintaining custom tool integrations per agent, per team, per LLM — was visibly painful. MCP provided immediate relief.
What MCP Doesn't Include
MCP defines the connection. It doesn't define the rules around the connection.
The protocol has no built-in mechanism for specifying which agents are allowed to call which tools. It has no audit logging. It has no way to detect if a tool response contains injected instructions designed to manipulate the LLM. It has no concept of per-team access policies.
This isn't a flaw; it's a deliberate scope decision. Good protocols stay minimal, and the governance layer is built on top of them.
For teams using MCP in local development or small-scale experiments, this gap is manageable. For teams deploying agents in production with multiple teams, sensitive data, and compliance requirements, the gap between what MCP provides and what enterprise deployment requires is significant.
That gap is what an MCP gateway fills: a governance and security layer that sits in front of your MCP servers and handles authentication, access control, audit logging, and tool scoping in one place, consistently, for every agent that passes through it.
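As a sketch of what that layer does, here is a hypothetical in-process gateway wrapper: it checks a per-agent allow-list before forwarding a `tools/call` request and appends every decision to an audit log. The agent identities, tool names, and forwarding function are all invented for illustration; a real gateway also handles authentication, transports, and policy management:

```python
import time

# Hypothetical per-agent allow-list: which agent may call which tools.
ACCESS_POLICY = {
    "billing-agent": {"query_invoices"},
    "support-agent": {"query_invoices", "post_slack_message"},
}

AUDIT_LOG = []  # in production this would be durable, structured storage

def gateway(agent_id: str, request: dict, forward) -> dict:
    """Authorise a tools/call request, log the decision, then forward or reject."""
    tool = request["params"]["name"]
    allowed = tool in ACCESS_POLICY.get(agent_id, set())
    AUDIT_LOG.append({"ts": time.time(), "agent": agent_id,
                      "tool": tool, "allowed": allowed})
    if not allowed:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32001, "message": "Tool access denied"}}
    return forward(request)  # hand off to the real MCP server

# Demo with a stub server that echoes the tool name back.
stub = lambda req: {"jsonrpc": "2.0", "id": req["id"],
                    "result": {"content": "called " + req["params"]["name"]}}
req = {"jsonrpc": "2.0", "id": 7, "method": "tools/call",
       "params": {"name": "post_slack_message", "arguments": {}}}
print(gateway("billing-agent", req, stub))  # rejected: not in the allow-list
print(gateway("support-agent", req, stub))  # forwarded to the server
```

Because every agent's traffic passes through one choke point, the policy and the log live in one place instead of being re-implemented inside each agent.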
TrueFoundry's MCP Gateway is built specifically for this layer. It connects to your existing identity provider, enforces RBAC at the tool level, logs every tool invocation with full context, and deploys entirely within your own infrastructure — so your data never leaves your environment. Teams already managing significant AI workloads use it to take MCP from working in a demo to working reliably in production, across teams, at enterprise scale.