Gursharan Singh

What MCP Is: How AI Agents Connect to Real Systems

Part 2 of 6 — MCP Article Series
Part 1: Why Connecting AI to Real Systems Is Still Hard


The Model Context Protocol — MCP — is an open standard that defines how AI agents communicate with external systems. Not a library, not a framework, not a vendor product. A protocol — the same way HTTP defines how browsers and servers communicate.

The practical result: write the integration once, and any compliant Host can use it.

MCP standardizes how capabilities are exposed to AI agents. It does not replace the application logic, policy checks, or orchestration that decide when those capabilities should be used and how they should be governed in production. That distinction is easy to miss in demos and hard to ignore once you ship.


The difference in practice

Without MCP: a developer builds a custom connector for each AI-to-system pair — OAuth2, error handling, response parsing — from scratch. That code works for one AI only.

With MCP: the system exposes itself as an MCP Server. Any compliant Host — Claude, GPT, a custom agent — can discover and use it. One integration, any AI.


Start here — what an MCP Server looks like from the outside

What an MCP Server looks like — AI Agent connects via MCP to a Server containing Tools (actions), Resources (data), and Prompts (templates), which wraps a real system internally

An MCP Server typically exposes three types of capabilities:

  • Tools: actions the model can call
  • Resources: server-exposed context such as files, schemas, docs, or app data
  • Prompts: reusable templates, usually user-triggered, for guiding common interactions

Build the Server once. Any AI that implements the protocol can connect to it.
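The three capability types can be sketched with a tiny stand-in registry. The real Python SDK offers a similar decorator style; the class and function names below are purely illustrative, so the idea is runnable without installing anything:

```python
from typing import Callable

class ToyServer:
    """Stand-in for an MCP Server: registers Tools, Resources, Prompts."""

    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable] = {}
        self.resources: dict[str, Callable] = {}
        self.prompts: dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        self.tools[fn.__name__] = fn          # action the model can call
        return fn

    def resource(self, fn: Callable) -> Callable:
        self.resources[fn.__name__] = fn      # context the app exposes
        return fn

    def prompt(self, fn: Callable) -> Callable:
        self.prompts[fn.__name__] = fn        # template the user triggers
        return fn

server = ToyServer("orders")

@server.tool
def get_order_status(order_id: str) -> str:
    return f"Order {order_id}: shipped"

@server.resource
def order_history(user_id: str) -> str:
    return f"History for {user_id}"

@server.prompt
def summarize_order(order_id: str) -> str:
    return f"Summarize order {order_id} for customer support."

# A Host discovers capabilities by listing what the Server registered:
print(sorted(server.tools), sorted(server.resources), sorted(server.prompts))
```

The point of the shape: the Server owns the system logic; discovery is just enumerating the registry.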


MCP as USB-C for AI

Before USB-C, every device manufacturer chose its own connector. Laptops, phones, cameras, drives — each pairing needed its own cable. USB-C defined a shared spec so that compatible devices could connect without a custom cable per combination.

MCP does the same for AI and tools. Before MCP, every AI needed custom code to talk to every system. After MCP, an AI that implements the protocol can connect to any system that also implements it — without either side needing to know anything specific about the other in advance.

Where the analogy holds

USB-C standardizes the physical interface; MCP standardizes the communication interface. Both let one side advertise capabilities to the other. Both work across manufacturers and vendors.

Where the analogy has limits

USB-C is binary — the plug fits or it does not. MCP is a communication protocol, so implementations vary in depth. A system can expose one tool or fifty. A host can implement sampling or skip it.

Where USB-C connects two devices, MCP connects an AI agent to many systems simultaneously, each negotiated independently. It is less like a cable and more like a shared language.


Where MCP came from: the LSP connection

MCP took direct inspiration from the Language Server Protocol (LSP) — the standard that decoupled code intelligence from editors.

Before LSP, every editor had to implement Go-to-Definition, autocomplete, and error highlighting for every language separately. After LSP, any editor that speaks the protocol gets those features from any language server that implements it. One standard, any combination.

MCP applies the same logic one layer up: where LSP decoupled language intelligence from editors, MCP decouples tool and data access from AI agents. If you have used VS Code with TypeScript, you have already used this pattern without knowing it.


The three components: Host, Client, Server

Host

The AI application the user interacts with — Claude Desktop, VS Code with Copilot, a custom chat interface. It contains the LLM and manages connections to MCP Servers through Client objects.

Think of it as the application layer — what the user sees.

Client

A connection object inside the Host — one per MCP Server. Claude Desktop connecting to a filesystem server and a database server creates two Clients, not one. Each Client handles capability negotiation, tool discovery, and request routing for its own Server.

One Client per Server — not one Client for everything. Most tutorials miss this.
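The one-Client-per-Server rule is easy to sketch. These classes are hypothetical (not SDK types) and exist only to show the ownership structure:

```python
class Client:
    """One connection object per Server: negotiation, discovery, routing."""

    def __init__(self, server_name: str):
        self.server_name = server_name
        self.tools: list[str] = []

    def discover(self, advertised_tools: list[str]) -> None:
        # Capability discovery is scoped to *this* Server only.
        self.tools = advertised_tools

class Host:
    """The AI application: holds one Client per connected Server."""

    def __init__(self):
        self.clients: dict[str, Client] = {}

    def connect(self, server_name: str, advertised_tools: list[str]) -> Client:
        client = Client(server_name)          # a fresh Client per Server,
        client.discover(advertised_tools)     # never one shared connection
        self.clients[server_name] = client
        return client

host = Host()
host.connect("filesystem", ["read_file", "list_dir"])
host.connect("database", ["run_query"])
print(len(host.clients))  # two Servers -> two Clients
```

Claude Desktop talking to a filesystem Server and a database Server is exactly this picture: two independent Clients inside one Host.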

Server

The bridge between the AI world and a real system. It wraps your order database, Stripe integration, or shipping API in a standard format any MCP Host can understand. No AI logic — just system logic in the MCP contract.

In practice, teams typically build one Server per system. A Server can wrap multiple systems, but keeping one Server per system — combined with the one-Client-per-Server model — means each system's capabilities stay cleanly isolated behind their own Client.

Built once. Used by any Host that speaks MCP.


Three primitives, three control planes

MCP Servers expose capabilities through three primitives. The distinction is not just organizational — it determines who controls when each one is used.

Three control planes: Tools (model decides when to call), Resources (app decides what to expose), Prompts (user triggers directly) — each with real examples

For example, an order system might expose:

  • a Tool: get_order_status(order_id) — the AI calls this to fetch live data
  • a Resource: order history for this user — the app surfaces this as context
  • a Prompt: "summarize this order for customer support" — the user triggers this directly

The protocol defines how these are described and invoked — not how the business logic works behind them.
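At discovery time, those three primitives show up as plain structured descriptions. The shapes below are simplified from the MCP specification (the published schema has more fields; the values here are illustrative):

```python
import json

# Roughly what a Server advertises for the order system above.

tool = {
    "name": "get_order_status",
    "description": "Retrieves current order status and tracking info",
    "inputSchema": {                      # JSON Schema for the arguments
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

resource = {
    "uri": "orders://history/current-user",   # placeholder URI scheme
    "name": "Order history",
    "mimeType": "application/json",
}

prompt = {
    "name": "summarize_order",
    "description": "Summarize this order for customer support",
    "arguments": [{"name": "order_id", "required": True}],
}

print(json.dumps(
    {"tools": [tool], "resources": [resource], "prompts": [prompt]},
    indent=2,
))
```

Notice that nothing here is code the model executes — it is all metadata the Host and model read.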

Tools

Actions the model can invoke — query a database, trigger a refund, send a notification. The model decides when to call a Tool based on the conversation context and the Tool's description.

That description is the most important line in any Tool implementation.

The model is not executing your code — it is selecting from descriptions.


How the LLM decides which tool to call

The LLM never sees your code. It only reads the description string attached to each Tool. It pattern-matches the user's intent against those descriptions and calls whichever one fits best.

User asks: "Where is my order?"

get_order_status — "Retrieves current order status and tracking info" → MATCH

process_refund — "Issues a refund for a completed order" → skip

get_product_info — "Returns details about a product by ID" → skip

A poorly worded description means the tool never gets called — regardless of how well it is implemented.
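Real models rank tools with learned reasoning, not keyword overlap — but even a deliberately naive score over the description strings shows why the description carries all the weight:

```python
def score(intent: str, description: str) -> int:
    """Naive word overlap between user intent and a Tool description."""
    intent_words = set(intent.lower().split())
    desc_words = set(description.lower().split())
    return len(intent_words & desc_words)

tools = {
    "get_order_status": "Retrieves current order status and tracking info",
    "process_refund": "Issues a refund for a completed order",
    "get_product_info": "Returns details about a product by ID",
}

intent = "what is the status of my order"
best = max(tools, key=lambda name: score(intent, tools[name]))
print(best)  # get_order_status — its description mentions both "order" and "status"
```

Swap the status tool's description for something vague and the refund tool starts winning ties — the implementation behind the description never changed.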

Part 5 covers Tool description best practices in full. For now: think of the description as the documentation your LLM reads before deciding whether to call the function.


Resources

Data the application makes available to the AI — order history, API schemas, config files. The application controls what is exposed and when. The model can read Resources but cannot decide when they appear in context. This keeps sensitive data under application control, not model control.

Prompts

Reusable templates users can invoke — "Summarize this order," "Draft a complaint reply." Defined on the Server, surfaced in the Host UI. This keeps prompt engineering close to the system that knows the data.

The split matters in production: Tools give the model autonomy to act. Resources give the application control over context. Prompts give the user a direct interface. Each has a different risk profile.


Local vs remote MCP

Two transport modes. The protocol is identical on both — only the deployment changes.

stdio (local): Host and Server on the same machine. How Claude Desktop connects to local tools. Simple, no network required.
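For the stdio case, Claude Desktop registers local Servers in its `claude_desktop_config.json`, roughly like this (the server name and script path are placeholders; check the current documentation for the exact format):

```json
{
  "mcpServers": {
    "orders": {
      "command": "python",
      "args": ["orders_server.py"]
    }
  }
}
```

The Host launches the command as a subprocess and speaks the protocol over stdin/stdout — no ports, no network.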

Streamable HTTP (remote): Server as a separate process over HTTP, accessible to multiple Hosts. The model for shared production infrastructure — one Stripe Server your whole team's AI tools connect to.

A Server built locally can be promoted to remote without changing its MCP implementation. Part 6 covers remote deployment, auth, and multi-server architecture.

The local setup is where most teams start. The remote model tends to follow once a Server is stable enough to share.


What MCP is not

MCP defines the interface — not the system behind it.

  • Not a framework. You implement the protocol in whatever language fits your system. The Python and TypeScript SDKs are convenience wrappers, not a framework with opinions about your architecture.

  • Not a REST replacement. Stripe still uses the Stripe REST API. Your database still speaks SQL. MCP sits above those layers as the coordination protocol AI agents reason about. REST handles the internal connection; MCP handles the AI-to-Server conversation.

  • Not a vendor product. Anthropic created MCP, but donated it to the Linux Foundation in December 2025 — co-governed by Anthropic, Block, and OpenAI. Any compliant Host can implement it. A Server you build today works with any compliant Host, regardless of who built the AI.

  • Not a silver bullet. MCP standardizes how systems communicate — it does not make them scalable, secure, or intelligent. Auth, rate limiting, input validation, observability — all of that still needs to be designed. MCP gives you the interface; you still build everything behind it.


What this means for the integration tax

MCP changes the economics of the N×M problem. Each system exposes its capabilities once as an MCP Server. Each AI implements the protocol once as a Host. The grid of N×M custom integrations collapses — systems and AI agents connect through a shared contract rather than bespoke code for every combination.
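The arithmetic is easy to check: with N agents and M systems, bespoke connectors grow multiplicatively, while protocol implementations grow additively.

```python
n_agents, m_systems = 5, 20

bespoke = n_agents * m_systems   # one custom connector per (agent, system) pair
with_mcp = n_agents + m_systems  # each side implements the protocol once

print(bespoke, with_mcp)  # 100 vs 25
```

Adding a sixth agent under the bespoke model costs twenty new connectors; under MCP it costs one Host implementation.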

Part 3 goes inside the protocol: the JSON-RPC message flow, capability negotiation, and how tool discovery works at runtime — step by step.


MCP reduces the cost of connecting systems.
It does not reduce the responsibility of designing them correctly.


Key takeaways

  • MCP is a protocol, not a library. It standardizes the conversation between AI and systems. Once exposed through MCP, any compliant Host can discover and use it.

  • MCP extends the LSP model to AI agents. LSP decoupled language intelligence from editors. MCP decouples tool access from AI agents. The pattern predates MCP — it is just applied at a different layer.

  • Tools, Resources, and Prompts have different control planes. The model controls Tools. The application controls Resources. The user controls Prompts. This is not cosmetic — it determines what the AI can do autonomously and what requires explicit human or application approval.


MCP Article Series · Part 2 of 6
Next: How MCP Works — the complete request flow (link will be added when Part 3 is published)

Top comments (1)

William Wang

Great breakdown of the protocol vs framework distinction — that's the part most MCP explainers gloss over. The "write once, any AI" promise is real but the governance gap you mentioned is where it gets interesting in production.

The part I keep coming back to is capability discovery. MCP lets agents discover what tools are available, but there's no standardized way to express trust boundaries — which tools are safe to call autonomously vs which need human approval. Right now that's left to the host implementation, which means every Claude/GPT integration handles it differently.

Looking forward to the rest of the series, especially if you cover auth patterns and how MCP servers should handle multi-tenant scenarios.