DEV Community

rinchu
MCP Through the Lens of Integration: An Enterprise Perspective

The Integration Problem Nobody's Talking About
If you've spent any time in enterprise integration — ESBs, API gateways, service meshes, message brokers — you've seen this pattern before. A new technology emerges. Everyone rushes to adopt it. And the integration layer becomes an afterthought until it becomes a crisis.
We're watching it happen again with AI agents.
Across every enterprise I've worked with, teams are building AI agents — Copilot extensions, LLM-powered assistants, RPA bots with natural language interfaces. These agents need access to backend APIs, databases, and services. And right now, every team is solving that problem independently, with bespoke integrations, inconsistent security, and no governance.
Sound familiar? It should. This is the same N×M integration problem that gave us ESBs in the 2000s, API gateways in the 2010s, and service meshes in the 2020s.
The Model Context Protocol (MCP) is the answer for the AI era. And if you're an integration architect, you already understand most of it — you just don't know it yet.

What is MCP?
MCP is an open standard introduced by Anthropic in November 2024, now governed by the Agentic AI Foundation under the Linux Foundation (co-founded by Anthropic, Block, and OpenAI). It provides a universal protocol for AI agents to discover and invoke external tools, read contextual data, and follow structured workflows.
Think of it as OpenAPI for AI agents — but richer. Where OpenAPI describes what an API can do, MCP describes what an AI agent can do with an API, including the context it needs to make good decisions and the workflows that guide its behaviour.
At the wire level, MCP is JSON-RPC 2.0, carried over stdio for local servers or streamable HTTP for remote ones. If you've worked with any RPC-based integration, you're already comfortable with the messaging model.
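To make that concrete, here is a sketch of what a tool invocation looks like on the wire. The tool name and arguments are illustrative, but the envelope is standard JSON-RPC 2.0:

```python
import json

# Hypothetical tools/call request an MCP client would send (JSON-RPC 2.0 envelope).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_account_balance",            # illustrative tool name
        "arguments": {"account_id": "ACC-1001"},  # validated against the tool's JSON Schema
    },
}

# A successful response carries the result under the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Balance: 2,450.00 GBP"}]},
}

wire = json.dumps(request)
print(wire)
```

If you squint, this is the same request/response correlation model you know from SOAP or gRPC, just with JSON framing.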

The Architecture: It Maps to What You Already Know
MCP uses a client-server architecture with three roles:

Host = the AI application (Claude, Copilot, a custom LLM agent). This is the consumer's responsibility.
Client = protocol handler inside the host. One client per server connection. Also the consumer's responsibility.
Server = exposes capabilities to clients. This is where the integration team lives.
If you're an integration architect, the MCP server is your new integration endpoint. It's the equivalent of your API gateway, your ESB adapter, your service mesh sidecar — but for AI consumers instead of human-authored code.

The Three Primitives: Tools, Resources, Prompts
This is where MCP gets interesting compared to what integration teams are used to. MCP servers expose three types of capabilities, each with a different control model:
Tools — The Verbs
Tools are actions the agent can perform. They map directly to API operations — get_account_balance(), search_customers(), submit_payment(). Each tool has a name, description, and JSON Schema for input parameters.
The LLM decides when to call a tool based on the user's request and the tool's description. This is the closest analogue to what API gateways already do — expose operations for consumers to invoke.
Integration parallel: API operations behind a gateway. Same concept, different consumer (an LLM instead of application code).
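As a sketch, a tool is just a name, a description the LLM reads to decide when to call it, and a JSON Schema for its inputs. The banking tool below is hypothetical:

```python
# Hypothetical tool descriptor, as a server would return it from tools/list.
get_account_balance = {
    "name": "get_account_balance",
    "description": (
        "Returns the current balance for a customer account. "
        "Use when the user asks about available funds."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "account_id": {"type": "string", "description": "Internal account identifier"},
            "currency": {"type": "string", "enum": ["GBP", "EUR", "USD"]},
        },
        "required": ["account_id"],
    },
}

def missing_required(tool: dict, arguments: dict) -> list:
    """Naive server-side check: which required parameters are absent from a call?"""
    required = tool["inputSchema"].get("required", [])
    return [p for p in required if p not in arguments]

print(missing_required(get_account_balance, {"currency": "GBP"}))  # → ['account_id']
```

Note that the description is part of the contract: it is what the model reasons over, so it deserves the same review rigour as the schema itself.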
Resources — The Nouns
Resources are read-only data the agent uses for context before deciding what to do. They're identified by URIs and return content — text, JSON, binary.
This is where MCP goes beyond what traditional API gateways offer. Resources let you expose:

API specifications (so the agent understands your operations before calling them)
Data dictionaries (so the agent knows what fields mean)
Onboarding guides (so the agent can help consumers integrate)
Runbook excerpts (so the agent knows operational constraints)

Integration parallel: Think of resources as the integration team's Confluence pages — but machine-readable and served dynamically to AI consumers at runtime.
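A minimal sketch of the server side, assuming an in-memory registry (the URI schemes and content are illustrative):

```python
# Hypothetical resource registry: URIs mapped to machine-readable context
# that an agent can load before deciding which tools to call.
RESOURCES = {
    "spec://payments/v2/openapi.json": '{"openapi": "3.0.3", "info": {"title": "Payments API"}}',
    "doc://payments/data-dictionary.md": "# Data Dictionary\n`risk_score`: 0-100, higher is riskier.",
}

def read_resource(uri: str) -> str:
    """Return the content behind a resource URI, as a resources/read handler would."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]

print(read_resource("doc://payments/data-dictionary.md").splitlines()[0])
```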
Prompts — The Recipes
Prompts are reusable interaction templates that structure multi-step workflows. They can pull in resources and guide the agent through a complex task.
For example, an "api-comparison" prompt takes two API names as arguments, automatically loads both API specifications as resources, and structures a comparison workflow for the LLM.
Integration parallel: This is the closest thing to a BPEL process definition or an API composition pattern — a pre-defined workflow that orchestrates multiple operations.
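A sketch of how that "api-comparison" prompt might render, assuming the specs are available as resources (the API names and spec text are illustrative):

```python
# Hypothetical spec content, stood in for what a resource lookup would return.
SPECS = {
    "payments-v1": "POST /payments — single payment, synchronous.",
    "payments-v2": "POST /payment-batches — batched payments, async callbacks.",
}

def api_comparison_prompt(api_a: str, api_b: str) -> list:
    """Render the prompt as messages, embedding both specs as context."""
    return [
        {
            "role": "user",
            "content": (
                "Compare these two APIs and recommend which to use.\n\n"
                f"--- {api_a} ---\n{SPECS[api_a]}\n\n"
                f"--- {api_b} ---\n{SPECS[api_b]}\n\n"
                "Cover: operations, error handling, and migration effort."
            ),
        },
    ]

messages = api_comparison_prompt("payments-v1", "payments-v2")
print(messages[0]["content"][:45])
```

The user picks the prompt; the prompt pulls in the context; the LLM does the comparison. Orchestration lives in the template, not in the model.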
The Control Model Matters
Here's the subtle but important distinction:

Tools — the LLM decides when to call them (model-controlled)
Resources — the application decides when to load them (application-controlled)
Prompts — the user decides when to invoke them (user-controlled)

This three-tier control model is what makes MCP more than just "function calling with extra steps." It separates action, context, and orchestration — which is exactly what good integration architecture does.

Where Should the MCP Server Sit? A Decision Framework
This is the question every enterprise will face. I've evaluated five patterns:
Pattern 1: Integration-Team-Owned MCP Server
The integration team builds and operates a single MCP server (or a small set grouped by domain) that exposes the API estate as a platform capability. Consumer-specific scoping is driven by identity and role mapping.
When to choose this: You have a large API estate (50+ APIs), multiple consumer teams, and need centralised governance. This is the platform play.
Trade-off: Initial build effort, but low ongoing operational cost because consumer onboarding is configuration, not code.
Pattern 2: API-Provider-Only, Consumers Build Their Own
The API team exposes its APIs as MCP tools. Consumer teams build their own MCP servers for anything beyond direct API access.
When to choose this: Small API estate, few consumers, or when consumer teams have strong technical capability and need full autonomy.
Trade-off: Low investment for the API team, but fragmentation risk — multiple teams reinventing auth, RBAC, and tool descriptions independently.
Pattern 3: Agent-Optimised API Facade
Create new API contracts designed specifically for AI consumption — simplified, agent-safe, with curated operations. Expose these via MCP.
When to choose this: Your existing APIs have complex contracts that don't map well to LLM tool descriptions (e.g., SOAP backends, heavily parameterised REST APIs).
Trade-off: Best agent experience, but significant build and maintenance burden. Every backend change requires updating the facade.
Pattern 4: API Gateway Native MCP
Use your API gateway's built-in MCP capability (Azure APIM, for example, now supports this). Import existing APIs, expose operations as MCP tools through configuration.
When to choose this: Quick proof of concept, simple use cases, or when you need to demonstrate MCP capability fast.
Trade-off: Fastest path to "working," but limited capability. Most gateway implementations only support tools — not resources or prompts. Per-consumer filtering, tool description enrichment, and dynamic discovery are typically absent, and much of this gateway support is still at preview-level maturity.
Pattern 5: Hybrid
Complex use cases go through a custom MCP server. Simple, single-API access goes through gateway-native MCP. Two tiers of service.
When to choose this: Large enterprise with varied use case complexity. Pragmatic approach that avoids gatekeeping simple requests.
Trade-off: Two systems to maintain, unclear boundary between tiers, and consumer confusion from different behaviours.
My Recommendation
For most enterprise integration teams, Pattern 1 is the right long-term play. The MCP server is the AI equivalent of the API gateway. You wouldn't ask every consumer team to build their own gateway — the same principle applies here.
The key insight: onboarding a new consumer agent should be a configuration change, not a build. Register their identity, define their role mapping (which tools they see, what data filters apply, which resources they can read), reload config. No code change, no deployment, no sprint planning.
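As a sketch, onboarding could be one new block in a config file. Every name and field below is illustrative, not a real product's schema:

```yaml
# Hypothetical consumer registration — adding an agent is a config change.
consumers:
  mortgage-agent:
    identity: "spn://mortgage-assistant-prod"   # illustrative identity scheme
    role: mortgage-operations
    tools: [search_accounts, get_account_balance]
    injected_filters:
      search_accounts: { product_type: mortgage }
    response_fields: [name, balance]
    resources: ["spec://mortgages/*", "doc://mortgages/*"]
```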

The Consumer Scoping Model
One MCP server serving many consumer agents with different needs requires a scoping model. I use five layers:
Layer 1 — Tool Visibility: When a consumer agent calls tools/list, the server checks their identity and returns only tools their role permits. A mortgage agent sees mortgage tools. An admin agent sees everything.
Layer 2 — Parameter Injection: The server silently adds filters before calling the backend API. The mortgage agent calls search_accounts — the server injects product_type=mortgage. The agent doesn't know the filter was applied.
Layer 3 — Response Filtering: The backend returns the full payload. The server strips or masks fields before returning to the agent. Consumer A gets {name, balance}. Consumer B gets {name, balance, risk_score, address}.
Layer 4 — Resource Visibility: The agent only sees documentation and specs for APIs they're entitled to use.
Layer 5 — Prompt Availability: Different workflows available per consumer role.
This is config-driven, not code-driven. A YAML file or database table maps consumer identities to scoping rules. Adding a new consumer is adding a config block — not writing code.
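A minimal sketch of layers 1-3 driven by such a config, with hypothetical consumers and fields:

```python
# Hypothetical scoping config: consumer identity -> rules for layers 1-3.
SCOPING = {
    "mortgage-agent": {
        "tools": {"search_accounts", "get_account_balance"},
        "injected_filters": {"search_accounts": {"product_type": "mortgage"}},
        "response_fields": {"name", "balance"},
    },
    "admin-agent": {
        "tools": {"search_accounts", "get_account_balance", "submit_payment"},
        "injected_filters": {},
        "response_fields": None,  # None means no masking
    },
}

ALL_TOOLS = ["search_accounts", "get_account_balance", "submit_payment"]

def visible_tools(consumer: str) -> list:
    """Layer 1: tools/list returns only what the consumer's role permits."""
    return [t for t in ALL_TOOLS if t in SCOPING[consumer]["tools"]]

def scoped_arguments(consumer: str, tool: str, arguments: dict) -> dict:
    """Layer 2: silently merge server-side filters into the backend call."""
    return {**arguments, **SCOPING[consumer]["injected_filters"].get(tool, {})}

def filtered_response(consumer: str, payload: dict) -> dict:
    """Layer 3: strip fields the consumer is not entitled to see."""
    allowed = SCOPING[consumer]["response_fields"]
    return payload if allowed is None else {k: v for k, v in payload.items() if k in allowed}

print(visible_tools("mortgage-agent"))
```

A new consumer is a new entry in `SCOPING` — the enforcement code never changes.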

Agent-to-Agent Integration: The Next Frontier
Here's what nobody's talking about yet, but integration architects should be thinking about: what happens when agents need to call other agents?
Today's MCP model is agent → MCP server → backend API. But the emerging pattern is agent → MCP server → another agent's MCP server. This creates questions that integration teams are uniquely qualified to answer:
Trust boundaries. When Agent A invokes a tool on MCP Server B, and that tool triggers Agent B to call tools on MCP Server C — who's authorising what? This is the same distributed authorisation problem we solved with OAuth token propagation in microservices, but now with AI agents in the chain.
Tool composition. An agent needs to "get customer details and then check their credit score." These are tools on different MCP servers owned by different teams. The composing agent needs to understand the data flow between tools — input of tool B depends on output of tool A. This is service choreography vs orchestration, revisited for the agent era.
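Stripped of transport, the composition problem looks like this — two hypothetical tools owned by different teams, where the output of one feeds the input of the other:

```python
# Hypothetical tools living on two different MCP servers.
def get_customer_details(customer_id: str) -> dict:   # Server A (CRM team)
    return {"customer_id": customer_id, "national_id": "NI-7733"}

def check_credit_score(national_id: str) -> dict:     # Server B (risk team)
    return {"national_id": national_id, "score": 712}

def composed_workflow(customer_id: str) -> dict:
    """Orchestration: the output of tool A becomes the input of tool B."""
    details = get_customer_details(customer_id)          # step 1, Server A
    score = check_credit_score(details["national_id"])   # step 2, Server B
    return {**details, **score}

print(composed_workflow("CUST-42")["score"])  # → 712
```

The hard part is not the plumbing — it is who owns `composed_workflow` when the two tools belong to different teams, and how the data contract between them is governed.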
Observability. Distributed tracing across agent-to-agent calls. Correlation IDs that flow through multiple MCP servers. This is OpenTelemetry for AI — and it doesn't exist yet in the MCP spec.
Circuit breaking and resilience. What happens when a downstream agent's MCP server is slow or failing? The calling agent needs timeout handling, retry logic, and fallback behaviour. Integration teams have been solving this with Hystrix, Istio, and resilience patterns for years.
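None of this is in the MCP spec today, so the calling side has to supply it. A generic retry-with-backoff-and-fallback wrapper, sketched around a simulated flaky downstream server:

```python
import time

def call_with_resilience(invoke, attempts=3, base_delay=0.1, fallback=None):
    """Retry a downstream tool call with exponential backoff, then fall back."""
    for attempt in range(attempts):
        try:
            return invoke()
        except Exception:
            if attempt == attempts - 1:
                return fallback  # downstream exhausted: degrade gracefully
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, ...

# Simulated flaky downstream MCP server: fails twice, then succeeds.
calls = {"n": 0}
def flaky_tool():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("downstream MCP server slow")
    return {"status": "ok"}

print(call_with_resilience(flaky_tool))  # → {'status': 'ok'}
```

In production you would reach for a real resilience library and add circuit-breaker state, but the shape of the problem is exactly the one integration teams already know.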
The MCP 2026 roadmap explicitly calls out enterprise readiness — audit trails, SSO-integrated auth, gateway behaviour — as a top priority. The protocol is evolving toward the needs integration architects already understand. Getting in early means shaping the standard rather than adapting to it.

What Integration Teams Should Do Now

Understand the protocol. MCP is JSON-RPC 2.0 over HTTP. If you've built REST APIs, SOAP services, or gRPC endpoints, the messaging model is familiar. Read the spec at modelcontextprotocol.io.
Evaluate your API gateway's MCP support. Azure APIM, AWS API Gateway, and others are adding MCP capabilities. Understand what they offer and — critically — what they don't (resources, prompts, per-consumer filtering).
Build a proof of concept. Take 2-3 of your APIs, expose them as MCP tools with a simple Python or TypeScript MCP server, and connect an AI agent. See what works and what doesn't. Pay attention to tool description quality — this is the #1 factor in agent performance.
Own the narrative. Don't let the AI team or the platform team define where the MCP layer sits. This is an integration pattern. Integration teams should own integration patterns.
Think about consumer onboarding. Your current API onboarding process (documentation, API keys, rate limits) needs an MCP equivalent. Start designing the config-driven model now.

The Bottom Line
MCP is not an AI concern. It's an integration concern that happens to serve AI consumers. The protocol, the architecture, the governance challenges — they all map to patterns integration architects have been solving for two decades.
The teams that recognise this early will have the opportunity to shape how AI agents integrate with their enterprise. The teams that don't may find themselves reacting to fragmentation rather than leading the architecture.
Either way, the conversation is happening now. Integration architects should be part of it.
