Jay Desmarais

The Great Decoupling: The Enterprise Capability Graph

Part 2 of "The Great Decoupling" series

In Part 1, I argued that we're witnessing the decoupling of capability from presentation — that the SaaS interface, far from being the product, is becoming an ephemeral rendering layer generated on demand. MCP has emerged as the enabling standard for this shift, proving the pattern works and catalyzing industry-wide adoption.

But that's just the first domino.

Here's the question that kept nagging at me: if external SaaS products expose capabilities via MCP, why wouldn't internal enterprise systems do the same? And if they do, what happens to the boundary between "our software" and "their software"?

The answer, I've come to believe, is that the boundary dissolves. Internal and external systems become equivalent nodes in a unified capability graph. And that changes everything about how enterprises think about architecture, vendors, and control.

The Symmetry No One's Talking About

Most discussions of MCP focus on AI agents calling external services — your assistant querying your CRM, pulling data from your document platform, triggering workflows in your automation tools. That's valuable, but it's only half the picture.

The pattern works identically for internal systems. Your custom ERP. Your homegrown analytics pipeline. That sprawling collection of internal tools your platform team maintains. Each of these can expose capabilities through the same protocol.

Once they do, something interesting happens. The orchestration layer — whatever routes requests, manages authentication, handles authorization — stops caring about the origin of capabilities. It routes to `get_customer_credit_limit` without knowing or caring whether that capability lives in an internal Oracle instance or an external Experian API.

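To make the symmetry concrete, here is a minimal sketch of origin-agnostic routing. The provider functions, registry shape, and customer ID are hypothetical stand-ins for illustration, not the API of any particular MCP SDK:

```python
# Minimal sketch: an orchestrator routes a capability call by name, without
# caring whether the provider is internal or external. All names are illustrative.
from typing import Callable, Dict

def internal_oracle_credit_limit(customer_id: str) -> float:
    # Stand-in for a query against an internal Oracle instance.
    return 50_000.0

def external_experian_credit_limit(customer_id: str) -> float:
    # Stand-in for a call to an external credit API.
    return 48_500.0

# The registry maps capability names to whichever provider currently backs them.
registry: Dict[str, Callable[[str], float]] = {
    "get_customer_credit_limit": internal_oracle_credit_limit,
}

def invoke(capability: str, customer_id: str) -> float:
    """Consumers ask for a capability by name; its origin is an implementation detail."""
    return registry[capability](customer_id)

print(invoke("get_customer_credit_limit", "CUST-1042"))  # 50000.0
```
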
*[Diagram: Enterprise Orchestrator]*

(Meridian is a fictional CRM — a stand-in for the dominant platforms you're already thinking of. The pattern applies regardless of vendor.)

Internal and external become implementation details. The enterprise sees a unified graph of capabilities, some provided internally, some externally, all accessible through the same interface.

This symmetry has profound implications.

Build vs. Buy Transforms

The traditional "build vs. buy" decision pits two options against each other: build a feature internally (more control, but higher cost) or purchase a product with a UI your employees must learn (faster, but vendor lock-in). The comparison involves not just functionality but also interfaces, training, integration work, and long-term dependency.

Capability-first architecture transforms this into expose vs. consume.

Should we build this capability internally or consume it from an external provider? The integration is identical either way — same protocol, same orchestration layer, same developer experience. The evaluation becomes purely about the capability itself: cost, quality, reliability, compliance, and specialization.

This doesn't eliminate the decision's difficulty, but it changes its shape. You're no longer comparing "our custom tool with our custom UI" against "their product with their UI that we need to train everyone on." You're comparing capability providers on capability merits.

And critically, the decision becomes more reversible. If you start with an external capability provider and later decide to build internally, the integration doesn't change. If you build internally and later find a better external option, same story. The orchestration layer abstracts the origin.
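Here is a sketch of that reversibility, assuming the capability-to-provider binding lives in a registry like the one above. The provider names and endpoints are invented for illustration:

```python
# Sketch: the binding from capability name to provider is data. Reversing a
# build-vs-buy decision is a registry update, not a consumer-side rewrite.
# Provider names and endpoints are hypothetical.
capability_bindings = {
    "get_customer_credit_limit": {
        "provider": "experian",                          # consumed externally today
        "endpoint": "https://credit.vendor.example/v1",  # placeholder URL
    },
}

# Later, the team builds the capability in-house. Only the binding changes;
# every consumer keeps invoking "get_customer_credit_limit" exactly as before.
capability_bindings["get_customer_credit_limit"] = {
    "provider": "internal-finance-service",
    "endpoint": "https://finance.internal.example/credit-limit",
}
```
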

This reversibility is strategic gold in the current environment, where the right answer keeps changing.

Capability Architecture as Chaos Survival

Let me paint a picture of the current enterprise reality.

AI capabilities are emerging faster than evaluation cycles can process them. The vendor landscape shifts monthly — acquisitions, pivots, new entrants, sudden deprecations. Build vs. buy decisions that seemed sound six months ago look questionable today. Integration complexity explodes as each new AI tool requires its own connection pattern. Technical debt accumulates from point-to-point connections that made sense at the time.

Every enterprise architect I talk to describes some version of this chaos. The ground won't stop shifting.

Capability-based architecture directly addresses this reality:

| Challenge | Capability Architecture Response |
| --- | --- |
| Rapid AI evolution | Swap capability providers without rewiring consumers |
| Vendor uncertainty | Reduce lock-in via standard interfaces |
| Build/buy fluidity | Internal and external capabilities integrate identically |
| Integration complexity | Single protocol, not N×M point-to-point connections |
| Technical debt | Clean abstractions prevent integration spaghetti |

This isn't about preparing for some distant future. It's about surviving the present. The patterns that make your architecture maintainable today — isolation, explicit contracts, standard interfaces — are exactly what you need to navigate an environment where the right answer keeps changing.

The pitch to enterprises isn't "adopt this for the future." It's "adopt this to stay agile now."

The Orchestration Layer Emerges

As enterprises deploy capability-based architectures, a new layer crystallizes: the orchestrator.

This isn't just an API gateway or a service mesh, though it shares DNA with both. The orchestrator handles:

Capability discovery: What capabilities are available? What can this identity access? The orchestrator maintains the registry and handles dynamic discovery.

Request routing: When a request comes in, where does it go? The orchestrator routes based on capability type, data residency requirements, load, cost optimization, or custom rules.

Authentication and authorization: Can this identity invoke this capability with these parameters on this data? The orchestrator enforces access control consistently across internal and external capabilities.

Audit and observability: What was invoked, when, by whom, with what parameters, returning what results? The orchestrator maintains the audit trail that compliance requires.

Context management: What's the session state? What permissions were delegated? What constraints apply? The orchestrator maintains context across capability invocations.

The orchestrator becomes the enterprise's control plane for capabilities. Not the capabilities themselves — those remain distributed across internal systems and external providers — but the layer that makes them accessible, governable, and composable.
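Here is a minimal sketch of what such a control plane might look like. The class shape, policy model, and field names are assumptions for illustration; a real orchestrator would delegate each concern to proper identity, policy, and telemetry infrastructure:

```python
# Minimal control-plane sketch covering discovery, routing, authorization,
# audit, and context. All names and structures are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable, Dict, List, Set

@dataclass
class Orchestrator:
    registry: Dict[str, Callable[..., Any]] = field(default_factory=dict)  # capability -> provider
    policies: Dict[str, Set[str]] = field(default_factory=dict)            # capability -> allowed roles
    audit_log: List[dict] = field(default_factory=list)                    # compliance trail
    context: Dict[str, Any] = field(default_factory=dict)                  # session state, delegated constraints

    def discover(self, role: str) -> List[str]:
        # Capability discovery: what can this identity access?
        return [name for name, roles in self.policies.items() if role in roles]

    def invoke(self, identity: Dict[str, str], capability: str, **params) -> Any:
        # Authorization: can this identity invoke this capability with these parameters?
        if identity["role"] not in self.policies.get(capability, set()):
            raise PermissionError(f"{identity['id']} may not invoke {capability}")
        # Routing: resolve whichever provider (internal or external) currently backs the name.
        result = self.registry[capability](**params)
        # Audit: what was invoked, when, by whom, with what parameters, returning what.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "identity": identity["id"],
            "capability": capability,
            "params": params,
            "result": result,
        })
        return result

orch = Orchestrator(
    registry={"get_customer_credit_limit": lambda customer_id: 50_000.0},
    policies={"get_customer_credit_limit": {"account_manager"}},
)
print(orch.discover("account_manager"))
print(orch.invoke({"id": "agent-7", "role": "account_manager"},
                  "get_customer_credit_limit", customer_id="CUST-1042"))
```
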

This is a new category of infrastructure. Not quite an integration platform (though it handles integration). Not quite an AI framework (though it enables AI). Not quite identity management (though it handles identity). It's the connective tissue of the capability-first enterprise.

What Platform Teams Become

If this architecture emerges, the role of enterprise platform teams transforms.

Today, platform teams manage specific systems. "We own the Meridian instance." "We maintain the internal analytics platform." "We run the integration middleware." Each system is a distinct responsibility with its own expertise.

In the capability-first model, platform teams become capability curators. They don't just maintain systems — they curate the capability graph. Their responsibilities shift:

From: Managing the CRM instance
To: Ensuring CRM capabilities are available, performant, and properly governed — regardless of whether those capabilities come from Meridian, a competitor, or internal implementation

From: Building integration pipelines between systems
To: Defining capability contracts and ensuring providers (internal or external) meet them

From: Training users on specific application UIs
To: Ensuring capabilities are discoverable and well-documented for AI and human consumption alike

From: Negotiating vendor contracts for products
To: Evaluating capability providers on capability merits — quality, reliability, cost, compliance

This is a meaningful elevation. Platform teams move from system administrators to capability brokers, from tool maintainers to graph architects.
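To make "defining capability contracts" concrete, here is one way a platform team might codify such a contract. The field names and values are assumptions for illustration, not part of MCP or any standard:

```python
# Illustrative capability contract a platform team might publish and hold
# providers (internal or external) to. Fields are assumptions, not a standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityContract:
    name: str                # stable, consumer-facing capability name
    input_schema: dict       # JSON-Schema-style description of parameters
    output_schema: dict      # JSON-Schema-style description of results
    max_latency_ms: int      # performance expectation providers must meet
    data_residency: str      # e.g. "EU-only", used for compliance-aware routing
    audit_required: bool     # whether every invocation must be logged

credit_limit_contract = CapabilityContract(
    name="get_customer_credit_limit",
    input_schema={"customer_id": "string"},
    output_schema={"credit_limit": "number", "currency": "string"},
    max_latency_ms=500,
    data_residency="EU-only",
    audit_required=True,
)
```
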

The Enterprise AI Fabric

Here's where I see this converging. The combination of:

  • Unified capability graph (internal and external systems equivalent)
  • Orchestration layer (discovery, routing, auth, audit)
  • Contextual rendering (interfaces generated on demand)
  • AI agents (as primary capability consumers)

...constitutes something new. Call it the Enterprise AI Fabric — the connective layer that makes AI-native operations possible.

*[Diagram: Contextual Rendering]*

This fabric enables scenarios that are currently painful or impossible:

Cross-system operations without integration projects: "Update the customer record, adjust their credit limit, and notify the account team" becomes a single orchestrated flow across CRM, financial system, and communication platform — without building a custom integration (a sketch follows these scenarios).

Graceful capability substitution: When a vendor raises prices or degrades quality, swap to an alternative without consumer-side changes. The orchestrator routes differently; everything else continues.

AI agents with appropriate enterprise access: Instead of giving AI tools direct API keys to everything (security nightmare) or nothing (useless), the orchestrator mediates access with proper authorization and audit.

Federated capabilities across organizational boundaries: Partners, suppliers, and customers can expose capabilities into your graph (with appropriate access controls), enabling inter-organization workflows without point-to-point integrations.
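As referenced above, here is a sketch of the cross-system flow expressed against an orchestrator entry point like the one sketched earlier. The capability names are hypothetical:

```python
# Sketch: "update the record, adjust the credit limit, notify the account team"
# as one orchestrated sequence of capability calls. Capability names are
# hypothetical; `invoke` stands in for the orchestrator entry point sketched earlier.
def handle_credit_increase(invoke, identity, customer_id: str, new_limit: float) -> None:
    invoke(identity, "update_customer_record",
           customer_id=customer_id, status="credit_review_complete")
    invoke(identity, "adjust_credit_limit",
           customer_id=customer_id, limit=new_limit)
    invoke(identity, "notify_account_team",
           customer_id=customer_id,
           message=f"Credit limit raised to {new_limit:,.0f}")

# The flow never names a vendor: each capability may be backed by Meridian,
# an internal service, or a partner system, and can be re-bound without
# touching this function.
```
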

The Forcing Function Is Already Here

I want to be clear: this isn't a ten-year vision. The forcing function is already present.

Every major enterprise is grappling with AI adoption. Every one of them is hitting the same walls — how do we give AI access to our systems safely? How do we avoid building N×M integrations? How do we maintain governance while enabling experimentation?

Capability-based architecture is the answer emerging from these pressures. MCP — or whatever it evolves into — is the enabling standard crystallizing from the chaos. The orchestration layer is what enterprises build when they realize they need to manage capabilities systematically.

MCP's current form isn't the point. The point is that it's proven the pattern works, the major players have aligned around it, and the architectural direction is now clear. What MCP looks like in 2028 may be quite different from today — but the doors it opened won't close.

The question isn't whether this architecture emerges. It's whether your enterprise leads or follows.


But There's an Uncomfortable Question

If capabilities become standardized and interchangeable... if interfaces become ephemeral rendering layers... if internal and external systems become equivalent nodes in a graph... where does value live?

For two decades, SaaS vendors have built moats around their data. Your CRM doesn't just provide CRM capabilities — it accumulates your institutional memory. Every interaction, every deal progression, every customer relationship pattern. That's not a feature. That's a hostage.

The capability-first architecture makes this hostage-taking visible. When the orchestrator asks for customer data and the capability provider responds with friction designed to keep data inside vendor walls, the customer notices.

This leads somewhere uncomfortable for vendors and potentially liberating for enterprises: the data sovereignty question we've been deferring for twenty years.

That's Part 3.


*Next in the series: **The Great Decoupling: The Data Sovereignty Correction** — How capability-first architecture inverts the SaaS power structure*


