Part 4 of "The Great Decoupling" series
We've covered a lot of ground in this series. Part 1 argued that interfaces are becoming ephemeral — capability and presentation decoupling permanently. Part 2 extended this to enterprise architecture — internal and external systems becoming equivalent nodes in a unified capability graph. Part 3 followed the implications to data sovereignty — the hostage model inverting as enterprises and users reclaim ownership.
One question has been building throughout: if this architecture emerges, who operates the orchestration layer?
This isn't an academic question. Whoever controls the layer that routes capabilities, manages delegations, and maintains the audit trail occupies the most strategic position in the new software landscape. This is the platform war that will define the next era.
The Orchestration Layer as Power Center
Let's be precise about what the orchestration layer does and why it matters.
In the capability-first architecture, the orchestrator sits between consumers (humans, AI agents, other services) and capability providers (internal systems, external SaaS). It handles:
- Discovery: What capabilities exist? What can this identity access?
- Routing: Where should this request go? Which provider? Which instance?
- Authorization: Can this identity invoke this capability with these parameters?
- Delegation management: Who has granted access to whom, under what constraints?
- Audit: What was accessed, when, by whom, for what purpose?
- Context: What's the session state? What constraints carry forward?
None of these functions is revolutionary individually. API gateways do routing. Identity providers handle auth. Logging systems track access. But the orchestration layer integrates them specifically for capability-based computing, with AI agents as first-class consumers.
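That integration is easier to see in code. Below is a deliberately minimal sketch in Python; every name (`Orchestrator`, `Capability`, the grant and audit structures) is invented for illustration, mapping roughly one method per function above, and does not correspond to any real product or to the MCP spec:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Capability:
    name: str
    provider: str
    description: str  # semantic description an agent can reason about

@dataclass
class Orchestrator:
    capabilities: dict = field(default_factory=dict)  # name -> Capability
    grants: dict = field(default_factory=dict)        # (identity, name) -> constraints
    audit_log: list = field(default_factory=list)

    def register(self, cap):
        # Providers publish capabilities here
        self.capabilities[cap.name] = cap

    def delegate(self, identity, name, constraints):
        # Delegation management: who may invoke what, under which constraints
        self.grants[(identity, name)] = constraints

    def discover(self, identity):
        # Discovery: only what this identity can actually access
        return [c for n, c in self.capabilities.items()
                if (identity, n) in self.grants]

    def invoke(self, identity, name, params):
        # Authorization: check the grant before routing anywhere
        if (identity, name) not in self.grants:
            raise PermissionError(f"{identity} has no grant for {name}")
        cap = self.capabilities[name]
        # Audit: who, what, when (purpose would come from the grant)
        self.audit_log.append({"who": identity, "what": name,
                               "when": datetime.now(timezone.utc).isoformat()})
        # Routing: a real implementation would dispatch to the provider here
        return {"routed_to": cap.provider, "params": params}
```

Even in this toy form, the structural point holds: discovery, authorization, and audit all pass through one chokepoint, which is exactly why the position is strategic.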
The entity that operates this layer sees everything. Not the data itself — that stays with owners under the sovereignty model. But the metadata about data relationships: who's accessing what, how often, through which capabilities, for what purposes. This is the map of trust relationships across the entire enterprise (or across all users, in the personal layer).
That map is extraordinarily valuable. It reveals:
- Which capabilities are actually used vs. just purchased
- Which data is most accessed and by whom
- Where bottlenecks and friction exist
- How workflows actually flow across system boundaries
- Which providers are reliable vs. problematic
Whoever holds this position can optimize, recommend, intermediate, and eventually shape how capability-first computing evolves.
Who Wins?
Several categories of players are positioning for this layer:
The hyperscalers (AWS, Azure, GCP) have obvious advantages: existing enterprise relationships, infrastructure at scale, capital to invest, and integration with their cloud platforms. Microsoft has embedded MCP into Copilot Studio, Azure API Management, and VS Code. AWS's Bedrock AgentCore includes an MCP gateway. Google's Vertex AI Agent Builder supports both MCP and A2A.
But hyperscalers also have conflicts of interest. If Microsoft operates the orchestration layer, do capabilities hosted on AWS get fair treatment? If AWS operates it, does Azure integration work smoothly? Enterprises are wary of deepening dependency on cloud vendors who already have significant leverage.
Integration platform vendors (MuleSoft, Workato, Boomi) have relevant expertise — they've been connecting enterprise systems for years. Some are owned by larger CRM platforms, positioning them to extend into orchestration. But these platforms were built for a different era of point-to-point integration. Adapting to capability-first, AI-native architectures requires significant evolution.
New entrants positioning as neutral may have the clearest path. The Snowflake playbook — positioning as "not your cloud vendor's data warehouse" — demonstrated that enterprises will pay a premium for perceived neutrality. A new orchestration platform that explicitly avoids cloud vendor lock-in could capture enterprise trust.
Open-source foundations offer another model. Anthropic donated MCP to the Linux Foundation's Agentic AI Foundation, with governance designed to prevent any single vendor from controlling the protocol. The orchestration layer could follow a similar path — an open-source core with commercial distributions, like Kubernetes and its ecosystem.
My intuition is that the orchestration layer needs to be perceived as neutral to achieve enterprise adoption at scale. This favors either the open-source foundation model or new entrants explicitly positioning against incumbent lock-in. The hyperscalers will compete aggressively, but their conflicts of interest create openings.
The Protocol Layer Beneath
The platform war isn't just about who operates the orchestration layer — it's also about what protocols that layer speaks. And here's where it's important to be clear about MCP's role.
MCP is the enabling standard that opened the door. It proved that a capability-first architecture works at scale. It aligned the major players — OpenAI, Google, Microsoft, AWS — around a common pattern. That alignment is the breakthrough, not the specific protocol details.
MCP as it exists today will evolve. The 2029 version may look quite different from the 2026 version. But the doors it opened won't close. The architectural direction is set.
What needs to mature:
Agent-to-agent coordination: When AI agents need to collaborate, they need protocols beyond just invoking capabilities. Google's Agent2Agent (A2A) protocol addresses this — how agents discover each other, negotiate collaboration, and coordinate workflows.
Streaming and subscriptions: Current MCP is primarily request-response. But real-time capabilities need streaming updates and subscription models. "Notify me when inventory drops below threshold" requires different patterns than "tell me current inventory."
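The difference between the two patterns is easy to sketch. The `InventoryFeed` below is a hypothetical in-memory stand-in for a streaming capability; a real implementation would push updates over a transport, but the shape (subscribe a predicate and a callback instead of polling) is the point:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryFeed:
    levels: dict = field(default_factory=dict)
    subscribers: list = field(default_factory=list)  # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        # "Notify me when..." registers interest once, instead of polling
        self.subscribers.append((predicate, callback))

    def update(self, sku, qty):
        self.levels[sku] = qty
        for predicate, callback in self.subscribers:
            if predicate(sku, qty):
                callback(sku, qty)

alerts = []
feed = InventoryFeed()
# "Notify me when inventory drops below threshold"
feed.subscribe(lambda sku, qty: qty < 10,
               lambda sku, qty: alerts.append((sku, qty)))
feed.update("A1", 50)  # above threshold, no notification
feed.update("A1", 7)   # below threshold, subscriber fires
```
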
Semantic standards: MCP provides transport; it doesn't define what an "order," "customer," or "invoice" means across systems. The semantic layer — shared vocabulary for business concepts — is still fragmented. Whoever establishes semantic standards for key domains gains enormous influence.
Delegation protocols: The OAuth-style delegation we discussed in Part 3 needs protocol-level standardization. How does one entity grant scoped access to another? How are constraints expressed? How is revocation propagated?
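As a sketch of what such standardization would need to express, here is a hypothetical delegation record; the fields and method are illustrative, not a proposed protocol:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Delegation:
    grantor: str
    grantee: str
    capability: str
    scope: set            # actions permitted, e.g. {"read"}
    purpose: str          # purpose restriction travels with the grant
    expires_at: datetime  # time limit travels with the grant
    revoked: bool = False

    def permits(self, action, at):
        # A delegation holds only while unrevoked, in scope, and unexpired
        return (not self.revoked
                and action in self.scope
                and at < self.expires_at)

now = datetime.now(timezone.utc)
grant = Delegation(grantor="user:alice", grantee="agent:assistant",
                   capability="calendar.read", scope={"read"},
                   purpose="scheduling",
                   expires_at=now + timedelta(hours=1))
```

The open protocol questions are precisely the parts this sketch waves away: how the record is serialized, how revocation propagates to every party holding a copy, and how constraints compose when delegations chain.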
The platform war is partly about who controls the orchestration layer and partly about who shapes the protocols that the layer implements. These battles are related but distinct. You can win the protocol war and lose the platform war, or vice versa.
The Timeline: Why 2-3 Years, Not 5-10
The skeptical response to everything in this series is: "Maybe, eventually, but this is a decade-long transformation." I disagree. The timeline compresses because of three interlocking factors.
AI accelerates its own adoption. Capability integration that used to take six months now takes six weeks because AI assists with building it, and technical bottlenecks dissolve. When AI can help evaluate and select capability providers, procurement cycles shorten. When AI can document and migrate data, switching costs drop. The technology accelerates its own deployment.
Competitive pressure shifts from advantage to survival. The messaging from tech leadership has shifted notably. Jensen Huang, Sam Altman, Satya Nadella: they've all expressed variations of the same message, that companies that don't embrace AI transformation face existential risk. Not eventually; competitively, now.
When the question shifts from "what's the ROI of this transformation?" to "what's the cost of being disrupted while we deliberate?" — decision-making accelerates. Boards push harder. Procurement cycles compress. "Fast follower" strategies become suicide pacts.
User expectations bleed across contexts. Once users experience contextual rendering in one application — asking Claude for a custom visualization, getting exactly what they need in seconds — they carry that expectation everywhere. They won't tolerate logging into five systems to answer one question. They won't accept fixed dashboards that don't match their mental model. They won't wait for IT to build reports that an agent could generate instantly.
This expectation pressure drives adoption from the bottom up, even as strategic pressure drives it from the top down. The pincer movement compresses timelines.
My estimate: foundational infrastructure solidifies in 2026-2027. Early adopter enterprises deploy capability-first architectures in 2027-2028. Mainstream adoption follows in 2028-2029. By 2029, the frozen interface will feel as dated as the CD-ROM software installation experience feels today.
What Builders Should Do Now
If you're building software today — whether SaaS products, internal enterprise systems, or new ventures — the implications are actionable.
Expose Capabilities, Not Just APIs
Traditional REST APIs expose data operations — CRUD on resources. Capability-first systems expose capabilities — operations with semantic descriptions that AI can reason about.
The difference matters when the consumer is an agent trying to understand what's possible. A well-described capability with clear parameters, examples, and constraints enables AI to invoke it appropriately. A generic REST endpoint requires the AI to guess.
Will the specific MCP protocol details matter in five years? Maybe, maybe not. But the pattern — semantically rich capability definitions that AI can discover and invoke — that's the direction. Build toward the pattern, not just the current spec. Start building capabilities with AI consumption in mind, even if your current consumers are human-operated UIs.
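A sketch of the contrast, with field names loosely modeled on MCP-style tool definitions but invented for illustration rather than copied from the spec:

```python
# A bare REST endpoint tells an agent almost nothing:
#   GET /api/v2/cust?id=...
#
# A capability definition carries the semantics the agent needs
# in order to decide whether and how to invoke it.
get_customer_value = {
    "name": "get_customer_lifetime_value",
    "description": (
        "Returns the predicted lifetime value of a customer in USD, "
        "based on 24 months of order history. Use for prioritization, "
        "not for billing."
    ),
    "parameters": {
        "customer_id": {"type": "string",
                        "description": "Internal customer identifier"},
        "horizon_months": {"type": "integer", "default": 24,
                           "description": "Prediction horizon in months"},
    },
    "constraints": ["requires scope crm:read",
                    "results may be cached up to 24h"],
    "examples": [{"customer_id": "C-1042", "horizon_months": 12}],
}
```

Everything an agent needs to reason about the call (what it means, what it's for, what it must not be used for) lives in the definition itself rather than in a human-readable docs site.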
Separate Capability from Presentation
Even if you're building a traditional UI-driven application, architect it so that the capability layer can be consumed independently. This is the viewer/action pattern I described in my earlier post on capability-based architecture.
The test: could an AI agent invoke your core functionality without touching your UI code? If the answer is no, you have architectural coupling that will become technical debt.
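A toy illustration of passing that test: the capability function below knows nothing about presentation, and the text renderer is just one consumer among many. All names are invented:

```python
# Capability layer: a pure function over domain data, no UI imports.
def top_customers(orders, n=3):
    totals = {}
    for o in orders:
        totals[o["customer"]] = totals.get(o["customer"], 0) + o["amount"]
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

# Presentation layer: one of many possible renderings of the same result.
def render_text(rows):
    return "\n".join(f"{name}: ${total:,.2f}" for name, total in rows)

orders = [{"customer": "Acme", "amount": 1200.0},
          {"customer": "Beta", "amount": 300.0},
          {"customer": "Acme", "amount": 800.0}]

# An AI agent can call top_customers() directly and render the result
# however its user needs; a human-facing UI calls render_text() instead.
rows = top_customers(orders, n=2)
```
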
Design for Delegated Data Access
Build assuming your capability might operate on data you don't own or store. This changes how you think about state, caching, and persistence.
Instead of: "My service stores customer records and exposes operations on them."
Think: "My service performs customer operations on data that might live anywhere — my database, the customer's data lake, or their personal data store."
This doesn't mean you can't have a database. It means your architecture should accommodate data sources you don't control, with appropriate caching, consistency handling, and fallback behavior.
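One way to sketch that accommodation is a source abstraction with priority-ordered fallback; the classes below are hypothetical stand-ins for a local database, a cache, or a customer-controlled store:

```python
from abc import ABC, abstractmethod

class CustomerSource(ABC):
    """The capability operates through this interface; where the data
    actually lives (our database, the customer's data lake, a personal
    data store) becomes a deployment detail."""
    @abstractmethod
    def get(self, customer_id): ...

class LocalCache(CustomerSource):
    def __init__(self):
        self.rows = {}
    def get(self, customer_id):
        return self.rows.get(customer_id)

class RemoteStore(CustomerSource):
    def __init__(self, rows):
        self.rows = rows
    def get(self, customer_id):
        return self.rows.get(customer_id)

def fetch_customer(customer_id, sources):
    # Try each source in priority order (cache first, then remote),
    # with an explicit fallback when nothing answers.
    for source in sources:
        row = source.get(customer_id)
        if row is not None:
            return row
    return None

cache = LocalCache()
remote = RemoteStore({"C-1": {"name": "Acme"}})
```

A real version would add consistency handling and staleness rules, but the shape (capability code written against sources it doesn't own) is the architectural point.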
Make Authorization Capability-Aware
Traditional authorization asks: "Can this user access this endpoint?"
Capability-aware authorization asks: "Can this identity (human or agent) invoke this capability with this scope on this data under these constraints?"
The difference is significant. Capability-aware authorization handles:
- Agents acting on behalf of users (with delegated permissions)
- Scoped access that varies by data subset
- Constraints that travel with delegations (time limits, purpose restrictions)
- Audit requirements that track the full delegation chain
Build your authorization model with these requirements in mind, even if your current implementation is simpler.
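A minimal sketch of what that question looks like in code, assuming grants form a chain from the data owner down to the acting agent; the names and the purpose-intersection rule are illustrative choices, not a standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    grantor: str
    grantee: str
    capability: str
    purposes: tuple  # purposes this link of the chain permits

def delegation_chain(identity, capability, grants):
    # Walk grantee -> grantor links back toward the data owner,
    # collecting every link so audit can record the full chain.
    chain, current, seen = [], identity, set()
    while current not in seen:
        seen.add(current)  # guard against cyclic grants
        link = next((g for g in grants
                     if g.grantee == current and g.capability == capability),
                    None)
        if link is None:
            break
        chain.append(link)
        current = link.grantor
    return chain

def authorize(identity, capability, purpose, grants):
    chain = delegation_chain(identity, capability, grants)
    if not chain:
        raise PermissionError(f"{identity} has no delegation for {capability}")
    # Constraints travel with delegations: every link must allow this purpose
    allowed = set(chain[0].purposes)
    for g in chain[1:]:
        allowed &= set(g.purposes)
    if purpose not in allowed:
        raise PermissionError(f"purpose {purpose!r} not permitted by the chain")
    return chain  # callers log this as the audit trail
```

Note the shape of the answer: not a boolean, but the full chain of who granted what, ready to be audited.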
Invest in Your Semantic Layer
If interfaces become ephemeral and capabilities become standardized, where does differentiation live?
Increasingly, it lives in the semantic layer — the meaning structure of your domain. What does "customer lifetime value" mean in your context? How do "orders," "shipments," and "returns" relate? What business rules govern state transitions?
This semantic layer becomes your moat when capability invocation is commoditized. Anyone can expose a getCustomerValue capability. But the meaning of that value — how it's calculated, what factors it includes, how it predicts behavior — that's your domain expertise encoded as computable structure.
Invest in making your semantic layer explicit, documented, and defensible. It's where your differentiation moves.
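As a toy example of "domain expertise encoded as computable structure", here is a customer-lifetime-value definition made explicit; the formula and its inputs are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class CLVDefinition:
    """CLV = average order value * orders per year * expected relationship
    years, discounted by churn risk. Encoding the definition, rather than
    burying the number in a dashboard, makes it inspectable by agents and
    auditable by humans. (Illustrative formula, not a recommendation.)"""
    expected_years: float = 3.0

    def compute(self, orders, churn_risk):
        if not orders:
            return 0.0
        avg_order = sum(orders) / len(orders)
        orders_per_year = len(orders)  # assumes one year of order history
        return avg_order * orders_per_year * self.expected_years * (1 - churn_risk)
```
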
The Opportunity in Transition
I want to close with something I find genuinely exciting about this shift.
The capability-first architecture doesn't just redistribute existing value — it creates conditions for new value that couldn't exist before.
Cross-system intelligence becomes possible. When capabilities are composable across system boundaries, AI can reason about your business holistically rather than system by system. The agent that understands your CRM, ERP, and communications platform together can surface insights none of those systems could generate alone.
Workflow creation becomes dynamic. Instead of building integrations between specific systems, users can describe workflows in natural language and have them assembled from available capabilities on demand. "When a high-value customer hasn't ordered in 30 days, alert the account manager and prepare a re-engagement offer" becomes a capability composition rather than an integration project.
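That example workflow can be sketched as a composition of three capabilities. Everything below is hypothetical and in-memory; in practice each capability would be discovered and invoked through the orchestration layer rather than hard-coded:

```python
from datetime import date, timedelta

def high_value_customers(customers):           # capability 1
    return [c for c in customers if c["ltv"] > 10_000]

def days_since_order(customer, today):         # capability 2
    return (today - customer["last_order"]).days

def alert_account_manager(customer, alerts):   # capability 3
    alerts.append(f"re-engage {customer['name']}")

# "When a high-value customer hasn't ordered in 30 days, alert the
# account manager" assembled as a composition of the three capabilities:
def reengagement_workflow(customers, today, alerts):
    for c in high_value_customers(customers):
        if days_since_order(c, today) > 30:
            alert_account_manager(c, alerts)

today = date(2026, 3, 1)
customers = [
    {"name": "Acme", "ltv": 50_000, "last_order": today - timedelta(days=45)},
    {"name": "Beta", "ltv": 2_000,  "last_order": today - timedelta(days=90)},
    {"name": "Core", "ltv": 80_000, "last_order": today - timedelta(days=5)},
]
alerts = []
reengagement_workflow(customers, today, alerts)
```

The interesting part isn't the loop; it's that an agent could assemble this composition on demand from capability definitions, instead of a team building a bespoke integration.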
New specializations emerge. The orchestration layer, the semantic layer, the rendering layer — these create new categories of tools and platforms. Some of the most valuable companies of the next decade will operate in layers that don't quite exist yet.
Personal agency increases. When users own their data and control delegation, they become genuine participants in the digital economy rather than products being monetized. The business models that emerge will necessarily be more aligned with user interests because users have architectural leverage they currently lack.
The Scaffolding Falls Away
The SaaS interface was never the point. It was a temporary solution to a temporary limitation — humans needed to operate software, and the technology for anything else didn't exist.
MCP didn't create this shift — it revealed it was possible. It opened the door. The specific protocol will evolve, mature, perhaps be superseded by something we can't yet imagine. But the architectural direction is set. Capability-first is the future.
As that future unfolds, the interface becomes what it should have been all along: a rendering choice, generated on demand, adapted to context, owned by no one.
The capability remains. The data — properly owned, properly governed — remains. Everything else was scaffolding.
If you've been building capability-based architectures, you're ahead of the curve. If you haven't, I'd encourage you to start now — not because the future demands it, but because the present does. The chaos of AI transition rewards architectural clarity. The patterns that make your code maintainable today will make it adaptable tomorrow.
And the next time someone asks what your product is — consider whether the answer should be an interface, or a capability waiting to be rendered however your users need it.
This concludes "The Great Decoupling" series. For practical implementation patterns, see my earlier post on Capability-Based Architecture.
References
Current AI Agent Capabilities: Carnegie Mellon's TheAgentCompany benchmark found the best agent tested completed only 24% of realistic office tasks autonomously. Top models hit only 82% accuracy on enterprise document processing. Source: AIMultiple - 12 Reasons AI Agents Still Aren't Ready
Agentic AI Project Failure Rates: Gartner predicts over 40% of agentic AI projects will be canceled by end of 2027 due to unclear ROI and implementation challenges. Source: Gartner Press Release
AI Agent Implementation Challenges: Analysis of early AI agent deployments and common failure modes. Source: Medium - Six Weeks After Writing About AI Agents, I'm Watching Them Fail Everywhere
Microsoft MCP Integration: Microsoft embedded MCP into Copilot Studio, Azure API Management, and VS Code. Source: Wikipedia - Model Context Protocol
OpenAI MCP Adoption: OpenAI brought full MCP support to ChatGPT in September 2025, including "Developer Mode" with read/write capabilities. Source: Nerd @ Work - OpenAI Brings Full MCP Support to ChatGPT
A2A and MCP Complementary Positioning: Google's Agent2Agent protocol addresses agent-to-agent coordination, explicitly positioned as complementary to MCP's tool integration focus. Source: Koyeb - A2A and MCP: Start of the AI Agent Protocol Wars?
Enterprise AI Transformation: Analysis of how AI agents will transform enterprise software, with the prediction that the shift will be gradual rather than sudden. Source: The New Stack - AI Agents Will Eat Enterprise Software, Just Not in One Bite
AI Agent Development Costs: Enterprise AI agent development costs range from $50,000-$500,000+ for enterprise-grade solutions, with ongoing maintenance at 20-30% of initial costs annually. Source: Symphonize - Costs of Building AI Agents
RPA Implementation Lessons: Historical context on why RPA implementations often failed, providing cautionary parallels for AI agent adoption. Source: CIO - Why RPA Implementations Fail