
Nathaniel Hamlett

Posted on • Originally published at nathanhamlett.com

The Agent Protocol Wars Are Over. Here's What the Dust Settled On.


For most of 2024 and early 2025, the AI agent space looked like a framework war. New orchestration tools dropped weekly. Teams argued about LangChain vs. raw function calls. Everyone had opinions about multi-agent architectures and almost nobody was running them in production.

That phase is over. By early 2026, the industry quietly agreed on a lot — and if you haven't caught up, you're building on shaky ground.

Here's what actually settled.


The Protocol Layer Stopped Being Contentious

The most significant convergence nobody is talking about: MCP and A2A are not competing. They're complementary. And both are now under the same foundation.

MCP (Model Context Protocol) — originally from Anthropic — is now at the Linux Foundation's Agentic AI Foundation (AAIF). Current numbers: 97 million monthly SDK downloads, 10,000+ production servers, and adoption by Google, OpenAI, Microsoft, and Amazon. What started as one company's tool spec became the de facto standard for how agents connect to tools and external systems.

A2A (Agent-to-Agent Protocol) — Google's contribution — landed at the same foundation with 150+ supporting organizations behind it. Accenture, BCG, Deloitte, and Capgemini are actively building A2A-native delivery practices. That's not hype; that's the consulting industry telling you where enterprise money is going.

The distinction matters:

  • MCP = how an agent talks to tools (APIs, file systems, databases, external services)
  • A2A = how agents talk to each other (delegation, coordination, handoffs between systems)

One handles vertical integration (agent → tools). The other handles horizontal integration (agent → agent). You need both in a real system.
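To make the horizontal side concrete, here's a sketch of the discovery metadata an A2A agent publishes so peers know how to reach it and what it can do. Field names loosely follow the public A2A "agent card" shape, but treat this as a simplified illustration, not the authoritative schema; the agent name, URL, and skill are invented for the example.

```python
# Illustrative sketch of an A2A-style "agent card": the discovery
# document an agent exposes so other agents can find it, see its
# skills, and delegate work to it. Simplified; not the exact spec schema.
agent_card = {
    "name": "invoice-processor",
    "description": "Extracts and validates line items from invoices",
    "url": "https://agents.example.com/invoice-processor",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "extract-line-items",
            "description": "Parse an invoice into structured line items",
        }
    ],
}

# A peer agent fetches this card, picks a skill, and hands a task to the
# advertised URL -- that handoff is the "horizontal" integration A2A covers.
```

MCP calls, by contrast, stay inside one agent's boundary: the agent invoking its own tools rather than delegating to a peer.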

There's a new addition worth watching: AP2 (Agent Payments Protocol), also from Google Cloud — designed for financial transactions between agents. Autonomous systems that pay each other are still an early idea, but the protocol groundwork is being laid now.


What the Production Stack Actually Looks Like

The frameworks shook out into distinct lanes, and they're no longer really competing with each other:

| Framework | Won at | Best for |
| --- | --- | --- |
| LangGraph | Complex stateful orchestration with audit trails | Enterprise / compliance-heavy use cases |
| CrewAI | Speed to production — reportedly 40% faster time-to-deploy | Startups shipping fast |
| Microsoft Agent Framework (ex-AutoGen) | Event-driven, Azure-native (GA'd Q1 2026) | Enterprise Microsoft shops |
| OpenAI Agents SDK | Low latency + GPT-native integration | Teams already deep in OpenAI |

The underlying shift in all of them: graph-based execution beat linear chains. Early agent frameworks (and frankly most of the tutorial content you'll find) treated agents as sequential pipelines. Production systems don't work that way. Tasks branch, fail, retry, delegate — you need a graph, not a chain.

The other shift that's now table stakes: heterogeneous model routing. Running every task through your most capable (and expensive) model is how you burn budget without improving outcomes. Analysis across production deployments consistently shows 80-90% cost reduction when you route by task type — fast/cheap models for extraction and formatting, capable models for synthesis and strategy.
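The routing idea is simple enough to sketch in a few lines. Everything here is a placeholder: the model tier names and task types are invented for illustration, and a production router would also consider context length, latency budgets, and fallbacks.

```python
# Minimal sketch of heterogeneous model routing: choose the model tier
# by task type instead of sending every call to the most capable model.
# Tier names and task types are illustrative placeholders.

ROUTES = {
    "extraction": "small-fast-model",     # cheap, low-latency work
    "formatting": "small-fast-model",
    "synthesis":  "large-capable-model",  # expensive, reserved for hard tasks
    "strategy":   "large-capable-model",
}

def route(task_type: str) -> str:
    """Return the model tier for a task, defaulting to the capable tier."""
    return ROUTES.get(task_type, "large-capable-model")
```

Defaulting unknown task types to the capable tier trades cost for safety; the cost savings come from the high-volume extraction and formatting calls that never touch the expensive model.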


The People Side Is Moving Fast

The job market validated all of this. Glassdoor lists 1,100+ "agentic AI" roles in SF alone, ZipRecruiter shows 700+ "AI Agent Engineer" postings at $43–$91/hr, and year-over-year job postings in this category nearly doubled.

Salary bands for reference:

  • Mid-level (3–5 yrs agentic experience): $150K–$220K
  • Senior: $200K–$312K+
  • The specialist premium over generalist AI roles: 30–50%

The skills in demand aren't exotic: LLM fine-tuning, RAG pipelines, agentic system design, and fluency with MCP/A2A. The bottleneck is people who've actually operated these systems at scale — not people who read about them.


Why This Matters If You're Building Now

The protocol convergence has a practical implication: stop building proprietary integration layers. If you're writing custom tool-calling scaffolding or rolling your own agent communication protocol, you're going to end up maintaining a compatibility nightmare once the ecosystem expects MCP and A2A.

The good news: MCP server implementations are not complicated. The protocol is well-documented, there are SDKs in multiple languages, and the community tooling is mature. The activation energy to get compliant is low; the long-term payoff of interoperability is high.
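As a sense of how small the surface area is: MCP runs over JSON-RPC 2.0, and a client invoking a server-side tool sends a `tools/call` request shaped roughly like this. The sketch follows the public spec, but field details can vary across protocol revisions, and the `read_file` tool and its arguments are invented for the example.

```python
import json

# MCP is JSON-RPC 2.0 under the hood. A client calling a tool on an
# MCP server sends a "tools/call" request roughly like this; the tool
# name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",  # a tool the server advertised via tools/list
        "arguments": {"path": "/tmp/report.txt"},
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```

In practice the official SDKs generate and parse these messages for you; seeing the wire format mostly matters for debugging and for appreciating how thin the protocol layer is.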

The graph-based execution note applies here too. If your agent system is a series of prompt-chained calls, it's going to be harder to debug, harder to parallelize, and harder to add recovery logic when things break. Graph orchestration frameworks give you inspection points, branching, and retry semantics at the cost of some additional setup. Worth it once you're past the prototype stage.
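To show what graphs buy you over chains, here's a toy executor written from scratch: each node declares its successor via an edge function, the runner gets a natural inspection point after every node, and retries are per-node rather than whole-pipeline. This is a sketch of the pattern, not the API of LangGraph or any specific framework.

```python
# Toy graph executor: nodes transform state, edges pick the next node
# from the state (branching), and each node gets its own retry budget.
# From-scratch illustration, not any framework's real API.

def run_graph(nodes, edges, start, state, max_retries=2):
    """nodes: name -> fn(state) -> state; edges: name -> fn(state) -> next name or None."""
    current = start
    while current is not None:
        for attempt in range(max_retries + 1):
            try:
                state = nodes[current](state)  # inspection point: log state here
                break
            except Exception:
                if attempt == max_retries:
                    raise  # per-node failure, not a whole-pipeline restart
        current = edges[current](state)        # branch based on current state
    return state

# Usage: extract, then branch to a review node only when confidence is low.
nodes = {
    "extract": lambda s: {**s, "confidence": 0.4},
    "review":  lambda s: {**s, "reviewed": True},
}
edges = {
    "extract": lambda s: "review" if s["confidence"] < 0.8 else None,
    "review":  lambda s: None,
}
result = run_graph(nodes, edges, "extract", {})
```

A linear chain can't express the conditional hop to `review` without contorting the pipeline; in a graph it's just another edge.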


The Consolidation Isn't Complete

A few things are still in flux:

Memory and state — Persistent memory across agent sessions is still solved differently by every framework. This will likely converge more over the next 12 months.

Observability — LangSmith, Langfuse, and a few others are competing here. The tooling is good but not standardized.

Agent identity and auth — When agents are acting on behalf of users (booking, purchasing, publishing), the auth and identity model is still being worked out. AP2 is an early stab at this for payments specifically.


The protocol wars produced clear winners. The framework wars produced clear specializations. What's left is the execution problem: most teams still haven't figured out how to run these systems reliably in production.

That gap is where the actual work is.


I build and operate autonomous agent systems. If you're working on agentic infrastructure and want to compare notes, I'm at nathanhamlett.com.
