The Agent2Agent Protocol (A2A) is an open communication standard, developed by Google and released on April 9, 2025, that enables AI agents to discover, authenticate, and delegate tasks to other AI agents across different platforms and frameworks. Unlike MCP, which connects agents to tools, A2A connects agents to agents. The protocol is governed by the Linux Foundation and counts 150+ organizational supporters as of mid-2025, according to IBM Think.
Search interest is accelerating. According to DataForSEO Keyword Overview (April 2026), 'a2a protocol' search volume grew 22% month-over-month and 52% quarter-over-quarter, with 'agent2agent protocol' up 88% quarter-over-quarter. The reason is simple: as multi-agent systems move from research projects to production deployments, the question of how agents talk to each other is no longer theoretical; it's a blocking problem for anyone building at scale.
This article covers what A2A actually is, how its three-step flow works end to end, how it differs from MCP (and why that distinction matters), what security research says that most coverage ignores, and why the agent protocol landscape is consolidating faster than most developers realize.
TL;DR: Google's A2A Protocol (launched April 2025, now Linux Foundation-governed) is the open standard for AI agent-to-agent communication. It works over HTTP/JSON-RPC 2.0 with Agent Card-based discovery and five official SDKs. With 150+ organizational supporters and IBM's competing ACP merged in, A2A is the emerging de facto enterprise standard for multi-agent interoperability, per IBM Think.
What is the Agent2Agent Protocol?
The Agent2Agent Protocol (A2A) is an open protocol for AI agent communication, developed by Google and released on April 9, 2025, initially with 50+ founding technology and services partners, per the Google Developers Blog announcement. After Google donated A2A to the Linux Foundation in June 2025, the supporter count grew to 150+ organizations, according to IBM Think. A2A is the infrastructure layer that lets independently built AI agents discover each other, authenticate, and exchange work without requiring custom integration code on either side.
As the Google Developers Blog stated at launch: "The A2A protocol will allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise infrastructure." In practice: if you build an HR agent with LangChain and another team builds a scheduling agent with CrewAI, A2A is the shared language that lets them collaborate without either team writing bespoke glue code.
A2A runs on deliberately boring infrastructure: HTTP, JSON-RPC 2.0, and Server-Sent Events (SSE), per the official a2aproject/A2A GitHub repository (Apache 2.0 license). Agents are exposed as standard web services, so enterprise adoption requires no new tooling. Authentication follows industry standards: OAuth 2.0, OpenID Connect (OIDC), and API keys for simpler deployments. The protocol officially supports five language SDKs: Python, Go, JavaScript, Java, and .NET, per the GitHub repository.
One design choice that often gets overlooked: A2A implements what it calls the "opaque agent" model. Remote agents never expose their internal tools, memory, or proprietary reasoning to a calling agent; they expose only their declared capabilities and outputs, per the AgentsIndex A2A protocol listing. For large enterprises, this is the IP protection guarantee that makes A2A practical. You can participate in a multi-agent workflow with an external organization without exposing your competitive logic. This single design decision is likely why enterprise adoption has moved as quickly as it has.
Developers report reaching a functional first agent-to-agent message exchange within 1–2 hours of starting implementation, according to the AgentsIndex A2A listing. For an enterprise communication protocol, that's an unusually low barrier. The design choices explain it: standard HTTP transport, JSON payloads, no new infrastructure requirements. You're essentially adding a well-known endpoint to an existing service and implementing the task-handling logic.
A2A Protocol in Action: Video Overview
https://www.youtube.com/watch?v=Fbr_Solax1w
How do AI agents communicate with each other?
AI agents communicate through standardized protocols like A2A. Using A2A, one agent publishes a JSON Agent Card declaring its capabilities. A second agent reads this card, authenticates, and sends a Task object. The first agent executes the task and returns structured Artifacts as output. Communication can be synchronous, streaming via SSE, or asynchronous via webhooks depending on task complexity and duration.
The full A2A interaction follows a three-phase lifecycle:
Phase 1: Discovery via Agent Cards
Each A2A-compatible agent publishes a JSON-formatted Agent Card at a standard endpoint: /.well-known/agent.json, per the Google Developers Blog. This card advertises the agent's capabilities, supported modalities (text, audio, and video files), authentication requirements, and endpoint URLs. A client agent reads this card to determine whether the remote agent can handle a given task. Think of it as a machine-readable job listing that any other agent can read before deciding to hire the specialist. No centralized registry, no vendor-specific discovery service, just a standard HTTP path.
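As a sketch, an Agent Card for a hypothetical scheduling agent could be built and serialized like this in Python. The field names (`capabilities`, `modalities`, `authentication`, `endpoint`) mirror the sections described above but are illustrative; consult the official A2A specification for the exact schema:

```python
import json

# Hypothetical Agent Card for a scheduling agent. Field names mirror the
# sections described in the text; the official A2A schema may differ.
agent_card = {
    "name": "scheduling-agent",
    "description": "Books and reschedules meetings across calendars",
    "capabilities": ["schedule_meeting", "find_availability"],
    "modalities": {"input": ["text"], "output": ["text"]},
    "authentication": {"scheme": "oauth2"},
    "endpoint": "https://agents.example.com/a2a",
}

# This JSON document is what gets served at /.well-known/agent.json
card_json = json.dumps(agent_card, indent=2)
print(card_json)
```

The point of the exercise: the entire discovery surface is one static JSON document, which is why no registry or discovery service is needed.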
Phase 2: Authentication
Before any task delegation happens, the client agent authenticates to the remote agent using the scheme declared in the Agent Card: OAuth 2.0, OIDC, or API key. The opaque agent model kicks in here: the remote agent accepts the authenticated request and handles the task, but never reveals the internal tools or reasoning it used to produce its output. From the calling agent's perspective, the specialist is a black box that takes tasks and returns results. This is intentional: it's how you build collaborative multi-agent systems without creating intellectual property exposure.
Phase 3: Task execution and artifact exchange
Once authenticated, the client sends a Task object to the remote agent. A2A defines five task lifecycle states: submitted, working, input-required, completed, and failed, per IBM Think's A2A explainer. These states cover both fast synchronous operations and long-running asynchronous workflows. When a task requires human input or additional context mid-execution, the input-required state pauses the workflow cleanly. The remote agent returns structured Artifacts (the task outputs), which the calling agent uses or passes to the next step in the pipeline.
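The five states can be modeled as a simple enum. The transition map below is an illustrative assumption about how a task might move between states, not the official state machine from the specification:

```python
from enum import Enum

class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"

# Illustrative transition map (an assumption, not the official spec):
# a task moves from submitted to working, may pause for input, and
# terminates in completed or failed.
ALLOWED = {
    TaskState.SUBMITTED: {TaskState.WORKING},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED, TaskState.FAILED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.FAILED},
    TaskState.COMPLETED: set(),   # terminal
    TaskState.FAILED: set(),      # terminal
}

def can_transition(current: TaskState, nxt: TaskState) -> bool:
    """Check whether a state change is allowed under the sketch above."""
    return nxt in ALLOWED[current]
```

Modeling the terminal states explicitly is what lets a client distinguish "still waiting" from "done" without polling the remote agent's internals.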
Communication patterns are flexible by design. Synchronous request/response handles tasks that complete quickly. Streaming via Server-Sent Events handles tasks where incremental results are useful, like a research agent that streams findings as it discovers them. Asynchronous push notifications via webhooks handle long-running operations where the client shouldn't be blocked waiting. A single A2A implementation can use all three patterns depending on the task type.
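The streaming pattern is easy to picture as plain Server-Sent Events frames. This Python sketch (function names hypothetical) shows a generator yielding incremental findings as `data:` frames followed by a completion event, the shape a research agent could stream over an SSE response:

```python
import json

def sse_frame(event_data: dict) -> str:
    """Format one Server-Sent Events frame: a 'data:' line plus a blank line."""
    return f"data: {json.dumps(event_data)}\n\n"

def stream_findings(findings):
    """Yield incremental results as SSE frames, then a completion marker.
    This is the pattern a research agent could use to push partial
    artifacts before the task reaches the completed state."""
    for i, finding in enumerate(findings):
        yield sse_frame({"seq": i, "finding": finding})
    yield sse_frame({"status": "completed"})
```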
What is the difference between A2A and MCP?
MCP (Anthropic's Model Context Protocol) connects AI agents to external tools, databases, APIs, and file systems. A2A (Google's Agent2Agent Protocol) connects AI agents to other AI agents for task delegation. They are complementary layers in a multi-agent architecture: an agent uses A2A to delegate work to a specialist, which then uses MCP to call the tools it needs. Neither protocol replaces the other; production multi-agent systems will use both.
This distinction is frequently confused, and that confusion has consequences. As of April 2026, ChatGPT provides factually incorrect answers about both protocols, according to DataForSEO ChatGPT Scraper queries, conflating A2A with generic multi-agent coordination concepts and misidentifying MCP as "Multi-Agent Coordination Protocol" rather than Anthropic's Model Context Protocol. The accurate picture:
| Dimension | MCP (Anthropic) | A2A (Google / Linux Foundation) |
|---|---|---|
| Purpose | Connect agents to tools and data sources | Connect agents to other agents |
| What it connects | Agent ↔ Tool (APIs, databases, file systems) | Agent ↔ Agent (peer delegation) |
| Direction | Vertical (agent calls down to resources) | Horizontal (agents coordinate as peers) |
| Transport | JSON-RPC over stdio or HTTP SSE | HTTP/JSON-RPC 2.0 with SSE and webhooks |
| Initiated by | Agent requesting tool access | Orchestrator delegating to specialist |
| Best for | Giving agents access to capabilities (search, code execution, data retrieval) | Building pipelines where specialists handle subtasks autonomously |
| Governance | Anthropic (open standard) | Linux Foundation (open standard) |
The clearest mental model: MCP is the vertical layer (agent ↔ tool), A2A is the horizontal layer (agent ↔ agent). In a real multi-agent workflow, an orchestrator agent uses A2A to delegate "find me a qualified candidate" to a recruiting agent. That recruiting agent then uses MCP to query a LinkedIn API and a resume database. The orchestrator never knows which tools the recruiting agent used; A2A maintains the abstraction. The recruiter never exposes its internal tooling; the opaque agent model protects it.
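The opaque-agent boundary can be sketched in a few lines of Python. The `RecruitingAgent` below is hypothetical, and its tool call is stubbed rather than a real MCP invocation; the point is that only the structured artifact crosses the A2A boundary, never the tooling:

```python
class RecruitingAgent:
    """Specialist agent sketch. Its tool calls are internal details the
    calling agent never sees; only the artifact crosses the A2A boundary."""

    def _query_resume_db(self, role: str) -> list[str]:
        # In a real system this might be an MCP tool call; stubbed here.
        return ["candidate-001", "candidate-002"]

    def handle_task(self, task: dict) -> dict:
        candidates = self._query_resume_db(task["role"])
        # Return only the structured artifact, not how it was produced.
        return {"state": "completed", "artifacts": [{"candidates": candidates}]}

# What the orchestrator sees: a task in, an artifact out, nothing else.
orchestrator_view = RecruitingAgent().handle_task({"role": "data engineer"})
```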
Andrew Ng, co-founder of DeepLearning.AI, stated when announcing the official A2A course in February 2026: "Connecting agents built with different frameworks usually requires extensive custom integration. A2A, the open protocol standardizing how agents discover each other and communicate, has emerged as the industry standard after IBM's ACP joined forces with A2A."
Ivan Nardini, a Google engineer and co-instructor of the DeepLearning.AI A2A course, framed the underlying problem directly: "Building agents is the easy part. Getting them to talk to each other across different organizational boundaries and frameworks is another game entirely." A2A addresses exactly that problem. MCP addresses a different but equally real one. Developers building production multi-agent systems need both, and need to understand which layer each protocol operates at.
Who supports the A2A Protocol?
A2A launched on April 9, 2025 with 50+ founding partners including Atlassian, Salesforce, SAP, ServiceNow, PayPal, Workday, and all the major management consultancies (Accenture, BCG, Deloitte, McKinsey, and PwC), per the Google Developers Blog announcement. After Google donated the protocol to the Linux Foundation in June 2025, support grew from 50+ to 150+ organizations, according to IBM Think. IBM's independently developed Agent Communication Protocol (ACP) merged with A2A in 2025, with DeepLearning.AI and IBM Research co-building an official A2A course with Google Cloud Tech in February 2026, per the DeepLearning.AI course announcement.
The ACP-A2A merger deserves more attention than it's received. IBM didn't quietly sunset ACP; it actively merged it into A2A, which is a different signal entirely. It's the protocol equivalent of competing network standards consolidating around TCP/IP in the 1990s. Enterprise networks once ran fragmented protocols (DECnet, NetWare IPX/SPX, and others) before TCP/IP became the universal standard. The AI agent protocol landscape is going through the same consolidation now, and A2A is positioned as the TCP/IP equivalent for agent-to-agent communication. IBM's decision to fold ACP into A2A rather than compete removes the most credible enterprise alternative.
The Linux Foundation governance model matters here too. It's the same structure that legitimized Kubernetes (from Google internal tool to the industry standard for container orchestration) and OpenTelemetry. Linux Foundation backing signals that A2A is infrastructure, not a product, meant to be depended on by everyone rather than controlled by anyone. That's a meaningful trust signal for enterprises evaluating whether to build long-term on top of it.
The supporter list spans the full enterprise software stack:
| Category | Key supporters |
|---|---|
| Enterprise software | Atlassian, Box, Intuit, MongoDB, PayPal, Salesforce, SAP, ServiceNow, UKG, Workday |
| AI / ML platforms | Cohere, LangChain, Google Cloud, IBM Research |
| Management consulting | Accenture, BCG, Capgemini, Cognizant, Deloitte, KPMG, McKinsey, PwC, TCS, Wipro |
| Protocol governance | Linux Foundation (since June 2025) |
When every major consultancy signs on from day one, it signals to their enterprise clients that A2A is going on the roadmap for every agentic AI implementation they deliver. That's an adoption driver that doesn't show up in the official supporter count but matters enormously for how quickly A2A becomes the assumed standard in enterprise AI deployments.
How does A2A agent discovery work?
A2A agents advertise their capabilities by publishing a JSON Agent Card at a standard URL: /.well-known/agent.json, per the Google Developers Blog. The card declares what the agent can do, which communication modalities it supports (text, audio, video), authentication requirements, and its endpoint address. Any A2A client can read this card to decide whether to delegate a task: no centralized directory service, no vendor-specific registry, no configuration management overhead.
The Agent Card has four key sections. Capabilities define the task types the agent handles. Modalities specify the input and output formats it supports (text, audio files, video files). Authentication declares the required scheme (OAuth 2.0, OIDC, or API key). Endpoint provides the URL where tasks should be sent. A client agent reads the card, checks for a capability match, and proceeds to authenticate if there's alignment. The whole negotiation happens over standard HTTP, intentionally designed to feel like calling a web API, not configuring a distributed system.
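The client side of that negotiation reduces to a dictionary lookup. Assuming the illustrative field names described above (the official schema may differ), a delegation decision might look like:

```python
def can_delegate(agent_card: dict, needed_capability: str, input_modality: str) -> bool:
    """Client-side check: does the remote agent's card advertise the
    capability and input modality we need? Field names are illustrative."""
    return (
        needed_capability in agent_card.get("capabilities", [])
        and input_modality in agent_card.get("modalities", {}).get("input", [])
    )

# A card as a client might see it after fetching /.well-known/agent.json
card = {
    "capabilities": ["schedule_meeting"],
    "modalities": {"input": ["text"], "output": ["text"]},
    "authentication": {"scheme": "oauth2"},
    "endpoint": "https://agents.example.com/a2a",
}
```

If the check passes, the client proceeds to the authentication scheme declared in the card; if not, it moves on to the next candidate agent.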
Discovery at scale gets more interesting. Google's Agent Development Kit (ADK) treats A2A as its default inter-agent communication protocol, which means A2A adoption is a natural consequence of ADK adoption: developers building on ADK are using A2A whether or not they think of it in protocol terms. The Google Cloud AI Agent Marketplace is positioned to be where A2A-compatible agents are discovered and deployed at scale, turning the Agent Card mechanism from a point-to-point protocol into a marketplace-compatible standard.
For developers evaluating the implementation effort: A2A officially supports five language SDKs (Python, Go, JavaScript, Java, and .NET), per the a2aproject/A2A GitHub repository. That covers virtually every enterprise development environment. The reported 1–2 hours to first working message exchange (AgentsIndex A2A listing) comes down to the same design choices: you're not installing new infrastructure, you're adding a /.well-known/agent.json endpoint and implementing task-handling logic in a language that already has an official SDK. Most enterprise inter-service protocols take days to implement end to end. A2A's developers made adoption velocity a first-class design goal.
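To make that low barrier concrete, here is a minimal sketch using only the Python standard library: serve a card at `/.well-known/agent.json` and fetch it back, with no SDK or new infrastructure involved. The card contents and port handling are hypothetical; a real deployment would use the official SDK's server primitives:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical minimal card; a real one would declare full capabilities.
AGENT_CARD = {"name": "demo-agent", "capabilities": ["echo"]}

class CardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/agent.json":
            body = json.dumps(AGENT_CARD).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), CardHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Discovery from the client side is a plain HTTP GET.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/.well-known/agent.json") as resp:
    card = json.loads(resp.read())
server.shutdown()
```

The entire "infrastructure" is an HTTP endpoint, which is the design point the reported 1–2 hour implementation figure rests on.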
A community developer on r/LocalLLaMA described A2A's architectural advantage this way: the protocol is "more universally applicable, since it is a layer isolated from the deployment and intraservice communication components." That isolation is the key. A2A doesn't care what cloud you're running on, what container orchestration you're using, or what LLM is powering your agents; it sits above all of that and provides a clean interface for agent-to-agent coordination.
What are the most common mistakes in A2A security implementations?
Baseline A2A agents suffered 60% to 100% data leakage rates under prompt injection attacks in empirical testing, according to arXiv:2505.12490 (published August 2025). The same research identified eight specific security weaknesses in the v1 protocol, including token lifetime issues (no enforcement of short expiration for sensitive operations), coarse-grained OAuth scopes that violate the principle of least privilege, and missing user consent mechanisms. For any A2A deployment handling sensitive data (payments, personal information, identity verification), these aren't theoretical concerns.
The research is significant for another reason: it's essentially absent from every other editorial source covering A2A. Google's own blog doesn't cover it. IBM's explainer doesn't cover it. None of the Medium posts cover it. The arXiv paper exists in the academic record but hasn't reached developer awareness. If you're evaluating A2A for a production deployment involving sensitive data flows, arXiv:2505.12490 should be required reading before you start implementation.
The fix is documented and testable. An enhanced A2A protocol design with three specific modifications (ephemeral tokens with 30-second to 5-minute validity windows, granular per-operation OAuth scopes, and explicit consent orchestration) achieved zero data leakage across 45 prompt injection test attempts, per the same arXiv:2505.12490 research. The paper proposes adding a USER_CONSENT_REQUIRED task state to A2A's five existing lifecycle states, pausing task execution for explicit approval before sensitive operations proceed.
What this means practically: the baseline A2A protocol gives you the interoperability infrastructure. It does not give you production-grade security out of the box for sensitive workloads. The distinction between deployments is clear:
| Use case | Baseline A2A sufficient? | Enhanced security needed? |
|---|---|---|
| Internal tool orchestration (no PII) | Yes | Optional |
| Cross-team agent coordination | Likely yes | Recommended |
| External agent collaboration with PII | No | Required |
| Payment processing or financial data | No | Required |
| Identity verification workflows | No | Required |
If you're building A2A systems that handle any of the high-sensitivity categories, implement ephemeral tokens (not long-lived OAuth tokens), declare granular scopes per operation rather than broad API access, and build explicit consent checkpoints for operations involving sensitive data. The enhanced patterns aren't complex to implement; they're just not in the default protocol specification, which is why the vulnerability research matters.
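Those three patterns are straightforward to sketch in Python. The helpers below use hypothetical names and an in-memory token store; they show ephemeral scoped tokens plus a consent checkpoint modeled loosely on the paper's proposed USER_CONSENT_REQUIRED pause:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 30  # short validity window, per the ephemeral-token pattern

# In-memory store for the sketch: token -> (scope, expiry timestamp)
_tokens: dict[str, tuple[str, float]] = {}

def issue_token(scope: str) -> str:
    """Issue a token bound to ONE granular scope, with a short lifetime."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = (scope, time.monotonic() + TOKEN_TTL_SECONDS)
    return token

def authorize(token: str, required_scope: str) -> bool:
    """Reject expired tokens and tokens whose scope doesn't match the operation."""
    entry = _tokens.get(token)
    if entry is None:
        return False
    scope, expiry = entry
    return scope == required_scope and time.monotonic() < expiry

def run_sensitive_task(token: str, consent_given: bool) -> str:
    """Consent checkpoint before a scoped, sensitive operation proceeds."""
    if not consent_given:
        return "input-required"  # pause for explicit approval
    if not authorize(token, "payments:refund"):
        return "failed"
    return "completed"
```

Note the scope string names one operation, not a broad API surface; that granularity is what enforces least privilege when a prompt-injected agent tries to reuse a token elsewhere.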
How does A2A fit into the broader agent protocol landscape?
A2A doesn't exist in isolation. The AI agent protocol landscape includes several complementary and competing standards, and understanding where A2A sits clarifies when you'd reach for it versus alternatives. The short version: A2A handles agent-to-agent communication; the Model Context Protocol (MCP) handles agent-to-tool connections; AGENTS.md provides behavioral instructions for agents working in specific codebases; and the FIPA Agent Communication Language is a 1990s predecessor that pioneered many concepts A2A now implements at modern web scale.
The Agent Network Protocol (ANP), also indexed on AgentsIndex, takes a more decentralized, identity-first approach to agent networking. Where A2A is designed around organizational trust boundaries and enterprise deployment patterns, ANP targets more open, peer-to-peer agent networks. They solve related but distinct problems; it's plausible that some production deployments will use both, depending on whether agents operate within or across organizational trust perimeters. This is the emerging question in agent protocol architecture: not which protocol wins, but which protocols serve which tiers of the agent communication stack.
The broader consolidation story is worth stating plainly. Just as the web standardized on HTTP for human-facing communication, the AI agent ecosystem is standardizing on open protocols for machine-to-machine agent coordination. A2A is the leading candidate for the enterprise tier of that stack, backed by Linux Foundation governance and a supporter list spanning every major enterprise software vendor. The pattern (IBM's ACP merging in, 150+ organizations signing on, Andrew Ng's DeepLearning.AI building the canonical course, the Linux Foundation providing governance) follows the same playbook as every successful infrastructure standardization cycle. It's Kubernetes for agent communication.
For developers building in this space, the protocols are additive, not competitive. A production multi-agent system will likely use several in different layers of the same architecture. For the most comprehensive view of where each protocol sits, AgentsIndex maintains direct listings for A2A, MCP, AGENTS.md, FIPA Agent Communication Language, ANP, and others in the Standards & Protocols category, each with technical summaries based on public documentation rather than vendor claims.
Frequently Asked Questions
What is Agent2Agent protocol?
The Agent2Agent Protocol (A2A) is an open communication standard developed by Google (launched April 2025) that enables AI agents to discover each other's capabilities via JSON Agent Cards, authenticate, and delegate tasks to one another across different frameworks and platforms. It operates over HTTP/JSON-RPC 2.0 and is governed by the Linux Foundation with 150+ organizational supporters as of mid-2025.
What is the difference between A2A and MCP?
MCP (Anthropic's Model Context Protocol) connects AI agents to external tools, databases, APIs, and file systems. A2A (Google's Agent2Agent Protocol) connects AI agents to other AI agents for task delegation. They are complementary: an agent uses A2A to delegate to a specialist, which then uses MCP to call the tools it needs. Both protocols are necessary in a production multi-agent architecture; neither replaces the other.
How do AI agents communicate with each other?
Using protocols like A2A, AI agents communicate by first discovering each other through published Agent Cards (JSON files at /.well-known/agent.json advertising capabilities). The requesting agent authenticates, sends a Task object, and receives structured Artifacts in return. Communication can be synchronous, streaming via Server-Sent Events, or asynchronous via webhooks depending on task complexity and duration.
Who supports the A2A protocol?
A2A launched with 50+ founding partners including Atlassian, Salesforce, SAP, ServiceNow, PayPal, Workday, and major consultancies (Accenture, Deloitte, McKinsey, PwC). After Google donated it to the Linux Foundation in June 2025, support grew to 150+ organizations, per IBM Think. IBM's ACP merged with A2A, making it the emerging cross-industry standard for multi-agent interoperability.
How does A2A agent discovery work?
A2A agents advertise capabilities by publishing a JSON Agent Card at /.well-known/agent.json. The card declares supported task types, communication modalities (text, audio, video), authentication requirements, and the endpoint address. Any A2A client reads this card to decide whether to delegate a task, no centralized registry required. Developers report reaching first working message exchange within 1–2 hours of implementation, per the AgentsIndex A2A listing.
What key takeaways should developers have about A2A?
A2A is real, it's here, and it's moving fast. Search volume for 'a2a protocol' grew 52% quarter-over-quarter as of March 2026, per DataForSEO, which tells you where developer interest is heading. The protocol has the governance structure (Linux Foundation), ecosystem breadth (150+ organizations), competitive consolidation (IBM's ACP merged in), and implementation accessibility (five SDKs, HTTP transport, 1–2 hour first implementation) to become the default infrastructure layer for multi-agent systems.
The practical checklist for developers evaluating A2A:
- Use A2A for agent-to-agent task delegation and pair it with MCP for agent-to-tool connections; they solve different layers of the same architecture
- Implement Agent Cards at /.well-known/agent.json with accurate capability declarations before anything else
- For sensitive data flows (payments, PII, identity), apply the enhanced security patterns from arXiv:2505.12490: ephemeral tokens, granular scopes, explicit consent steps
- Check the five official SDKs (Python, Go, JavaScript, Java, .NET) before building custom integrations; the baseline is already there
- If you're using Google ADK, A2A is already the default inter-agent protocol; you're using it whether or not you've named it
For context on the broader multi-agent architecture patterns that A2A enables, the AgentsIndex coverage of multi-agent systems explains the orchestrator-specialist architecture that A2A is designed to support. If you're choosing an agent framework to build on top of A2A, the guide on how to choose an AI agent framework covers the current framework landscape with clear decision criteria. The types of AI agents overview is useful background if you're new to the agent taxonomy that A2A's specialization model assumes.
The agent protocol ecosystem is consolidating. A2A is the leading candidate for the enterprise communication standard. Now is the right time to understand it, not when it's already assumed knowledge.