Google's Agent2Agent Protocol: The Real Reason This Matters Is the Authentication Layer
A2A's most significant design choice isn't the message format — it's mandating OAuth 2.0 and OpenID Connect for every agent interaction, which solves the "who authorized this agent to act on my behalf" problem that's been quietly plaguing every enterprise AI deployment.
The Part Everyone Is Glossing Over
Every article about A2A leads with "Google created a standard for AI agents to talk to each other" and then lists the 50+ partner companies. That's the press release. Here's what actually matters:
A2A requires cryptographic proof of both the agent's identity AND the user's delegated authority for every single interaction. This isn't optional. It's baked into the protocol at layer one.
Why does this matter? Because right now, most "agent" deployments are just LLMs with API keys stuffed into environment variables. When Agent A calls Agent B, there's no standardized way to answer: "Who is the human that authorized this chain of actions, and what are they actually allowed to do?"
A2A answers this with AgentCard — a JSON metadata document that every A2A-compliant agent must host at /.well-known/agent.json. It declares the agent's capabilities, supported authentication schemes, and crucially, the OAuth scopes it requires. This is the handshake that happens before any task execution.
How It Actually Works
The protocol defines three core primitives:
1. Agent Discovery via AgentCard
```json
{
  "name": "expense-processor",
  "description": "Processes expense reports and integrates with SAP",
  "url": "https://agents.acme.com/expense",
  "authentication": {
    "schemes": ["oauth2"],
    "oauth2": {
      "authorizationUrl": "https://auth.acme.com/authorize",
      "tokenUrl": "https://auth.acme.com/token",
      "scopes": ["expenses:read", "expenses:approve"]
    }
  },
  "capabilities": {
    "streaming": true,
    "pushNotifications": true
  }
}
```
When your orchestrator agent needs to process expenses, it first fetches this card, determines if it can satisfy the auth requirements, and only then initiates a task.
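That pre-flight check is simple enough to sketch. Here's an illustrative version (my own helper, not the official SDK): given a parsed AgentCard, decide whether our orchestrator supports one of the declared auth schemes and holds every OAuth scope the agent requires.

```python
# Illustrative sketch, not the official A2A SDK: decide whether we can
# call an agent based on its AgentCard's declared auth requirements.

def can_satisfy(card: dict, our_schemes: set, our_scopes: set) -> bool:
    """True if we share an auth scheme with the agent and hold
    every OAuth scope its card asks for."""
    auth = card.get("authentication", {})
    schemes = set(auth.get("schemes", []))
    if not schemes & our_schemes:
        return False  # no common authentication scheme
    needed = set(auth.get("oauth2", {}).get("scopes", []))
    return needed <= our_scopes  # must hold every required scope

card = {
    "name": "expense-processor",
    "url": "https://agents.acme.com/expense",
    "authentication": {
        "schemes": ["oauth2"],
        "oauth2": {"scopes": ["expenses:read", "expenses:approve"]},
    },
}

print(can_satisfy(card, {"oauth2"}, {"expenses:read", "expenses:approve"}))  # True
print(can_satisfy(card, {"oauth2"}, {"expenses:read"}))  # False: missing approve scope
```

The useful property: the failure happens before any task is submitted, so a scope mismatch is a cheap discovery-time error rather than a half-executed workflow.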
2. Task Lifecycle Management
Tasks have explicit states: submitted, working, input-required, completed, failed, canceled. The input-required state is particularly interesting — it's how an agent signals "I need more information from the user before I can continue" without breaking the async flow.
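The state names above come from the spec; the transition table below is my own reading of which moves are legal, sketched as a tiny state machine to show how `input-required` pauses and resumes a task without terminating it.

```python
# Sketch of the A2A task lifecycle (state names from the spec; the
# transition table is my interpretation, not normative).

LEGAL = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},  # resumes once the user answers
    "completed": set(),  # terminal
    "failed": set(),     # terminal
    "canceled": set(),   # terminal
}

def transition(state: str, new: str) -> str:
    if new not in LEGAL[state]:
        raise ValueError(f"illegal transition: {state} -> {new}")
    return new

# A task that pauses for user input and then finishes:
s = "submitted"
for step in ("working", "input-required", "working", "completed"):
    s = transition(s, step)
print(s)  # completed
```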
3. Message Parts with MIME Types
Agents don't just pass text. A2A messages contain typed Part objects — text, files, or structured data — each with explicit MIME types. This means an agent can return a PDF, a JSON payload, and a human-readable summary in a single response, and the receiving agent knows how to handle each.
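To make that concrete, here's a hedged sketch of a multi-part message — field names are illustrative rather than the exact wire format, but the shape (one typed part per payload, each tagged with a MIME type) is the idea:

```python
# Illustrative multi-part A2A-style message; field names are my own
# approximation of the spec, not the exact wire format.

def text_part(s):
    return {"type": "text", "mimeType": "text/plain", "text": s}

def data_part(d):
    return {"type": "data", "mimeType": "application/json", "data": d}

def file_part(name, mime, content):
    return {"type": "file", "mimeType": mime, "name": name, "bytes": content}

message = {
    "role": "agent",
    "parts": [
        text_part("Expense report approved; full report attached."),
        data_part({"reportId": "ER-1042", "status": "approved"}),
        file_part("report.pdf", "application/pdf", b"%PDF-1.7 ..."),
    ],
}

# The receiver dispatches on MIME type instead of guessing at content:
mimes = [p["mimeType"] for p in message["parts"]]
print(mimes)  # ['text/plain', 'application/json', 'application/pdf']
```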
What This Changes For You
If you're building anything that orchestrates multiple AI capabilities, A2A gives you three things you'd otherwise have to build yourself:
Audit trails that actually work. Because every task carries the user's delegated credentials through the chain, you can answer "who authorized the agent to approve that $50k purchase order" six months later when compliance asks.
Agent substitutability. If your expense-processing agent goes down, you can swap in a different A2A-compliant agent without changing your orchestration logic. The AgentCard discovery means your system can adapt to capability changes at runtime.
Timeout and cancellation semantics. A2A defines how to cancel a running task and what state transitions are legal. This sounds boring until you're debugging why your agent chain ran for 47 minutes burning tokens because nothing knew how to give up gracefully.
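A client-side deadline around a polled task is the minimal version of that "give up gracefully" logic. In this sketch, `poll` and `cancel` stand in for real transport calls; the point is that the protocol gives you a defined cancel operation to fall back on when the clock runs out.

```python
# Sketch of client-side deadline enforcement around a polled A2A task.
# poll() and cancel() are stand-ins for real transport calls.
import time

def run_with_deadline(poll, cancel, deadline_s, interval_s=0.01):
    start = time.monotonic()
    while True:
        state = poll()
        if state in ("completed", "failed", "canceled"):
            return state  # task reached a terminal state on its own
        if time.monotonic() - start > deadline_s:
            cancel()  # A2A defines cancellation for non-terminal tasks
            return "canceled"
        time.sleep(interval_s)

# Simulated task that never finishes on its own:
def poll():
    return "working"

cancel_calls = []
def cancel():
    cancel_calls.append(True)

print(run_with_deadline(poll, cancel, deadline_s=0.05))  # canceled
```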
The practical starting point: Google's published a Python SDK and sample agents. The samples/python directory has a working client-server pair you can run locally in about ten minutes.
The Catch
Three things to watch:
No built-in rate limiting or cost attribution. A2A tells you who authorized a task but not how much that task should be allowed to cost. You'll still need your own guardrails to prevent a runaway agent chain from burning $10k in API calls.
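Since the protocol won't do this for you, the guardrail ends up in your orchestrator. Here's one minimal shape for it — a per-chain budget that every delegated call must debit before it runs (class name, limits, and cost figures are all my own):

```python
# Sketch of the cost guardrail A2A doesn't provide: a budget every
# delegated agent call must debit before executing. Names and numbers
# here are illustrative, not part of the protocol.

class BudgetExceeded(Exception):
    pass

class CostGuard:
    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, estimated_usd: float):
        """Reserve budget for a call, or refuse it outright."""
        if self.spent + estimated_usd > self.limit:
            raise BudgetExceeded(
                f"would exceed ${self.limit:.2f} budget "
                f"(${self.spent:.2f} already spent)"
            )
        self.spent += estimated_usd

guard = CostGuard(limit_usd=1.00)
guard.charge(0.40)  # first agent call: fine
guard.charge(0.40)  # second call: fine
try:
    guard.charge(0.40)  # third call would blow the budget
except BudgetExceeded as e:
    print(e)
```

The runaway-chain failure mode then becomes a raised exception at call three instead of a surprise invoice at call three thousand.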
The "capabilities" field is self-reported. An agent declares what it can do, but there's no verification. A malicious or misconfigured agent can claim capabilities it doesn't have. Trust but verify — or just verify.
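One cheap way to "just verify": before routing real work, run a probe that exercises the claimed capability and checks it actually holds. Entirely illustrative — the function and probe are mine, not anything in the spec:

```python
# "Trust but verify" sketch: confirm a self-reported capability with a
# cheap probe before routing real work to the agent. Illustrative only.

def verify_streaming(card: dict, probe) -> bool:
    """Trust the card only as a hint; the probe is the real test."""
    claims = card.get("capabilities", {}).get("streaming", False)
    if not claims:
        return False  # doesn't even claim it; don't bother probing
    return probe()  # e.g. open a stream and confirm chunks arrive

card = {"capabilities": {"streaming": True}}
print(verify_streaming(card, probe=lambda: True))                  # True
print(verify_streaming({"capabilities": {}}, probe=lambda: True))  # False
```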
Adoption is the actual test. The partner list includes Salesforce, SAP, ServiceNow, and others. But "partner" often means "we're aware this exists and might do something with it eventually." Until these vendors ship production A2A endpoints, this is a protocol with reference implementations, not an ecosystem.
My take: A2A solves real problems, but it's solving them for enterprise multi-agent orchestration specifically. If you're building a single-purpose AI feature, this is premature complexity. If you're trying to connect three vendors' AI capabilities with auditable authorization — this is exactly what you need, and building it yourself would take months.
Where To Go From Here
Clone the repo and run the sample agents: `git clone https://github.com/google/A2A && cd A2A/samples/python`. The README walks you through standing up a host agent and a remote agent that communicate over A2A. Thirty minutes of hands-on time will tell you more than any spec document.