Imagine calling a business. You say what you need. The person on the other end understands you, checks availability, and gives you a clear answer. No clicking through menus, no guessing, no interpreting HTML.
That is exactly the idea behind the A2A Protocol -- except here, it is not a human calling but an AI agent. And the "business" is a website with a structured API.
## What Is the A2A Protocol?
A2A stands for Agent-to-Agent Protocol. Google introduced it in April 2025 and subsequently transferred it to the Linux Foundation, where it is being developed by over 150 organizations. As of February 2026, the specification is at version 0.3.0 (Release Candidate 1).
Important upfront: A2A is not a finalized, widely adopted standard. It is a protocol under active development with strong backing, but still early in adoption. We use it because the underlying concept is solid -- not because it already works everywhere.
## The Core Idea
The web has a problem: AI agents can visit websites, read text, and follow links. But they cannot act -- they cannot book an appointment, request a quote, or make a reservation. At least not reliably and in a structured way.
A2A solves this with a standardized communication protocol. It defines how an AI agent:
- Discovers what a website can do (Discovery)
- Sends tasks (Task Execution)
- Receives results (Response)
## The Technical Foundation: JSON-RPC 2.0
A2A is built on JSON-RPC 2.0 -- an established, lightweight remote procedure call standard. This is not an experimental format; it has been used in software development for years.
The three most important operations:
| Operation | Purpose |
|---|---|
| `tasks/send` | Send a task to the agent |
| `tasks/get` | Query the status of a running task |
| `tasks/cancel` | Cancel a running task |
Beyond these, there are additional operations for streaming (tasks/sendSubscribe), push notifications, and configuration -- 11 defined methods in total. The transport layer supports JSON-RPC 2.0 over HTTP, gRPC, and HTTP/REST bindings.
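As a rough sketch, a `tasks/send` call from the table above is just a plain JSON-RPC 2.0 envelope. The types below mirror only the fields used in this article's examples; they are illustrative, not the full specification or the official SDK types.

```typescript
// Sketch of a JSON-RPC 2.0 envelope for the A2A tasks/send method.
// Types cover only the fields shown in this article, not the full spec.

interface TextPart {
  type: "text";
  text: string;
}

interface TaskSendRequest {
  jsonrpc: "2.0";
  method: "tasks/send";
  id: string; // JSON-RPC request id
  params: {
    id: string; // task id, chosen by the caller
    message: { role: "user"; parts: TextPart[] };
  };
}

function buildTaskSend(requestId: string, taskId: string, text: string): TaskSendRequest {
  return {
    jsonrpc: "2.0",
    method: "tasks/send",
    id: requestId,
    params: {
      id: taskId,
      message: { role: "user", parts: [{ type: "text", text }] },
    },
  };
}

// Example: the envelope an agent would POST to the site's A2A endpoint.
const req = buildTaskSend("req-001", "task-abc-123", "Reserve a table for 4 on Friday at 7 PM");
console.log(JSON.stringify(req, null, 2));
```

The envelope itself carries no transport details; in practice it would be POSTed over one of the bindings mentioned above.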
## The agent-card.json: A Business Card for AI Agents
Before an agent can communicate, it needs to know who it is talking to and what is possible. That is what agent-card.json is for -- a file at /.well-known/agent-card.json that declares all capabilities of a website.
Here is a simplified agent-card.json:
```json
{
  "name": "Pizzeria Roma",
  "description": "Italian restaurant in Munich",
  "url": "https://pizzeria-roma.de",
  "protocolVersion": "0.3.0",
  "provider": {
    "organization": "Pizzeria Roma",
    "url": "https://pizzeria-roma.de"
  },
  "defaultInputModes": ["text/plain", "application/json"],
  "defaultOutputModes": ["text/plain", "application/json"],
  "capabilities": {
    "streaming": false,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "make-reservation",
      "name": "Table Reservation",
      "description": "Reserve a table with date, time, and party size",
      "tags": ["reservation", "booking", "table"],
      "examples": [
        "Reserve a table for 4 people on Friday at 7 PM",
        "Are there any tables available for tonight?"
      ]
    },
    {
      "id": "view-menu",
      "name": "View Menu",
      "description": "Current menu with prices and allergen information",
      "tags": ["menu", "food", "prices"]
    }
  ]
}
```
### What the agent-card.json Defines
- Identity: Who is this agent? Name, URL, provider
- Protocol version: Which A2A version is supported
- Input/output modes: Which formats the agent accepts and delivers (text, JSON, images, etc.)
- Capabilities: Can the agent stream? Send push notifications?
- Skills: Specific abilities with descriptions, tags, and examples
Skills are the core element. An AI agent reads the agent-card.json, understands the available capabilities, and then knows which tasks it can send.
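To make that discovery step concrete, here is a small sketch of how an agent might pick a skill out of a parsed agent card. The types are trimmed to the fields shown above; they are assumptions for illustration, not the official SDK types.

```typescript
// Trimmed-down shapes for the agent card fields used in this article.
interface Skill {
  id: string;
  name: string;
  description?: string;
  tags?: string[];
}

interface AgentCard {
  name: string;
  url: string;
  skills: Skill[];
}

// Pick the first skill whose tags match what the agent wants to do.
function findSkillByTag(card: AgentCard, tag: string): Skill | undefined {
  return card.skills.find((s) => (s.tags ?? []).includes(tag));
}

// Usage with the Pizzeria Roma card from above:
const card: AgentCard = {
  name: "Pizzeria Roma",
  url: "https://pizzeria-roma.de",
  skills: [
    { id: "make-reservation", name: "Table Reservation", tags: ["reservation", "booking", "table"] },
    { id: "view-menu", name: "View Menu", tags: ["menu", "food", "prices"] },
  ],
};

console.log(findSkillByTag(card, "booking")?.id); // "make-reservation"
```

A real agent would of course match skills semantically (via descriptions and examples) rather than by exact tag, but the lookup shape is the same.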
## agents.json vs. agent-card.json: What Is the Difference?
These two files are often confused. The distinction matters:
| | agents.json | agent-card.json |
|---|---|---|
| Purpose | Discovery: What can this website do? | Communication: How do I talk to it? |
| Analogy | Yellow pages (listing services) | Phone number + instructions (calling and ordering) |
| Content | Tools with HTTP endpoints, methods, parameters | Skills with I/O modes, capabilities, protocol version |
| Protocol | No own protocol, describes REST endpoints | A2A Protocol (JSON-RPC 2.0) |
| Origin | Community proposal (Wildcard AI / nicepkg) | Google, then Linux Foundation |
| Standard status | No official standard | Active specification (v0.3.0 RC1) |
In practice, the two complement each other: an agent uses agents.json to discover which services exist, then communicates with those services in a structured way via agent-card.json and the A2A Protocol.
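Assuming the well-known-path convention described above, the hand-off between the two files is mechanical: discovery yields base URLs, and the fixed path yields each site's card. A tiny sketch:

```typescript
// Build the well-known agent card URL for a site, following the
// /.well-known/agent-card.json convention described in this article.
// A discovery source such as agents.json would supply the base URLs.
function agentCardUrl(baseUrl: string): string {
  return new URL("/.well-known/agent-card.json", baseUrl).toString();
}

console.log(agentCardUrl("https://pizzeria-roma.de"));
// "https://pizzeria-roma.de/.well-known/agent-card.json"
```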
## Concrete Example: An Agent Books a Consultation
Suppose someone tells their AI assistant: "Find me a web design agency and book a consultation for next week."
### Step 1: Discovery
The agent fetches https://studiomeyer.io/.well-known/agent-card.json and finds the skill schedule-consultation:
```json
{
  "id": "schedule-consultation",
  "name": "Schedule Consultation",
  "description": "Book a free consultation call to discuss your web project.",
  "tags": ["booking", "consultation", "meeting"],
  "inputModes": ["text/plain", "application/json"],
  "outputModes": ["application/json"],
  "examples": [
    "I'd like a consultation about a new website",
    "Can I schedule a call to discuss a redesign?"
  ]
}
```
### Step 2: Send Task
The agent sends a JSON-RPC request:
```json
{
  "jsonrpc": "2.0",
  "method": "tasks/send",
  "id": "req-001",
  "params": {
    "id": "task-abc-123",
    "message": {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "I would like to book a consultation. Name: Max Mustermann, Email: max@example.com, Topic: New website for my restaurant"
        }
      ]
    }
  }
}
```
### Step 3: Receive Response
The website processes the request and responds:
```json
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "result": {
    "id": "task-abc-123",
    "status": {
      "state": "completed"
    },
    "artifacts": [
      {
        "parts": [
          {
            "type": "text",
            "text": "Consultation successfully booked. Max Mustermann will receive a confirmation email at max@example.com. Topic: New restaurant website."
          }
        ]
      }
    ]
  }
}
```
### What Happened Here
- The agent read the website's capabilities (Discovery)
- It sent a structured request (Task)
- It received a machine-readable response (Result)
No form filled out. No HTML parsed. No guessing whether the "Submit" button works.
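The response handling in step 3 can be sketched as a small helper that pulls the human-readable text out of a completed task. The field names follow the example response above; real responses may carry more structure.

```typescript
// Shapes matching the example response in step 3 (not the full spec).
interface Part {
  type: string;
  text?: string;
}
interface Artifact {
  parts: Part[];
}
interface TaskResult {
  id: string;
  status: { state: string };
  artifacts?: Artifact[];
}

// Return the first text part of the first artifact, if the task completed.
function firstArtifactText(result: TaskResult): string | undefined {
  if (result.status.state !== "completed") return undefined;
  const part = result.artifacts?.[0]?.parts.find((p) => p.type === "text");
  return part?.text;
}

const result: TaskResult = {
  id: "task-abc-123",
  status: { state: "completed" },
  artifacts: [{ parts: [{ type: "text", text: "Consultation successfully booked." }] }],
};

console.log(firstArtifactText(result)); // "Consultation successfully booked."
```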
## Task Lifecycle: Not Everything Completes Instantly
A2A defines a clear lifecycle for tasks:
```
submitted → working → completed
                    → failed
                    → canceled
                    → input-required
```
The input-required status is particularly interesting: if the agent did not provide enough information, the website can ask follow-up questions -- just like a person on the phone would say: "For which date would you like to reserve?"
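The lifecycle above maps naturally onto a small state helper. The state names are the ones this article lists; a polling loop would call tasks/get until a terminal state is reached.

```typescript
// Task states as listed in this article's lifecycle diagram.
type TaskState =
  | "submitted"
  | "working"
  | "input-required"
  | "completed"
  | "failed"
  | "canceled";

// Terminal states: no further tasks/get polling is needed.
const TERMINAL: ReadonlySet<TaskState> = new Set<TaskState>([
  "completed",
  "failed",
  "canceled",
]);

function isTerminal(state: TaskState): boolean {
  return TERMINAL.has(state);
}

// "input-required" is deliberately not terminal: the agent should reply
// with the missing information (e.g. the reservation date) and keep
// the same task alive rather than start over.
console.log(isTerminal("working")); // false
console.log(isTerminal("completed")); // true
```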
## What A2A Can Do Today -- and What It Cannot
### What Works
- The specification is solid. JSON-RPC 2.0 as a foundation is proven and lightweight.
- The concept is clear. Discovery, task execution, response -- a logical triad.
- Strong backing. 150+ organizations in the Linux Foundation, including Google, Salesforce, SAP.
- SDK available. `@a2a-js/sdk` for JavaScript/TypeScript exists.
### What Is Still Missing
- Broad adoption. As of February 2026, few websites actively implement A2A. Most AI agents (ChatGPT, Claude, Gemini) do not yet use it by default for web interactions.
- Tooling. Debugging tools, monitoring, and logging for A2A communication are still rudimentary.
- Auth standard. A2A does not define its own authentication. How agents identify and authorize themselves remains an open question.
## Why We Build with A2A Anyway
The honest answer: because the direction is right.
The web is evolving from readable for search engines to usable for AI agents. The question is not whether this will happen, but when. A2A is the most concrete proposal so far for how this communication should work.
For our clients, this means: if A2A or a similar protocol becomes a standard, their websites are prepared. The agent-card.json is written, the API endpoints exist, the validation is implemented.
The effort is manageable -- a JSON file and clean API endpoints. The risk of it not catching on is small compared to the advantage if it does.
## Summary
| Aspect | Detail |
|---|---|
| What is A2A? | Protocol for structured communication between AI agents and websites |
| Who is behind it? | Google, then Linux Foundation, 150+ organizations |
| Technical basis | JSON-RPC 2.0, agent-card.json for discovery |
| Status | v0.3.0 RC1, active development, early adoption |
| Core concept | Discovery (agent-card.json) → Task (tasks/send) → Result |
| Difference from agents.json | agents.json = What exists? A2A = How do I communicate with it? |
A2A is neither hype you can ignore nor a standard you need immediately. It is a concrete proposal for a real problem -- and the most convincing approach so far for how websites and AI agents will communicate in the future.
Originally published on studiomeyer.io. StudioMeyer is an AI-first digital studio building premium websites and intelligent automation for businesses.