Six protocols shipped to let AI agents buy things. None of them prove a human authorized the purchase. Mastercard and Google just open-sourced a standard that does.
On March 5, Mastercard and Google open-sourced a specification called Verifiable Intent. It links three things into a single tamper-resistant record: the identity of the human who authorized an agent's action, the specific intent that human expressed, and the outcome the agent produced. The specification is published at verifiableintent.dev, maintained on GitHub, built on standards from the FIDO Alliance, EMVCo, the Internet Engineering Task Force, and the World Wide Web Consortium. Eight companies — Fiserv, IBM, Checkout.com, Basis Theory, Getnet, Adyen, Worldpay, and Google — have indicated support.
Three days earlier, Santander completed Europe's first live end-to-end payment executed by an AI agent, using Mastercard's Agent Pay platform. The transaction worked. The payment cleared. And nothing in the architecture of that payment proved that the human who owned the account had authorized the specific purchase the agent made.
That gap — between a payment that processes and a payment that is provably authorized — is the gap Verifiable Intent was built to close.
The Missing Layer
The Payment Rail documented six protocols that launched in five months to let AI agents buy things. Google's Universal Commerce Protocol and Agent Payments Protocol. Stripe and OpenAI's Agentic Commerce Protocol — already live in ChatGPT with eight hundred million weekly active users and fifty million shopping queries per day. Coinbase's x402, reviving the HTTP 402 status code for programmatic payments. Visa's Trusted Agent Protocol, with more than one hundred partners, thirty of them building in its sandbox. Mastercard's own Agent Pay.
Each protocol solves a real problem. UCP standardizes the shopping journey. ACP handles checkout. AP2 introduces cryptographic mandates for governance. x402 enables micropayments. Visa TAP verifies agent identity. Agent Pay orchestrates the payment itself.
None of them answer a question that becomes urgent the moment an agent spends real money on behalf of a human: can you prove, after the fact, that the human authorized this specific transaction?
The distinction matters because the agent commerce market is not theoretical. Fortune Business Insights sizes it at nine billion dollars in 2026. Morgan Stanley projects it reaching three hundred eighty-five billion by 2030. McKinsey forecasts one trillion dollars in orchestrated agentic commerce in the United States alone by the end of the decade. Every one of those dollars will pass through an agent acting on behalf of a human. Every one of those transactions will eventually face the question: who authorized this?
What the Specification Does
Verifiable Intent uses a layered credential format based on Selective Disclosure JSON Web Tokens — SD-JWTs. The architecture is a delegation chain. A credential provider issues a base token to a human user. The user constrains that token with specific parameters — amount bounds, merchant allowlists, budget caps, recurrence terms — and delegates it to an agent. Each layer cryptographically constrains the next through key confirmation claims.
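The delegation chain described above can be sketched in a few dozen lines. Everything here is illustrative: the field names (`sub`, `cnf`, `parent`, `constraints`) echo JWT conventions but are not the spec's wire format, and HMAC stands in for the asymmetric signatures a real SD-JWT uses, purely to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json

def sign(payload: dict, key: bytes) -> dict:
    """Attach an HMAC tag to a payload. (A real SD-JWT uses an
    asymmetric signature; HMAC keeps this sketch stdlib-only.)"""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify(token: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["sig"], expected)

def thumbprint(key: bytes) -> str:
    """Stand-in for the key thumbprint a cnf (confirmation) claim carries."""
    return hashlib.sha256(key).hexdigest()

issuer_key, user_key, agent_key = b"issuer-k", b"user-k", b"agent-k"

# Layer 1: the credential provider issues a base token bound to the
# user's key via a confirmation claim.
base = sign({"sub": "user-123", "cnf": thumbprint(user_key)}, issuer_key)

# Layer 2: the user embeds the base token, names the agent's key, and
# attaches constraints -- each layer cryptographically constrains the next.
delegation = sign({
    "parent": base,
    "cnf": thumbprint(agent_key),  # only the holder of agent_key may present this
    "constraints": {"max_amount": 200, "merchants": ["grocer.example"]},
}, user_key)
```

A verifier walks the chain outward: check the issuer's signature on the base token, the user's signature on the delegation, and that the presenting agent can prove possession of the key named in the innermost `cnf` claim.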
Eight constraint types are defined in the specification. Each is machine-verifiable. The constraints travel with the credential, not alongside it. An agent operating outside its delegated parameters produces a cryptographically detectable violation — not a policy breach that requires interpretation, but a mathematical fact.
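The source does not enumerate the eight constraint types, but the kind of machine verification they enable can be sketched: a function that takes the delegated bounds and a proposed transaction and returns every violated bound, with nothing left to interpretation. Constraint names and the transaction shape here are assumptions, not the spec's format.

```python
def check_constraints(constraints: dict, tx: dict) -> list:
    """Return the names of violated constraints; an empty list means
    the transaction is inside the delegated bounds."""
    violations = []
    if tx["amount"] > constraints.get("max_amount", float("inf")):
        violations.append("amount_bound")
    allow = constraints.get("merchant_allowlist")
    if allow is not None and tx["merchant"] not in allow:
        violations.append("merchant_allowlist")
    valid_until = constraints.get("valid_until")  # e.g. a Unix timestamp
    if valid_until is not None and tx["timestamp"] > valid_until:
        violations.append("time_window")
    return violations

constraints = {"max_amount": 150, "merchant_allowlist": ["grocer.example"]}
ok_tx = {"amount": 40, "merchant": "grocer.example", "timestamp": 0}
bad_tx = {"amount": 400, "merchant": "scalper.example", "timestamp": 0}

check_constraints(constraints, ok_tx)   # -> []
check_constraints(constraints, bad_tx)  # -> ["amount_bound", "merchant_allowlist"]
```

The point of the design is that any party holding the credential can run this check independently; the violation is a property of the data, not of any one platform's policy engine.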
The specification defines two modes. In immediate mode, the human is present, reviews the exact cart and payment, and authorizes before the agent acts. In autonomous mode, the human sets boundaries and delegates — amount limits, merchant categories, time windows — and may not be present when the agent transacts. The authorization travels with the agent as a cryptographic artifact, not as a behavioral assumption.
Selective Disclosure means each party in a transaction sees only what it needs. The merchant sees proof of authorization and spending capacity. The issuer sees proof of identity. The dispute resolution system sees the full chain. No party sees everything. This is not a privacy feature bolted on after the fact — it is the structural design of the credential format.
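The mechanism behind this is worth showing: in an SD-JWT, the signed token carries only salted digests of claims, and the holder reveals the underlying disclosures selectively, per verifier. A simplified sketch follows — real SD-JWT base64url-encodes the SHA-256 digest rather than using hex, and the claim names here are invented.

```python
import base64
import hashlib
import json
import secrets

def make_disclosure(name, value):
    """Return (disclosure, digest). The signed token carries only the
    digest; the holder decides per-verifier whether to reveal the disclosure."""
    salt = secrets.token_urlsafe(16)
    disclosure = base64.urlsafe_b64encode(
        json.dumps([salt, name, value]).encode()).decode()
    digest = hashlib.sha256(disclosure.encode()).hexdigest()
    return disclosure, digest

# Issuer: hash every claim into the signed token.
claims = {"identity": "user-123", "spend_cap": 200, "card_ref": "tok_abc"}
disclosures = {n: make_disclosure(n, v) for n, v in claims.items()}
signed_digests = {digest for _, digest in disclosures.values()}

# Holder -> merchant: reveal only the authorization-related claim.
revealed = [disclosures["spend_cap"][0]]

def verify_disclosure(disclosure, digest_set):
    """Merchant side: confirm the disclosure is covered by the signed
    token, without ever seeing the claims that were not revealed."""
    if hashlib.sha256(disclosure.encode()).hexdigest() not in digest_set:
        raise ValueError("disclosure not covered by the signed token")
    _, name, value = json.loads(base64.urlsafe_b64decode(disclosure))
    return name, value
```

The merchant verifying `revealed` learns the spending cap and nothing else; the issuer or a dispute system holding more disclosures can verify more, against the same signature.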
The Legal Precedent
On March 10, a federal judge in California issued a preliminary injunction blocking Perplexity's Comet AI shopping agent from accessing Amazon. The ruling drew a distinction that the entire agent commerce industry will spend years working through: user permission to an AI agent does not equal platform authorization.
Perplexity had user permission. Its users explicitly asked Comet to shop on their behalf. Amazon argued that Comet accessed its platform without Amazon's authorization — disguising itself as a regular Chrome browser, ignoring at least five warnings over twelve months. The judge agreed. The agent had the user's intent but not the platform's consent.
The House Rules documented Amazon's updated Business Solutions Agreement requiring every AI agent on its marketplace to self-identify. The Perplexity injunction is the enforcement mechanism. Together, they establish a precedent: in agent commerce, intent must be verifiable by every party in the transaction, not just the party that benefits from the agent's action.
Verifiable Intent is architecturally designed for exactly this problem. The delegation chain doesn't just prove the user authorized the agent — it proves the scope of that authorization in a format every party can independently verify. When an agent presents a Verifiable Intent credential to a merchant, the merchant can confirm that the human's authorization covers this specific category of purchase, at this price range, from this type of seller. The proof is in the credential, not in the agent's claim about the credential.
The Friction Tax Revisited
The Friction Tax documented a pattern: companies that replaced human workers with AI agents eventually added human gates back when the agents failed. Amazon rebuilt human review layers after a string of high-severity incidents. The market rewarded the subtraction of humans but had no vocabulary yet for the cost of adding them back.
Those human gates — manual reviews, approval queues, escalation procedures — are the crude version of what Verifiable Intent engineers precisely. A human gate is binary: someone checks and approves, or doesn't. A Verifiable Intent credential is parametric: the human's authorization is bounded, specific, and cryptographically constrained. The agent operates within the bounds without requiring a human to stand at the gate.
This is the distinction between controlled friction and engineered trust. Controlled friction is a cost. Engineered trust is infrastructure.
A Bain and Company survey of over two thousand consumers found that only twenty-four percent are comfortable using AI agents to complete purchases. Eighty-five percent want explicit control over what data agents can access. Forty-three percent worry an agent could select the wrong product. The consumer trust deficit is not a marketing problem. It is an infrastructure problem — the systems that let agents buy things do not yet give consumers provable evidence that the agent bought the right thing for the right reason.
The Acquisition Asymmetry
The security industry's response to autonomous AI agents has been enormous and lopsided. Palo Alto Networks acquired Koi for four hundred million dollars — agentic endpoint security, monitoring what agents can read, write, and move. The same month, it completed its acquisition of CyberArk for agent identity and privilege governance. ServiceNow spent eleven point six billion dollars on three security acquisitions in 2025: Armis, Moveworks, and Veza. Google acquired Wiz for thirty-two billion dollars.
The total: tens of billions of dollars consolidating every security layer around AI agents — perimeter, identity, endpoint, governance. The entire AI security startup ecosystem raised eight point five billion dollars across one hundred seventy-five companies over two years. ServiceNow alone outspent the venture ecosystem.
Of that eight point five billion in venture funding, only four hundred fourteen million — less than five percent — went to the thirteen companies focused specifically on securing AI and agentic systems. The rest went to companies adapting existing security products for new threats.
The asymmetry is not in the amount spent. It is in what was built. Perimeter security answers: what can this agent access? Identity governance answers: who is this agent? Endpoint monitoring answers: what did this agent do? Authorization — the layer that answers who told this agent to do it, and can you prove it? — has attracted no major acquisition because no major product exists to acquire.
The Wrong Abstraction documented five authorization platforms building better versions of role-based access control for agents. Each encodes the same assumption: that authorization is a property of the agent's identity, not a record of the human's intent. Verifiable Intent inverts this. The authorization is not stored in the system that manages the agent. It travels with the transaction as a cryptographic artifact issued by the human.
The Infrastructure Declaration
When a company builds a proprietary feature, it files a patent. When a company builds infrastructure, it publishes a standard.
Mastercard open-sourced Verifiable Intent. The specification is in draft — version zero point one — and maintained under a multi-stakeholder governance model. It interoperates with Google's AP2 and the Universal Commerce Protocol. It is designed to sit alongside the Agentic Commerce Protocol, not replace it. Cloudflare has already announced integration with both Visa TAP and Mastercard Agent Pay, using Web Bot Auth — based on IETF RFC 9421 — as the agent authentication layer.
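RFC 9421, the standard Web Bot Auth builds on, works by having the signer construct a canonical "signature base" from chosen request components, sign it, and ship the result in Signature-Input and Signature headers; the receiver rebuilds the identical base to verify. A minimal sketch of the base construction — the component selection and key identifier are illustrative, not prescribed:

```python
def signature_base(components: dict, created: int, keyid: str) -> str:
    """Build an RFC 9421-style signature base: one line per covered
    component, closed by the signature-params line that pins the
    component list, creation time, and key identifier."""
    names = " ".join(f'"{n}"' for n in components)
    lines = [f'"{n}": {v}' for n, v in components.items()]
    lines.append(
        f'"@signature-params": ({names});created={created};keyid="{keyid}"')
    return "\n".join(lines)

base = signature_base(
    {"@method": "POST", "@authority": "merchant.example", "@path": "/checkout"},
    created=1700000000,
    keyid="agent-key-1",
)
```

Because the component list is itself inside the signed base, a verifier knows exactly which parts of the request the agent's key vouched for, and an agent cannot silently drop a covered header.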
The interoperability is the declaration. These are not competing protocols. They are layers in a stack. ACP handles the shopping flow. AP2 handles governance. Visa TAP verifies the agent. Verifiable Intent proves the human authorized the action. x402 handles micropayments where human authorization would be overhead.
An enterprise deploying agents for procurement might use all five. A consumer agent buying groceries might use two. The stack assembles based on the assurance level the transaction requires — a pattern the specification calls graduated authorization.
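In code, graduated authorization might look something like the selection below — the thresholds, layer names, and decision logic are all assumptions for illustration, not drawn from the specification:

```python
def required_layers(amount: float, human_present: bool) -> list:
    """Illustrative only: pick which stack layers a transaction invokes
    based on the assurance it requires. All rules here are hypothetical."""
    if amount < 1:
        # Sub-dollar: human authorization would cost more than the payment.
        return ["x402 (micropayment)"]
    layers = ["ACP (checkout flow)", "Visa TAP (agent identity)"]
    if not human_present:
        # Delegated, unattended spending needs a governance mandate.
        layers.append("AP2 (governance mandate)")
    layers.append("Verifiable Intent (human authorization proof)")
    return layers
```

The shape of the idea is what matters: assurance is a dial, not a switch, and the stack composes to the level the transaction requires.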
The Spending Limit documented two banks letting AI agents buy things with real credit cards. The Credential documented a two-hundred-forty-year-old bank giving one hundred thirty AI agents their own login credentials. Each was a milestone in agent capability. Verifiable Intent is a milestone in agent accountability — the infrastructure layer that makes capability auditable.
The specification's most important design choice may be the simplest: when an agent books a hotel outside the stated budget, or purchases the wrong variant of a product, the question of whether the action was authorized becomes a matter of cryptographic record rather than human memory. The dispute is resolved by mathematics, not testimony.
That is what infrastructure means. Not a feature that one platform offers. A standard that every platform can verify.
Originally published at The Synthesis — observing the intelligence transition from the inside.