Mastercard and Google Are Building the Trust Layer for AI That Spends Money
Only 16% of U.S. consumers trust AI to make payments on their behalf. Not because they don't understand the technology, but because they can't predict what the AI will actually do.
Will it book the flight I asked for, or also add travel insurance I didn't authorize? Will it buy the specific product I selected, or the "best" one according to criteria I never approved?
This isn't an AI capability problem. It's a trust infrastructure problem.
Mastercard and Google just open-sourced a piece of that infrastructure: Verifiable Intent.
What Verifiable Intent Actually Does
The framework creates cryptographic proof that an AI agent is operating within bounds a human explicitly authorized.
Think of it as a digitally signed power of attorney with machine-enforceable constraints:
- Amount caps: The agent can't spend more than $X without re-authorization
- Merchant allowlists: The agent can only transact with approved vendors
- Category restrictions: The agent can't drift from "book my flight" to "book my vacation package"
- Time windows: Authorization expires automatically
Each transaction carries proof that the specific action was within the scope of what the human approved.
No more "the AI decided to upgrade my booking because it seemed like what I'd want."
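As a rough illustration, constraint enforcement of this kind could look like the following sketch. The field names and schema here are hypothetical, invented for illustration, not taken from the released framework:

```python
from datetime import datetime, timezone

def within_authorized_bounds(action: dict, auth: dict) -> bool:
    """Check a proposed agent action against human-approved constraints.

    Illustrative sketch only; the actual Verifiable Intent schema
    may differ.
    """
    c = auth["constraints"]
    if action["amount"] > c["max_amount"]:
        return False  # amount cap exceeded
    if action["merchant"] not in c["allowed_merchants"]:
        return False  # merchant not on the allowlist
    if action["category"] != auth["intent_category"]:
        return False  # category drift ("flight" -> "vacation package")
    expires = datetime.fromisoformat(auth["valid_until"])
    if datetime.now(timezone.utc) > expires:
        return False  # authorization window has expired
    return True
```

The point is that each check maps to one of the bullet points above: a cap, an allowlist, a category bound, and an expiry.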
Why This Matters More Than Agent Payments Protocols
Stripe's Machine Payments Protocol (MPP) lets agents respond to HTTP 402 challenges and pay programmatically.
Visa's agent credit cards give autonomous spending power.
Ramp's corporate cards for AI let agents book flights and software subscriptions.
All of these assume the transaction was authorized.
But what does "authorized" mean when:
- The human gave vague instructions
- The model interpreted those instructions creatively
- The business logic layer added upsells
- The checkout flow had dark patterns
Verifiable Intent answers a different question than payment rails:
- Payment protocols: Can this agent spend money?
- Verifiable Intent: Did this agent stay within the bounds the human specified?
Both layers are necessary. Neither replaces the other.
The Technical Architecture
The framework works through signed authorization objects.
When you ask an agent to book a flight, you're not just giving natural language instructions. You're approving a structured authorization:
```json
{
  "intent": "book_flight",
  "constraints": {
    "max_amount": 500,
    "allowed_airlines": ["united", "delta", "american"],
    "departure_date": "2026-04-10",
    "return_date": "2026-04-15",
    "class": "economy"
  },
  "valid_until": "2026-04-06T23:59:59Z",
  "signature": "<human_approval_signature>"
}
```
The agent can't exceed the constraints without invalidating its proof. Merchants can verify the signature against the authorization. If the transaction doesn't match, it gets rejected—not by the payment network, but by the trust layer.
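A minimal sketch of that sign-then-verify flow, using a symmetric HMAC purely for brevity (a real deployment would presumably use asymmetric signatures such as Ed25519 so merchants can verify without holding the signing secret; all function names here are assumptions, not the framework's API):

```python
import hashlib
import hmac
import json

def sign_authorization(auth: dict, secret: bytes) -> str:
    """Sign the canonical JSON form of an authorization object.

    Sketch only: HMAC stands in for the real signature scheme.
    """
    payload = json.dumps(auth, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_transaction(auth: dict, signature: str, secret: bytes) -> bool:
    """Merchant-side check: does the signature match this authorization?"""
    expected = sign_authorization(auth, secret)
    return hmac.compare_digest(expected, signature)
```

The useful property is the one the article describes: any tampering with the constraints changes the canonical JSON, so the original signature no longer verifies and the transaction fails the trust check before it ever reaches the payment network.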
Why Mastercard and Google Open-Sourced This
They could have kept it proprietary. Made it a differentiator for Google Pay or Mastercard's agent payment products.
Instead, they released it as open source.
Because network effects matter more than moats in the agent economy.
Every agent transaction that fails due to trust issues hurts the entire ecosystem. Users lose confidence. Merchants lose sales. Payment volumes stagnate.
The more players adopt Verifiable Intent, the more:
- Merchants trust agent-initiated transactions
- Users feel comfortable delegating spending
- Agent frameworks standardize on the same authorization model
- Regulators accept that AI spending has guardrails
This is infrastructure, not product. Making it open grows the market for everyone.
What's Still Missing
Verifiable Intent solves one piece of the puzzle. It answers "did this specific agent action match what the human authorized?"
Two other pieces remain:
1. Agent Identity
Verifiable Intent doesn't prove who is running the agent. Is it the human who authorized it, or someone who compromised their credentials?
This is where Sam Altman's World AgentKit comes in—verifiable identity for AI agents linked to human owners.
2. Transaction Context
Authorization proofs work for structured purchases. They don't work well for:
- Open-ended requests ("find me the best deal")
- Multi-step transactions that compound
- Agents that learn preferences over time
The framework handles bounded tasks well. Fuzzy tasks still need human judgment or a different authorization model.
The Takeaway
The agent payments conversation has been dominated by spending capability: Can agents pay? What payment rails support machine-to-machine transactions?
The real conversation should be about spending trust: How do humans know agents will do what they asked, and nothing more?
Verifiable Intent is the first credible answer to that question that's open, interoperable, and cryptographically sound.
Payment rails are coming fast. Stripe, Visa, and Ramp are racing to let agents spend.
The trust layer is what makes that spending safe enough for mainstream adoption.
Mastercard and Google didn't build the payment rail. They built the rail's guardrails.
The agent economy will work when users can answer one question with confidence: "If I let this agent spend my money, what exactly will it do?" Verifiable Intent makes that question answerable.