AI agents are making autonomous payments. Stripe launched its Machine Payments Protocol. Every agent framework is adding payment tools. But here's the problem nobody is solving: how do you know if an agent should be trusted to spend money?
## The Gap
When a human makes a payment, there's a credit check, KYC, fraud detection. When an AI agent makes a payment, there's... an API key. That's it. A static string that grants full access with zero verification of who the agent is or whether it should be allowed to pay.
No trust scoring. No spend limits per agent. No identity verification. No impersonation detection. No audit trail linking agent to transaction.
## What I Built
AgentPass -- the credit check for AI agents. Before a payment goes through, the platform checks the agent's trust score. Same model as a credit bureau check before a loan.
## Trust Scoring (L0-L4)
Every agent gets a behavioural trust score computed from 5 dimensions:
| Level | Score | Per TX Limit | Daily Limit |
|---|---|---|---|
| L0 | 0-19 | $0 | $0 |
| L1 | 20-39 | $10 | $50 |
| L2 | 40-59 | $100 | $500 |
| L3 | 60-79 | $1,000 | $5,000 |
| L4 | 80-100 | $50,000 | $200,000 |
Agents start low and earn trust through consistent, legitimate behaviour. Anomalies (magnitude spikes, velocity, limit probing) drop the score. Dormant agents decay. Self-dealing earns zero trust credit.
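The table maps directly to a lookup. A minimal sketch in TypeScript; the names here (`TrustLevel`, `levelFor`) are illustrative, not the real SDK surface:

```typescript
// Trust bands from the L0-L4 table above.
interface TrustLevel {
  level: string;
  perTxLimitUsd: number;
  dailyLimitUsd: number;
}

const LEVELS: { min: number; max: number; info: TrustLevel }[] = [
  { min: 0,  max: 19,  info: { level: "L0", perTxLimitUsd: 0,     dailyLimitUsd: 0 } },
  { min: 20, max: 39,  info: { level: "L1", perTxLimitUsd: 10,    dailyLimitUsd: 50 } },
  { min: 40, max: 59,  info: { level: "L2", perTxLimitUsd: 100,   dailyLimitUsd: 500 } },
  { min: 60, max: 79,  info: { level: "L3", perTxLimitUsd: 1000,  dailyLimitUsd: 5000 } },
  { min: 80, max: 100, info: { level: "L4", perTxLimitUsd: 50000, dailyLimitUsd: 200000 } },
];

// Resolve a behavioural score to its band and spend limits.
function levelFor(score: number): TrustLevel {
  const band = LEVELS.find(b => score >= b.min && score <= b.max);
  if (!band) throw new Error(`score out of range: ${score}`);
  return band.info;
}
```

So a score of 68 resolves to L3 with a $1,000 per-transaction limit.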
## Challenge-Response Identity
Each agent gets its own ECDSA P-256 key pair. The private key never leaves the developer's infrastructure (Keychain on iOS, KMS on cloud).
```
Store:     GET /api/identity/challenge/agent_abc
AgentPass: { challenge: "a7f3b2...", expiresIn: 60 }
Agent:     signs challenge with private key
Agent:     POST /api/identity/verify
AgentPass: { verified: true, score: 68, recommendation: "ALLOW" }
```
Wrong key? IMPERSONATION_DETECTED. Trust penalised. Logged. This catches agent ID spoofing, passport replay, MITM identity swap, and agent cloning.
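The signature round trip can be sketched with Node's built-in `crypto` module. This is purely illustrative: in production the private key stays in the Keychain/KMS, whereas here one is generated in memory to show the mechanics.

```typescript
import { generateKeyPairSync, createSign, createVerify, randomBytes } from "crypto";

// ECDSA P-256 key pair (prime256v1 is the OpenSSL name for P-256).
const { publicKey, privateKey } = generateKeyPairSync("ec", {
  namedCurve: "prime256v1",
});

// 1. AgentPass issues a short-lived random challenge.
const challenge = randomBytes(32).toString("hex");

// 2. The agent signs the challenge with its private key.
function signChallenge(ch: string): string {
  const signer = createSign("SHA256");
  signer.update(ch);
  return signer.sign(privateKey, "base64");
}

// 3. AgentPass verifies the signature against the registered public key.
function verifyChallenge(ch: string, sig: string): boolean {
  const verifier = createVerify("SHA256");
  verifier.update(ch);
  return verifier.verify(publicKey, sig, "base64");
}

const sig = signChallenge(challenge);
// A signature from any other key (or over a replayed/altered challenge)
// fails verification -- that failure is what gets flagged as
// IMPERSONATION_DETECTED.
```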
## Public Trust API
Any platform can check an agent's trust in one call. No auth required:
```shell
curl https://agentpass.co.uk/api/public/trust/agent_abc123
```
Returns score, level, recommendation, spend limits. The store decides what to do with it.
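A store-side lookup might look like the sketch below (Node 18+ for global `fetch`). The response shape is inferred from the fields described above, not a published schema, and the names (`TrustResponse`, `parseTrust`, `checkTrust`) are illustrative:

```typescript
// Assumed response shape for the public trust endpoint.
interface TrustResponse {
  score: number;
  level: string;
  recommendation: string; // e.g. "ALLOW"
  limits: { perTxUsd: number; dailyUsd: number };
}

// Minimal shape check so a malformed response fails loudly
// instead of silently allowing a payment.
function parseTrust(body: any): TrustResponse {
  if (typeof body?.score !== "number" || typeof body?.recommendation !== "string") {
    throw new Error("unexpected trust response shape");
  }
  return body as TrustResponse;
}

async function checkTrust(agentId: string): Promise<TrustResponse> {
  const res = await fetch(`https://agentpass.co.uk/api/public/trust/${agentId}`);
  if (!res.ok) throw new Error(`trust lookup failed: ${res.status}`);
  return parseTrust(await res.json());
}
```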
## Real Stripe Payments
The demo includes a third-party store that checks agent trust before processing a real Stripe test payment:
- Agent wants to buy something
- Store calls AgentPass trust API
- Trust score 68, L3, recommendation: ALLOW
- Store creates Stripe PaymentIntent
- Payment succeeds -- visible in Stripe dashboard
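The gate between steps 2 and 4 can be sketched as a pure decision function. The Stripe `PaymentIntent` call itself is elided, and the names here (`gatePayment`, `GateResult`) are hypothetical:

```typescript
// Assumed trust summary the store gets back from AgentPass.
interface Trust {
  score: number;
  level: string;
  recommendation: string;
  perTxLimitUsd: number;
}

type GateResult =
  | { action: "create_payment_intent"; amountCents: number }
  | { action: "reject"; reason: string };

function gatePayment(trust: Trust, amountUsd: number): GateResult {
  if (trust.recommendation !== "ALLOW") {
    return { action: "reject", reason: `recommendation=${trust.recommendation}` };
  }
  if (amountUsd > trust.perTxLimitUsd) {
    return { action: "reject", reason: "over per-transaction limit" };
  }
  // Stripe amounts are denominated in the smallest currency unit
  // (cents for USD); the store would pass amountCents to
  // stripe.paymentIntents.create here.
  return { action: "create_payment_intent", amountCents: Math.round(amountUsd * 100) };
}
```

Under this sketch, a score-68 L3 agent (ALLOW, $1,000 per-tx limit) gets a $25 purchase through, while a $5,000 one is rejected.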
## Mobile SDK
Built an iOS Swift SDK using CryptoKit and Keychain. The agent's private key is stored in the Secure Enclave -- never leaves the device. The demo app runs the full flow: register, create agent, make payment, prove identity, detect impersonation.
Zero dependencies. Pure Apple frameworks.
## The Threat Model
I identified 10 attack vectors and mitigated 7 fully:
- Trust farming -- promotion cooldown (min days per level)
- Sybil attacks -- developer aggregate spend limits
- Self-dealing -- circular payments earn zero trust
- Dormancy exploit -- auto-demotion after 30/60/90 days
- Impersonation -- challenge-response with trust penalty
- Limit probing -- near-boundary pattern detection
- Score gaming -- public API hides scoring dimensions
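The dormancy rule is easy to make concrete: one level dropped at each of the 30/60/90-day inactivity thresholds listed above. A sketch, with the function name and exact semantics assumed rather than taken from the real scoring engine:

```typescript
const LEVEL_ORDER = ["L0", "L1", "L2", "L3", "L4"] as const;
type Level = typeof LEVEL_ORDER[number];

// Demote one level per elapsed dormancy threshold (30, 60, 90 days),
// never below L0.
function demoteForDormancy(level: Level, daysInactive: number): Level {
  const steps = [30, 60, 90].filter(t => daysInactive >= t).length;
  const idx = Math.max(0, LEVEL_ORDER.indexOf(level) - steps);
  return LEVEL_ORDER[idx];
}
```

Under these assumptions an L4 agent idle for 95 days lands at L1, and an active agent is untouched.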
## Why This Matters
Agent impersonation is the next cybercrime category. When agents have financial authority, stealing an agent's identity is fraud. There's no OWASP standard for this yet. No framework covers agent-to-agent identity verification at the payment layer.
The existing auth (OAuth, JWT, HMAC) was built for humans using browsers. Agents don't use browsers. They operate at machine speed, autonomously, across multiple platforms. They need machine identity, not session management.
## Links
- npm: `@proofxhq/agentpass`
- iOS SDK: Swift Package (CryptoKit + Security)
- Docs: agentpass.co.uk/docs
- Demo store: cloudbyte-store.fly.dev
- IETF Draft: draft-sharif-mcps-secure-mcp