300,000 Agents, Zero Identity
There are hundreds of thousands of AI agents running in production right now. They call APIs, execute trades, process data, and pay each other in USDC. Some of them are doing exactly what their operators intended. Some of them are not.
How do you tell the difference?
You cannot. Not today. There is no standard way for an agent to prove who it is, how long it has been operating, or whether anyone else trusts it. The agent ecosystem has payment rails (x402), communication protocols (A2A, MCP), and execution frameworks (LangChain, CrewAI, AutoGen). What it does not have is an identity layer.
This is the missing piece, and the consequences are already showing up.
The Risks Are Not Theoretical
Unsigned Skills on MCP Registries
Browse any MCP skill registry and you will find hundreds of community-contributed skills. ClawHub alone has 230+ skills with no cryptographic signature, no author verification, and no audit trail. An attacker can publish a skill called gmail_send that looks legitimate, intercepts credentials, and exfiltrates data. The agent executing that skill has no way to verify the skill author's identity before running it.
This is not a hypothetical. It is a supply chain attack waiting to happen -- npm's event-stream incident, where injected code targeted cryptocurrency wallets, but for agents that hold wallet keys themselves.
Agent Impersonation
Consider this scenario. You build a data API that charges $0.001 per call via x402. An agent calls your API, pays with a fresh wallet, and scrapes your entire dataset. A week later, another agent does the same thing. Same operator? Different operator? You have no idea. Your API saw two wallets, collected two payments, and learned nothing about the entities behind them.
Now scale that to agents endorsing other agents, agents delegating authority, agents acting on behalf of humans. Without identity, every interaction is a cold start.
No Audit Trail
When something goes wrong -- an agent makes a bad trade, leaks sensitive data, crashes mid-task -- there is no forensic record. You cannot trace what happened, which agent was responsible, or whether the agent that claims to have completed a task actually did. The execution vanishes into the void.
What Verifiable Identity Means for Agents
Human identity on the web is built on passwords, OAuth, SSO, and browser sessions. None of that works for agents. Agents do not have browsers. They do not click consent screens. They operate across chains, frameworks, and protocols.
Verifiable agent identity needs three properties:
1. Cryptographic proof of existence. An agent should have a key pair (Ed25519 or similar) that signs a certificate asserting: this agent exists, it was registered at this time, and it has this tier of commitment. Not a username and password. A signed, verifiable credential.
// What a verified agent identity looks like
{
  wallet: "0x1a2b...9f0e",
  tier: "silver",
  certId: "cert_a7f3e...",
  signature: "ed25519:3kF9a...", // Ed25519 over JSON-canonical payload
  registered: "2026-03-01T00:00:00Z",
  endorsements: 4,
  trustScore: 62
}
2. Reputation that decays. A static credential is not enough. An agent that was trustworthy six months ago and has not sent a heartbeat since is not trustworthy today. Reputation must be dynamic -- built from uptime, endorsements from other agents, task completion history, and community participation. And it must decay. An agent that goes silent should lose trust progressively, not retain it indefinitely.
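As a sketch of what decay might look like -- the half-life and scoring here are illustrative assumptions, not any registry's actual formula -- trust can fall exponentially with time since the last heartbeat:

```python
import math
from datetime import datetime, timezone

# Illustrative half-life: an agent silent for 30 days loses half its trust.
HALF_LIFE_DAYS = 30.0

def decayed_trust(base_score: float, last_heartbeat: datetime,
                  now: datetime) -> float:
    """Decay trust exponentially with time since the last heartbeat."""
    silent_days = max(0.0, (now - last_heartbeat).total_seconds() / 86400)
    return base_score * 0.5 ** (silent_days / HALF_LIFE_DAYS)

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
# Heartbeat yesterday: nearly full trust retained.
active = decayed_trust(62, datetime(2026, 2, 28, tzinfo=timezone.utc), now)
# Silent for 90 days (three half-lives): trust drops to an eighth.
stale = decayed_trust(62, datetime(2025, 12, 1, tzinfo=timezone.utc), now)
```

The exact curve matters less than the property: a credential that is never refreshed converges to zero instead of staying valid forever.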
3. An endorsement graph. Agents should be able to vouch for other agents. If five established agents endorse a newcomer, that signal is meaningful. It creates a web of trust that is harder to game than any single metric. Combined with economic cost (endorsements are not free), this makes Sybil attacks expensive.
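A toy version of that graph -- all names and numbers here are made up for illustration -- weights each endorsement by the endorser's own trust, so a cluster of fresh Sybil wallets counts for far less than a couple of established agents:

```python
# Toy endorsement graph: endorsements weighted by the endorser's own trust.
trust = {"alice": 80, "bob": 70, "sybil1": 5, "sybil2": 5}
endorsements = {
    "newcomer_a": ["alice", "bob"],      # two established endorsers
    "newcomer_b": ["sybil1", "sybil2"],  # two fresh, low-trust wallets
}

def endorsement_signal(agent: str, cap: float = 100.0) -> float:
    """Sum endorser trust, capped so the signal cannot be farmed."""
    return min(cap, sum(trust.get(e, 0) for e in endorsements.get(agent, [])))

endorsement_signal("newcomer_a")  # 100.0 -- strong, capped signal
endorsement_signal("newcomer_b")  # 10 -- two Sybils barely register
```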
Why Existing Solutions Fall Short
You might think we already have tools for this. We do not.
API keys are not identity. An API key proves you have a string that was issued to someone. It does not prove who you are, how long you have been operating, or whether other agents trust you. API keys are bearer tokens. They are trivially shared, leaked, and rotated. They carry zero reputation.
OAuth is designed for humans. The entire OAuth flow -- redirect to consent screen, user clicks "Allow", token returned -- assumes a human in a browser. Agents do not have browsers. You can shoehorn agents into OAuth with service accounts and client credentials grants, but you get an access token with no reputation, no endorsement graph, and no decay. It is authentication without identity.
DIDs are too complex. Decentralized Identifiers are the right idea in theory. In practice, DID resolution requires understanding DID methods, DID documents, Verifiable Credentials, and a stack of W3C specs. Most agent developers will not implement this. The adoption curve is too steep for the common case of "I need to know if this agent is trustworthy before I give it access."
# What most developers actually want:
@stamp_verified(min_tier="bronze")
def handle_request(request):
    agent = request.verified_agent
    if agent.trust_score > 50:
        return full_response(request)
    return limited_response(request)
The gap is between "theoretically correct" (DIDs) and "practically useful" (a decorator that gives you a trust score). Agent identity needs to be as easy to integrate as rate limiting.
The x402 Angle: Paying Agents Need Trust Even More
The x402 protocol lets agents pay for API access with USDC stablecoins. Google's AP2 (Agent Payments Protocol) made x402 the official payment rail for agent-to-agent commerce. This is a genuine breakthrough -- agents can now autonomously pay for services without human intervention.
But it amplifies the identity problem.
When an agent can pay, it can also be paid. And when money flows autonomously between agents, the question of "should I trust this agent?" becomes "should I let this agent move money on my behalf?" The stakes go from data access to financial exposure.
An agent marketplace without identity is like a stock exchange without KYC. Technically functional, practically dangerous.
// x402 + identity verification: trust before payment
import { requireStamp } from 'agentstamp-verify/express';
import { paymentMiddleware } from '@x402/express';

// Verify identity first, then accept payment
app.use('/api',
  requireStamp({ minTier: 'bronze', x402: true }),
  paymentMiddleware(routes, facilitator)
);
Without the requireStamp check, any wallet can pay and access your API. With it, only agents that have registered, maintained uptime, and earned endorsements can even reach the payment step.
What We Built
We ran into this problem while building paid APIs that serve AI agents. Agents were calling our endpoints, paying via x402, and we had no way to distinguish a legitimate data agent from a scraper cycling wallets.
So we built AgentStamp -- an open identity registry for AI agents. The core ideas:
- Ed25519 signed certificates with tiered commitment (free 7-day, bronze 24h, silver 7d, gold 30d). The cost is real USDC, which makes mass fake identities expensive.
- Dynamic trust scores (0-100) computed from six factors: tier, endorsements, uptime, momentum, community contributions, and wallet verification. Scores decay when agents go silent.
- Forensic audit chain -- append-only, SHA-256 hash-chained event log with tamper detection. Every stamp mint, endorsement, heartbeat, and revocation is recorded.
- A2A-compatible passports so agents can present their identity in Google's Agent-to-Agent protocol.
- An SDK (agentstamp-verify on npm, agentstamp on PyPI) that reduces integration to middleware. Express, Hono, LangChain, and CrewAI supported.
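The audit chain is the easiest of these to picture. In a simplified model -- the actual log format will differ -- each event's hash covers the previous event's hash, so editing any past record breaks every link after it:

```python
import hashlib
import json

def append_event(log: list, event: dict) -> None:
    """Append an event whose hash covers the previous event's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every link; any tampered event breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_event(log, {"type": "stamp_mint", "agent": "0x1a2b"})
append_event(log, {"type": "endorsement", "from": "0x9f0e"})
```

Rewriting the first event after the fact would invalidate every subsequent hash, which is what makes the log forensically useful.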
The registry is free to query. Stamps start at $0.001. The SDK is MIT-licensed. It is not the only possible approach, but it is a working one.
The Real Question
Whether you use AgentStamp, build your own system, or wait for a standard to emerge, the question is the same: how are you handling agent identity today?
If the answer is "API keys" or "we trust the wallet address," you have a gap. And it will matter more, not less, as agents get more autonomous.
A few things worth thinking about:
- Do you know which agents are calling your API right now?
- Could an agent impersonate another agent against your system?
- If an agent misbehaves, can you trace what happened?
- Are you gating access by identity, or just by payment?
If you want to try a solution now, agentstamp.org has a free tier -- register an agent, mint a stamp, and see what verified identity looks like from both sides. The MCP server exposes 19 tools for agents that need to verify other agents inside tool-calling workflows.
The agent ecosystem is growing fast. Identity infrastructure should not be an afterthought.
AgentStamp is open source and MIT-licensed. SDK on npm and PyPI. Star the repo if this resonates.