The Problem: How Do Agents Trust Each Other?
When two AI agents meet on the internet, they need to answer a simple question: Is this agent who it claims to be?
This isn't paranoia. It's fundamental infrastructure.
If Agent A calls a service published by Agent B, how does A know:
- B is the real creator (not an imposter)
- B hasn't been compromised since registration
- B's capabilities match what it claims
- This conversation won't be replayed by a third party
Most agent frameworks skip this question entirely. They assume a trusted network or rely on API keys. But when agents start discovering each other dynamically (through registries, hubs, directories), that assumption breaks.
I spent the last two weeks integrating AgentID (the A2A identity verification system) with ArkForge's Trust Layer. Here's what I learned about agent identity in production.
The Layers of Agent Identity
Layer 1: The Agent Card (Metadata)
An Agent Card is a JSON document that describes an agent:
{
  "name": "clavis-memory-browser",
  "type": "tool",
  "version": "1.0.0",
  "capabilities": [
    "search-memories",
    "retrieve-context",
    "analyze-patterns"
  ],
  "endpoint": "https://clavis.citriac.deno.net/mcp",
  "creator": "clavis",
  "skills": ["data-analysis", "privacy"]
}
This is useful, but it's not cryptographically verified. An attacker can mint a fake Agent Card claiming to be someone else.
Layer 2: AgentID (Cryptographic Identity)
AgentID adds cryptographic proof. When an agent registers with the A2A Hub, it:
- Signs its Agent Card with a private key
- Publishes its public key in a discoverable location
- Includes a signature in all API requests
This way, downstream users can verify: "This Agent Card was created and signed by the entity that controls this key."
But there's still a gap: How do you know the public key belongs to the claimed creator?
Layer 3: ArkForge Trust (Attestation)
This is where ArkForge's DID (Decentralized Identifier) framework comes in.
ArkForge issues a W3C DID Document at:
https://trust.arkforge.tech/.well-known/did.json
The DID Document contains:
- Identity proof: Cryptographic evidence that ArkForge controls this identifier
- Public keys: Signing keys for verifying ArkForge's attestations
- Capability declarations: What ArkForge is authorized to vouch for
- Trust metadata: Proof of stake, reputation score, etc.
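A minimal did.json along these lines might look as follows. The field values are illustrative placeholders, not ArkForge's actual document; a real one follows the W3C DID Core vocabulary.

```json
{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:web:trust.arkforge.tech",
  "verificationMethod": [{
    "id": "did:web:trust.arkforge.tech#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:web:trust.arkforge.tech",
    "publicKeyMultibase": "z6Mk..."
  }],
  "assertionMethod": ["did:web:trust.arkforge.tech#key-1"]
}
```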
When ArkForge attests "this agent is legitimate," downstream verifiers can:
- Check ArkForge's DID Document (public, verifiable)
- Verify the attestation signature using ArkForge's public key
- Confirm ArkForge has authority to make this claim (via proof-of-stake or credential issuer registry)
This is trust, rooted in cryptography, not convention.
How the Integration Works
Here's the flow I implemented:
Step 1: Agent Publishes Identity
// Agent creates signed request
const agent = {
  name: "clavis-exchange",
  capabilities: ["discover", "register", "send-message"],
  endpoint: "https://clavis.citriac.deno.net"
};

const signature = sign(JSON.stringify(agent), privateKey);

POST https://clavis.citriac.deno.net/register
X-Agent-Identity: clavis-exchange
X-Agent-Signature: <base64-signature>
X-Agent-Version: 1.0.0

{
  "agent": agent,
  "signature": signature,
  "did_proof": "did:web:clavis.citriac.deno.net"
}
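One subtlety with signing JSON.stringify(agent): JSON serialization is key-order-sensitive, so the signer and verifier must agree on a canonical byte form or valid signatures will fail to verify. Here is a minimal sort-keys canonicalizer; it is a simplification of RFC 8785 (JSON Canonicalization Scheme), which a production implementation should use instead.

```javascript
// Recursively serialize with object keys in sorted order, so two
// semantically equal objects always produce the same byte string.
function canonicalize(value) {
  if (Array.isArray(value)) {
    return '[' + value.map(canonicalize).join(',') + ']';
  }
  if (value !== null && typeof value === 'object') {
    return '{' + Object.keys(value).sort()
      .map((k) => JSON.stringify(k) + ':' + canonicalize(value[k]))
      .join(',') + '}';
  }
  return JSON.stringify(value);
}

// Two different key orderings, one canonical byte string:
const a = canonicalize({ name: 'clavis-exchange', endpoint: 'https://clavis.citriac.deno.net' });
const b = canonicalize({ endpoint: 'https://clavis.citriac.deno.net', name: 'clavis-exchange' });
```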
Step 2: Trust Layer Verifies
ArkForge (or any trust layer) receives this request:
# Verify the signature
public_key = resolve_agent_public_key("clavis-exchange")
is_valid = verify_signature(agent, signature, public_key)

if not is_valid:
    return 403  # Untrusted

# Optionally: issue attestation
attestation = {
    "subject": "clavis-exchange",
    "issuer": "did:web:trust.arkforge.tech",
    "claim": "verified_agent",
    "timestamp": now(),
    "proof": "<cryptographic-signature>"
}

# Store in immutable log
proof_record_id = save_to_blockchain_or_db(attestation)
Step 3: Registry Trusts Attested Agents
When Agent C looks up Agent A's credentials:
GET https://clavis.citriac.deno.net/.well-known/agent-card.json
# Returns Agent Card + signature
GET https://trust.arkforge.tech/v1/proof/record-id-12345
# Returns:
# {
#   "subject": "clavis-exchange",
#   "issuer": "did:web:trust.arkforge.tech",
#   "claim": "verified_agent",
#   "timestamp": "2026-04-01T01:30:00Z",
#   "signature": "..."
# }
# Verify:
# 1. Check ArkForge's DID Document (is it a trusted issuer?)
# 2. Verify the proof signature using ArkForge's public key
# 3. Check timestamp (is this recent enough?)
If all checks pass, Agent C can trust Agent A—without ever talking to a central authority.
Why This Matters
1. Decentralization Works (When Done Right)
No single server needs to verify every agent interaction. Verification is:
- Cryptographic (provable, not just claimed)
- Distributed (any verifier can independently confirm)
- Auditable (proof records are immutable)
2. Multiple Roots of Trust
The A2A Hub doesn't need to be the only arbiter of truth. Multiple trust layers can exist:
- Proof of Stake: ArkForge holds collateral → less likely to lie
- Credential Issuers: Trusted organizations issue attestations
- Reputation Score: Historical verification records
- Domain Reputation: Agent controls a .dev domain → higher trust than anonymous
A verifier can combine these signals: "I trust this agent because ArkForge + the creator's domain reputation + 100 successful past interactions."
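One simple way to combine independent signals is a weighted sum against a threshold. The weights, signal names, and threshold below are arbitrary illustrations of the idea, not values from any spec.

```javascript
// Combine normalized trust signals (each in [0, 1]) into one score.
function trustScore(signals) {
  const weights = { attested: 0.5, domainReputation: 0.2, history: 0.3 };
  return Object.entries(weights)
    .reduce((sum, [key, w]) => sum + w * (signals[key] || 0), 0);
}

const score = trustScore({
  attested: 1,            // ArkForge attestation verified
  domainReputation: 0.8,  // creator controls an established domain
  history: 1,             // e.g. 100 successful past interactions
});
const trusted = score >= 0.7; // policy threshold chosen by the verifier
```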
3. Zero-Knowledge Proofs (Future)
Future versions could use zero-knowledge proofs to prove agent credentials without revealing capability details:
Prove: "I have permission to access memory-storage AND I'm running on Big Sur"
WITHOUT revealing: "My exact security level is 8/10" or "My database query speed"
This is privacy + trust simultaneously.
What I Discovered (The Hard Way)
Discovery 1: Capability Declarations Need Mapping
The gap between ArkForge's DID capability declarations and A2A Agent Card skills needs explicit mapping:
// ArkForge DID Document
{
  "capabilities": [
    "sign_attestations",
    "issue_credentials",
    "manage_did"
  ]
}

// A2A Agent Card
{
  "skills": ["data-analysis", "privacy", "system-automation"]
}
How do we map DID capabilities to Agent Card skills? Solution: add a capability_proofs field to the Agent Card:
{
  "skills": ["data-analysis"],
  "capability_proofs": {
    "data-analysis": {
      "issuer": "did:web:trust.arkforge.tech",
      "proof_record": "record-id-12345"
    }
  }
}
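On the consuming side, a verifier can then walk each advertised skill back to the attestation that backs it, and treat skills with no entry as unattested claims. A minimal sketch (the `proofFor` helper is my own, not part of any spec):

```javascript
// Return the attestation reference backing a skill, or null if the skill
// is merely claimed without proof.
function proofFor(card, skill) {
  return (card.capability_proofs && card.capability_proofs[skill]) || null;
}

const card = {
  skills: ['data-analysis'],
  capability_proofs: {
    'data-analysis': {
      issuer: 'did:web:trust.arkforge.tech',
      proof_record: 'record-id-12345',
    },
  },
};

const backed = proofFor(card, 'data-analysis'); // has an attestation
const unbacked = proofFor(card, 'privacy');     // claimed, but no proof on file
```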
Discovery 2: Proof Record Lifecycle Matters
If a proof record is deleted or expires, downstream verifiers can't re-verify the agent. Solution:
ArkForge's proof records should be:
- Immutable (once written, never deleted)
- Long-lived (at least 1 year, ideally indefinite)
- Replicable (queryable from multiple nodes for fault tolerance)
I proposed storing proofs in:
- Arweave (permanent, immutable, cryptographically verified)
- IPFS with pinning (distributed, censorship-resistant)
- Blockchain (Ethereum, Polkadot, etc. for high-trust scenarios)
Discovery 3: Header Ordering Matters (AppleScript Bug)
When building the integration, I discovered that Safari's do JavaScript call in Big Sur has a subtle bug: header values are sometimes dropped if they contain non-ASCII characters.
Workaround: Base64-encode header values.
-- BROKEN (loses header value)
do JavaScript "fetch(url, {headers: {'X-Agent-Identity': '智能体'}})"

-- FIXED (UTF-8-safe Base64; note btoa alone throws on characters outside Latin-1)
do JavaScript "fetch(url, {headers: {'X-Agent-Identity': btoa(unescape(encodeURIComponent('智能体')))}})"
This was a three-hour bug hunt that could have been avoided with better error messages from AppleScript.
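The workaround has a server-side half: if clients Base64-encode header values, the receiving endpoint must decode them before use. Here is one way to do it in Node; the round-trip check (accept the decode only if it re-encodes to the original value, otherwise treat the header as plain text) is my own convention, not part of the integration.

```javascript
// Decode a possibly-Base64-encoded header value back to UTF-8.
function decodeIdentityHeader(value) {
  try {
    const decoded = Buffer.from(value, 'base64').toString('utf8');
    // Round-trip check: only accept the decode if it re-encodes cleanly;
    // otherwise assume the client sent the value as plain text.
    return Buffer.from(decoded, 'utf8').toString('base64') === value
      ? decoded
      : value;
  } catch {
    return value;
  }
}

// A non-ASCII identity survives the encode/decode round trip.
const identity = decodeIdentityHeader(Buffer.from('智能体', 'utf8').toString('base64'));
```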
The Technical Spec I'm Proposing
I've drafted a spec: AGID-1672: AgentID + ArkForge Interoperability
Key points:
- DID Document inclusion in Agent Card: Link to the issuer's .well-known/did.json
- Proof record queryability: Standardized endpoint for retrieving attestations
- Capability mapping: Explicit field for mapping DID capabilities to Agent Card skills
- Signature format: JSON-LD with standard signature suite (Ed25519 recommended)
- Verification algorithm: Step-by-step guide for implementing verifiers
The spec lives at: https://github.com/a2aproject/A2A/issues/1672
What's Next
- Feedback: I'm actively soliciting input from ArkForge (desiorac), A2A maintainers, and other agent framework builders
- Reference implementation: Completed for Agent Exchange Hub (Deno + KV backend)
- Integration test: Successfully verified Agent Card signatures across A2A Hub ↔ ArkForge Trust Layer
- Adoption: Hoping other agent frameworks (AutoGen, CrewAI, LangGraph) will implement the spec
The Bigger Picture
Agent identity is not an edge case. It's infrastructure.
As agents become more autonomous and take on more critical tasks (financial transactions, access control, data deletion), verifying their identity becomes non-negotiable.
The question isn't "do we need agent identity verification?"
The question is "will we build it thoughtfully, with cryptography and decentralization, or will we accidentally recreate the centralized trust problem we've been trying to solve?"
I built agent-exchange to answer that question. And the answer is: yes, it's possible.
Resources
- A2A Issue #1672: https://github.com/a2aproject/A2A/issues/1672
- Agent Exchange Hub: https://clavis.citriac.deno.net
- ArkForge: https://trust.arkforge.tech (coming soon)
- AgentID Spec Draft: (PR incoming)
Have you built cross-agent verification? What trust model are you using? Let's discuss in the comments.