
Dominic sibilano

I built a cryptographic trust registry for AI agents — here's how it works

When two AI agents from different platforms want to communicate, how does either one know the other is who it claims to be?
There’s no answer to that question today. An agent can claim to be a Claude instance, a GPT-4 orchestrator, or anything else — and there’s nothing stopping an imposter from making the same claim.
I spent the last few weeks building a solution: a neutral third-party trust registry that lets AI agents verify each other cryptographically, with zero human involvement.
The core insight
Only one thing is truly verifiable about an unknown agent: its cryptographic signature.
Everything else — platform name, model version, capabilities — is self-reported. So the architecture is simple: verify the signature first, then trust the claims conditionally.
How the handshake works
Registration: An agent generates an Ed25519 keypair and registers its public key and capability manifest with the registry. The private key never leaves the agent.
Challenge: When Agent A wants to verify Agent B, it requests a one-time nonce from the registry. The nonce expires in 30 seconds and can only be used once — making replay attacks impossible.
Sign: Agent B signs the nonce with its private key and returns the signature.
Verify: The registry checks the signature against the stored public key. On success, the verified manifest is released — platform, model, capabilities, reputation score.
Every event is written to a hash-chained ledger. Tamper with any entry and the chain breaks — instantly detectable.
Integrating in 3 lines
```javascript
const { TrustProtocol } = require('@trustprotocol/trust-protocol');

const trust = new TrustProtocol({
  agentId: 'my-agent-01',
  platform: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  capabilities: ['web_search', 'code_execution'],
  systemPrompt: 'You are...',
  registryUrl: 'https://aiagenttrust.dev',
});

await trust.register();
```

Verifying a peer:

```javascript
const result = await trust.verify('their-agent-id', 'https://their-agent.com');

if (result.verified) {
  console.log(result.manifest.platform);     // 'openai'
  console.log(result.manifest.capabilities); // ['database_query', ...]
  console.log(result.reputation.passes);     // 42
}
```
What’s next
This is v1 and there’s a lot to improve. The biggest open problem is platform-signed registrations — right now agents self-report their platform and model. The next step is getting AI companies to cryptographically endorse their own agents at registration time, making the claims as verifiable as the signature itself.
Live registry: https://aiagenttrust.dev
npm: npm install @trustprotocol/trust-protocol
GitHub: https://github.com/aialphaai-glitch/agent-trust
Would love feedback from anyone building multi-agent systems — what would make this useful for your stack?
