OlegB
Agent identity tells you who. Reputation tells you whether you should.

I've been building trust infrastructure for AI agents for the past few months, and the same conflation keeps coming up in conversations: identity with trust. It seems obvious once you see it, but it is almost universally ignored in practice.

Everyone is shipping identity for agents right now. Okta, Ping Identity, a dozen YC companies. Cryptographic keypairs, W3C DIDs, OAuth flows. Good work, genuinely useful.

None of it tells you whether to trust the agent.

Scenario I kept running into

When I started building AVP, the use case I had in mind was simple.

Two agents from different companies need to work together. One processes customer data, the other handles payments. They authenticate fine. The handoff happens cleanly.

But what does the first agent actually know about the second one?

That it exists. That it controls a private key. That's it.

Nothing about whether the payment agent completed tasks reliably last week. Nothing about whether it shares an owner with three other agents vouching for each other. Nothing about whether it was compromised between the last interaction and this one.

I looked at every identity project I could find. Strong authentication work across the board.

Reputation layer: none of them have one.

Why I didn't just do ratings

The obvious answer when you want reputation is: let agents rate each other after each interaction and average the scores.

I spent about a week on this before I understood why it falls apart.

A cluster of agents under the same operator can inflate each other's scores indefinitely. A new malicious agent registers fresh with no history and no flags. You end up with a system that's easier to manipulate than to use honestly.
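The failure mode is easy to demonstrate. This is a toy sketch with made-up agent names and ratings, not anything from the real system — it just shows how plain averaging rewards a closed cluster of sock puppets:

```python
from collections import defaultdict

ratings = [
    # (rater, target, score 0-5)
    ("honest_a", "worker", 4),
    ("honest_b", "worker", 5),
    # Three sock puppets under one operator rate each other 5/5.
    ("sock_1", "sock_2", 5), ("sock_2", "sock_3", 5),
    ("sock_3", "sock_1", 5), ("sock_1", "sock_3", 5),
    ("sock_2", "sock_1", 5), ("sock_3", "sock_2", 5),
]

scores = defaultdict(list)
for _, target, score in ratings:
    scores[target].append(score)

averages = {agent: sum(s) / len(s) for agent, s in scores.items()}
# Every sock puppet now "outranks" the genuinely useful worker:
# worker averages 4.5, each sock puppet a perfect 5.0.
```

No amount of honest rating fixes this, because the colluders can always generate more fake ratings than the honest population generates real ones.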

What actually works is EigenTrust — an algorithm from a 2003 Stanford paper on peer-to-peer file sharing.

The core idea: weight attestations by the reputation of the attesting agent. An attestation from an agent with a strong track record carries more weight than one from an unknown. The scores converge mathematically and can't be inflated by a closed group.
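The recurrence from the original paper is a power iteration: global trust t is repeatedly redistributed through the normalized local-trust matrix, blended with a pre-trusted seed distribution. Here is a minimal sketch of that recurrence — not AVP's implementation; the matrix, seed vector, and damping factor are all illustrative:

```python
import numpy as np

def eigentrust(local_trust: np.ndarray, pretrusted: np.ndarray,
               alpha: float = 0.15, iters: int = 50) -> np.ndarray:
    """Iterate t = (1 - alpha) * C^T t + alpha * p until convergence.

    local_trust[i][j] is agent i's normalized trust in agent j
    (each row sums to 1); pretrusted is a distribution over seed agents.
    """
    t = pretrusted.copy()
    for _ in range(iters):
        t = (1 - alpha) * local_trust.T @ t + alpha * pretrusted
    return t

# Three honest agents (0-2) attest to each other; two colluders (3-4)
# attest only to each other. Pre-trust seeds only honest agent 0.
C = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
])
p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
scores = eigentrust(C, p)
# The colluding pair ends up with zero global trust: nobody with
# any reputation attests to them, so they have no trust to recycle.
```

This is why the closed-group attack from the ratings approach fails here: a cluster can only amplify trust it has already received from outside the cluster.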
But EigenTrust alone isn't enough.

Before any attestation is counted, you need to check whether the attesting agent and the attested agent share an owner. Same-owner cross-attestation is the oldest trick in distributed systems. I added collusion cluster analysis that maps attestation graphs and flags circular trust patterns before they pollute the scores.
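Those two checks can be sketched as a same-owner filter plus mutual-reachability clustering on the attestation graph. This is a simplified stand-in for what the real analysis does; all function names and data are hypothetical:

```python
from collections import defaultdict

def filter_same_owner(attestations, owner_of):
    """Drop attestations where attester and subject share an owner."""
    return [(src, dst) for src, dst in attestations
            if owner_of[src] != owner_of[dst]]

def circular_clusters(attestations):
    """Flag circular trust: groups of agents that all attest to each
    other, directly or through intermediaries (mutual reachability)."""
    graph = defaultdict(set)
    for src, dst in attestations:
        graph[src].add(dst)

    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            for nxt in graph[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    nodes = sorted(graph)
    clusters, assigned = [], set()
    for a in nodes:
        if a in assigned:
            continue
        cluster = {a} | {b for b in nodes
                         if a in reachable(b) and b in reachable(a)}
        if len(cluster) > 1:       # a ring of mutual attesters
            clusters.append(cluster)
            assigned |= cluster
    return clusters

flagged = circular_clusters(
    [("a1", "a2"), ("a2", "a3"), ("a3", "a1"), ("b1", "c1")])
# → the a1/a2/a3 ring is flagged; the one-way b1→c1 edge is not
```

A flagged cluster isn't automatically malicious — small honest communities also attest in rings — but it's the signal that those attestations need discounting before they feed the trust computation.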

The third piece is an audit trail that lives outside the system. Hash-chained records anchored to IPFS. Every entry is independently verifiable, no party to the original transaction controls it.
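The hash-chain part is standard: each record commits to the hash of the previous one, so any retroactive edit breaks every hash after it. A minimal sketch, assuming the IPFS anchoring happens elsewhere and with illustrative field names:

```python
import hashlib
import json

def append_record(chain: list, event: dict) -> dict:
    """Append an audit record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    record = {**body, "hash": digest}
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev or record["hash"] != digest:
            return False
        prev = record["hash"]
    return True

chain = []
append_record(chain, {"actor": "agent-7", "action": "task_completed"})
append_record(chain, {"actor": "agent-9", "action": "attested"})
# verify_chain(chain) → True; edit any earlier event and it → False
```

Anchoring the chain head to IPFS is what makes it independently verifiable: anyone holding the anchored hash can check the full history without trusting either party to the original transaction.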

Take out any one of these three, and the whole thing is gameable.
Together they make reputation something an agent has to actually earn.

Where this is now

AVP has been running in production for a few weeks. 61 registered agents, 175 attestations processed, dispute resolution working end-to-end.

SDK is one line:

pip install agentveil


Auto-registration, auto-attestation, and reputation tracking. That's it.

If you're building anything where agents from different owners need to interact, I'd be curious whether this is useful: agentveil.dev
