Aditya Singh
How do we actually trust AI agents? 🤔

So I came across this interesting piece from Oasis on verification methods for AI agents and thought it was worth sharing: https://oasis.net/blog/verification-methods-ai-agents

The core idea is simple: as AI agents start handling decisions, money, or sensitive data, we can’t just take their word for it. We need ways to verify they’re doing what they claim. The blog runs through different approaches, each with its own trade-offs:

Zero-Knowledge Proofs (zkML): great for privacy; you can mathematically prove an AI's output is correct without revealing the underlying data. Downside: proof generation is slow and computationally heavy, so it works best for smaller models and tasks.
Optimistic Verification (opML): faster and cheaper; results are assumed valid unless someone challenges them within a dispute window (sketched below). The catch? You rely on at least one honest "watcher" actually checking.
Trusted Execution Environments (teeML): secure hardware enclaves run the model privately and attest to what they executed. A solid balance of privacy and trust, but you're tied to specific hardware.
Cryptoeconomic Verification: nodes stake tokens, run the model, and vote on the result. Cheap and simple, but the weakest security if enough nodes collude.
Bonus ideas: oracle networks and fully homomorphic encryption, both still maturing but pretty exciting directions.
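To make the optimistic flow concrete, here's a minimal toy sketch of the challenge-window idea. Everything here (`Claim`, `rerun_model`, the settle logic) is my own illustration of the general pattern, not Oasis's API:

```python
# Toy sketch of opML-style optimistic verification: a submitter posts a
# result, and any watcher can challenge it during a dispute window by
# re-running the task themselves. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Claim:
    task_id: str
    claimed_output: str
    challenged: bool = False

def rerun_model(task_id: str) -> str:
    """Stand-in for actually re-executing the model on the task."""
    return "correct-output"

def watch(claim: Claim) -> None:
    """An honest watcher re-runs the task and disputes a bad claim."""
    if rerun_model(claim.task_id) != claim.claimed_output:
        claim.challenged = True  # would trigger an on-chain dispute for real

def settle(claim: Claim) -> str:
    # The optimistic assumption: valid unless challenged in time.
    return "rejected" if claim.challenged else "accepted"

good = Claim("task-1", "correct-output")
bad = Claim("task-2", "made-up-output")
for c in (good, bad):
    watch(c)
    print(c.task_id, settle(c))
# task-1 accepted
# task-2 rejected
```

Note how the whole scheme hinges on `watch` actually running: if no honest watcher shows up during the window, a bad claim settles as accepted. That's the trade-off the post flags.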

What I liked is that it doesn’t push one “winner.” Instead, it shows how each method fits different situations. zkML for super-sensitive stuff, opML for scale, TEEs when privacy and trust are both needed, and so on.
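As a toy illustration of that matchmaking, here's a quick heuristic distilled from the post's own summary. It's my paraphrase in code, not anything Oasis prescribes:

```python
# A toy decision heuristic mirroring the post's summary of which
# verification method fits which situation. Purely illustrative.

def pick_method(sensitive_data: bool, high_throughput: bool,
                trusted_hardware_ok: bool) -> str:
    if sensitive_data and trusted_hardware_ok:
        return "teeML"       # privacy + trust, but tied to specific hardware
    if sensitive_data:
        return "zkML"        # strongest privacy, slow and heavy
    if high_throughput:
        return "opML"        # cheap at scale, needs an honest watcher
    return "cryptoeconomic"  # cheap and simple, weakest under collusion

print(pick_method(sensitive_data=True, high_throughput=False,
                  trusted_hardware_ok=False))
# zkML
```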

The bigger point: AI agents will only be as trustworthy as the systems verifying them. And in the decentralized world, trust isn't assumed, it's proven.

Curious: if you had to pick one, which verification method feels like the future?

Top comments (3)

DC

Trust but verify. This mantra was made for blockchain and web3, and now for AI as well. Indeed, trustlessness should be every AI agent's default design going forward, and Oasis shows how that's possible with its R&D and the frameworks and tools it's building.
While various verification methods are highlighted, I'm partial to TEEs because of their inherently flexible nature: they can combine with other privacy-preserving techniques like zero-knowledge proofs to provide hybrid, more robust solutions. And then there's ROFL, which lets heavy-duty computation run off-chain with confidentiality, while verifiable proofs and results are bridged on-chain and stored as finalized records.

Manav

TEEs feel underrated here. Oasis ROFL shows how confidential compute + verifiable execution can make AI agents both private and trustworthy. Not something you get with cryptoeconomics alone.

sid

Totally agree!! Trust has to be proven. I'm personally bullish on TEE-based verification since Oasis is pushing this with ROFL + Sapphire. Feels like the right mix of scale + privacy for AI agents!