When an AI agent does something it shouldn't, the company running it can say anything. "The user authorized this." "The model went rogue." "We have no record of that."
Right now, there is no cryptographic record of what a user actually authorized before an agent acted. The operator — the company running the agent — is a trusted third party with no binding commitment. Every AI agent deployment in the world has this gap.
I kept thinking about it like the early internet. For years there was no SSL. Websites just asked you to trust them with your credit card. Then Netscape shipped SSL, the cryptographic layer that made that trust unnecessary. The padlock in your browser is its descendant, TLS.
AI agents need the same thing. Not monitoring. Not logs. A cryptographic receipt that existed before the first action.
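One minimal shape this could take, sketched below. Everything here is an assumption for illustration: the field names, the `issue_receipt`/`verify_receipt` functions, and the use of an HMAC with a user-held key (a real system would more likely use public-key signatures, e.g. Ed25519, so anyone can verify without the secret). The point is the ordering: the signed commitment exists before the agent takes its first action, so the operator can no longer say "the user authorized this" without a receipt to back it up.

```python
import hashlib
import hmac
import json
import time

# Hypothetical: a key held by the USER (or an escrow party), not the operator.
# Because the operator never holds it, the operator cannot forge or quietly
# rewrite what was authorized after the fact.
USER_KEY = b"demo-key-held-by-the-user"  # illustration only, not a real key scheme


def issue_receipt(user_id: str, scope: list[str], key: bytes = USER_KEY) -> dict:
    """Commit to what the user authorized *before* the agent acts."""
    body = {
        "user_id": user_id,
        "scope": sorted(scope),          # the actions the agent may take
        "issued_at": int(time.time()),   # receipt timestamp predates any action
    }
    # Canonical serialization so the same authorization always hashes the same way.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    body["sig"] = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return body


def verify_receipt(receipt: dict, key: bytes = USER_KEY) -> bool:
    """Check that the receipt matches what was originally authorized."""
    claimed = receipt.get("sig", "")
    body = {k: v for k, v in receipt.items() if k != "sig"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

With this in place, any later tampering is detectable: a receipt for `["read_email"]` stops verifying the moment someone edits the scope to include `send_money`.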