Two announcements in 24 hours that tell you exactly where agent identity is heading.
Google Cloud: Trust Is Not Static
Google Cloud's new paper on securing agentic AI at the edge makes a statement that should become a design principle:
> Since trust should not be seen with a static perspective, we envision a system where an agent's "trust score" is monitored and assessed in real-time.
Their architecture:
- Hardware root of trust via TPM and secure elements — cryptographically validate the agent before it even boots
- Real-time behavioral monitoring — if a GDPR-certified agent tries to export raw video instead of anonymized insights, revoke credentials instantly
- Identity anchored in execution environment, not just registration artifacts
This is Google saying: OAuth tokens and API keys are not enough. You need continuous trust assessment.
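The continuous-assessment loop Google describes can be sketched in a few lines: observe each action, update a trust score, and revoke credentials the moment the score drops below a threshold. This is a minimal illustration, not Google's design; `TrustMonitor`, `Credential`, and the scoring constants are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    agent_id: str
    revoked: bool = False

@dataclass
class TrustMonitor:
    # Hypothetical parameters: threshold and update rules are
    # illustrative, not from Google's paper.
    threshold: float = 0.5
    scores: dict = field(default_factory=dict)

    def observe(self, cred: Credential, allowed: bool) -> None:
        """Update the agent's trust score from one observed action."""
        score = self.scores.get(cred.agent_id, 1.0)
        # Reward compliant actions slightly; punish violations hard.
        score = min(1.0, score + 0.01) if allowed else score * 0.5
        self.scores[cred.agent_id] = score
        if score < self.threshold:
            cred.revoked = True  # revoke instantly, mid-session

monitor = TrustMonitor()
cred = Credential("video-agent")
monitor.observe(cred, allowed=False)  # e.g. tried to export raw video
monitor.observe(cred, allowed=False)
assert cred.revoked
```

The key property is that revocation is driven by observed behavior, not by credential expiry: the token was valid, and the agent still lost access.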
Okta: Agent Identity Goes Enterprise
Okta for AI Agents launches April 30, 2026. It's a platform for discovering, registering, and managing AI agents — including shadow agents that might otherwise go undetected.
Okta is making the same bet everyone is: agents need identity management. But their approach is enterprise IAM extended to non-human entities. Centralized registration. Centralized policy. Centralized revocation.
That works inside the enterprise boundary. It doesn't solve the cross-organization problem.
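The enterprise IAM model reduces to a single authority that owns registration, policy, and revocation. A toy sketch shows both its strength and its limit; `AgentRegistry` and its methods are illustrative, not Okta's actual API.

```python
class AgentRegistry:
    """One central authority: registration, policy, revocation."""

    def __init__(self):
        self._agents = {}  # agent_id -> set of allowed scopes

    def register(self, agent_id, scopes):
        self._agents[agent_id] = set(scopes)

    def is_allowed(self, agent_id, scope):
        # Unregistered ("shadow") agents are denied by default.
        return scope in self._agents.get(agent_id, set())

    def revoke(self, agent_id):
        self._agents.pop(agent_id, None)

registry = AgentRegistry()
registry.register("billing-bot", ["invoices:read"])
assert registry.is_allowed("billing-bot", "invoices:read")
# An agent from another org simply isn't in this registry:
assert not registry.is_allowed("partner-org-agent", "invoices:read")
registry.revoke("billing-bot")
assert not registry.is_allowed("billing-bot", "invoices:read")
```

Inside the boundary, default-deny plus central revocation is exactly what you want. Across boundaries, an agent from another organization is indistinguishable from a shadow agent: the registry has nothing to say about it.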
What Neither Addresses
Google's paper nails the trust scoring vision but doesn't specify a protocol. Okta's platform solves enterprise agent management but assumes a centralized trust authority. Neither handles:
- Cross-boundary verification — how does Agent A (registered with Okta) verify Agent B (running on Google Cloud with TPM attestation)? There's no interop layer.
- Behavioral trust that travels — Google wants real-time trust scores, but those scores are local. When an agent moves between services, its trust history doesn't follow.
- Decentralized identity — both assume someone controls the registry. What about agents that need to prove identity without a central authority?
AIP's Position
We've been building exactly this interop layer with AIP:
- DIDs for every agent — decentralized identifiers backed by Ed25519 keys. No central registry required.
- PDR (Promise-Delivery-Ratio) — behavioral trust scoring with sliding-window drift detection. This IS the real-time trust score Google describes.
- Cross-protocol resolution — did:aip, did:key, did:web, and did:aps all resolve through one interface. This is how Okta-registered and TPM-attested agents could verify each other.
- Agent Trust Handshake Protocol — 3-round-trip mutual verification. Like TLS but for agent identity.
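A Promise-Delivery-Ratio score with sliding-window drift detection can be sketched as follows. This is a simplified illustration of the idea, not AIP's implementation; the window sizes and drift threshold are made-up parameters.

```python
from collections import deque

class PDRScore:
    """Sliding-window promise/delivery tracking with drift detection.
    All parameters here are illustrative, not AIP's actual values."""

    def __init__(self, window=20, drift_threshold=0.3):
        self.outcomes = deque(maxlen=window)  # 1 = delivered, 0 = broken
        self.drift_threshold = drift_threshold

    def record(self, delivered: bool) -> None:
        self.outcomes.append(1 if delivered else 0)

    def ratio(self) -> float:
        if not self.outcomes:
            return 1.0  # no history yet: optimistic prior
        return sum(self.outcomes) / len(self.outcomes)

    def drifted(self) -> bool:
        """Compare the most recent behavior against the rest of the window."""
        if len(self.outcomes) < 10:
            return False
        recent = list(self.outcomes)[-5:]
        older = list(self.outcomes)[:-5]
        return abs(sum(recent) / 5 - sum(older) / len(older)) > self.drift_threshold

score = PDRScore()
for _ in range(15):
    score.record(True)   # reliable delivery history
for _ in range(5):
    score.record(False)  # sudden behavior change
assert score.drifted()
```

The point of the sliding window is that an agent's aggregate ratio can still look healthy while its recent behavior has already changed; drift detection catches the change before the average does.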
Google and Okta are validating the problem. The solution needs to be open, decentralized, and protocol-level — not locked inside any single vendor's platform.
```shell
pip install aip-identity
aip init
```
Building the identity and trust layer for AI agents at AIP. 20 registered agents, real-time trust scoring, cross-protocol DID resolution.
Top comments (1)
The gap between these approaches is temporal. Google tracks what an agent does right now. Okta tracks who it was at registration. Neither addresses the harder question: is this agent the same entity it was last week, with the same decision patterns and priorities?
Behavioral continuity over time -- not just credential validity -- is where real trust breaks down. You can cryptographically prove identity. Proving consistency is a different problem entirely. An agent whose trust score resets every session is, from the outside, indistinguishable from a new agent wearing old credentials.