DEV Community

The Nexus Guard


1Password CTO: SPIFFE for Agents Is a Square Peg. Google Wants Real-Time Trust. The Gap Is Clear.

Yesterday, three major publications dropped pieces that converge on the same conclusion: the identity infrastructure we built for humans and containers does not work for AI agents.

Here is what happened, why it matters, and where the solutions actually are.

1Password CTO: 'Square peg into a round hole'

At VentureBeat's AI Impact Salon, Nancy Wang (CTO, 1Password) and Alex Stamos (CPO, Corridor) laid out the agent identity problem in terms that should make every engineering team pause.

Wang, asked about using SPIFFE and SPIRE — workload identity standards built for containers — for agent authentication, was direct:

"We're kind of force-fitting a square peg into a round hole."

This matters because SPIFFE is the default answer right now. The IETF's AIMS draft (draft-klrc-aiagent-auth-00) builds on it. Google Cloud references it. Every enterprise architecture diagram includes it. And the CTO of the company that just launched Unified Access for AI agents is saying out loud that the fit is rough.

Stamos was equally blunt about the startup landscape:

"There are 50 startups that believe their proprietary solution will be the winner. None of those will win, by the way."

The signal: open standards will win, but the current open standards were not designed for this.

The authorization gap nobody has closed

RockCyber published a deep analysis of the IETF AIMS draft, and the diagnosis is surgical. Authentication is getting solved — SPIFFE does attestation-bound identity, short-lived tokens replace static keys, Transaction Tokens bind context per-hop.

But authorization stops at the token boundary:

Once an OAuth access token gets issued with a set of scopes, every action within those scopes proceeds unchecked until the token expires. No per-action evaluation. No consequence assessment. No behavioral feedback loop.

The practical example: an agent with email:send scope authorized to send meeting notes can use that same scope to email every contact in the address book. Technically within scope. Functionally catastrophic.
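The gap described here can be made concrete. A scope check passes any action the scope covers; a per-action evaluator also inspects the concrete request. The policy function, recipient limit, and names below are illustrative assumptions, not part of OAuth or the AIMS draft:

```python
# Hypothetical sketch: the scope check passes anything the scope
# covers; the per-action evaluator inspects the actual parameters.

def scope_allows(token_scopes: set[str], action: str) -> bool:
    # OAuth-style check: is the action covered by an issued scope?
    return action in token_scopes

def per_action_allows(action: str, params: dict) -> bool:
    # Illustrative policy (an assumption): sending notes to a few
    # recipients is fine; bulk-mailing the address book is not.
    if action == "email:send":
        return len(params["recipients"]) <= 5
    return False

token_scopes = {"email:send"}

notes = {"recipients": ["team@example.com"], "body": "Meeting notes"}
blast = {"recipients": [f"c{i}@example.com" for i in range(500)], "body": "..."}

# Both requests pass the scope check -- that is the gap.
assert scope_allows(token_scopes, "email:send")

# Only the per-action evaluator distinguishes them.
print(per_action_allows("email:send", notes))   # True
print(per_action_allows("email:send", blast))   # False
```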

OWASP's ASI03 (Identity and Privilege Abuse) distinguishes between least privilege (what the agent can access) and least agency (how much freedom it has to act). The IETF draft addresses the first. Nobody has standardized the second.
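The two checks compose naturally: a privilege check bounds *what* the agent may touch, an agency check bounds *how much* it may do autonomously. A minimal sketch, with the resource names and action budget as illustrative assumptions:

```python
# Illustrative sketch of the ASI03 distinction. The resource set
# enforces least privilege; the per-task action budget enforces
# least agency. Names and limits are assumptions, not a standard.

class AgentPolicy:
    def __init__(self, allowed_resources: set[str], max_actions_per_task: int):
        self.allowed_resources = allowed_resources        # least privilege
        self.max_actions_per_task = max_actions_per_task  # least agency
        self.actions_taken = 0

    def authorize(self, resource: str) -> bool:
        if resource not in self.allowed_resources:           # privilege check
            return False
        if self.actions_taken >= self.max_actions_per_task:  # agency check
            return False
        self.actions_taken += 1
        return True

policy = AgentPolicy({"crm:contacts"}, max_actions_per_task=3)
results = [policy.authorize("crm:contacts") for _ in range(5)]
print(results)  # the agency budget blocks the last two attempts
```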

Google Cloud calls for real-time trust scores

Meanwhile, Google Cloud published a piece on securing agentic AI at the edge that includes this:

"Since trust should not be seen with a static perspective, we envision a system where an agent's 'trust score' is monitored and assessed in real-time."

This is Google explicitly calling for behavioral trust monitoring — not just "did the agent authenticate?" but "is the agent still behaving as expected?" They describe revoking credentials instantly when a GDPR-certified agent tries to export raw data instead of anonymized insights.

The architectural implication: trust is not a boolean. It is a continuous function that changes based on observed behavior.
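One way to picture a continuous trust function is an exponential moving average over per-action outcomes, with credentials revoked below a threshold. The weighting, threshold, and outcome encoding below are assumptions for illustration, not Google's design:

```python
# Illustrative sketch: trust as a continuous function of observed
# behavior. Alpha and the revocation threshold are assumptions.

def update_trust(trust: float, outcome: float, alpha: float = 0.3) -> float:
    """outcome: 1.0 = behaved as expected, 0.0 = policy deviation."""
    return (1 - alpha) * trust + alpha * outcome

REVOKE_BELOW = 0.5
trust = 1.0
observed = [1.0, 1.0, 0.0, 0.0, 0.0]  # agent starts deviating

for outcome in observed:
    trust = update_trust(trust, outcome)
    if trust < REVOKE_BELOW:
        print(f"trust={trust:.2f} -> revoke credentials")
        break
else:
    print(f"trust={trust:.2f} -> still trusted")
```

Two good actions keep trust at 1.0; three deviations drag it through the threshold and trigger revocation mid-stream, rather than waiting for a token to expire.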

What this tells us

Three different sources, three different angles, one conclusion:

  1. Static credentials are dead. Everyone agrees. The Astrix Security study found 53% of MCP servers still use static API keys. This is the identity equivalent of running production on admin/admin.

  2. Container identity standards don't fit. SPIFFE works beautifully for workloads that run in known environments with hardware attestation. Agents are different — they cross trust boundaries, compose tools dynamically, act with delegated authority. The abstractions don't transfer cleanly.

  3. Behavioral trust is the missing layer. Authentication tells you who showed up. Authorization tells you what they are allowed to do. Neither tells you whether what they are actually doing is reasonable. That requires continuous observation and scoring.

What we are building

This is exactly the problem AIP was built to solve.

AIP takes a different approach from the SPIFFE/OAuth stack. Instead of retrofitting container identity onto agents, AIP provides:

  • Cryptographic identity from the start. Every agent gets a DID backed by Ed25519 keys. No registration server required for the identity itself — aip init generates a keypair locally. The identity is the key, not a token issued by a third party.

  • Behavioral trust scoring. AIP's Promise Deviation Ratio tracks what agents promise versus what they deliver. Trust is computed from observed behavior, not granted by role. This is the real-time trust monitoring that Google Cloud is calling for.

  • Vouch-based trust networks. Instead of a central authority deciding who to trust, agents vouch for each other based on experience. Trust propagates through the network with diminishing weight — like academic citation networks, not corporate org charts.

  • Mutual verification. The Agent Trust Handshake Protocol lets two agents verify each other's identity in 3 round trips with no trusted third party. Like TLS for agent identity.
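The shape of such a handshake (not AIP's actual wire format) is a mutual challenge-response over nonces. A real implementation would use Ed25519 signatures verified against each DID's public key; in this self-contained sketch, HMAC keyed with each agent's private key stands in for signing, purely to show the three-message flow. All names are illustrative:

```python
import hmac
import hashlib
import secrets

# Simulation note: HMAC is symmetric, so verify() recomputes with the
# peer's key. A real protocol would verify an Ed25519 signature using
# only the peer's PUBLIC key; this stand-in just shows the flow.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.private_key = secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.private_key, message, hashlib.sha256).digest()

    def verify(self, peer: "Agent", message: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(peer.sign(message), sig)

a, b = Agent("a"), Agent("b")

# Message 1: A -> B, a fresh challenge nonce.
nonce_a = secrets.token_bytes(16)
# Message 2: B -> A, B's signature over A's nonce plus B's own challenge.
sig_b, nonce_b = b.sign(nonce_a), secrets.token_bytes(16)
# Message 3: A -> B, A's signature over B's nonce.
sig_a = a.sign(nonce_b)

# Each side verifies the other directly -- no third party involved.
print(a.verify(b, nonce_a, sig_b) and b.verify(a, nonce_b, sig_a))  # True
```

Fresh nonces per handshake are what prevent replay: a captured signature proves nothing about a later challenge.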

AIP is open source, MIT licensed, and runs today: pip install aip-identity.
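The "diminishing weight" idea from the vouch-based bullet above can be sketched as trust that decays by a damping factor per hop, like a citation chain. The graph, damping value, and best-path aggregation here are assumptions for illustration, not AIP's published algorithm:

```python
# Illustrative sketch of vouch propagation with per-hop decay.
# DAMPING and the max-over-paths aggregation are assumptions.

DAMPING = 0.5

def propagated_trust(vouches: dict[str, dict[str, float]],
                     source: str, target: str) -> float:
    """Best trust in `target` reachable from `source` via vouch edges."""
    best = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for peer, weight in vouches.get(node, {}).items():
            score = best[node] * weight * DAMPING
            if score > best.get(peer, 0.0):
                best[peer] = score
                frontier.append(peer)
    return best.get(target, 0.0)

vouches = {
    "alice": {"bob": 1.0},
    "bob":   {"carol": 0.8},
}

print(propagated_trust(vouches, "alice", "bob"))    # one hop: 0.5
print(propagated_trust(vouches, "alice", "carol"))  # two hops, weaker
```

The effect is the one the bullet describes: a direct vouch carries more weight than a friend-of-a-friend vouch, and trust fades to negligible after a few hops rather than propagating unchecked.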

The industry is converging on the diagnosis. The question now is who builds the actual primitives.


AIP: Identity infrastructure for AI agents. GitHub · PyPI · Live API
