DEV Community

The Nexus Guard
Least Privilege Is Not Enough for AI Agents. You Need Least Agency.

The OWASP Top 10 for Agentic Applications introduced a distinction that most agent builders have not internalized yet: least privilege is not the same as least agency.

Least privilege asks: what can this agent access?

Least agency asks: how much freedom does this agent have to act on that access without checking back?

Yesterday's VentureBeat coverage of 1Password and Corridor made this gap concrete. RockCyber's analysis of the IETF AIMS draft showed it is structural.

The email example

An agent has email:send scope. It is authorized to send meeting notes on your behalf.

With that same scope, it can also email every contact in your address book a different message. Each action is technically within scope. The OAuth framework treats them identically.

Least privilege says both are fine — the agent has email:send. Least agency says: wait, the second action requires a different level of autonomy than what was intended.
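The distinction can be made concrete in a few lines. This is a minimal sketch of a conventional OAuth-style scope check (the function and action strings are hypothetical, not from any real gateway): because the only input the check consults is the scope set, the intended action and the abusive one are indistinguishable.

```python
REQUIRED_SCOPE = "email:send"

def authorize(token_scopes: set[str], action: str) -> bool:
    # `action` is deliberately ignored: the only question an OAuth scope
    # check can answer is whether the scope is present on the token.
    return REQUIRED_SCOPE in token_scopes

scopes = {"email:send"}

# Intended: send meeting notes to one recipient.
print(authorize(scopes, "send meeting notes to alice@example.com"))  # True

# Unintended: mass-mail every contact. Same scope, same answer.
print(authorize(scopes, "send a different message to every contact"))  # True
```

Both calls return True, which is exactly the gap: the framework has no vocabulary for the difference between them.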

Why authorization stops at the token boundary

The IETF's AIMS draft (draft-klrc-aiagent-auth-00) does a lot right:

  • SPIFFE for attestation-bound identity
  • Short-lived tokens instead of static API keys
  • Transaction Tokens that bind context per-hop

But once an OAuth access token is issued with a set of scopes, every action within those scopes proceeds unchecked until the token expires. The authorization decision happened once, at token issuance. Everything after that is a free pass.

As RockCyber put it: no per-action evaluation, no consequence assessment, no behavioral feedback loop.

The IETF draft mentions minimum scopes. That is least privilege applied to OAuth scopes. It does nothing to constrain autonomous decision-making within those scopes.

What least agency actually requires

OWASP's ASI03 mitigation guidance recommends per-action authorization through a centralized policy engine — not once at token issuance, but at each privileged step.

But per-action authorization is expensive. Every API call needs a policy check. Every policy check adds latency. At scale, this is a performance tax that most systems cannot afford.
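A per-action check in the spirit of the ASI03 guidance might look like the sketch below. The in-process policy table is a hypothetical stand-in for an external engine such as OPA or Cedar; in production each lookup would be a policy-engine round trip, which is where the latency tax comes from.

```python
# Hypothetical policy table: action kinds are evaluated individually,
# even though both fall under the same email:send scope.
POLICY = {
    ("email:send", "single_recipient"): True,
    ("email:send", "bulk_send"): False,  # in scope, but denied per-action
}

def check_policy(scope: str, action_kind: str) -> bool:
    # Consulted at every privileged step, not once at token issuance.
    return POLICY.get((scope, action_kind), False)

def send_email(scope: str, action_kind: str) -> str:
    if not check_policy(scope, action_kind):
        return "denied"
    return "sent"

print(send_email("email:send", "single_recipient"))  # sent
print(send_email("email:send", "bulk_send"))         # denied
```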

The alternative is behavioral trust scoring: let the agent act, but continuously monitor whether its actions are consistent with its declared intentions.

This is the approach we took with AIP's Promise Deviation Ratio:

  1. Agents declare capabilities — what they promise to do
  2. Observations track behavior — what they actually do
  3. PDR computes deviation — the gap between promise and behavior, scored continuously with sliding windows
  4. Trust scores adjust in real-time — drift gets flagged, persistent deviation triggers alerts
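The four steps above can be sketched as follows. This is a toy deviation tracker, not AIP's actual scoring code: the class name, the window size, and the ratio formula are illustrative assumptions, but they show the shape of the idea, declared capabilities compared against observed actions over a sliding window.

```python
from collections import deque

class DeviationTracker:
    """Toy Promise Deviation Ratio: fraction of recent actions that
    fall outside the agent's declared capabilities."""

    def __init__(self, declared: set[str], window: int = 100):
        self.declared = declared            # step 1: what the agent promised
        self.recent = deque(maxlen=window)  # sliding window of observations

    def observe(self, action: str) -> None:
        # Step 2: record whether each action matched a declared capability.
        self.recent.append(action in self.declared)

    def pdr(self) -> float:
        # Step 3: deviation over the window; 0.0 = fully consistent.
        if not self.recent:
            return 0.0
        return (len(self.recent) - sum(self.recent)) / len(self.recent)

tracker = DeviationTracker({"email:send_notes"}, window=10)
for _ in range(8):
    tracker.observe("email:send_notes")  # consistent with the promise
tracker.observe("email:bulk_send")       # drift begins
tracker.observe("email:bulk_send")

print(tracker.pdr())  # 0.2 -> step 4 would flag this if above a threshold
```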

This is not per-action authorization. It is behavioral authorization — the agent operates freely within scope, but deviation from expected behavior is detected and surfaced.

Google Cloud is converging on the same idea. Their recent piece on securing agentic AI at the edge calls for real-time trust scores that revoke credentials when behavior deviates:

"Since trust should not be seen with a static perspective, we envision a system where an agent's 'trust score' is monitored and assessed in real-time."

The implementation gap

Here is the problem: nobody has standardized behavioral authorization.

  • OAuth handles credential issuance
  • SPIFFE handles workload attestation
  • Transaction Tokens handle context binding
  • OPA and Cedar handle policy evaluation

None of them handle: "Is this agent doing what it said it would do?"

That requires:

  • A way for agents to declare intentions
  • A way to observe actions
  • A way to score deviation over time
  • A way to integrate scores into access decisions

AIP provides the first three. The fourth — integrating trust scores into access gating — is what the trust-gated API gateway concept addresses.
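That fourth piece could be sketched like this. Everything here is hypothetical (the threshold, the function, the agent names): it illustrates the shape of a trust-gated decision, where a behavioral score feeds into each access check rather than being consulted once at issuance.

```python
# Hypothetical gate: below this trust score, requests are refused
# and credentials are surfaced for revocation.
TRUST_THRESHOLD = 0.7

def gate_request(agent_id: str, trust_scores: dict[str, float]) -> str:
    # Unknown agents default to zero trust and are refused.
    score = trust_scores.get(agent_id, 0.0)
    if score < TRUST_THRESHOLD:
        return "revoked"
    return "allowed"

scores = {"notes-agent": 0.95, "drifting-agent": 0.4}
print(gate_request("notes-agent", scores))     # allowed
print(gate_request("drifting-agent", scores))  # revoked
```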

The industry is solving authentication. Authorization is getting attention. Agency — the freedom to act within authorized scope — is the layer nobody has standardized yet.

That is where the real risk lives.


AIP: Identity infrastructure for AI agents. GitHub · PyPI · Live API
