Enterprise identity used to have a fairly stable center of gravity.
A user authenticated. An application received a token. The token carried scopes or claims. The backend enforced what that application was allowed to do.
That model was never trivial, but it was legible.
Agents are making it less so.
An AI agent is not just another software client. It can plan, delegate, chain tools, invoke other agents, operate over time, and make decisions inside partially autonomous loops. It may act on behalf of a user in one moment, on behalf of a service in the next, and through a brokered protocol hop after that. It may hold authority briefly, derive narrower authority for a subtask, or preserve more authority than anyone intended.
That is why the emerging identity problem in AI is not simply authentication.
It is delegation.
More specifically, it is the combined problem of agent identity, delegated authority, and protocol trust.
That is where the next serious access-control failures are likely to come from.
Why classic OAuth thinking starts to strain
OAuth was built for an important but narrower question:
How can one application access a resource on behalf of a user, under bounded consent?
That question still matters. It is just no longer enough.
Agents introduce harder questions:
- Is this agent acting as the user, for the user, or instead of the user?
- Can the agent delegate part of its authority to another agent or tool?
- If it does, what exactly is supposed to survive that delegation?
- Can the delegated authority be narrowed but never expanded?
- Can the user revoke the whole chain later?
- Can a relying service tell whether this request came from a human, a first-party agent, or a third-party delegated sub-agent?
Traditional OAuth patterns do not disappear here, but they begin to strain because the delegated actor is no longer a passive software client. It is a reasoning system with workflow freedom.
That changes the trust problem.
The real issue is not "who are you?"
Identity conversations often begin with a familiar question:
Who is making this request?
That remains necessary, but in agent systems it is no longer sufficient.
The more important question is:
Whose authority is being exercised right now, under what limits, and through how many hops?
That is a different class of problem.
An agent may be authenticated correctly and still be dangerously over-authorized. It may be a valid agent with an invalid delegation chain. It may hold a token that proves origin without proving the action is appropriate. It may invoke a second agent that inherits too much context or too much scope. It may call a tool with what looks like user authority even though the user never meant to authorize that specific kind of step.
In other words, authentication is table stakes. Delegation semantics are where the hard failures live.
Agents turn authority into a chain, not a session
This is the architectural shift that matters most.
Classic application auth often centers on a session or token grant between a user, a client, and a resource server.
Agent systems create authority chains:
user intent
→ primary agent
→ tool call or protocol broker
→ secondary agent
→ downstream API
→ side effect in an external system
Every hop raises new questions:
- Did the original user intend this exact step?
- Was the authority narrowed or preserved?
- Did the downstream system verify the chain or just trust the immediate caller?
- Can the second agent prove what it is allowed to do versus what it merely can do?
- If something goes wrong, who is accountable for the action?
The more agentic the workflow becomes, the less useful it is to think in terms of one flat access token floating around the system.
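To make the chain idea concrete, here is a minimal sketch in Python. The names (`Hop`, `effective_scopes`) are illustrative, not from any real protocol; the point is that the authority that actually survives a chain is the intersection of every hop's grant, so a receiver that inspects only the final token can see far more authority than the user intended.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hop:
    actor: str              # who exercised authority at this hop
    scopes: frozenset[str]  # what this hop was actually granted

def effective_scopes(chain: list[Hop]) -> frozenset[str]:
    # The authority that survives the chain is the intersection of every
    # hop's grant. Checking only the last hop's token can show a much
    # broader set than the original user intent.
    scopes = chain[0].scopes
    for hop in chain[1:]:
        scopes &= hop.scopes
    return scopes

chain = [
    Hop("user", frozenset({"calendar.read", "calendar.write"})),
    Hop("planner-agent", frozenset({"calendar.read", "calendar.write"})),
    Hop("scheduler-agent", frozenset({"calendar.read"})),
]
effective_scopes(chain)  # frozenset({'calendar.read'})
```

A flat bearer token carries none of this structure: the downstream API sees only the last hop and has to guess the rest.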
The dangerous default: delegation without attenuation
This is where the next wave of poor security design is likely to show up.
Many teams will correctly recognize that agents need to call tools and other services. They will wire up tokens, API keys, service accounts, or on-behalf-of flows and consider the problem solved.
But the real danger is not delegation by itself.
It is delegation without attenuation.
Meaning:
- the child agent gets as much authority as the parent
- the tool receives a broad token instead of a task-specific one
- the scope does not shrink as the chain gets longer
- argument-level limits are not preserved
- the downstream service cannot tell whether the user approved this exact action
That is how agent delegation becomes the new OAuth problem.
OAuth taught us that overly broad scopes and long-lived tokens create trouble. Agent systems add a new twist: authority can now move across reasoning systems that generate new subtasks on the fly.
If scope does not shrink as the chain expands, the system is effectively multiplying trust rather than routing it.
In AI systems, that risk is amplified by a property ordinary clients do not have: delegated authority is being handed to components that can be manipulated by language.
If a child agent holds a broad, unattenuated token and then processes malicious input, prompt injection stops being only a reasoning failure. It becomes an authorization failure. The attacker does not need to steal the token directly. They only need to steer the agent that already has it.
That is what makes over-delegation so much more urgent in agent systems than in ordinary software clients.
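The attenuation rule above can be enforced at the moment a child grant is minted. This is a hedged sketch, not any standard's API: the child's scopes must be a strict subset of the parent's, and each hop consumes one unit of delegation depth.

```python
class DelegationError(Exception):
    pass

def attenuated_delegate(parent_scopes: set[str], requested: set[str],
                        parent_depth: int) -> tuple[set[str], int]:
    # Refuse any delegation that does not shrink authority: the child
    # grant must be strictly narrower than the parent's, and every hop
    # consumes one unit of delegation depth.
    if parent_depth <= 0:
        raise DelegationError("delegation depth exhausted")
    if not requested < parent_scopes:  # strict subset, not merely equal
        raise DelegationError("child grant must be narrower than the parent")
    return set(requested), parent_depth - 1

# A scheduling sub-agent gets read-only scope, one fewer hop allowed:
scopes, depth = attenuated_delegate(
    {"files.read", "files.write"}, {"files.read"}, parent_depth=2)
```

With this rule in place, a prompt-injected child agent can at worst misuse the narrow authority it was handed, never the parent's full envelope.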
Protocol trust is now part of security, not plumbing
This is another area the industry still underestimates.
When agents talk to tools or other agents through emerging protocols, the protocol itself becomes part of the trust model.
Not because protocols are inherently unsafe, but because they define:
- how identity is represented
- how delegation is expressed
- what provenance survives handoff
- how consent is bound to action
- whether scope restrictions remain machine-verifiable
- what the receiver is allowed to assume about the caller
Protocol design is no longer just interoperability work. It is authorization design.
If the protocol does not clearly preserve identity, delegation depth, authority narrowing, and provenance, then every implementation ends up reconstructing trust from partial hints.
That is how systems become integrated without becoming defensible.
Four failures we are likely to see more of
1. The helpful agent with a service-account skeleton key
A company wants its internal agent to work reliably, so it grants the agent backend service credentials with broad read and write access across several business systems. The user experience feels great because the agent rarely gets blocked.
But once the agent operates through a shared backend identity, the system starts collapsing user intent and service privilege into one bundle.
Now the question is no longer "can the user do this?" It becomes "can the agent backend do this?" Those are not the same thing.
2. The child agent that inherited too much
A primary agent hands work to a specialist agent for scheduling, procurement, code changes, legal lookup, or data retrieval. The child gets the same authority envelope as the parent because building a narrower one felt inconvenient.
That is classic over-delegation.
The user authorized a task. The architecture quietly authorized a whole class of adjacent actions.
3. The downstream API that trusted the last hop only
An external service receives a request from an agent broker or sub-agent and checks that the presented token is valid. What it does not verify is whether the full chain of delegation still matches the original user intent, whether the action exceeded the approved task boundary, or whether scope was meant to attenuate at each hop.
The request is authenticated.
The chain is still wrong.
4. The revocation problem nobody modeled
A user revokes consent, an approval expires, or the primary task is canceled. But delegated authority already propagated to a child agent, a queued job, or a downstream tool execution context.
Now the system has to answer a very uncomfortable question:
Did revocation actually follow the authority chain, or did it only update the front door?
Why agent identity is not just a naming problem
A lot of teams hear "agent identity" and think mainly about registration:
- naming the agent
- assigning a client ID
- issuing credentials
- deciding whether it is first-party or third-party
That matters, but it is not enough.
The deeper problem is that agent identity has to be meaningful in context.
The receiver needs to understand things like:
- which agent this is
- who authorized it
- whether it is acting directly or through delegation
- which task or intent boundary applies
- what autonomy level is allowed
- whether human approval was required upstream
- whether any of those facts were transformed across hops
That is much richer than "this token belongs to client X."
The next design pattern will be proof-carrying delegation
The answer is not "invent one magic protocol and everything is solved."
But the direction is becoming clearer.
This work is not starting from zero. The broader security world already has useful building blocks for attenuated and delegable authority, even if AI systems have not applied them seriously enough yet.
Concepts like OAuth 2.0 Token Exchange (RFC 8693), Macaroons, and Biscuit tokens all point in the right direction. They are different tools, but they share an important idea: authority can be delegated with constraints, caveats, attenuation, and verifiable structure instead of being passed around as one broad bearer credential.
None of them is the complete answer for agent systems. Multi-agent planning, protocol handoff, prompt injection, and long delegation chains introduce additional problems. But they give builders a far better place to start than pretending agent authorization has to be invented from scratch.
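The core trick behind macaroon-style attenuation fits in a few lines. This is a simplified sketch of the idea, not a production token format: each caveat is folded into a running HMAC, so any holder can narrow a token, but only the issuer, who holds the root key, can verify it, and nobody can remove a caveat without breaking the signature.

```python
import hashlib
import hmac

def _chain(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str) -> dict:
    # Root token: the signature binds the identifier to the secret key.
    return {"id": identifier, "caveats": [],
            "sig": _chain(root_key, identifier.encode())}

def attenuate(token: dict, caveat: str) -> dict:
    # Any holder can add a caveat (narrow the token); none can remove
    # one, because each caveat is folded into the running HMAC.
    return {"id": token["id"],
            "caveats": token["caveats"] + [caveat],
            "sig": _chain(token["sig"], caveat.encode())}

def verify(root_key: bytes, token: dict) -> bool:
    # Only the issuer can recompute the chain from the root key.
    sig = _chain(root_key, token["id"].encode())
    for caveat in token["caveats"]:
        sig = _chain(sig, caveat.encode())
    return hmac.compare_digest(sig, token["sig"])
```

A parent agent can call `attenuate` before handing the token to a child; dropping or swapping a caveat downstream invalidates the signature.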
Agent systems are going to need delegation artifacts that carry more proof than ordinary bearer access:
- who the agent is
- which human, service, or policy delegated authority
- what exact task or capability is allowed
- how much further delegation is permitted
- whether the authority was narrowed at each hop
- when the delegation expires
- how revocation propagates
- what audit evidence links the final action back to the original grant
That is the shape of a more trustworthy agent authorization model.
Not just a token that says "allowed," but a chain that says allowed by whom, for what, how far, and under which constraints.
What good looks like
A serious agent platform should treat delegation as a first-class control surface.
That means:
Short-lived, task-bound authority.
An agent should not carry broad reusable permission when a narrow, per-task grant would do.
Attenuating delegation.
Child agents and downstream tools should inherit less authority than the parent, not the same amount.
Explicit delegation depth.
If the system allows agent-to-agent handoff, it should define how many hops are allowed and what changes at each hop.
Machine-verifiable provenance.
The receiver should not have to trust narrative claims about who authorized the action. It should be able to verify them.
Cascade revocation.
When the root authority is withdrawn, dependent delegated grants should not live on in queues, workers, or child agents.
Separation between user intent and service convenience.
A backend service account should not become a universal substitute for delegated user authority just because it is easier to implement.
The practical question for teams
If you are building an agent that calls tools, APIs, or other agents, ask this:
What is the maximum authority this agent can pass downstream, and can we prove that it shrinks rather than spreads?
That question exposes the real architecture quickly.
It forces you to inspect:
- token exchange design
- scope narrowing
- tool-level constraints
- delegation depth
- revocation semantics
- protocol assumptions
- auditability of the full chain
If those answers are fuzzy, the identity layer is probably less mature than the demo suggests.
The next OAuth lesson is already here
Classic OAuth taught the industry a durable lesson: authorization is easy to get working and hard to get right.
Agents are reopening that lesson in a more complicated form.
Now the problem is not just application consent screens, bearer tokens, and API scopes.
It is delegated authority moving through reasoning systems, protocol hops, child agents, and external tools.
That is why this topic matters now.
The new identity problem in AI is not simply "how does the agent sign in?"
It is:
How do we make delegated agent authority narrow, provable, revocable, and trustworthy across the entire chain?
That is the new OAuth problem. And most teams are only beginning to discover it.