DEV Community

Nick Taylor for Pomerium

Posted on • Edited on • Originally published at nickyt.co

Rethinking Authorization in the Age of AI Agents

We’re entering the age of agentic AI — where software agents, not just users, are taking action on our behalf.

Standards like the Model Context Protocol (MCP) are making this more seamless by letting agents access tools and services in a structured, context-aware way. But here's the catch: most existing authorization models weren't built for this kind of actor.

OAuth, role-based access control (RBAC), and traditional session-based models assume a user is behind every request. With agentic systems, intent is often delegated, context can shift dynamically, and agents might act across boundaries we didn't originally model. Who's responsible? What are they allowed to do? And how do we reason about trust when the actor isn't a person?
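To make the delegation point concrete, here's a minimal sketch of authority that flows through a chain of actors rather than a single session. All names here (`Principal`, `effective_scopes`, the scope strings) are hypothetical and illustrative, not part of OAuth, RBAC, or MCP: the idea is simply that an agent acting on a user's behalf should hold at most the intersection of every scope set along the delegation chain.

```python
from dataclasses import dataclass

# Hypothetical model: a Principal is either a human user or an agent,
# each with the set of actions it may perform in its own right.
@dataclass(frozen=True)
class Principal:
    name: str
    scopes: frozenset

def effective_scopes(chain):
    """Authority of the last actor in a delegation chain.

    Delegation can only narrow authority, never widen it, so the
    effective scopes are the intersection across the whole chain.
    """
    allowed = chain[0].scopes
    for principal in chain[1:]:
        allowed = allowed & principal.scopes
    return allowed

def authorize(chain, action):
    return action in effective_scopes(chain)

user = Principal("alice", frozenset({"read:docs", "write:docs"}))
agent = Principal("summarizer-agent", frozenset({"read:docs"}))

print(authorize([user, agent], "read:docs"))   # True
print(authorize([user, agent], "write:docs"))  # False: the agent never had it
```

A session-based check would only ask "is alice allowed to write?"; the chain-based check also asks what the agent itself was ever granted, which is exactly the distinction that breaks when we assume a human behind every request.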

We need to start thinking beyond human-centric auth — and my co-worker Bobby’s post, "Agentic Access Is Here. Your Authorization Model Is Probably Broken.", makes a great case for why.

Give it a read and let me know what you think!

Agentic Access Is Here. Your Authorization Model Is Probably Broken. - The New Stack

The new MCP access control model fundamentally can’t measure up to the speed, scope and nondeterminism of AI agent-based access control.



Photo by Igor Omilaev on Unsplash

Top comments (1)

Thomas Hansen

A lot of authorization discussions around AI agents still assume the main problem is identity propagation, but I think your framing gets closer to the real issue: once an agent is allowed to interpret goals and sequence actions, authorization has to survive composition.

What makes this interesting to me is that policy alone is not enough if execution remains too open-ended. You can model relationships, scopes, and delegation correctly, and still lose control if the runtime has no strong notion of allowable operations.

That is one of the design ideas I have been chasing with Hyperlambda: separate intent interpretation from actual execution semantics, so the system can preserve boundaries even when the agent is doing multi-step work across tools.

The hardest part in agent security is not granting authority. It is making sure delegated authority does not silently mutate as the plan unfolds.
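That last point, keeping delegated authority from mutating mid-plan, can be enforced structurally. Below is a hypothetical sketch (not Hyperlambda, and not any real library's API) of an attenuation-only capability: each derivation may only drop permissions, so a multi-step plan cannot quietly regain authority an earlier step gave up.

```python
class Capability:
    """Attenuation-only authority: derived capabilities can only narrow."""

    def __init__(self, scopes):
        self._scopes = frozenset(scopes)

    def attenuate(self, scopes):
        requested = frozenset(scopes)
        # Refuse any derivation that would widen authority.
        if not requested <= self._scopes:
            raise PermissionError("attenuation may only narrow scopes")
        return Capability(requested)

    def permits(self, action):
        return action in self._scopes

root = Capability({"read:docs", "write:docs"})
step1 = root.attenuate({"read:docs"})      # planner hands the agent read-only
print(step1.permits("write:docs"))         # False
# step1.attenuate({"write:docs"})          # would raise PermissionError
```

Whatever the runtime, the invariant is the same: the set of allowable operations is monotonically shrinking across the plan, so composition cannot silently escalate.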

hyperlambda.dev
