Authora Dev

Solving AI Agent Auth: Why Your CFP Talk Should Tackle Identity

If you’re building with AI agents today, there’s a good chance you’ve already hit the weirdest security problem in the stack: the agent can do things, but you can’t clearly answer who it is, what it’s allowed to do, and how another system should trust it.

That gets messy fast.

A coding agent opens a PR. Another agent calls your MCP server. A background agent triggers a deployment, reads customer data, or invokes a billing workflow. Suddenly the old assumptions break down. API keys get shared across tools. Human user sessions get reused by autonomous systems. Logs tell you something happened, but not whether it was the right agent, acting under the right delegation, with the right scope.

If you’re submitting a CFP on AI infrastructure, developer tooling, or security, this is one of the most important topics you can cover right now: AI agent identity and authorization.

Not because it’s trendy. Because teams are already shipping agent workflows without a clean auth model.

The real problem: agents are not users, and they’re not just services either

A lot of current AI agent auth is basically one of these:

  • a shared API key in an env var
  • a user OAuth token passed through to an agent
  • “internal only” trust assumptions
  • tool-level allowlists with no strong identity
  • ad hoc approval prompts with weak auditability

That might work for a prototype. It does not scale once agents become multi-step actors operating across systems.

An agent is awkward from an identity perspective because it sits between categories:

  • It’s not a human user
  • It’s not always a traditional service account
  • It may act autonomously
  • It may act on behalf of a user
  • It may delegate work to other agents or tools
  • It often needs fine-grained, revocable permissions

That means your talk shouldn’t just ask: “How do agents authenticate?”

It should ask:

  • How does an agent get a verifiable identity?
  • How do downstream tools validate that identity?
  • How do you represent “agent acting for user” safely?
  • How do you handle delegation chains?
  • How do you audit decisions later?
  • How do you avoid turning MCP and agent tooling into a giant bearer-token mess?

Those are architecture questions, not just implementation details.

Why this matters for CFPs

A good CFP topic works when it hits a problem people are already feeling, gives them a framework, and leaves them with practical next steps.

“AI agent auth” does all three.

It connects multiple communities:

  • platform engineers building internal agent systems
  • developers exposing MCP servers or tools
  • security engineers trying to model least privilege
  • DevRel and DX teams helping developers ship safely
  • startup teams moving fast and realizing auth got bolted on too late

And unlike a lot of AI talks, this one has depth. You can talk about:

  • cryptographic identity
  • delegated authorization
  • policy enforcement
  • zero-trust verification
  • auditability
  • tool authorization for MCP ecosystems

That’s much more useful than “here are five agent demos.”

A practical framing for your talk

If you want a strong talk structure, here’s a simple one:

1. Start with the failure mode

Show a realistic workflow:

  • A coding agent uses a shared token to access a repo
  • It calls an MCP server for secrets or deployment metadata
  • It opens a PR and triggers CI
  • No one can prove whether the request came from the expected agent instance
  • Permissions are broader than needed
  • Audit logs don’t capture delegation cleanly

That’s a very relatable story.

2. Explain the missing primitives

Most teams need at least these primitives:

  • Strong agent identity

    Something more reliable than “this request had the right API key.”

  • Scoped authorization

    The agent should only access the tools and actions it needs.

  • Delegation

    If the agent acts for a user, that relationship should be explicit and bounded.

  • Verification at the edge

    Downstream systems should not blindly trust upstream context.

  • Audit logs

    You need to reconstruct who did what, under what authority.

3. Show an implementation path

This is where your talk becomes useful.

For example, if you’re using JWT-based delegation or token exchange patterns, call that out. RFC 8693 is relevant here because it gives you a standard model for token exchange and delegated access.
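Concretely, an RFC 8693 token exchange is just a form-encoded POST to the authorization server's token endpoint: the agent presents the user's token as `subject_token` and its own credential as `actor_token`, and asks for a narrowly scoped token in return. The sketch below only builds the request body (no network call); the scope names are placeholders, not defined by the RFC.

```python
from urllib.parse import urlencode

def build_token_exchange_body(subject_token, actor_token, scope):
    """Build an RFC 8693 token exchange request body.

    subject_token: the user's token (whose authority is delegated)
    actor_token:   the agent's own credential (who is acting)
    """
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": actor_token,
        "actor_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "scope": scope,  # request only what the task needs
    })

# This body would be POSTed to the authorization server's token
# endpoint; the tokens here are stand-ins for the example.
body = build_token_exchange_body("user-access-token", "agent-jwt",
                                 "repo:read pr:write")
print(body)
```

The response, per the RFC, carries the new (narrower) access token, and the issued token can embed the actor in an `act` claim so downstream systems see the delegation explicitly.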

If policy is your main challenge, say so directly: Open Policy Agent (OPA) may be the right answer for authorization decisions in many environments. A lot of teams don't need a brand-new policy engine; they need a sane way to apply existing security patterns to agents.

What “good” can look like

At a high level, a healthier pattern looks like this:

  1. Give each agent a distinct identity
  2. Use signed credentials or cryptographic keys instead of shared secrets where possible
  3. Issue short-lived scoped tokens for tool access
  4. Represent delegation explicitly
  5. Enforce policy close to the resource
  6. Log identity, delegation chain, and authorization decision together
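Steps 1, 3, and 4 can be sketched in a few lines. The token format below is invented purely for illustration; in production you would use standard JWTs and keys from a KMS, not a hard-coded HMAC secret.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # illustrative; use a KMS in practice

def issue_token(agent_id, user_id, scopes, ttl_s=300):
    """Mint a short-lived token naming the agent, the delegating user,
    and the granted scopes explicitly."""
    claims = {
        "sub": agent_id,                  # the agent's own identity
        "act": {"sub": user_id},          # who it acts on behalf of
        "scopes": scopes,                 # narrow, task-specific grants
        "exp": int(time.time()) + ttl_s,  # short lifetime by default
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token):
    """Check the signature and expiry, then return the claims."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = issue_token("agent://code-review-bot", "user://alice",
                    ["repo:read", "pr:write"])
claims = verify_token(token)
print(claims["sub"], "acting for", claims["act"]["sub"])
```

Note that the agent's identity (`sub`) and the user it acts for (`act`) are separate fields, which is exactly the distinction shared API keys erase.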

Here’s a simplified example of what policy input might look like:

{
  "subject": {
    "type": "agent",
    "id": "agent://code-review-bot"
  },
  "actor": {
    "type": "user",
    "id": "user://alice"
  },
  "delegation": {
    "chain": [
      "user://alice -> agent://code-review-bot"
    ],
    "scopes": ["repo:read", "pr:write"]
  },
  "resource": {
    "type": "repository",
    "id": "repo://payments-api"
  },
  "action": "pull_request.create"
}

And a very simple Rego policy could look like:

package agent.authz

import rego.v1

default allow := false

allow if {
  input.subject.type == "agent"
  input.actor.type == "user"
  input.action == "pull_request.create"
  input.resource.type == "repository"
  "pr:write" in input.delegation.scopes
}

That’s obviously minimal, but it illustrates the point: authorization becomes much easier when identity and delegation are explicit.
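If you want to unit-test that decision logic before an OPA deployment exists, the same default-deny rule can be mirrored in plain Python. This is a sketch for testing, not a substitute for a real policy engine.

```python
def allow(inp):
    """Default-deny mirror of the Rego rule: every condition must hold."""
    return (
        inp.get("subject", {}).get("type") == "agent"
        and inp.get("actor", {}).get("type") == "user"
        and inp.get("action") == "pull_request.create"
        and inp.get("resource", {}).get("type") == "repository"
        and "pr:write" in inp.get("delegation", {}).get("scopes", [])
    )

# The policy input from the example above.
request = {
    "subject": {"type": "agent", "id": "agent://code-review-bot"},
    "actor": {"type": "user", "id": "user://alice"},
    "delegation": {
        "chain": ["user://alice -> agent://code-review-bot"],
        "scopes": ["repo:read", "pr:write"],
    },
    "resource": {"type": "repository", "id": "repo://payments-api"},
    "action": "pull_request.create",
}
print(allow(request))  # True
```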

Don’t forget MCP

If your audience is building AI tooling, talk about MCP.

A lot of MCP discussion focuses on capabilities and developer ergonomics. That’s useful, but eventually every serious team runs into the same question:

How should an MCP server decide whether to trust the caller?

If your server exposes file access, secrets, internal APIs, deployment actions, or customer data, “the client connected successfully” is not enough.

Your talk can add real value by covering:

  • how to authenticate calling agents
  • how to map identity to tool-level permissions
  • how to avoid broad static credentials
  • how to validate requests in zero-trust environments

That’s where identity becomes operational, not theoretical.
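A minimal sketch of that operational check, assuming the server has already verified the caller's token into a claims dict: map each tool to a required scope and deny by default. The tool names, scope names, and claims shape are invented for the example; they are not part of the MCP specification.

```python
# Per-tool scope requirements (illustrative).
TOOL_SCOPES = {
    "read_file": "files:read",
    "deploy_service": "deploy:write",
}

def authorize_tool_call(claims, tool_name):
    """Default-deny: unknown tools and missing scopes are rejected."""
    required = TOOL_SCOPES.get(tool_name)
    if required is None:
        return False
    return required in claims.get("scopes", [])

claims = {"sub": "agent://code-review-bot", "scopes": ["files:read"]}
print(authorize_tool_call(claims, "read_file"))       # scope granted
print(authorize_tool_call(claims, "deploy_service"))  # scope missing
```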

Getting started: a simple rollout plan

If you’re helping a team move from prototype to production, here’s a practical path.

Step 1: inventory your agents

Make a list of:

  • every agent in use
  • what tools it can call
  • whether it acts autonomously or on behalf of a user
  • what credentials it currently uses

You can’t secure what you haven’t enumerated.
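Even a crude script over that inventory pays off, because risky patterns become queryable. The inventory format below is invented for illustration; here it flags credentials shared by more than one agent.

```python
from collections import Counter

# Illustrative inventory; in practice this comes from your own records.
agents = [
    {"id": "agent://code-review-bot", "tools": ["repo", "ci"],
     "acts_for_user": True, "credential": "shared-api-key"},
    {"id": "agent://deploy-bot", "tools": ["deploy"],
     "acts_for_user": False, "credential": "shared-api-key"},
    {"id": "agent://docs-bot", "tools": ["wiki"],
     "acts_for_user": True, "credential": "docs-bot-key"},
]

# Any credential used by more than one agent is a red flag.
cred_counts = Counter(a["credential"] for a in agents)
shared = sorted(c for c, n in cred_counts.items() if n > 1)
print(shared)
```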

Step 2: separate agent identity from user identity

Avoid treating agents as invisible extensions of user sessions.

Instead, model:

  • the agent’s own identity
  • the user context, if present
  • the delegation relationship between them
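One way to keep those three things distinct is to model them as separate types, so an autonomous agent and a delegated one can't be confused. The type and field names here are illustrative.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Delegation:
    """The explicit user-to-agent relationship, with its bounds."""
    user_id: str
    scopes: List[str]

@dataclass
class AgentPrincipal:
    """The agent's own identity, with optional user context."""
    agent_id: str
    delegation: Optional[Delegation] = None  # None => fully autonomous

principal = AgentPrincipal(
    agent_id="agent://code-review-bot",
    delegation=Delegation(user_id="user://alice", scopes=["repo:read"]),
)
print(principal.agent_id, "delegated by", principal.delegation.user_id)
```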

Step 3: shrink permissions

Replace broad API keys with narrower, short-lived credentials wherever possible.

Even if you can’t redesign everything yet, reducing scope and lifetime is a big improvement.

Step 4: add policy checks

Start with a simple policy layer.

This could be:

  • OPA
  • application-level middleware
  • an edge proxy that verifies identity and scopes before forwarding requests

The important thing is consistency.

Step 5: improve auditability

Your logs should answer:

  • which agent made the request
  • whether it acted for a user
  • what scopes were granted
  • what policy decision was made

If you can’t answer those questions after an incident, your auth model is incomplete.
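A single structured log line can carry all four answers together, so incident review doesn't require joining scattered logs. The field names below are illustrative.

```python
import json
import time

def audit_record(claims, action, allowed):
    """Correlate identity, delegation, scopes, and the policy decision."""
    return json.dumps({
        "ts": int(time.time()),
        "agent": claims["sub"],
        "on_behalf_of": claims.get("act", {}).get("sub"),  # None if autonomous
        "scopes": claims.get("scopes", []),
        "action": action,
        "decision": "allow" if allowed else "deny",
    })

line = audit_record(
    {"sub": "agent://code-review-bot", "act": {"sub": "user://alice"},
     "scopes": ["pr:write"]},
    "pull_request.create",
    True,
)
print(line)
```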

Where Authora fits in

We think this space needs better defaults for agent identity, delegated authorization, and verification between tools and services. That’s a big part of what we work on at Authora, especially around cryptographic agent identity, delegation chains, MCP authorization, and policy-driven access control.

But the bigger point is not “use our stack.” The bigger point is: please don’t ship agent systems with shared secrets and implicit trust if your use case has real permissions attached to it.

If OPA plus short-lived tokens solves your problem, that may be the right move. If you need stronger agent identity and verification across tool boundaries, then cryptographic identities and explicit delegation become much more important.

That’s a talk worth giving because it helps developers make better choices now, before the bad patterns harden.

Try it yourself

If you're working on agent security or MCP auth, pick one real workflow and walk it through the rollout plan above; that exercise alone usually surfaces the gaps.

A strong CFP on AI agents doesn’t need another demo of autonomous workflows. It needs to answer the harder question:

How should these systems be identified, authorized, and trusted?

That’s the talk a lot of developers need right now.

-- Authora team

This post was created with AI assistance.
