DEV Community

Authora Dev

Cryptographic Identity & RBAC for Sovereign AI Agent Accountability

If you’re letting AI agents open pull requests, read secrets, call internal tools, or trigger deploys, you’ve probably hit the same uncomfortable question: who actually did what? Not “which app” or “which API key,” but which agent, acting under whose authority, with what permissions, and with what audit trail.

That problem gets worse as agents become more autonomous. A shared service token works for a demo, but in production it destroys accountability. If five agents use the same credential to access GitHub, Jira, Slack, your MCP tools, or internal APIs, you can’t reliably answer basic security questions:

  • Which agent approved this action?
  • Was it acting on behalf of a human?
  • Did it have permission at the time?
  • Can we revoke or limit that authority without breaking everything?
  • Can we prove the chain of delegation later?

This is where cryptographic identity and RBAC stop being “enterprise auth features” and become core infrastructure for agent systems.

The core idea

A sovereign AI agent should have its own identity, not just inherit a generic backend token.

In practice, that usually means:

  1. A unique cryptographic keypair per agent

    Ed25519 is a strong default: fast, modern, and widely supported.

  2. Short-lived credentials derived from that identity

    Avoid long-lived shared secrets where possible.

  3. Explicit role-based access control (RBAC)

    Agents should get only the permissions they need.

  4. Delegation chains

    If an agent acts on behalf of a human or parent service, that relationship should be recorded and verifiable.

  5. Policy evaluation and audit logging

    Authorization decisions should be explainable after the fact.

This model is a lot more useful than “the agent had the API key in an env var.”

Why API keys break down for agents

Traditional API keys assume a fairly simple system boundary: one service, one identity, one trust context.

Agents don’t behave like that.

They often:

  • switch tools dynamically
  • invoke MCP servers
  • act under user instructions
  • collaborate with other agents
  • operate across long-running sessions
  • require approvals for high-risk actions

A single shared token cannot capture that complexity. Even worse, it makes least privilege nearly impossible. The common result is an agent with broad standing access to systems it only occasionally needs.

That’s a security issue, but it’s also an operational issue. When something goes wrong, you need attribution.

A practical model for agent accountability

A useful mental model is:

agent identity + delegated authority + scoped role + auditable decision

For example:

  • agent:code-reviewer-17 has its own Ed25519 identity
  • it receives delegated authority from user:alice
  • that delegation is limited to:
    • reading repository contents
    • posting PR comments
    • creating draft pull requests
  • policy blocks:
    • branch deletion
    • production deploys
    • secret retrieval
  • every action is logged with identity, role, delegation chain, decision result, and timestamp

That gives you something much closer to real accountability.

Example: signing agent actions with Ed25519

At a minimum, agents should be able to sign requests or action envelopes so downstream systems can verify origin and integrity.

Here’s a simple Python example using Ed25519:

# Requires PyNaCl: pip install pynacl
from nacl.signing import SigningKey
import json
import base64
import time

# Generate agent keypair once and store securely
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

agent_id = "agent:code-reviewer-17"

action = {
    "agent_id": agent_id,
    "action": "create_pull_request",
    "repo": "acme/payments-api",
    "branch": "agent/fix-null-check",
    "timestamp": int(time.time())
}

# Canonical serialization (sorted keys) so signer and verifier hash identical bytes
payload = json.dumps(action, sort_keys=True).encode("utf-8")
signed = signing_key.sign(payload)

print("public_key:", verify_key.encode().hex())
print("signature:", base64.b64encode(signed.signature).decode())
print("payload:", payload.decode())

A receiving service can verify the signature against the registered public key for that agent before evaluating authorization policy.

That verification step matters because it separates identity proof from permission evaluation. First prove who the agent is. Then decide what it may do.

RBAC for agents: keep it boring and explicit

There’s a temptation to invent entirely new authorization models for AI systems. Usually, that’s unnecessary.

For many teams, standard RBAC is a good starting point:

  • repo.reader
  • pr.writer
  • issue.commenter
  • deploy.approver
  • secrets.denied

The important part is assigning roles to agent identities, not just services.

A simple authorization document might look like this:

{
  "subject": "agent:code-reviewer-17",
  "roles": ["repo.reader", "pr.writer"],
  "delegated_by": "user:alice",
  "expires_at": "2026-03-28T18:00:00Z"
}

Then your tool gateway or MCP server checks both the role and the delegation context before allowing the action.

When to use OPA

If you already have Open Policy Agent (OPA) in your stack, it may be the right answer for policy evaluation. There’s no need to replace a working policy engine just because the caller is now an AI agent.

For example, a Rego policy for PR creation might look like:

package agent.authz

# `if` and `in` below require this import on OPA versions before 1.0
import rego.v1

default allow := false

allow if {
  input.subject_type == "agent"
  "pr.writer" in input.roles
  input.action == "create_pull_request"
  input.resource.repo == "acme/payments-api"
  input.delegation.expires_at > input.now
}

This works well when paired with cryptographic identity upstream. OPA can evaluate policy, but it still needs trustworthy inputs about the caller’s identity and delegation chain.
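One way to keep those inputs trustworthy is to have the gateway, not the agent, assemble the OPA input document after signature verification. The sketch below shows a hypothetical `build_opa_input` helper producing a document whose field names mirror the Rego policy above; the resulting JSON would be POSTed to OPA's `/v1/data/agent/authz/allow` endpoint.

```python
import time

# The gateway fills in subject_type, roles, and delegation from its own
# records, so the agent cannot claim roles it was never granted.
def build_opa_input(agent_id: str, action: str, repo: str, grant: dict) -> dict:
    return {
        "input": {
            "subject": agent_id,
            "subject_type": "agent",
            "roles": grant["roles"],
            "action": action,
            "resource": {"repo": repo},
            "delegation": {
                "expires_at": grant["expires_at"],
                "delegated_by": grant["delegated_by"],
            },
            # RFC 3339 timestamps compare correctly as strings in Rego
            "now": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
    }


grant = {
    "roles": ["repo.reader", "pr.writer"],
    "delegated_by": "user:alice",
    "expires_at": "2026-03-28T18:00:00Z",
}
doc = build_opa_input("agent:code-reviewer-17", "create_pull_request",
                      "acme/payments-api", grant)
```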

Delegation chains matter more than most teams expect

A lot of agent actions are not fully autonomous. They’re delegated.

That means your system should preserve statements like:

  • user:alice delegated to agent:planner
  • agent:planner delegated a subtask to agent:code-reviewer-17
  • the sub-delegation was limited to repository write actions
  • the delegation expired after 30 minutes

This is where token exchange and delegation standards like RFC 8693 become useful. Instead of handing the child agent the original user credential, you mint a scoped token representing delegated authority. That keeps boundaries clear and revocation manageable.

Without delegation chains, every agent action starts to look either fully autonomous or fully human-driven, and neither is accurate.
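The chain above can be sketched in a few lines. This is a simplified illustration, not a token format: in a real system each grant would be a signed token minted via an exchange flow such as RFC 8693, and the `delegate` helper is a hypothetical name. The two properties worth copying are attenuation (a child can never exceed its parent's scopes or lifetime) and an explicit recorded chain.

```python
from datetime import datetime, timedelta, timezone


def delegate(parent: dict, child_subject: str, scopes: list, ttl_minutes: int) -> dict:
    # Attenuation only: the child receives at most the parent's scopes.
    allowed = [s for s in scopes if s in parent["scopes"]]
    # The child's grant can never outlive the parent's.
    expires = min(
        datetime.fromisoformat(parent["expires_at"]),
        datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )
    return {
        "subject": child_subject,
        "scopes": allowed,
        "expires_at": expires.isoformat(),
        # Record the full chain of authority for the audit trail.
        "chain": parent["chain"] + [parent["subject"]],
    }


root = {
    "subject": "user:alice",
    "scopes": ["repo.read", "pr.write", "deploy.approve"],
    "expires_at": (datetime.now(timezone.utc) + timedelta(hours=8)).isoformat(),
    "chain": [],
}
planner = delegate(root, "agent:planner", ["repo.read", "pr.write"], 60)
reviewer = delegate(planner, "agent:code-reviewer-17",
                    ["pr.write", "deploy.approve"], 30)
```

Note that the reviewer's request for `deploy.approve` is silently dropped: the planner never held it, so it cannot pass it on, and the reviewer's chain records both `user:alice` and `agent:planner`.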

Getting started

You do not need a giant platform migration to improve agent accountability. A practical rollout looks like this:

1. Give each agent a unique identity

Start with one keypair per agent or per agent session. Ed25519 is a solid choice.

Track:

  • agent ID
  • public key
  • owner or parent system
  • creation time
  • status (active, revoked, expired)
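A registry tracking those fields can start as a single table or even an in-memory map. The sketch below is a minimal illustration with hypothetical names; the important part is that revocation is a status flip on the identity record, not a hunt for every place a shared key was copied.

```python
import time
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    public_key_hex: str
    owner: str  # owning user or parent system
    created_at: float = field(default_factory=time.time)
    status: str = "active"  # active | revoked | expired


registry: dict[str, AgentIdentity] = {}


def register(agent_id: str, public_key_hex: str, owner: str) -> None:
    registry[agent_id] = AgentIdentity(agent_id, public_key_hex, owner)


def revoke(agent_id: str) -> None:
    # One status change invalidates the identity everywhere it is checked.
    registry[agent_id].status = "revoked"


register("agent:code-reviewer-17", "ab" * 32, "user:alice")
revoke("agent:code-reviewer-17")
```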

2. Stop sharing broad credentials

Replace shared API keys with scoped, short-lived credentials bound to the specific agent identity.

3. Add RBAC at the tool boundary

Wherever agents access tools — internal APIs, MCP servers, Git providers, deployment systems — enforce roles based on the authenticated agent identity.

4. Record delegation context

If an agent acts for a user, store:

  • delegator
  • delegate
  • allowed scopes
  • expiration
  • approval requirements

5. Log every authorization decision

Not just successful actions. Log denials too.

Useful fields:

  • agent ID
  • action
  • resource
  • roles evaluated
  • delegation chain
  • policy result
  • timestamp
  • request/session ID
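Those fields map directly onto a structured log record. The `log_decision` helper below is a hypothetical sketch that prints JSON lines; in practice you would ship the record to your log pipeline instead.

```python
import json
import time
import uuid


def log_decision(agent_id: str, action: str, resource: str, roles: list,
                 chain: list, allowed: bool, session_id: str) -> dict:
    record = {
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "roles_evaluated": roles,
        "delegation_chain": chain,
        "policy_result": "allow" if allowed else "deny",
        "timestamp": int(time.time()),
        "request_id": str(uuid.uuid4()),
        "session_id": session_id,
    }
    # Denials are logged with the same structure as allows.
    print(json.dumps(record))
    return record


rec = log_decision("agent:code-reviewer-17", "delete_branch",
                   "acme/payments-api", ["repo.reader", "pr.writer"],
                   ["user:alice", "agent:planner"], False, "sess-42")
```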

6. Add approval workflows for high-risk actions

For actions like production deploys, secret access, or external payments, require explicit approval instead of relying on static permissions alone.
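A minimal approval gate can sit in front of the policy check. The sketch below uses hypothetical names (`HIGH_RISK`, `request_action`, `approve`) and an in-memory queue; the idea is simply that high-risk actions are parked as pending rather than executed, and run only after a recorded human approval.

```python
# Actions that always require a human in the loop (illustrative set).
HIGH_RISK = {"production_deploy", "secret_read", "external_payment"}

pending: dict[str, dict] = {}


def request_action(action_id: str, agent_id: str, action: str) -> str:
    if action in HIGH_RISK:
        # Park the action instead of executing it.
        pending[action_id] = {
            "agent_id": agent_id,
            "action": action,
            "status": "pending_approval",
        }
        return "pending_approval"
    return "allowed"


def approve(action_id: str, approver: str) -> dict:
    # Record who approved, so the audit trail covers the human step too.
    entry = pending[action_id]
    entry["status"] = "approved"
    entry["approved_by"] = approver
    return entry
```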

Where Authora fits

If you’re building this yourself, the core pieces are straightforward: cryptographic identities, token exchange, policy evaluation, and audit logging.

If you want those pieces in one place, this is the kind of workflow Authora Identity is designed for: Ed25519-backed agent identities, RBAC, delegation chains, MCP authorization, policy engines, approval workflows, and audit logging, with SDKs in TypeScript, Python, Rust, and Go. But if OPA plus your existing auth stack already solves the policy side well, that can still be the right architecture.

The key point is not the vendor choice. It’s that agents need first-class identity and authorization models, not borrowed service credentials.

Final thought

As agent systems become more capable, “trust the app” stops being enough. You need to know which agent acted, under what authority, and whether the action was allowed.

Cryptographic identity gives you non-repudiable origin. RBAC gives you least privilege. Delegation chains give you context. Audit logs give you accountability.

That combination is what turns autonomous behavior into governable behavior.

-- Authora team

This post was created with AI assistance.
