Authora Dev
Sovereign AI Agents Need Cryptographic Identity: Here's Why

You probably already have AI agents doing real work: opening PRs, calling internal APIs, reading docs, maybe even deploying code. And sooner or later, you hit the same uncomfortable question:

Who, exactly, did that action?

Not “which app” in a vague sense. Not “probably Claude Code” or “some automation token.”

You need to know:

  • which agent acted
  • under whose authority it acted
  • what it was allowed to do
  • whether that permission was delegated correctly
  • how to revoke it when things go sideways

If your current answer is “we use a shared API key” or “the agent runs as a service account,” then your agents don’t really have identity. They have access. Those are not the same thing.

As AI agents move from demos to real systems, cryptographic identity becomes the difference between observable automation and unauditable chaos.

The problem with “just use API keys”

A lot of agent systems start here:

  • one API key per environment
  • one service account per tool
  • broad permissions to avoid breaking flows
  • logs that show the tool, but not the actual acting agent

That works until multiple agents share infrastructure, tools, and state.

Now imagine:

  • an agent creates a production incident
  • another agent modifies the same branch
  • a delegated task accesses customer data
  • an MCP tool call comes from an unknown session
  • finance asks which agent initiated a paid action

Without a strong identity layer, you’re left reconstructing intent from logs and guesses.

Traditional app auth assumes one of two models:

  1. a human user acts directly
  2. a backend service acts on its own

Agents fit neither model cleanly. They’re autonomous enough to take actions, but often operate on behalf of a user, a team, or another agent. That means you need more than authentication. You need delegation, policy, traceability, and revocation.

What “sovereign” means for an AI agent

A sovereign agent isn’t just an LLM wrapper with tool access. It has its own verifiable identity and can participate in systems as a first-class actor.

In practice, that means an agent should be able to:

  • prove who it is cryptographically
  • receive scoped permissions
  • act under delegated authority
  • present verifiable claims to tools and services
  • produce an audit trail tied to its identity
  • be disabled or rotated independently

This is where cryptographic identity matters.

If every agent has its own keypair, you can stop treating agents like anonymous compute jobs and start treating them like principals in your system.

A common choice here is Ed25519: small keys, fast signing, widely supported, and straightforward to use in modern SDKs.

Why cryptographic identity matters

1. Attribution becomes real

If an agent signs requests with its own key, you can verify that this specific agent initiated an action.

That’s much stronger than:

  • “the request came from our agent server”
  • “the bearer token was valid”
  • “the job ID looks right”

You can bind actions to a stable identity and audit them later.

2. Delegation becomes explicit

A lot of useful agent behavior is delegated behavior:

  • a user asks an agent to file a ticket
  • an orchestrator agent asks a coding agent to patch a bug
  • a coding agent asks a deployment agent to run a staging rollout

That chain should be visible and enforceable.

Standards like RFC 8693 token exchange are useful here because they let you represent “A is acting on behalf of B with these constraints.” That’s a much better model than handing every downstream component a full-power token.
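To make the shape of that concrete, here’s a minimal sketch of an RFC 8693 token-exchange request body. The token values and the commented endpoint are hypothetical placeholders; the parameter names are the ones defined by the spec.

```python
# An RFC 8693 token-exchange request: the orchestrator presents the
# user's token (subject) plus its own token (actor) and receives back a
# constrained token meaning "agent A acting on behalf of user B".
exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    # Whose authority is being delegated (the user, "B")
    "subject_token": "<user-access-token>",
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    # Who is acting (the agent, "A")
    "actor_token": "<agent-identity-token>",
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    # Constrain what the downstream token may do
    "scope": "tickets:create",
}

# Sent as form data to your authorization server's token endpoint, e.g.:
# requests.post("https://auth.example.com/token", data=exchange_request)
```

The resulting token carries an explicit actor claim, so downstream services can see the full delegation chain instead of a bare bearer token.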

3. Policies can target agents directly

Once agents are principals, your policy layer can reason about them directly.

For example:

  • this agent can read billing metadata but not export invoices
  • this agent can open PRs only on repos tagged sandbox
  • this agent may call MCP tools only during an approved workflow
  • this agent may act for a specific user only within a 30-minute delegation window

If you’re already using OPA or another policy engine, that may well be the right answer. Whatever engine you use, the important part is that its policy input carries a trustworthy identity model.

4. Revocation gets practical

When an agent is compromised, misconfigured, or simply no longer needed, you want to revoke that agent, not rotate credentials for every workflow sharing the same secret.

Per-agent identity gives you a much tighter blast radius.
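A sketch of what that looks like, assuming a shared revocation registry (here just an in-memory dict; in production it would be a database or KV store every verifier consults before honoring a signed action):

```python
import time

# Hypothetical revocation registry keyed by agent ID, mapping to the
# time the agent was revoked
revoked_agents: dict[str, float] = {}

def revoke(agent_id: str) -> None:
    """Mark a single agent identity as revoked from now on."""
    revoked_agents[agent_id] = time.time()

def is_active(agent_id: str) -> bool:
    """Verifiers call this before trusting any signed action."""
    return agent_id not in revoked_agents

# Revoking one agent leaves every other agent untouched
revoke("agent:code-reviewer:staging")
print(is_active("agent:code-reviewer:staging"))  # False
print(is_active("agent:code-reviewer:prod"))     # True
```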

A minimal example: sign agent actions

Here’s a small Python example using Ed25519 (via the PyNaCl library) to sign an agent action payload:

import json
import time
from nacl.signing import SigningKey
from nacl.encoding import HexEncoder

# Create a new agent identity. In practice, generate this once, keep the
# signing key in a secret manager, and publish only the verify key.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

agent_id = "agent:code-reviewer:prod"
public_key = verify_key.encode(encoder=HexEncoder).decode()

payload = {
    "agent_id": agent_id,
    "action": "create_pull_request",
    "repo": "acme/api",
    "branch": "fix/auth-header",
    "timestamp": int(time.time())
}

# Canonical serialization: sort keys so signer and verifier produce
# identical bytes for the same payload
message = json.dumps(payload, sort_keys=True).encode()
signed = signing_key.sign(message)
signature = signed.signature.hex()

print("Public key:", public_key)
print("Signature:", signature)

And verification:

from nacl.signing import VerifyKey
from nacl.encoding import HexEncoder
from nacl.exceptions import BadSignatureError

# `public_key`, `message`, and `signature` come from the signing example above
verify_key = VerifyKey(public_key, encoder=HexEncoder)

try:
    verify_key.verify(message, bytes.fromhex(signature))
    print("Signature valid")
except BadSignatureError:
    print("Invalid signature")

This alone doesn’t give you authorization, delegation, or lifecycle management. But it illustrates the foundation: actions become attributable to an agent identity, not just an infrastructure component.

What to model beyond identity

Cryptographic identity is necessary, but not sufficient. In practice, agent systems also need:

Scoped authorization

Identity tells you who the agent is. Authorization tells you what it can do.

Use RBAC, ABAC, or policy-based controls depending on your environment. If your org already runs OPA, plugging agent claims into Rego policies is often a sensible path.

Example policy intent:

package agent.authz

import rego.v1

default allow := false

allow if {
  input.agent.id == "agent:code-reviewer:prod"
  input.action == "pull_request.create"
  startswith(input.resource.repo, "acme/")
  input.delegation.user_role == "engineer"
}

Delegation chains

You need to preserve “who delegated what to whom.”

That matters for:

  • approvals
  • incident review
  • least privilege
  • compliance

Audit logging

Every sensitive action should record:

  • agent identity
  • delegated authority chain
  • tool or API called
  • decision outcome
  • timestamp
  • request metadata
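The fields above map naturally to one JSON line per event, which is easy to ship and query. A sketch, with hypothetical field names:

```python
import json
import time
import uuid

def audit_record(agent_id, delegation_chain, tool, decision, metadata):
    """Build one audit event covering identity, delegation, and outcome."""
    return {
        "event_id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "delegation_chain": delegation_chain,  # user -> orchestrator -> agent
        "tool": tool,
        "decision": decision,                  # outcome plus the policy that decided
        "timestamp": int(time.time()),
        "metadata": metadata,
    }

record = audit_record(
    agent_id="agent:code-reviewer:prod",
    delegation_chain=["user:alice", "agent:orchestrator:prod"],
    tool="github.pull_request.create",
    decision={"outcome": "allow", "policy": "agent.authz"},
    metadata={"repo": "acme/api"},
)
print(json.dumps(record))
```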

Approval workflows

Some actions should require a human checkpoint, especially for:

  • production deploys
  • destructive database operations
  • spending money
  • customer-impacting changes
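The gate itself can be simple: route high-risk actions to a queue instead of executing them. A minimal sketch, with a hypothetical high-risk action list:

```python
# Actions that always require a human decision before executing
HIGH_RISK = {"deploy.production", "db.drop_table", "billing.charge"}

pending_approvals: list[dict] = []

def submit_action(agent_id: str, action: str) -> str:
    """Queue high-risk actions for human review; execute the rest."""
    if action in HIGH_RISK:
        pending_approvals.append({"agent_id": agent_id, "action": action})
        return "pending_approval"
    return "executed"

print(submit_action("agent:deployer:prod", "deploy.production"))        # pending_approval
print(submit_action("agent:code-reviewer:prod", "pull_request.create")) # executed
```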

Where this shows up fast: MCP and tool access

As more teams adopt MCP-compatible tools, the identity problem gets more urgent.

If an agent can talk to dozens or hundreds of tools, the question becomes:

  • how does the tool know which agent is calling?
  • how does it know whether the call is delegated?
  • how does it evaluate policy?
  • how does it log the decision?

MCP gets much safer when authorization is tied to a verifiable agent identity instead of a generic session secret.

Getting started

You don’t need to rebuild your stack all at once. A practical path looks like this:

1. Stop using shared agent credentials

Give each agent its own identity. Even if you start simple, make the principal distinct.

2. Sign high-risk actions

Start with actions like:

  • code writes
  • production changes
  • customer data access
  • paid API calls

3. Add delegation metadata

Track:

  • requesting user
  • parent agent
  • scope
  • expiration
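A sketch of a delegation record carrying those four fields, with an expiry check so grants are short-lived by default (field names are hypothetical):

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Delegation:
    requesting_user: str
    parent_agent: Optional[str]  # None if the user delegated directly
    agent_id: str
    scope: str
    expires_at: float            # Unix timestamp

    def is_valid(self, now: Optional[float] = None) -> bool:
        """Check the grant before every action, not just at issue time."""
        return (now if now is not None else time.time()) < self.expires_at

grant = Delegation(
    requesting_user="user:alice",
    parent_agent="agent:orchestrator:prod",
    agent_id="agent:code-reviewer:prod",
    scope="pull_request.create",
    expires_at=time.time() + 30 * 60,  # 30-minute delegation window
)
print(grant.is_valid())  # True while the window is open
```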

4. Enforce policy at tool boundaries

Your tools and internal APIs should validate both:

  • identity
  • authorization context

5. Centralize audit logs

Make sure you can answer:

  • which agent acted?
  • on whose behalf?
  • why was it allowed?
  • what changed?

Where Authora fits

If you’re assembling this yourself, you can absolutely mix and match existing pieces: Ed25519 for identity, OPA for policy, your own audit pipeline, and token exchange flows based on RFC 8693.

If you want a more integrated path, this is the kind of problem we built Authora Identity for: cryptographic agent identities, delegation chains, RBAC, MCP authorization, approval workflows, and audit logging, with SDKs in TypeScript, Python, Rust, and Go.

But the main point here isn’t “buy a platform.” It’s this:

AI agents should be treated as real actors in your system.

And real actors need verifiable identity.

If your agents can write code, access data, trigger workflows, or spend money, then “some service token did it” is no longer an acceptable security model.

Start by giving each agent a cryptographic identity. Everything else gets easier after that.

-- Authora team

This post was created with AI assistance.
