If you've put an AI agent anywhere near production systems, you've probably already hit this problem: the agent can do things, but you can't reliably answer who did what, under which authority, and whether it should have been allowed in the first place.
That gets painful fast. An agent opens a PR, rotates a secret, queries customer data, or triggers a deployment. Later, someone asks: Was that action approved? Which user or service delegated permission? Was this the same agent instance as yesterday, or a spoofed one with the same display name?
As AI agents move from chat demos to real workflows, cryptographic identity stops being a nice-to-have. It becomes the thing that makes authorization, delegation, auditability, and safe automation possible.
The core problem: agents act like users, but we identify them like scripts
Most teams start with shared API keys, service accounts with long-lived credentials, or reused user tokens. These work for prototypes. They break in production because agents are autonomous (they act without a human on every request), composable (they call other agents and tools), ephemeral (instances come and go), and delegated (they act on someone else's authority).
You need:
- A unique identity for each agent
- Verifiable proof the agent is who it claims
- A way to express what it can do
- A way to trace delegated authority
- An audit trail for every action
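The five requirements above can be sketched as a minimal identity record plus a registry. This is illustrative only: the names (`AgentIdentity`, `register_agent`) and fields are assumptions, not any real SDK's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str                   # unique identity per agent
    public_key_b64: str             # verifiable proof the agent is who it claims
    scopes: tuple                   # what it can do
    delegated_by: Optional[str]     # who granted its authority
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# In-memory stand-in for a real identity store.
registry: dict = {}

def register_agent(identity: AgentIdentity) -> None:
    """Register an agent; each agent_id maps to exactly one keypair."""
    if identity.agent_id in registry:
        raise ValueError(f"agent_id already registered: {identity.agent_id}")
    registry[identity.agent_id] = identity
```

Every action an agent takes can then be attributed to a specific record in this registry rather than to a shared credential.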
What cryptographic identity means in practice
The agent has a public/private keypair (e.g., Ed25519) and signs requests. Other systems verify the signature against the registered public key.
Properties you get:
- Non-forgeability: an attacker can't impersonate an agent by reusing its display name
- Strong attribution: actions tied to a specific identity
- Delegation support: chain authority from user to agent to tool
- Short-lived credentials: scoped, expiring tokens
- Auditability: logs record the actual cryptographic principal
A simple example: signing requests with Ed25519
```python
import json
import time

from nacl.signing import SigningKey

# Each agent holds its own Ed25519 keypair; the public half is
# registered with the verifying service.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

request = {
    "agent_id": "agent_123",
    "action": "create_pull_request",
    "repo": "acme/api",
    "timestamp": int(time.time()),  # lets the verifier reject stale replays
}

# Canonical JSON encoding so signer and verifier hash identical bytes.
message = json.dumps(request, separators=(",", ":"), sort_keys=True).encode()
signed = signing_key.sign(message)

# verify() raises BadSignatureError if the message was tampered with.
verify_key.verify(signed.message, signed.signature)
print("Verified")
```
Delegation matters as much as authentication
Authentication answers: which agent is this?
Delegation answers: who allowed it to act, and with what scope?
A better pattern:
- User authenticates
- User delegates limited authority to the agent (RFC 8693)
- Agent exchanges that for a short-lived token
- Downstream tools see both agent identity and delegation chain
- Policy decides whether the action is allowed
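The exchange step uses the standard RFC 8693 form parameters. A sketch of the request body an agent would POST to the authorization server's token endpoint (the token strings and scope here are placeholders; the parameter names are from the RFC):

```python
TOKEN_EXCHANGE_GRANT = "urn:ietf:params:oauth:grant-type:token-exchange"
ACCESS_TOKEN_TYPE = "urn:ietf:params:oauth:token-type:access_token"

def build_token_exchange(user_token: str, agent_token: str, scope: str) -> dict:
    """Form parameters for an RFC 8693 token-exchange request."""
    return {
        "grant_type": TOKEN_EXCHANGE_GRANT,
        "subject_token": user_token,         # the delegating user's token
        "subject_token_type": ACCESS_TOKEN_TYPE,
        "actor_token": agent_token,          # proves which agent is acting
        "actor_token_type": ACCESS_TOKEN_TYPE,
        "scope": scope,                      # narrowed to the task at hand
    }
```

The resulting short-lived token carries both identities: downstream services see the agent as the actor and the user as the subject, which is exactly the delegation chain the audit trail needs.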
Where MCP fits in
An MCP server should know which agent is calling, what it's allowed to do, whether it's acting on behalf of a user, and whether approval is required for sensitive tools.
Without strong identity, MCP authorization is guesswork. With it, authorization becomes a policy problem.
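With identity in place, the MCP server's authorization check can be a small, explicit policy function rather than guesswork. A sketch, with an illustrative (assumed) policy table:

```python
# Tools that require explicit human approval regardless of scope.
SENSITIVE_TOOLS = {"rotate_secret", "trigger_deployment"}

def authorize(agent_scopes: set, tool: str,
              has_user_delegation: bool, approved: bool = False) -> bool:
    """Decide whether a verified agent may invoke an MCP tool."""
    if tool not in agent_scopes:
        return False                # agent was never granted this tool
    if tool in SENSITIVE_TOOLS and not approved:
        return False                # sensitive tools need explicit approval
    return has_user_delegation      # must act on a user's delegated authority
```

The point isn't this particular table; it's that the decision consumes verified inputs (identity, scopes, delegation, approval) instead of trusting whatever name the caller supplies.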
Getting started
- Stop sharing one credential across all agents -- give each its own identity
- Use asymmetric keys -- Ed25519 over shared secrets
- Make tokens short-lived -- scoped, expiring, task-bound
- Adopt policy-based authorization -- centralize decisions
- Record delegation context -- who delegated, what scope, when
- Add approval workflows -- gate production deploys, secret access, and bulk data operations
- Make MCP authorization explicit -- treat tools like privileged interfaces
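Tying the checklist together, each action should leave an audit entry that records the cryptographic principal and its delegation context. A sketch (field names are illustrative):

```python
import json
import time

def audit_entry(agent_id: str, action: str, delegated_by: str,
                scope: str, signature_b64: str) -> str:
    """Serialize one audit-log line for an agent action."""
    return json.dumps({
        "ts": int(time.time()),
        "agent_id": agent_id,           # cryptographic principal, not a display name
        "action": action,
        "delegated_by": delegated_by,   # who granted the authority
        "scope": scope,                 # what was granted
        "signature": signature_b64,     # ties the log line to a verifiable request
    }, sort_keys=True)
```

With the signature stored alongside each entry, "was this action approved, and by whom?" becomes a lookup rather than an investigation.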
Platforms like Authora Identity are built around these patterns: Ed25519 agent identities, RBAC, delegation chains via RFC 8693, MCP authorization, policy engines, approval workflows, and audit logging with SDKs in TypeScript, Python, Rust, and Go. But the bigger point is the architecture.
The agent needs a cryptographic identity, not just a name and an API key.
-- Authora team
This post was created with AI assistance.