Authora Dev

Cryptographic Identity: The Missing Layer in Autonomous AI Agent Accountability

Your CI bot opened a PR at 2:13 AM. An autonomous coding agent merged a dependency update at 2:19. A support agent queried customer data at 2:27. By morning, something is broken — and the logs say only one thing: agent=true.

That’s the problem.

As AI agents move from “helpful assistant” to “systems that take actions,” most teams still treat them like glorified API clients: shared API keys, vague service accounts, or a single bearer token passed between tools. That might be enough for simple automation. It’s not enough for accountability.

If an agent can write code, approve workflows, access internal tools, or touch customer systems, it needs an identity model that answers basic questions with cryptographic confidence:

  • Who took this action?
  • What permissions did it have at the time?
  • Who delegated those permissions?
  • What tool call was authorized?
  • Can we prove it later in an audit or incident review?

That missing layer is cryptographic identity.

Why “just use API keys” breaks down

A lot of agent systems today work like this:

  • One API key per app or environment
  • Maybe one service account per agent type
  • Logs that record tool calls but not identity provenance
  • Permissions enforced inconsistently across tools

This creates predictable problems:

1. Shared credentials destroy attribution

If five agents use the same token, you don’t know which one acted. If one agent is compromised, every action looks the same.

2. Delegation is implicit, not provable

An agent often acts on behalf of a user, a team, or another service. But if that delegation is buried in app logic or metadata, it’s hard to inspect and harder to audit.

3. Tool access gets too broad

Agents rarely need full access to everything a human can do. Without scoped identity and policy enforcement, you end up granting blanket permissions because it’s easier.

4. Auditing becomes reconstruction

After an incident, teams piece together logs from the orchestrator, the app, the model provider, and the target system. That’s not accountability. That’s archaeology.

What cryptographic identity gives you

For autonomous agents, identity should be more than a string in a database. It should be:

  • Unique: each agent gets its own identity
  • Verifiable: actions can be signed or tied to signed credentials
  • Delegable: agents can act on behalf of users or systems with explicit chains
  • Scoped: permissions are limited by policy
  • Auditable: decisions and actions are logged with enough context to reconstruct intent and authority

A practical implementation often looks like:

  • Public/private keypairs for agents, commonly Ed25519
  • Short-lived tokens derived from that identity
  • Delegation chains using standards like RFC 8693 token exchange
  • Policy evaluation via OPA or another engine
  • Approval workflows for sensitive actions
  • Audit logs that tie action, identity, delegation, and policy decision together

This doesn’t need to be exotic. It just needs to be explicit.
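The first two components above, a per-agent keypair and verifiable actions, can be sketched with the `cryptography` package. The agent ID and tool call below are invented for illustration; the point is signing a canonical serialization of the action itself:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each agent holds its own keypair; the private key never leaves the agent.
agent_key = Ed25519PrivateKey.generate()
agent_pub = agent_key.public_key()

# Sign a canonical serialization of the tool call, so the specific action
# is attributable to the specific agent, not just the session.
tool_call = {
    "agent_id": "agent:code-agent:7f3a",
    "action": "open_pr",
    "resource": "repo:payments-api",
}
payload = json.dumps(tool_call, sort_keys=True, separators=(",", ":")).encode()
signature = agent_key.sign(payload)

# The receiving service verifies against the agent's registered public key;
# verify() raises InvalidSignature if the payload or signature was altered.
agent_pub.verify(signature, payload)
```

If verification fails, the action never executes, and if it succeeds, the log can store the signature alongside the payload as durable proof of who acted.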

A concrete example

Let’s say you have a code agent that can:

  • read a repo
  • open PRs
  • trigger CI
  • request access to secrets for deployments

Without cryptographic identity, you might give it a GitHub token and call it done.

With cryptographic identity, the flow is more defensible:

  1. The agent has its own Ed25519 keypair.
  2. It authenticates as itself.
  3. It receives a short-lived token for a specific task.
  4. If acting for a human, it uses a delegation chain.
  5. Policy decides whether it can call open_pr, trigger_ci, or request_secret.
  6. Every step is logged.

That means when the agent opens a PR, you can answer:

  • which agent instance did it
  • which user or service delegated authority
  • what policy allowed it
  • whether approval was required
  • what exact tool call was executed

That’s the difference between “an automation did something” and “this specific agent, under this delegated authority, performed this action.”

Policy matters as much as identity

Identity without policy is just naming things.

If you already use OPA, that may be the right answer for enforcement. Plenty of teams don’t need a new policy engine. They need better inputs: a real agent identity, a delegation chain, and structured tool request context.

Here’s a minimal Rego example for an agent allowed to open PRs but not merge to main directly:

package agent.authz

import rego.v1

default allow := false

allow if {
  input.agent.type == "code-agent"
  input.action == "open_pr"
  input.repo.visibility == "internal"
}

allow if {
  input.agent.type == "code-agent"
  input.action == "trigger_ci"
  input.branch != "main"
}

deny_reason := "direct merges to main require human approval" if {
  input.action == "merge_pr"
  input.branch == "main"
}

The key is that input.agent should be real identity data, not an untrusted string from a request body.
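One way to make that concrete is to build the authorization input only from claims that survived token verification. Everything in this sketch is hypothetical (the claim names follow the RFC 8693 `act` convention), but the shape is the point: identity fields come from the validated token, and only the tool parameters come from the request:

```python
# Claims produced by validating the agent's token signature BEFORE
# authorization -- never copied from an unauthenticated request body.
verified_claims = {
    "sub": "agent:code-reviewer:7f3a",
    "agent_type": "code-agent",
    "act": {"sub": "user:alice"},  # RFC 8693-style delegation claim
}

def build_authz_input(claims, request):
    """Assemble the policy-engine input: identity from verified claims,
    tool parameters from the raw request."""
    return {
        "agent": {"id": claims["sub"], "type": claims["agent_type"]},
        "delegation": {"actor": claims["act"]["sub"]},
        "action": request["action"],
        "resource": request["resource"],
        "branch": request.get("branch"),
    }

authz_input = build_authz_input(
    verified_claims,
    {"action": "open_pr", "resource": "repo:payments-api"},
)
```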

What “good” looks like in practice

You don’t need a perfect zero-trust architecture on day one. But if agents are taking actions in production systems, a decent baseline looks like this:

Per-agent identity

Every agent gets its own keypair and identifier. No shared secrets across unrelated agents.

Short-lived credentials

Use expiring tokens instead of long-lived bearer credentials. Rotate aggressively.
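A minimal sketch of the expiry check, using stdlib HMAC for brevity (a real system would use JWTs signed with the agent's key, and the signing key here is a placeholder):

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # placeholder; load from a secret store

def issue_token(agent_id, ttl_seconds=300):
    """Mint a short-lived token: base64(payload).base64(signature)."""
    payload = json.dumps(
        {"sub": agent_id, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def check_token(token):
    """Return the subject if signature is valid and not expired, else None."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return None
    return claims["sub"]
```

An expired or tampered token simply stops working, which is the property you want: compromise has a bounded window instead of living as long as the credential does.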

Explicit delegation

If an agent acts for a user or another service, model that chain directly. RFC 8693 token exchange is a good place to start.
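The shape of an RFC 8693 request makes the delegation explicit: the subject token represents the party on whose behalf the action happens, and the actor token represents the party doing the acting. The token values below are placeholders:

```python
# Placeholder tokens; in a real flow these are signed JWTs.
user_token = "token-for-alice"    # the delegating user (the subject)
agent_token = "token-for-agent"   # the acting agent (the actor)

# An RFC 8693 token-exchange request body, sent form-encoded to the
# authorization server's token endpoint.
exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    # subject_token: the party on whose behalf the request is made
    "subject_token": user_token,
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    # actor_token: the party doing the acting (the agent)
    "actor_token": agent_token,
    "actor_token_type": "urn:ietf:params:oauth:token-type:jwt",
    "scope": "repo:read pr:write",
    "audience": "github-mcp",
}
```

The token the server issues back then carries an `act` claim naming the agent, so every downstream system can see both who authorized the action and who performed it.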

Tool-level authorization

Don’t authorize “the agent.” Authorize the specific tool call in context.

Approval for sensitive actions

Some actions should pause for human approval: production deploys, secret access, customer data export, billing changes.
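A hypothetical approval gate can be as simple as a dispatch function that parks sensitive actions instead of executing them (the action names and callbacks here are invented):

```python
# Actions that must pause for a human decision.
SENSITIVE_ACTIONS = {
    "production_deploy", "request_secret",
    "export_customer_data", "change_billing",
}

def dispatch(action, execute, request_approval):
    """Execute immediately, or park the action as pending approval."""
    if action in SENSITIVE_ACTIONS:
        return {"status": "pending_approval",
                "approval_id": request_approval(action)}
    return {"status": "executed", "result": execute()}

result = dispatch("request_secret",
                  execute=lambda: "ok",
                  request_approval=lambda a: f"appr-{a}")
```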

Immutable audit trails

Store enough context to prove what happened later: identity, delegation, policy result, timestamps, tool parameters, and outcomes.
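One common way to make a trail tamper-evident is hash chaining: each entry includes the hash of the previous one, so editing any record breaks every hash after it. A stdlib-only sketch, with made-up record fields:

```python
import hashlib
import json

def append_entry(log, record):
    """Append an audit record chained to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "entry_hash": entry_hash})

def verify_chain(log):
    """Recompute every hash; any edited or reordered entry fails."""
    prev = "0" * 64
    for entry in log:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log = []
append_entry(log, {"agent": "agent:code-agent:7f3a", "action": "open_pr",
                   "policy": "allow", "delegated_by": "user:alice"})
append_entry(log, {"agent": "agent:code-agent:7f3a", "action": "trigger_ci",
                   "policy": "allow", "delegated_by": "user:alice"})
```

Shipping the head hash to separate storage (or a write-once log service) is what turns "tamper-evident" into something you can actually defend in an incident review.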

Getting started

Here’s a practical path if you’re building agent systems today.

Option 1: Roll your own baseline

If your stack is small, you can start with standard components:

  • Generate an Ed25519 keypair per agent
  • Issue short-lived JWTs or exchanged tokens
  • Enforce authorization with OPA
  • Log every tool invocation with identity and policy context

For example, generating an Ed25519 keypair in Python:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

# Generate the agent's long-term identity keypair.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Serialize the private key for storage. NoEncryption keeps the example
# simple; in production, protect it with a KMS, HSM, or encrypted PEM.
pem_private = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption()
)

# The public key is what other services register to verify the agent.
pem_public = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo
)

print(pem_private.decode())
print(pem_public.decode())

Then structure tool authorization input clearly:

{
  "agent": {
    "id": "agent:code-reviewer:7f3a",
    "type": "code-agent"
  },
  "delegation": {
    "actor": "user:alice",
    "via": "service:repo-orchestrator"
  },
  "action": "open_pr",
  "resource": "repo:payments-api",
  "branch": "agent/fix-null-check",
  "task_id": "task_12345"
}

That alone is a major step up from shared API keys.

Option 2: Use a platform that handles the plumbing

If you don’t want to build all of that yourself, this is where platforms like Authora Identity can help. It provides cryptographic agent identities based on Ed25519, RBAC, delegation chains via RFC 8693, MCP authorization, policy engines, approval workflows, and audit logging, along with SDKs in TypeScript, Python, Rust, and Go.

The important part isn’t “use this specific product.” The important part is: don’t skip identity plumbing just because the demo works without it.

A TypeScript-flavored example of what agent registration and authorization might look like:

import { Authora } from "@authora/sdk";

const authora = new Authora({ apiKey: process.env.AUTHORA_API_KEY! });

const agent = await authora.identities.create({
  name: "repo-code-agent",
  algorithm: "Ed25519",
  metadata: {
    team: "platform",
    environment: "prod"
  }
});

const token = await authora.tokens.exchange({
  subject: agent.id,
  actor: "user:alice",
  scope: ["repo:read", "pr:write"],
  audience: "github-mcp"
});

const decision = await authora.authorize({
  token,
  action: "open_pr",
  resource: "repo:payments-api"
});

if (decision.allow) {
  // invoke tool
}

Even if your real implementation differs, the model is the same: identity first, then scoped delegation, then policy, then execution.

Where this becomes urgent

This stops being theoretical when agents can:

  • modify production code
  • access customer or financial data
  • use internal admin tools
  • spend money
  • trigger infrastructure changes
  • coordinate with other agents

The more autonomous the system, the less acceptable “we think this agent probably did it” becomes.

Autonomy without identity is just unaccountable automation.

Final thought

The industry has spent years building identity and access controls for humans and services. AI agents now sit awkwardly in between: more dynamic than service accounts, less supervised than users, and increasingly capable of taking real actions.

That’s why cryptographic identity matters.

Not because it sounds advanced. Because once an agent can act, you need to know exactly who it is, what it was allowed to do, and why the system let it happen.

If your current setup can’t answer those questions cleanly, that’s probably the next layer to build.

-- Authora team

This post was created with AI assistance.
