DEV Community

Authora Dev
Cryptographic Agent Identity: A Technical Solution to Bot Detection

If you’ve tried to put an AI agent in front of real systems, you’ve probably hit the same wall: the moment an agent leaves your local demo and starts calling APIs, opening tickets, touching code, or invoking MCP tools, nobody can reliably answer a basic question:

Who is actually making this request?

Not “which API key was used,” and not “which user clicked the button.” I mean: which agent instance, running what code, under whose authority, with which permissions, and with what audit trail?

That gap is why bot detection keeps getting weirder. We pile on IP reputation, browser fingerprinting, CAPTCHAs, rate limits, behavioral heuristics, and anomaly scoring. Some of that is useful. But if you’re protecting agent-accessible infrastructure, those are mostly compensating controls for a missing primitive:

agents need identity.

In this post, I’ll argue that cryptographic agent identity is a more durable foundation than trying to “detect bots” after the fact. If an agent is going to act autonomously, it should authenticate like a first-class principal.

The problem with bot detection

Traditional bot detection is optimized for anonymous traffic on the public internet:

  • Is this browser real?
  • Is this user behaving like a human?
  • Does this session look automated?
  • Should we challenge or block it?

That works reasonably well for login pages and consumer apps. It works much less well when the “bot” is actually supposed to be there:

  • a coding agent opening PRs
  • an MCP client calling tools
  • an internal support agent updating tickets
  • a deployment agent changing infrastructure
  • a research agent reading sensitive docs

In these cases, “is this automated?” is the wrong question.

It’s automated by design.

The real questions are:

  • Is this a recognized agent identity?
  • Was this action delegated by a trusted user or service?
  • Does the agent have permission for this exact tool or resource?
  • Can we revoke it, scope it, and audit it?

If your only answer is “it has a bearer token,” you don’t have agent identity. You have shared secret distribution.

What cryptographic agent identity looks like

A practical model is to give every agent its own public/private keypair, typically something lightweight and modern like Ed25519.

That gives you a few useful properties immediately:

  1. Strong identity

    The agent can sign requests or tokens using its private key.

  2. Verifiable provenance

    Services can verify signatures using the public key, without trusting network location or user-agent strings.

  3. Delegation chains

    A user or parent service can delegate authority to an agent in a structured way.

  4. Scoped authorization

    Identity becomes the input to policy, not a replacement for policy.

  5. Revocation and rotation

    You can disable an agent identity without rotating every downstream credential.

This is not a silver bullet. You still need authorization, sandboxing, logging, and sane policies. If OPA fits your architecture, use OPA. If SPIFFE fits better for workload identity, use SPIFFE. The point is not that one vendor-specific mechanism solves everything.

The point is that you can’t authorize what you can’t identify.

A simple request-signing flow

At a high level, the flow looks like this:

  1. Agent generates or receives an Ed25519 keypair
  2. Public key is registered with your identity layer
  3. Agent signs a request payload or challenge
  4. API gateway or tool server verifies the signature
  5. Authorization policy checks:
    • agent identity
    • delegated subject
    • allowed actions
    • expiry / constraints

Here’s a minimal Python example using Ed25519 request signing:

from nacl.signing import SigningKey, VerifyKey
from nacl.encoding import HexEncoder
import json
import time

# Agent generates identity
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

public_key_hex = verify_key.encode(encoder=HexEncoder).decode()
print("Agent public key:", public_key_hex)

# Canonical request payload
payload = {
    "agent_id": "agent:code-reviewer-prod",
    "method": "POST",
    "path": "/mcp/tools/create_pr_comment",
    "timestamp": int(time.time()),
    "body_sha256": "abc123def456"
}

message = json.dumps(payload, sort_keys=True).encode()
signature = signing_key.sign(message).signature.hex()

print("Signature:", signature)

# Server-side verification (raises nacl.exceptions.BadSignatureError on mismatch)
server_verify_key = VerifyKey(public_key_hex.encode(), encoder=HexEncoder)
server_verify_key.verify(message, bytes.fromhex(signature))
print("Verified")

In production, you’d want:

  • replay protection via nonce or timestamp windows
  • canonical serialization
  • signed delegation claims
  • key rotation support
  • audit logging
  • policy checks after verification

But the core idea is simple: the request is attributable to a cryptographic principal.
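The first production item above, replay protection, can be sketched as a timestamp window plus a nonce cache. This is a minimal illustration, not a hardened implementation: the 300-second window, the in-memory cache, and the `check_replay` helper are all assumptions for the example (a real deployment would use a shared store such as Redis).

```python
import time

# Illustrative replay guard: reject stale timestamps and reused nonces.
MAX_SKEW_SECONDS = 300          # assumed clock-skew window
_seen_nonces = {}               # nonce -> expiry time (in-memory for the sketch)

def check_replay(timestamp, nonce, now=None):
    """Return True if the request is fresh; False if stale or replayed."""
    now = time.time() if now is None else now
    # Reject requests outside the allowed clock-skew window.
    if abs(now - timestamp) > MAX_SKEW_SECONDS:
        return False
    # Evict expired nonces so the cache does not grow without bound.
    for n, expiry in list(_seen_nonces.items()):
        if expiry < now:
            del _seen_nonces[n]
    # Reject a nonce we have already accepted inside the window.
    if nonce in _seen_nonces:
        return False
    _seen_nonces[nonce] = now + MAX_SKEW_SECONDS
    return True
```

Run this check after signature verification: a valid signature over a stale or replayed payload should still be rejected.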

Why this beats “bot detection” for agent systems

Bot detection is probabilistic. Cryptographic identity is deterministic.

With bot detection, you’re often asking:

  • does this look suspicious?
  • should we challenge it?
  • how confident are we?

With cryptographic identity, you can ask:

  • was this signed by a known agent?
  • is the delegation valid?
  • does policy allow this exact action?

That shift matters operationally.

Better auditability

Instead of logs like:

request from API key sk_live_xxx

you get logs like:

agent:release-bot-prod invoked deploy_service

delegated by user:alice@company.com

via policy prod-deploy-with-approval

approval attached change-4821

That’s much closer to how security teams actually investigate incidents.

Better least privilege

When agents have identities, you can grant permissions to the agent itself instead of sharing broad user credentials.

For example:

  • code review agent can comment on PRs
  • release agent can deploy to staging
  • support agent can read ticket metadata but not billing exports
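Those per-agent grants can be expressed as a simple default-deny permission map. The agent IDs and action names below are illustrative, echoing the examples above, and `is_allowed` is a hypothetical helper, not an API from any particular library.

```python
# Illustrative per-agent permission map (default-deny).
AGENT_PERMISSIONS = {
    "agent:code-reviewer-prod": {"comment_pr", "read_diff"},
    "agent:release-bot-prod": {"deploy_staging"},
    "agent:support-bot-prod": {"read_ticket_metadata"},
}

def is_allowed(agent_id, action):
    # Unknown agents and unlisted actions are refused by default.
    return action in AGENT_PERMISSIONS.get(agent_id, set())
```

In practice you'd load this from your policy engine rather than hardcoding it, but the shape is the same: permissions attach to the agent identity, not to a shared credential.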

Better zero-trust boundaries

If you run MCP servers or internal tool APIs, cryptographic identity lets you verify the caller at the edge instead of trusting that “anything inside the network is fine.”

That becomes increasingly important as agents get spawned dynamically across laptops, CI runners, cloud containers, and vendor-hosted environments.

Delegation matters as much as identity

Identity alone is not enough. Agents usually act for someone.

That’s where delegation comes in.

A useful model is a delegation chain similar to token exchange patterns from RFC 8693:

  • user authenticates
  • user delegates limited authority to agent
  • agent receives scoped credentials
  • downstream service evaluates:
    • who the agent is
    • who delegated to it
    • what constraints were attached

Example constraints:

  • valid for 30 minutes
  • only for repository org/api
  • only read_issues and comment_pr
  • requires human approval for merge_pr

That is much safer than handing an agent a long-lived personal access token and hoping for the best.
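Evaluating a constrained delegation claim can be sketched like this. The field names (`delegator`, `allowed_actions`, `requires_approval`, `expires_at`) are assumptions for the example, not a standard claim format; in a real system the claim would also be signed by the delegator, as in the request-signing example above.

```python
import time

# Hypothetical delegation claim matching the constraints described above.
claim = {
    "delegator": "user:alice@company.com",
    "agent_id": "agent:code-reviewer-prod",
    "repository": "org/api",
    "allowed_actions": ["read_issues", "comment_pr"],
    "requires_approval": ["merge_pr"],
    "expires_at": int(time.time()) + 1800,  # valid for 30 minutes
}

def evaluate_delegation(claim, agent_id, action, now=None):
    """Return 'allow', 'needs_approval', or 'deny' for one action."""
    now = time.time() if now is None else now
    # Wrong agent or expired claim: deny outright.
    if claim["agent_id"] != agent_id or now >= claim["expires_at"]:
        return "deny"
    if action in claim["requires_approval"]:
        return "needs_approval"
    if action in claim["allowed_actions"]:
        return "allow"
    return "deny"
```

The downstream service answers all three questions from the list above: who the agent is, who delegated to it, and what constraints were attached.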

Getting started

You don’t need a giant platform rollout to start using this pattern.

1. Give agents stable identities

Create a unique identity per agent type or per deployed agent instance. Avoid shared credentials across multiple agents.

At minimum, track:

  • agent ID
  • public key
  • owner or delegated subject
  • creation time
  • status (active/revoked)
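A minimal registry record covering those fields might look like the sketch below. The in-memory dict and helper names are illustrative; in production this would live in a database behind your identity layer.

```python
from dataclasses import dataclass, field
import time

# Minimal agent registry record; field names mirror the list above.
@dataclass
class AgentRecord:
    agent_id: str
    public_key_hex: str
    owner: str
    created_at: float = field(default_factory=time.time)
    status: str = "active"  # or "revoked"

registry = {}  # agent_id -> AgentRecord (in-memory for the sketch)

def register(record):
    registry[record.agent_id] = record

def lookup_active_key(agent_id):
    # Only return a key for agents that exist and are not revoked.
    rec = registry.get(agent_id)
    return rec.public_key_hex if rec and rec.status == "active" else None
```

Verification code then fetches keys through `lookup_active_key`, so revoking an agent immediately stops its requests from verifying.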

2. Sign requests to sensitive tools

Start with your highest-risk actions:

  • code modification
  • production deployment
  • data export
  • ticket updates
  • secret access

Require a signed header or detached signature, and verify it at the API or MCP boundary.
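One way to carry that signature is a small set of request headers. The header names (`X-Agent-Id`, etc.) and the `sign_fn` callback are assumptions for illustration; the canonical payload mirrors the earlier Ed25519 example.

```python
import hashlib
import json
import time

def build_signed_headers(agent_id, method, path, body, sign_fn):
    """Build illustrative signed headers; sign_fn maps bytes to a hex signature."""
    payload = {
        "agent_id": agent_id,
        "method": method,
        "path": path,
        "timestamp": int(time.time()),
        "body_sha256": hashlib.sha256(body).hexdigest(),
    }
    # Canonical serialization: sorted keys so both sides sign the same bytes.
    message = json.dumps(payload, sort_keys=True).encode()
    return {
        "X-Agent-Id": agent_id,
        "X-Agent-Payload": message.decode(),
        "X-Agent-Signature": sign_fn(message),
    }
```

The server recomputes the body hash, re-serializes the payload, and verifies the signature against the registered public key before any policy check runs.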

3. Add policy checks

Use whatever policy engine fits your stack:

  • OPA / Rego
  • Cedar
  • custom RBAC/ABAC layer

Example policy questions:

  • Can this agent call this tool?
  • Is the delegation still valid?
  • Does this action require approval?
  • Is this environment allowed?
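A toy version of those policy questions, independent of any particular engine, is a rule table evaluated with default-deny. The rule fields and the `decide` helper are illustrative; in OPA or Cedar the same logic would live in Rego or Cedar policies.

```python
# Illustrative policy table keyed by agent, tool, and environment.
POLICY = [
    {"agent": "agent:release-bot-prod", "tool": "deploy_service",
     "environments": {"staging"}, "needs_approval": False},
    {"agent": "agent:release-bot-prod", "tool": "deploy_service",
     "environments": {"production"}, "needs_approval": True},
]

def decide(agent, tool, environment):
    for rule in POLICY:
        if (rule["agent"] == agent and rule["tool"] == tool
                and environment in rule["environments"]):
            return "needs_approval" if rule["needs_approval"] else "allow"
    return "deny"  # default-deny: no matching rule means no access
```

Note that identity is an input here, exactly as argued above: the policy can only be this precise because the caller is a verified agent, not an anonymous token.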

4. Log identity + delegation + action

Your audit trail should capture:

  • agent identity
  • delegating user/service
  • requested action
  • policy decision
  • approval references
  • result
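Emitting those fields as structured JSON lines keeps the trail queryable. The field names below are illustrative, chosen to match the audit example earlier in the post.

```python
import json
import time

def audit_entry(agent, delegator, action, decision, approval=None, result=None):
    """Serialize one audit record as a JSON line (illustrative schema)."""
    entry = {
        "ts": int(time.time()),
        "agent": agent,
        "delegated_by": delegator,
        "action": action,
        "policy_decision": decision,
        "approval_ref": approval,
        "result": result,
    }
    return json.dumps(entry, sort_keys=True)
```

Each line answers the incident-response questions directly: which agent, on whose behalf, doing what, under which decision.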

5. Rotate and revoke keys

Treat agent keys like workload credentials, not static app secrets. Build revocation into the design early.
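Rotation is easier if an agent can hold multiple keys, each with its own status: register the new key first, then revoke the old one, so there is no window with zero valid keys. The key-store shape and helper names below are assumptions for the sketch.

```python
# Illustrative key store: an agent may have several keys, each with a status.
keys = {}  # (agent_id, key_id) -> {"public_key": ..., "status": ...}

def add_key(agent_id, key_id, public_key_hex):
    keys[(agent_id, key_id)] = {"public_key": public_key_hex, "status": "active"}

def rotate(agent_id, old_key_id, new_key_id, new_public_key_hex):
    # New key goes live before the old one is retired: no gap in coverage.
    add_key(agent_id, new_key_id, new_public_key_hex)
    keys[(agent_id, old_key_id)]["status"] = "revoked"

def active_keys(agent_id):
    return [kid for (aid, kid), rec in keys.items()
            if aid == agent_id and rec["status"] == "active"]
```

Revocation is then the same operation as the second half of rotation, which is why building it in early costs so little.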

Where Authora fits

This is the problem space Authora works on: cryptographic agent identity, delegation chains, authorization, approvals, and agent verification at the edge. But the broader pattern matters more than any single implementation.

If you’re building agent-facing systems, the main design shift is this:

stop treating agents as suspicious users to detect, and start treating them as principals to authenticate and authorize.

That gives you a cleaner model for MCP authorization, tool access, and auditability than trying to infer intent from traffic patterns alone.

Try it yourself

If you want to test or improve your agent security posture, the signing, delegation, and policy steps above are a practical starting point.

The next wave of “bot detection” probably won’t look like better CAPTCHAs. It’ll look like cryptographic identity, explicit delegation, and policy-driven authorization for agents.

That’s a much more useful security model for systems where bots aren’t an attack pattern — they’re part of the architecture.

-- Authora team

This post was created with AI assistance.
