Your agent can send an email, place an order, or merge a PR. If an auditor asks "prove it," what artifact do you hand them?
Plaintext logs aren't an answer. They're editable, deletable, and reorderable by anyone who controls the runtime. NIST was quiet about this gap until recently; in early 2026 it started lining up an answer.
On February 5, 2026, NIST NCCoE published a concept paper on AI agent identity and authorization, surfacing four control areas any production agent deployment must address. Twelve days later, on February 17, 2026, NIST CAISI launched the AI Agent Standards Initiative — more deliverables coming, exact timelines still emerging.
The concept paper is scoping work, not a prescriptive standard yet. But the four control areas are settled, and if you're building AI agents today, they tell you what you'll need to have working before NIST's normative output lands.
This post walks through each area, what it actually requires, and where the implementation gaps are today. Python code throughout.
The four control areas
The NCCoE concept paper surfaces four areas (the paper itself doesn't call them "pillars" — that's my framing for this post):
- Identification — How are AI agents identified? Persistent vs task-specific identities, metadata for action scoping.
- Authentication & Authorization — OAuth 2.0 extensions, ABAC, policy-based access control for agents as distinct principals. Delegation is discussed under authorization, not as a standalone area.
- Access Delegation (sub-area of authorization) — Linking user identities to agents while preventing privilege escalation through delegation chains.
- Auditing & Non-repudiation — "Mechanisms by which specific AI agent actions are attributed to their non-human entity for audit and forensic purposes."
The fourth area is where most production deployments are weakest today. Most agents generate logs. Few generate evidence.
Pillar 1: Identification
What NCCoE asks: A mechanism for issuing and resolving agent identities. Either persistent (the agent is a long-lived entity) or task-specific (a new identity per task).
What "good" looks like: Public-key cryptographic identity. Each agent has a keypair. The public key is the verifiable identifier. No central registry required.
Minimal working code:

```python
from signet_auth import SigningAgent

# Persistent identity: create once, reuse across tasks
agent = SigningAgent.create("procurement-bot-01", owner="acme-corp")
print(f"Agent ID: {agent.public_key}")
# X9kF2mN8pQ3... (raw base64 Ed25519 public key, 44 chars)
```
The Ed25519 public key is the agent identifier. A verifier who receives any artifact signed by this key knows it came from this agent. No registry lookup, no external service.
Gap to watch: NCCoE mentions SPIFFE/SPIRE for workload identity. If you're in a Kubernetes environment, the integration point is real — SPIRE issues short-lived SVIDs that can back your agent identity instead of long-lived local keys.
Pillar 2: Authorization
What NCCoE asks: OAuth 2.0 extensions, ABAC, or policy-based access control. The agent is a distinct principal, not a user. Decisions about what the agent can do happen at policy evaluation time.
What "good" looks like: The policy decision is evaluated before the action executes, and the decision is captured cryptographically so an auditor can later verify that the policy ran.
Minimal working code:

```python
import json

from signet_auth import (
    SigningAgent,
    Receipt,
    parse_policy_yaml,
    sign_with_policy,
    load_signing_key,
    default_signet_dir,
)

# Define policy in YAML. Rules use `id:`, and numeric comparisons use operator
# objects (e.g. {gt: 1000}), not string expressions.
policy_yaml = """
version: 1
name: procurement-safe
default_action: deny
rules:
  - id: allow-search
    match:
      tool: web_search
    action: allow
  - id: require-approval-over-threshold
    match:
      tool: place_order
      params:
        amount: {gt: 1000}
    action: require_approval
"""

# Canonical JSON policy is what gets hashed into the attestation.
policy_json = parse_policy_yaml(policy_yaml)

agent = SigningAgent("procurement-bot-01")
action_json = json.dumps({
    "tool": "web_search",
    "params": {"query": "laptop prices"},
    "params_hash": "",
    "target": "",
    "transport": "stdio",
})

# The low-level binding takes string args and returns (receipt_json, eval_json).
# Load the raw signing key bytes from disk yourself; the signing agent's
# in-memory key handle is never re-exported.
secret_key = load_signing_key(default_signet_dir(), "procurement-bot-01")
receipt_json, eval_json = sign_with_policy(
    secret_key,
    action_json,
    agent.name,
    agent.owner or "",
    policy_json,
)

receipt = Receipt.from_json(receipt_json)
# receipt.policy contains the attestation: which policy hash, which rule id,
# which decision. All inside the Ed25519 signature scope.
```
The CLI-equivalent one-liner is `signet sign --key procurement-bot-01 --tool web_search --params '{"query":"laptop prices"}' --policy policy.yaml`.
What this gives you: The policy version (hashed), the matched rule, and the decision are co-signed with the action. A verifier can confirm "the policy evaluated allow-search for this exact call" without trusting the runtime that produced it.
Gap to watch: NCCoE emphasizes that enforcement and attestation are separate concerns. The policy must actually run before the action, not after. If your implementation signs the policy decision post-hoc, it's not enforcement — it's reconstruction.
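The enforce-before-sign ordering can be sketched in a few lines. This is a toy policy dict and evaluator, not the signet_auth API — just the control flow: evaluate, refuse on deny, execute, then attest with the policy hash.

```python
import hashlib
import json

# Toy policy, hypothetical shape (not the Signet YAML schema).
POLICY = {
    "name": "procurement-safe",
    "default_action": "deny",
    "rules": [
        {"id": "allow-search", "tool": "web_search", "action": "allow"},
    ],
}

def evaluate(policy: dict, tool: str):
    """Return (decision, matched_rule_id) for a tool call."""
    for rule in policy["rules"]:
        if rule["tool"] == tool:
            return rule["action"], rule["id"]
    return policy["default_action"], None

def guarded_call(policy: dict, tool: str, fn):
    decision, rule_id = evaluate(policy, tool)   # policy runs FIRST
    if decision != "allow":
        raise PermissionError(f"{tool}: {decision}")
    result = fn()                                # only now does the action run
    # The attestation records which policy (by hash) produced which decision.
    policy_hash = hashlib.sha256(
        json.dumps(policy, sort_keys=True).encode()
    ).hexdigest()
    attestation = {
        "policy_hash": f"sha256:{policy_hash}",
        "decision": decision,
        "matched_rule": rule_id,
    }
    return result, attestation

result, att = guarded_call(POLICY, "web_search", lambda: "ok")
print(att["decision"], att["matched_rule"])   # allow allow-search
```

If the `fn()` call ever moves above the `evaluate` call, you have reconstruction, not enforcement — exactly the failure mode the concept paper warns about.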
Pillar 3: Access Delegation
What NCCoE asks: Mechanisms for linking user identities to agents while preventing privilege escalation. Actions must trace back to the human authority that delegated them.
What "good" looks like: A cryptographically signed delegation chain. The root is a human (or org). Each delegation narrows scope, never widens. Every delegation has an expiration. Verification is offline.
Minimal working code:

```python
from signet_auth import SigningAgent

# Assumes both keys exist already (signet identity create alice-human, etc.).
# Use SigningAgent.create(name, owner=...) on first run to mint them.
alice = SigningAgent("alice-human")
bot = SigningAgent("procurement-bot-01")

# Alice delegates scoped authority to the bot, bounded by an expiration.
# Scope fields are passed as keyword args; permissions can only narrow from here.
token_json = alice.delegate(
    bot.public_key,
    "procurement-bot-01",
    tools=["web_search", "place_order"],
    targets=["mcp://procurement-api"],
    max_depth=0,  # cannot re-delegate
    expires="2026-06-30T23:59:59Z",
)

# The bot signs an action carrying the delegation chain as proof.
# chain_json is a JSON array string of delegation tokens.
receipt_json = bot.sign_authorized(
    "place_order",
    params={"sku": "LAPTOP-01", "amount": 850},
    target="mcp://procurement-api",
    chain_json=f"[{token_json}]",
)
# The v4 receipt carries:
# - authorization.chain_hash: SHA-256 of the delegation chain
# - authorization.root_pubkey: Alice's public key
# - All inside the signature scope
```
Verification is offline. A third party with Alice's public key can verify the chain without contacting Alice:
```python
scope_json = SigningAgent.verify_authorized(
    receipt_json,
    trusted_roots=[alice.public_key],
    clock_skew_secs=60,
)
# Returns the effective scope as a JSON string, or raises if:
# - the signature is invalid
# - chain scope narrowing is violated
# - a delegation has expired
# - the root is not in trusted_roots
```
Gap to watch: NCCoE calls out the privilege escalation risk explicitly. Without "permissions only narrow, never widen," a compromised intermediate agent could issue itself broader permissions. The scope narrowing check must be enforced at verification time, not just at delegation time.
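The verification-time narrowing check is worth seeing concretely. A sketch over a toy chain of dicts (not the Signet token format, and with signatures omitted): walk the chain from the root, reject any link whose scope exceeds its parent's, reject expired links, and return the effective scope.

```python
import datetime

def verify_chain(chain: list, now: datetime.datetime) -> set:
    """chain[0] is the root delegation; each later link must narrow it.

    Toy format: each link is {"tools": [...], "expires": ISO-8601 string}.
    """
    allowed = set(chain[0]["tools"])
    for link in chain:
        expires = datetime.datetime.fromisoformat(link["expires"])
        if now >= expires:
            raise ValueError("delegation expired")
        tools = set(link["tools"])
        if not tools <= allowed:          # widening attempt -> reject
            raise ValueError(f"scope widened: {tools - allowed}")
        allowed = tools                   # effective scope only narrows
    return allowed

chain = [
    {"tools": ["web_search", "place_order"], "expires": "2026-06-30T23:59:59"},
    {"tools": ["web_search"], "expires": "2026-06-30T23:59:59"},
]
print(verify_chain(chain, datetime.datetime(2026, 5, 1)))  # {'web_search'}
```

Because the check runs at verification time, a compromised intermediate that mints itself a wider link fails verification even if the link is otherwise well-formed.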
Pillar 4: Auditing & Non-repudiation
This is where most deployments fail the auditor test. NCCoE asks:
"Mechanisms by which specific AI agent actions are attributed to their non-human entity for audit and forensic purposes."
Most current implementations answer this with: "We write logs." That is not what NCCoE is asking for.
What "good" looks like:
- Non-repudiation: The agent cannot later deny it took an action. Ed25519 signatures.
- Tamper-evident: Modifying a log entry is detectable. Signatures break.
- Tamper-evident ordering: Deleting or reordering entries is detectable. SHA-256 hash chains.
- Independently verifiable: An auditor doesn't need access to the original runtime. Offline verification with the public key.
Most audit log implementations satisfy 0 of 4. The concept paper's language is specific:
"Mechanisms by which agent actions can be logged in a tamper-proof manner."
Minimal working code:

```python
from signet_auth import SigningAgent, audit_verify_chain, default_signet_dir

# Assumes the key already exists; use SigningAgent.create(...) on first run.
agent = SigningAgent("procurement-bot-01")

# Every action is signed and appended to the hash-chained audit log.
agent.sign("web_search", params={"query": "laptop"}, audit=True)
agent.sign("place_order", params={"sku": "LAPTOP-01", "amount": 850}, audit=True)

# An auditor — who never ran this code — verifies the chain:
signet_dir = default_signet_dir()
status = audit_verify_chain(signet_dir)
print(f"Chain intact: {status.valid}")
print(f"Total records: {status.total_records}")
```
If any entry is modified: the Ed25519 signature fails. If any entry is deleted: the SHA-256 hash chain breaks. If any entry is reordered: the hash chain breaks. All detectable independently of the runtime that produced them.
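The hash-chain mechanics behind that detection fit in a few lines of standard-library Python. A sketch (signatures omitted for brevity — a real log signs each record too): every record commits to the previous record's hash, so edits, deletions, and reorders all break the chain for any offline verifier.

```python
import hashlib
import json

def append(log: list, entry: dict) -> None:
    """Append a record that commits to the previous record's hash."""
    prev = log[-1]["hash"] if log else "sha256:genesis"
    digest = hashlib.sha256(
        json.dumps({"entry": entry, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": f"sha256:{digest}"})

def chain_intact(log: list) -> bool:
    """Re-derive every hash from genesis; any break means tampering."""
    prev = "sha256:genesis"
    for record in log:
        digest = hashlib.sha256(
            json.dumps({"entry": record["entry"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != f"sha256:{digest}" or record["prev"] != prev:
            return False
        prev = record["hash"]
    return True

log: list = []
append(log, {"tool": "web_search"})
append(log, {"tool": "place_order"})
print(chain_intact(log))   # True
del log[0]                 # delete an entry...
print(chain_intact(log))   # False: the chain no longer links to genesis
```

The verifier needs nothing from the runtime — just the log file and the recomputation rule.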
How the four areas compose
The four areas aren't independent — they compose into a single verifiable artifact. Real Signet receipt shape (simplified):

```
Receipt {
  v: 4,
  id: "rec_...",
  action: {
    tool: "place_order",
    params: { ... },
    params_hash: "sha256:...",
    target: "mcp://procurement-api",
    transport: "stdio",
  },
  signer: {                    // Pillar 1 (Identification)
    pubkey: "...",             // raw base64 Ed25519
    name: "procurement-bot-01",
    owner: "acme-corp",
  },
  policy: {                    // Pillar 2 (Authorization)
    policy_hash: "sha256:...",
    policy_name: "procurement-safe",
    decision: "allow",
    matched_rules: ["allow-search"],
  },
  authorization: {             // Pillar 3 (Delegation)
    chain_hash: "sha256:...",
    root_pubkey: "...",        // Alice's public key, raw base64
  },
  ts: "2026-04-30T12:00:00Z",
  nonce: "rnd_...",
  sig: "ed25519:...",          // binding the whole thing (Pillar 4)
}
```
Every field inside the signature scope is tamper-evident. A verifier with the root public key can confirm, offline:
- Identity: this specific agent produced this receipt
- Authorization: this agent was delegated specific scope by this root
- Policy: this policy was evaluated and returned this decision
- Action: this tool was called with these parameters at this time
- Chain integrity: this receipt is part of an unbroken sequence
That's what NCCoE is asking for. Not logs. Not telemetry. Cryptographic receipts.
Where to start
The four areas are additive:
1. Start with Auditing & Non-repudiation (signed receipts). This is the foundation — without it, the other three don't produce verifiable evidence.
2. Add Identification. Name the agent with a public-key ID.
3. Add Delegation if your agents act on behalf of humans or other agents.
4. Add Authorization (policy) if you have deny rules that must be provably enforced.
Most teams start by retrofitting signed receipts onto an existing agent framework via callbacks. The examples above use Signet, which handles all four areas. The specific tool matters less than the pattern. Whatever you build, the verifier should be able to answer four questions without calling back to your infrastructure:
- Who did this? (signer pubkey)
- Were they allowed? (authorization chain root)
- Did the policy approve it? (policy hash + decision)
- Is the audit trail intact? (hash chain)
If your current implementation can't answer all four from a receipt file alone, that's the gap the NCCoE concept paper will push you to close.
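As a minimal self-test, those four questions can be answered from a receipt dict alone. A sketch against the simplified v4 shape shown earlier — field names assumed, and actual Ed25519 signature and chain verification stubbed out, since those need the key material:

```python
def audit_checklist(receipt: dict, trusted_roots: set) -> dict:
    """Answer the four auditor questions from a receipt dict, offline.

    Structural checks only; a real verifier would also check the Ed25519
    signature and recompute the delegation chain hash.
    """
    return {
        "who": receipt["signer"]["pubkey"],
        "allowed": receipt["authorization"]["root_pubkey"] in trusted_roots,
        "policy_approved": receipt["policy"]["decision"] == "allow",
        "chain_ref": receipt["authorization"]["chain_hash"].startswith("sha256:"),
    }

receipt = {
    "signer": {"pubkey": "X9kF2mN8pQ3..."},
    "policy": {"decision": "allow"},
    "authorization": {"root_pubkey": "alice-pubkey", "chain_hash": "sha256:ab12"},
}
print(audit_checklist(receipt, trusted_roots={"alice-pubkey"}))
```

If any of the four lookups raises a `KeyError` on your own receipts, that field is the gap to close first.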
What's next
NIST has signaled that more deliverables are coming under the AI Agent Standards Initiative — an Interoperability Profile is on the roadmap, but the published timeline and exact contents are still emerging. The direction is clear (cryptographic identity, signed delegation, tamper-evident audit) even if the final profile is not.
The IETF draft draft-farley-acta-signed-receipts is currently the most advanced concrete receipt specification — check Datatracker for the latest revision before citing a specific version.
If you want to follow the standards track:
- NIST CAISI AI Agent Standards Initiative (main page)
- NCCoE Concept Paper (the four control areas)
- Express interest in working with CAISI
- IETF draft-farley-acta-signed-receipts
The window between now and whenever the normative profile lands is when the formats get locked in. What you ship in the next quarter will probably dictate whether you're ahead of or behind the NIST curve.
Signet (Apache-2.0 OR MIT) is one open-source implementation in this space — Rust core, Python and TypeScript bindings — used as the working code in this post. `pip install signet-auth`.