You know what happened. You don't know who did it.
That's the state of most multi-agent systems right now. Five agents process the same request. Three of them write to shared state. Something goes wrong. The logs say "task completed successfully." Every agent claims innocence. Nobody can prove anything.
This isn't a hypothetical. It's the default.
The anonymous agent problem
Most agent frameworks treat agents as interchangeable workers. Agent A calls a tool. Agent B reads the result. The orchestrator routes work based on capability, not identity. Logs capture timestamps and tool calls, but authorship is an afterthought.
This works fine when you have two agents running a simple pipeline. It falls apart the moment you have five or more agents with overlapping access to shared resources. The question stops being "what happened" and becomes "who did this, and can they prove it?"
Traditional logging doesn't answer that question. Logs are mutable. They're written by the system itself. If an agent can write to state, it can (in theory) write to logs. There's no separation between the actor and the record of the action.
Identity as infrastructure
In distributed systems, identity isn't a feature. It's infrastructure. You can't build authorization without it. You can't build audit trails without it. You can't build trust without it.
The same principle applies to multi-agent systems, but most builders skip it entirely. They jump straight to permissions ("Agent A can read, Agent B can write") without first establishing a verifiable identity layer underneath.
Here's what that looks like in practice:
```python
from sigil import Notary

# Each agent gets its own signing identity
notary = Notary(agent_id="data-processor")

# Every action produces a signed attestation
attestation = notary.attest(
    action="write",
    target="shared_state.json",
    detail={"field": "revenue", "old": 0, "new": 15000},
)

# The attestation is cryptographically bound to this specific agent
print(attestation.agent_id)   # "data-processor"
print(attestation.signature)  # Ed25519 signature
print(attestation.chain_hash) # links to this agent's previous action
```
Three things happen here that don't happen with standard logging:
- The action is signed. The attestation is cryptographically bound to the agent that created it. You can verify authorship independently.
- The chain is hash-linked. Each attestation references the previous one for that agent. You can detect gaps, deletions, or insertions.
- Identity is the primitive. The agent doesn't just perform an action. It claims the action under its own verifiable identity.
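To make those three properties concrete, here is a minimal standard-library sketch of the pattern, not Sigil's actual implementation. The `Notary` class, its fields, and the use of HMAC are all illustrative assumptions; the real Sigil output above shows Ed25519, an asymmetric signature, where HMAC stands in here only so the example runs without extra dependencies.

```python
import hashlib
import hmac
import json

class Notary:
    """Signs one agent's actions and hash-links them into a chain.

    Illustrative sketch only: HMAC stands in for a real asymmetric
    signature (Sigil's example output shows Ed25519).
    """

    def __init__(self, agent_id: str, secret: bytes):
        self.agent_id = agent_id
        self.secret = secret
        self.prev_hash = "0" * 64  # genesis marker for this agent's chain

    def attest(self, action: str, target: str, detail: dict) -> dict:
        record = {
            "agent_id": self.agent_id,
            "action": action,
            "target": target,
            "detail": detail,
            "prev_hash": self.prev_hash,
        }
        # Canonical serialization so the same action always hashes identically
        body = json.dumps(record, sort_keys=True).encode()
        record["chain_hash"] = hashlib.sha256(body).hexdigest()
        record["signature"] = hmac.new(self.secret, body, hashlib.sha256).hexdigest()
        self.prev_hash = record["chain_hash"]  # next attestation links back here
        return record

notary = Notary("data-processor", secret=b"per-agent signing key")
a1 = notary.attest("write", "shared_state.json", {"field": "revenue", "old": 0, "new": 15000})
a2 = notary.attest("write", "shared_state.json", {"field": "cost", "old": 0, "new": 400})
assert a2["prev_hash"] == a1["chain_hash"]  # the chain orders this agent's actions
```

The key design point is that the previous hash is part of the signed body: an attacker cannot delete or reorder an attestation without invalidating every record that follows it.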
Why this matters now
The EU AI Act is starting to ask questions about AI system traceability. SOC 2 auditors are starting to ask how AI-generated changes are attributed. Regulated industries need to demonstrate that automated actions can be traced to specific system components.
"The AI did it" isn't going to satisfy an auditor. "Agent data-processor signed this action at 14:32:07, and here's the cryptographic proof" might.
But compliance isn't even the most practical reason. The most practical reason is debugging. When three agents touch the same data and the result is wrong, verifiable identity tells you exactly where to look. No guessing. No "it must have been the summarizer." Proof.
The identity stack
When you start thinking about agent identity as infrastructure, a natural stack emerges:
- Identity — every agent has a unique, verifiable identity (cryptographic key pair)
- Attribution — every action is signed by the agent that performed it
- Continuity — each agent maintains a hash chain linking all its actions in sequence
- Verification — any observer can verify an attestation without trusting the system that created it
Most systems have none of these layers. Some have basic attribution through logging. Almost none have continuity or independent verification.
The pattern is the same one that made version control, TLS certificates, and blockchain valuable. Not the hype around them, but the underlying principle: when you can verify authorship without trusting the author, you can build systems that scale trust.
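The continuity and verification layers are the easiest to see in code. Below is a hedged sketch of what an outside observer's chain check could look like: recomputing every hash from the records themselves, with no trust in the system that produced them. The field names (`prev_hash`, `chain_hash`) mirror the Sigil example above but are assumptions about the schema, and the helper `make_record` is purely hypothetical.

```python
import hashlib
import json

def verify_chain(records: list[dict]) -> bool:
    """Recompute each chain hash and confirm every record links to the last."""
    expected_prev = "0" * 64  # assumed genesis marker
    for rec in records:
        if rec["prev_hash"] != expected_prev:
            return False  # gap, deletion, or reordering detected
        body = {k: v for k, v in rec.items() if k not in ("chain_hash", "signature")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["chain_hash"]:
            return False  # record contents were altered after the fact
        expected_prev = rec["chain_hash"]
    return True

def make_record(prev: str, action: str, target: str) -> dict:
    """Hypothetical helper: build one hash-linked record for the demo."""
    body = {"agent_id": "data-processor", "action": action,
            "target": target, "prev_hash": prev}
    h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "chain_hash": h}

r1 = make_record("0" * 64, "read", "shared_state.json")
r2 = make_record(r1["chain_hash"], "write", "shared_state.json")
chain = [r1, r2]
assert verify_chain(chain)       # intact history verifies

chain[0]["action"] = "write"     # rewrite history...
assert not verify_chain(chain)   # ...and the chain check catches it
```

Note that this checks continuity only; verifying *authorship* additionally requires checking each record's signature against the agent's public key, which with Ed25519 any observer can do without holding any secret.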
Getting started
If you want to add identity to your agent system, the simplest starting point is Sigil (pip install sigil-notary). It gives each agent a signing identity, hash-chains their actions, and produces attestations you can verify offline.
But the principle matters more than the tool. Whatever you build, start with identity. Not permissions. Not logging. Identity. Because everything else in the trust stack depends on knowing, with certainty, who did what.
Building trust infrastructure for AI agents. Follow for weekly patterns on agent architecture, governance, and the systems underneath.
Try Sigil: github.com/chaddhq/sigil | Subscribe: The Alignment Layer