Forbes published a cybersecurity governance playbook for agentic AI yesterday. The core argument: behavioral detection is no longer enough. As agents become more human-like, security must shift toward cryptographic anchors that tie identity to something verifiable.
Their specific claim: proof of personhood is the only way to prevent identity swarming — where a single attacker deploys thousands of autonomous agents to mimic legitimate users.
Meanwhile, Strata published a comprehensive guide to agentic AI risks in 2026. Their #1 risk: unmanaged agent identities. Most enterprises lack a consistent way to provision, track, and retire AI agent credentials.
Both pieces converge on the same conclusion: existing IAM was not built for this.
What they get right
Forbes correctly identifies that behavioral detection alone fails against sophisticated agent impersonation. When agents can mimic human interaction patterns, you need something deeper — a cryptographic anchor that proves identity independent of behavior.
Strata correctly identifies that the principle of least privilege, well understood for human IAM, breaks down for ephemeral agents that spin up and down dynamically.
Both recognize that prompt injection — where malicious instructions are embedded in content agents process — creates attack surfaces that human-centric security never anticipated.
What they miss
Neither piece addresses the cross-protocol problem.
Enterprise agents do not live in a single identity system. An agent might have a did:key identity in one protocol, a did:web identity in another, and a registry-backed did:aip identity in a third. If these identity systems cannot verify each other's claims, the "proof of personhood" that Forbes advocates for becomes fragmented — valid in one context, meaningless in another.
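To make the fragmentation concrete, here is a minimal sketch of what cross-protocol resolution looks like: each DID method routes to its own resolver, but every resolver ultimately hands back an Ed25519 public key, so the downstream signature check is identical. The resolver internals below are stand-ins (local dicts instead of HTTPS lookups or registry calls), and none of the function names come from the real AIP, APS, or Kanoniv APIs.

```python
# Illustrative only: DID-method dispatch with stubbed resolvers.
# In practice did:web resolves over HTTPS and did:aip against a registry.

WEB_DOCS = {"did:web:agents.example.com": "e1" * 32}      # stand-in DID documents
AIP_REGISTRY = {"did:aip:7f3c": "a2" * 32}                # stand-in registry

def resolve_did_key(did):
    # did:key embeds the key in the identifier itself; here we pretend
    # the suffix is simply the hex-encoded Ed25519 public key.
    return did.removeprefix("did:key:")

def resolve_did_web(did):
    return WEB_DOCS[did]

def resolve_did_aip(did):
    return AIP_REGISTRY[did]

RESOLVERS = {"key": resolve_did_key, "web": resolve_did_web, "aip": resolve_did_aip}

def public_key_for(did):
    """Route any supported DID method to its resolver. Because every
    method yields an Ed25519 key, one signature check serves all three."""
    method = did.split(":")[1]
    return RESOLVERS[method](did)
```

The point of the sketch: the verification logic never branches on the DID method. Only key resolution does.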
Three engines just proved this is solvable
This week, three independent identity engines — AIP (Agent Identity Protocol), the Agent Passport System (APS), and Kanoniv's agent-auth framework — completed a full 3×3 cross-protocol verification matrix.
Each engine produced a signed Ed25519 delegation chain. Each of the other two engines independently verified it. The results:
| Verifier ↓ / Chain producer → | Kanoniv (did:key) | APS (did:aps) | AIP (did:aip) |
|---|---|---|---|
| Kanoniv | — | ✅ | ✅ |
| APS | ✅ | — | ✅ |
| AIP | ✅ | ✅ | — |
Three DID methods. Three verification approaches. All Ed25519 underneath. Every cell verified.
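The structural checks each cell of the matrix represents can be sketched as follows: walk the delegation chain and verify, at every hop, the signature, the expiry, and that scopes only narrow. This is a stdlib-only illustration, so HMAC stands in for the Ed25519 signatures the real chains use; field names and the helper functions are assumptions, not any engine's actual format.

```python
import hashlib
import hmac
import json
import time

def sign(key, link):
    # Canonical JSON over everything except the signature itself.
    payload = json.dumps({k: v for k, v in link.items() if k != "sig"},
                         sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()  # Ed25519 stand-in

def verify_chain(chain, keys, now=None):
    now = now or time.time()
    parent_scopes = None
    for link in chain:
        # 1. Signature: each link must verify under its issuer's key.
        if not hmac.compare_digest(link["sig"], sign(keys[link["issuer"]], link)):
            return False
        # 2. Expiry: every link must still be live.
        if link["expires"] < now:
            return False
        # 3. Scope containment: permissions may only narrow down-chain.
        scopes = set(link["scopes"])
        if parent_scopes is not None and not scopes <= parent_scopes:
            return False
        parent_scopes = scopes
    return True
```

A two-link chain where an orchestrator delegates a narrowed scope to a sub-agent passes; the same chain with a widened leaf scope fails the containment check, regardless of which engine runs it.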
Why this matters for the Forbes argument
Forbes wants cryptographic anchors for agent identity. We have them — and they work across protocol boundaries.
The cross-protocol proof demonstrates something specific: an agent's identity claim, made in one system, can be independently verified by a completely different system that uses a different DID method and a different trust model. This is the infrastructure that makes "proof of personhood" (or proof of agent-hood) portable rather than locked to a single vendor.
The next frontier: decision artifact verification
The 3×3 matrix proved that execution is verifiable across engines — delegation chains, signatures, scope containment.
The conversation has now moved to a harder question: can engines verify each other's decisions?
Not just "was this execution authorized?" but "was this authorization itself produced correctly?"
This requires a new artifact type — a signed decision object that includes:
- The intent (what was requested)
- The evaluation proof (why it was permitted or denied)
- The trust context (what trust signals were consumed)
- A determinism declaration (can this decision be replayed?)
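One possible shape for such an artifact, assembled and content-addressed with a canonical JSON digest so another engine can audit or re-sign it. The top-level field names follow the four bullets above; everything else (the helper name, the digest choice, the nested layout) is an assumption on my part, not the draft specification.

```python
import hashlib
import json

def decision_artifact(intent, permitted, reasons, trust_signals, deterministic):
    artifact = {
        "intent": intent,                        # what was requested
        "evaluation": {"permitted": permitted,   # why it was permitted or denied
                       "reasons": reasons},
        "trust_context": trust_signals,          # which trust signals were consumed
        "determinism": {"replayable": deterministic},  # can this be replayed?
    }
    # Canonical serialization makes the digest reproducible by any engine.
    canonical = json.dumps(artifact, sort_keys=True, separators=(",", ":"))
    artifact["digest"] = hashlib.sha256(canonical.encode()).hexdigest()
    return artifact  # the digest is what an engine would sign with Ed25519
```

Because the digest is computed over a canonical serialization, two engines that rebuild the same decision independently arrive at the same hash, which is what makes cross-engine auditability possible.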
Three engines are now iterating on this specification in real time. The goal: an agent's authorization decision, made in one engine, should be independently auditable by another engine — even if they use completely different trust models.
What enterprises should take from this
Cryptographic identity is necessary but not sufficient. You also need cross-protocol verification — otherwise your identity anchors are only valid within a single vendor's system.
Delegation chains are the right abstraction for agent permissions. Not static role assignments, not OAuth scopes designed for humans — cryptographically signed delegation chains that narrow permissions as they propagate.
Trust is multi-layered. Structural verification (signatures, expiry, scope containment) should be deterministic and universal. Trust-informed verification (behavioral scores, reputation) should be transparent and declared. Both matter.
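The two layers can be sketched as a verdict object that keeps them separate: a deterministic structural result every engine must agree on, plus an engine-specific trust score whose inputs are declared rather than hidden. Names and the averaging scheme here are illustrative assumptions.

```python
def verify(chain_ok, expiry_ok, scope_ok, trust_signals, threshold=0.7):
    # Layer 1: structural checks are boolean and universal.
    structural = chain_ok and expiry_ok and scope_ok
    # Layer 2: trust signals are engine-specific but must be declared.
    trust = sum(trust_signals.values()) / max(len(trust_signals), 1)
    return {
        "structural_pass": structural,           # deterministic, same everywhere
        "trust_score": trust,                    # engine's own weighting
        "trust_pass": trust >= threshold,
        "signals_used": sorted(trust_signals),   # transparency: what was consumed
    }
```

Keeping the layers separate means a structural failure can never be papered over by a high reputation score, and a trust disagreement between engines never breaks structural interoperability.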
Forbes and Strata are right that the problem is urgent. The solution is not another enterprise IAM product that treats agents like weird humans. It is cryptographic identity infrastructure designed for agents from the ground up.
The 3×3 verification matrix is the proof that this works. The decision artifact specification is what comes next.
AIP is open source: `pip install aip-identity && aip init` gives any agent a cryptographic identity in one line. The cross-protocol interop work is happening at kanoniv/agent-auth#2.