Palo Alto Networks Unit 42 published their AI agent security tradeoffs analysis today. Strata published their agentic AI risks guide on Security Boulevard. Reco launched AI Agent Security for SaaS sprawl. Three publications in 24 hours, all circling the same problem.
The money quote from Unit 42:
"Currently, agentic identity is a difficult problem to solve. Agents generally need to be able to perform actions using the user's permissions. OAuth2 is a secure standard for the delegation of permissions, but it has blind spots."
This is Palo Alto Networks — not a startup positioning deck, not a VC thesis. Their threat research team is telling enterprises that the identity problem for agents is unsolved and that the standard they rely on (OAuth2) cannot cover the full surface.
## What Unit 42 Actually Found
Their analysis identifies two attack pathways:
1. Open source ecosystem attacks. Model file attacks (malicious code hidden in model weights on trusted repos) and MCP rug pulls (compromised MCP servers that silently modify behavior after integration). No standardized signing or integrity checks exist for models.
2. Compromised internal agents. A compromised agent is a "supercharged insider threat" — it can send fraudulent messages, alter approvals, exfiltrate data, and approve incorrect financial actions. Because agents are trusted internally, suspicious behavior goes unnoticed until something breaks.
Their recommendation: treat agents as potentially rogue employees. Implement hard permission boundaries. Log everything. Do not rely on system prompt instructions for security.
But they acknowledge the gap: how do you identify the agent in the first place?
## Strata's Agentic AI Risks Guide
Strata's Security Boulevard piece makes the same diagnosis from the IAM angle:
- "Unmanaged agent identities are the biggest gap"
- "Most enterprises lack a consistent way to provision, track, and retire AI agent credentials"
- "Existing IAM tools weren't built for this — human-centric identity services lack the ability to handle ephemeral agents, MCP-layer authorization, and end-to-end agentic workflow traceability"
- "Shadow AI is already happening in most organizations and cannot be secured until discovered"
Their key insight: least-privilege access and full auditability are non-negotiable, but the tooling to enforce them for agents does not exist in current identity stacks.
## Reco's Agent Sprawl Problem
Reco launched what they call "industry-first AI Agent Security" — automatic discovery of AI agents across Copilot, ChatGPT, Salesforce Agentforce, Make, n8n, and custom integrations. Their CEO Ofer Klein:
"Enterprises today don't just have hundreds of connected SaaS apps — they have thousands of connected AI agents operating in the background."
Their approach: discover agents first, then govern them. The problem: discovery requires agents to have identities in the first place. If an agent has no verifiable identity, you are discovering service accounts and API keys, not agents.
## The Pattern
Three publications, 24 hours, same conclusion:
| Company | Problem Identified | Gap |
|---|---|---|
| Unit 42 | OAuth2 blind spots for agent delegation | No agent identity standard |
| Strata | IAM not built for ephemeral agents | No provisioning/retirement lifecycle |
| Reco | Agent sprawl across SaaS | Discovery requires identity to exist |
Every solution assumes agents already have verifiable identities. None of them provide the identity layer.
## What Cryptographic Agent Identity Solves
This is the gap AIP was built to fill:
- Agent identity that is not a service account. Each agent gets a DID (decentralized identifier) backed by an Ed25519 keypair. The identity belongs to the agent, not to the platform hosting it.
- Hard verification, not soft trust. Challenge-response cryptographic verification. Not "this API key is valid" but "this agent can prove it holds the private key for this DID, right now."
- Delegation with scope narrowing. Vouch chains with explicit scopes — an agent vouched for code review cannot escalate to financial approval. This is the "monotonic narrowing" that Unit 42's analysis is missing from OAuth2.
- Behavioral trust scoring. PDR (Probabilistic Delegation Reliability) measures whether agents actually deliver on promises over time. Identity alone is necessary but not sufficient — you also need behavioral proof.
- Cross-protocol resolution. `did:aip`, `did:key`, `did:web`, `did:aps` — because the identity problem is not going to be solved by one protocol.
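The challenge-response pattern above can be sketched with off-the-shelf Ed25519 primitives (here via the `cryptography` package). The function names `issue_challenge`, `respond`, and `verify` are illustrative, not the AIP API:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def issue_challenge() -> bytes:
    # Verifier sends a fresh random nonce so a captured signature
    # cannot be replayed later.
    return os.urandom(32)

def respond(private_key: Ed25519PrivateKey, challenge: bytes) -> bytes:
    # Agent proves possession of the private key by signing the nonce.
    return private_key.sign(challenge)

def verify(public_key, challenge: bytes, signature: bytes) -> bool:
    # Verifier checks the signature against the public key published
    # in the agent's DID document.
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

agent_key = Ed25519PrivateKey.generate()
challenge = issue_challenge()
signature = respond(agent_key, challenge)
assert verify(agent_key.public_key(), challenge, signature)
```

The point of the nonce is "right now": the verifier learns the agent holds the key at verification time, not that it held it at some point in the past.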
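Monotonic narrowing itself is a small invariant: each link in a vouch chain can only intersect scopes with its parent, never add to them. A minimal sketch, with hypothetical scope strings:

```python
def narrow(parent_scopes: set[str], requested: set[str]) -> set[str]:
    # A vouch can grant at most what the voucher itself holds:
    # the effective scope is the intersection, never a superset.
    return parent_scopes & requested

# Hypothetical chain: root agent -> reviewer agent -> sub-agent
root = {"code:review", "code:comment", "finance:approve"}
reviewer = narrow(root, {"code:review", "code:comment"})
sub_agent = narrow(reviewer, {"code:review", "finance:approve"})

# The escalation attempt silently fails: finance:approve is dropped
# because the reviewer never held it.
assert sub_agent == {"code:review"}
```

Because intersection can only shrink a set, no agent anywhere down the chain can end up with a permission its voucher lacked — which is exactly the property plain OAuth2 token passing does not enforce.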
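The text does not give the PDR formula, so purely as an illustration: a smoothed fulfilled-to-total ratio captures the idea that behavioral trust is earned over repeated deliveries rather than asserted up front:

```python
def pdr_score(fulfilled: int, total: int) -> float:
    # Hypothetical reliability estimate with Laplace (add-one) smoothing:
    # an agent with no history starts at a neutral 0.5 instead of 0 or 1.
    return (fulfilled + 1) / (total + 2)

new_agent = pdr_score(0, 0)        # no track record yet -> 0.5
proven_agent = pdr_score(48, 50)   # long record of delivering
flaky_agent = pdr_score(5, 50)     # long record of failing

assert new_agent == 0.5
assert flaky_agent < new_agent < proven_agent
```

Identity answers "who is this agent"; a score like this answers "should I delegate to it" — the two are complementary, not interchangeable.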
Unit 42 says "log everything." We agree — but logs without cryptographic identity attribution are forensics after the fact. Cryptographic identity makes prevention possible.
## Try It
```shell
pip install aip-identity
aip init
aip verify <other-agent-did>
```
22 agents on the live network. 645 tests. Cross-protocol DID resolution. Live trust observatory.
Unit 42 identified the problem. The standard they rely on has blind spots. The identity layer that fills those blind spots needs to be cryptographic, agent-native, and protocol-agnostic.
Sources: Unit 42 — Navigating Security Tradeoffs of AI Agents, Security Boulevard — A Guide to Agentic AI Risks in 2026, Reco AI Agent Security Launch