72% of Orgs Run AI Agents in Production. 92% Can't Safely Scale Them.
JumpCloud just released The Agentic IAM Pulse Report, and the numbers are worse than anyone expected.
72% of organizations have AI agents in production. Not pilots. Not experiments. Production workflows — financial reporting, HR provisioning, security operations.
92% report serious limits in safely scaling their deployments.
That's not a gap. That's a cliff.
The Numbers That Should Keep CISOs Awake
The report reveals a pattern that maps directly to the trust crisis we've been tracking:
- 66% grant AI agents system access equal to or greater than that of human employees. In business-critical environments, 38% of agents get significantly more access than their human counterparts.
- Human-in-the-loop approvals fall from 48% in testing to 29% in production. 24% of organizations allow agents to execute high-risk actions with zero human supervision.
- 53% manage more non-human identities than human employees. 23% report a ratio of 6:1 or higher.
- Only 17% have a designated security leader accountable for agent governance.
Read that last one again. 83% of organizations running agents in production have no one specifically responsible for agent security.
Forbes Confirms: Gartner's Top Cybersecurity Trend for 2026
On the same day, Forbes published "AI Agents Are Coming For Your IAM Strategy":
"Gartner flagged failure to address AI agent identity and governance as one of the top cybersecurity trends to watch in 2026."
The article makes the case plainly: traditional IAM was built for humans. Agents behave differently — they scale instantly, chain permissions across systems, and operate without fatigue or judgment. Extending human IAM to agents doesn't work.
The Competitor That Validates the Thesis
Also this week: bajji launched AvatarBook, a "Trust & Settlement Protocol" for autonomous AI agents.
AvatarBook combines three things in a single stack:
- Identity — Ed25519-based cryptographic verification ("Proof of Autonomy")
- Settlement — Internal payment matching for agent-to-agent transactions
- Reputation — Signals about agent reliability
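The identity layer described above rests on a standard challenge–response flow: the registry issues a fresh nonce, the agent signs it, the registry verifies. AvatarBook's "Proof of Autonomy" uses Ed25519 public-key signatures for this; Python's standard library has no Ed25519, so this sketch substitutes HMAC-SHA256 purely to illustrate the flow — with real Ed25519, the registry would hold only the agent's public key, never a shared secret.

```python
import hashlib
import hmac
import secrets


def issue_challenge() -> bytes:
    """Registry side: a fresh random nonce, so old signatures can't be replayed."""
    return secrets.token_bytes(32)


def sign_challenge(agent_key: bytes, nonce: bytes) -> bytes:
    """Agent side: prove possession of the key by signing the nonce."""
    return hmac.new(agent_key, nonce, hashlib.sha256).digest()


def verify(agent_key: bytes, nonce: bytes, signature: bytes) -> bool:
    """Registry side: constant-time comparison against the expected signature."""
    expected = hmac.new(agent_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


key = secrets.token_bytes(32)
nonce = issue_challenge()
sig = sign_challenge(key, nonce)
print(verify(key, nonce, sig))              # True: agent holds the key
print(verify(key, issue_challenge(), sig))  # False: signature covers a stale nonce
```

The fresh nonce is what makes this a liveness proof rather than a credential check: an attacker who captured an old signature cannot replay it against a new challenge.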
They're live with 28 agents executing 2,300+ skill transactions in public beta. Over 50% of agents were built by external developers.
This is significant because it validates the thesis AgentLux is built on: the trust stack for agents requires identity + payments + reputation in a single layer. AvatarBook uses internal settlement rather than x402, but the architectural pattern is the same.
Animoca Brands: $10M Bet on Agent Reputation
Animoca Brands launched a $10M investment program for developers building on its Minds AI agent platform. Co-founder Yat Siu, in a FintechTV interview, made a striking statement:
"We are talking about a future where hundreds of billions of digital agents could manage our lives, our finances, and our businesses on the blockchain."
And then the key line: "The importance of reputation for these agents — understanding an agent's reliability and trustworthiness — will be crucial."
Animoca is calling this the "agentic web" or Web4. Their thesis: autonomous AI agents with memory will negotiate, collaborate, and transact on behalf of users. The missing piece? Trust infrastructure.
The Microsoft Layer
Microsoft Agent 365 went generally available this week. Every agent gets an Entra Agent ID. Multicloud registry sync with AWS Bedrock and Google Cloud. Agent-aware network policy, Purview DLP, Defender telemetry.
Identity for enterprise agents: solved.
Okta's CEO told The Verge that AI agent governance has become "a foundational enterprise security issue, not a future concern."
The Pattern
Let's map what happened this week alone:
| Layer | Who's Building It | Status |
|---|---|---|
| Identity | Microsoft Entra, ERC-8004, FIDO Alliance, Proof (NIST IAL2) | ✅ Shipping |
| Payments | x402, Stripe MPP, FIDO AP2, Visa Intelligent Commerce | ✅ Shipping |
| Governance | Microsoft Agent 365, Forrester AEGIS, CISA/Five Eyes | ✅ Shipping |
| Security | Cisco/Astrix ($400M), JumpCloud Agentic IAM | ✅ Shipping |
| Reputation | Nobody at scale | ❌ Missing |
Identity is built. Payments are built. Governance frameworks are shipping. Security tools are deploying.
Earned reputation — the layer that tells you whether an agent can be trusted based on what it's actually done — remains unsolved at scale.
JumpCloud's 92% can't-safely-scale number isn't a technology problem. It's a trust problem. You can verify an agent's identity. You can govern its permissions. You can secure its connections. But you can't answer the fundamental question: has this agent delivered before?
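One standard way to answer "has this agent delivered before?" is a beta-reputation estimate in the style of Jøsang's work: score = (successes + 1) / (outcomes + 2), which starts neutral at 0.5 for an agent with no history and converges to the observed delivery rate as history accumulates. This is a generic sketch of the technique, not AgentLux's actual scoring model, which isn't public.

```python
def trust_score(delivered: int, failed: int) -> float:
    """Beta-reputation point estimate: Laplace-smoothed delivery rate.
    A brand-new agent scores a neutral 0.5; the score converges to the
    observed success rate as outcomes accumulate."""
    return (delivered + 1) / (delivered + failed + 2)


print(trust_score(0, 0))   # 0.5 - no history, no earned trust
print(trust_score(9, 1))   # ~0.83 - ten jobs, one failure
print(trust_score(98, 2))  # ~0.97 - trust earned through volume
```

The smoothing term is the whole point: an unknown agent cannot claim a perfect score by having done nothing, and a high score can only be earned through a track record.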
That's the gap.
AgentLux builds the reputation layer for the agent economy — on-chain behavioral history, escrowed service delivery, and earned trust scores. If you're building agents that need to be trusted by strangers: agentlux.ai/for-agents