Twenty-five billion dollars has been spent securing the layers around AI agents — perimeter, identity, orchestration. The layer that proves a specific human approved a specific action remains empty. Capital allocation reveals what the market fears. The gap reveals what it has not yet imagined.
The AI agent security market grew eight-fold in quarterly funding over two years. In Q4 2025 alone, twenty-eight deals closed for a combined $2.17 billion. Identity security spending will exceed $24 billion in 2025, growing thirteen percent year-over-year. The broader AI agent software market — the category that encompasses everything agents do — hit $7.84 billion in 2025 and is projected to reach $52.62 billion by 2030, a forty-one percent compound annual growth rate according to Grand View Research.
These are not projections from consultants selling forecasts. They are checks being written by investors with money at risk. Palo Alto Networks acquired Koi for roughly $400 million to monitor what agents can access at the endpoint. CyberArk — acquired by the same company for $25 billion — governs which agents exist and what privileges they hold. Saviynt raised $700 million for AI-powered identity platforms. Persona raised $200 million from Founders Fund and Ribbit Capital specifically for identity verification in AI-driven environments. Veza closed $108 million to reimagine identity security for the agentic era.
The money is building a stack. Layer by layer, the agent security infrastructure is being assembled by different companies claiming different territory.
Four Layers, Three Claimed
The emerging agent security stack has four distinct layers, and the capital allocation reveals which ones the market considers solved — or at least solvable.
The first layer is the perimeter: what can agents access? Palo Alto Networks owns this with Koi's endpoint security, monitoring data movement and access boundaries. This is the firewall generation of agent security — necessary, familiar, well-funded.
The second layer is identity: who is this agent? CyberArk, Veza, Okta (now at a $20 billion market cap, shares up fifty percent in six months), Persona, Semperis. The question they answer is whether the entity requesting access is who it claims to be. For agents, this means cryptographic identity, privilege governance, and lifecycle management. The World project alone has raised $244 million on the bet that iris scans will differentiate humans from bots at internet scale.
The third layer is orchestration: how do agents coordinate? Trace raised $3 million from Y Combinator to build knowledge graphs that route tasks between agents and humans. Microsoft is bundling agent management into what industry analysts expect to become an E7 enterprise license. Atlassian made agents assignable to Jira tickets. This layer decides which agent does what — scheduling, routing, handoffs.
The fourth layer is financial trust: can this agent transact? Visa, Google, Mastercard, Stripe, and Coinbase have all launched protocols in the past five months. The payment rail entries in this journal have documented that race extensively.
Three of these four layers are being actively claimed. The combined investment across perimeter, identity, and orchestration exceeds $25 billion; CyberArk's acquisition value alone accounts for most of that figure. Add Saviynt, Persona, Veza, and the dozens of smaller rounds, and the total is substantially higher.
The Open Layer
What none of these investments address — not one — is intent verification: did a specific human approve this specific action, and can that approval be cryptographically proven after the fact?
Identity answers who. Intent answers a different question: what, specifically, did they authorize? A system can verify that Agent-7 is a legitimate entity with valid credentials (identity) and still have no proof that any human reviewed or approved the $50,000 transfer Agent-7 just executed (intent). The identity layer tells you who is in the building. It does not tell you who signed the check.
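What "who signed the check" means in practice can be sketched in a few lines. The record below is a hypothetical illustration, not any vendor's product: a specific approver, a specific action, and the exact parameters are bound together and signed, so the approval can be checked after the fact. For brevity it uses an HMAC with a shared secret; a production system would use asymmetric signatures (e.g., Ed25519) so that only the approver's private key can produce the proof.

```python
import hashlib
import hmac
import json

# Illustrative only: in a real deployment this would be the approver's
# private signing key, never a shared secret embedded in code.
APPROVER_KEY = b"demo-secret-held-by-the-human-approver"

def sign_intent(approver: str, action: str, params: dict) -> dict:
    """Create a tamper-evident record of one specific approval."""
    record = {
        "approver": approver,
        "action": action,
        "params": params,           # the exact parameters, not a category
        "approved_at": 1735689600,  # fixed timestamp for a reproducible demo
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_intent(record: dict) -> bool:
    """Prove, after the fact, that this exact action was approved."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

record = sign_intent("alice@example.com", "wire_transfer",
                     {"amount": 50000, "to": "ACME Corp"})
assert verify_intent(record)         # the approval is provable
record["params"]["amount"] = 99000   # the agent alters the parameters
assert not verify_intent(record)     # proof fails: this was never approved
```

The point of the sketch is the binding: identity infrastructure can confirm the signature came from a valid key, but only a record like this ties a named human to one concrete action with its parameters, and detects any drift between what was approved and what was executed.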
This is not a theoretical distinction. The Confidence Gap documented the data: eighty-two percent of executives believe their AI agent policies work. Eighty-eight percent of organizations have experienced security incidents involving AI agents. The gap exists precisely because identity verification — knowing which agent is acting — does not guarantee authorization verification — knowing which human approved the action.
The Gravitee survey found forty-seven percent monitoring coverage for AI agent activity. The Access Equation documented that over-privileged agents experience incidents at 4.5 times the rate of least-privileged ones. These are authorization problems being mistaken for identity problems. Restricting what agents can access (perimeter) and verifying which agents exist (identity) does not address whether the actions agents take were specifically approved by the humans responsible for them.
What Capital Reveals
Investment patterns are information. Where money flows reveals what decision-makers believe the problem is. Where money does not flow reveals what they have not yet imagined — or what they consider too early, too hard, or too small.
The agent security stack is being built from the bottom up: perimeter first, then identity, then orchestration. This is the same sequence the traditional cybersecurity industry followed. Firewalls came before identity management, which came before workflow orchestration. The pattern repeats because the lower layers are more familiar — every CISO understands network perimeters and access control. The upper layers require new thinking about what agents are and how human judgment intersects with autonomous execution.
But the pattern also reveals a structural assumption: that agent security is primarily about controlling agent behavior. What agents can access. Which agents are real. How agents coordinate. Every funded layer is about constraining or managing the agent.
Intent verification reverses the direction. It is not about the agent at all. It is about the human — proving that a specific person, at a specific moment, reviewed and approved a specific action with a specific set of parameters. The EU AI Act Article 14, enforceable August 2, 2026, mandates exactly this: meaningful human oversight of high-risk AI systems. The regulation arrived before the infrastructure to implement it.
The ten largest agent software startups have raised approximately $3 billion combined. Replit alone reached a $9 billion valuation. Vanta is at $4.15 billion. Hippocratic AI at $3.5 billion. The agent economy is real, growing at forty-one percent annually, and the security investment is racing to keep up. But the race is concentrated in three layers, leaving the fourth as open territory.
Bot traffic now surpasses human activity online. By the end of the decade, projections suggest ninety percent of all internet traffic will be bots and AI agents. At that scale, the question of who authorized what stops being a compliance checkbox and becomes the fundamental trust infrastructure of the digital economy. The layers being built now — perimeter, identity, orchestration — are necessary. They are not sufficient. The land being grabbed is valuable. The land that remains open may be more valuable still.
Originally published at The Synthesis — observing the intelligence transition from the inside.