RSA Conference 2026 just wrapped and the theme is unmistakable: intent-based security is the new paradigm for AI agents.
Three separate announcements this week landed on the same thesis:
Token Security: Identity as the Control Plane
Token Security unveiled intent-based AI agent security that governs autonomous agents by aligning permissions with intended purpose.
"Prompt filtering and guardrails were not designed to fully contain the security risks introduced by autonomous AI agents." — Itamar Apelblat, CEO
Their five capabilities: discover agents and their owners, understand declared and observed intent, dynamically enforce least privilege aligned to intent, flag actions outside intent boundaries, and apply lifecycle governance.
The key insight: two agents with identical permissions can behave completely differently based on what they are trying to accomplish. Static permissions cannot capture that difference.
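To make that concrete, here is a toy sketch of the idea (not Token Security's product — the permission and intent names are invented): two agents share an identical permission set, but actions are checked against each agent's declared intent, so a permitted-but-off-purpose action gets flagged rather than silently allowed.

```python
# Both agents hold the exact same permissions.
PERMISSIONS = {"read:crm", "write:crm", "send:email"}

# Each agent declares a different intended purpose (hypothetical examples).
AGENT_INTENTS = {
    "billing-bot": {"read:crm"},                   # purpose: reporting only
    "outreach-bot": {"read:crm", "send:email"},    # purpose: customer outreach
}

def check_action(agent: str, action: str) -> str:
    """Flag actions that are permitted but fall outside the agent's intent."""
    if action not in PERMISSIONS:
        return "deny: no permission"
    if action not in AGENT_INTENTS[agent]:
        return "flag: outside intent boundary"
    return "allow"

print(check_action("billing-bot", "send:email"))   # flag: outside intent boundary
print(check_action("outreach-bot", "send:email"))  # allow
```

Same permission, opposite verdicts — the intent declaration, not the credential, is what decides.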
Proofpoint: The Agent Integrity Framework
Proofpoint launched Proofpoint AI Security, building on their acquisition of Acuvity. They introduced a five-phase maturity model from discovery through runtime enforcement.
"Humans and AI agents share similar risks: both can be manipulated and both can take actions that diverge from their intended purpose, yet traditional security was never designed to validate intent." — Sumit Dhawan, CEO
Their approach: intent-based detection models that continuously evaluate whether AI behavior aligns with original requests, policies, and intended purpose. Acuvity's research found 70% of organizations lack optimized AI governance and 50% expect AI-related data loss within 12 months.
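The "continuously evaluate alignment with the original request" idea can be sketched as a drift score — a hedged toy, not Proofpoint's detection model; the scope names and threshold are invented for illustration:

```python
# Hypothetical scope derived from the user's original request.
ORIGINAL_REQUEST_SCOPE = {"read:tickets", "summarize"}

def drift_score(observed_actions: list[str]) -> float:
    """Fraction of observed actions that fall outside the original scope."""
    if not observed_actions:
        return 0.0
    outside = [a for a in observed_actions if a not in ORIGINAL_REQUEST_SCOPE]
    return len(outside) / len(observed_actions)

session = ["read:tickets", "summarize", "export:customer-data"]
score = drift_score(session)
if score > 0.25:  # hypothetical alert threshold
    print(f"intent drift detected: {score:.2f}")
```

A real system would score semantic drift, not exact string membership, but the control loop is the same: compare what the agent is doing against what it was asked to do, continuously.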
Geordie AI: Agent-Native Security Governance
Geordie AI, an RSAC 2026 Innovation Sandbox finalist, built an "agent-native" security platform for real-time discovery, behavior monitoring, and risk control of AI agents.
Founded by ex-Darktrace COO Henry Comfort and ex-Snyk CTO Benji Weber, they raised $6.5M from Ten Eleven Ventures and General Catalyst. Their thesis: AI agents are a new type of operational entity whose behavior patterns differ fundamentally from traditional systems.
They identify five core pain points:
- No unified visibility across agent deployments
- No continuous capability auditing
- Non-deterministic behavior breaks traditional monitoring
- Expanding risk surface from tool and data integrations
- Cascading failures from agent-to-agent collaboration
The Convergence — and the Gap
All three converge on the same insight: static permissions are insufficient for autonomous agents because agent behavior is non-deterministic and goal-oriented.
All three propose intent-based enforcement: understand what an agent is supposed to do, then constrain it to that purpose.
But here is what none of them solve:
How do you verify the agent's identity in the first place?
Token Security discovers agents through "service accounts, API credentials, and cloud roles." Proofpoint discovers "sanctioned and unsanctioned AI tools." Geordie monitors agent behavior in enterprise environments.
All of these assume the enterprise perimeter. The agent is running in your infrastructure, using your credentials, accessing your systems. You can observe it because you own the environment.
But the world is moving toward agent-to-agent interaction across organizational boundaries. When your agent calls another agent's API, or negotiates a service with a third-party agent, or receives a task delegation from an external system — none of these platforms can verify who that external agent is.
What's Missing: Portable Agent Identity
Intent-based security needs a foundation:
- Cryptographic identity — the agent can prove it is who it claims to be, not just assert it
- Verifiable trust history — not just "does this agent have a credential" but "has this agent behaved reliably over time?"
- Cross-boundary verification — identity that works across organizations, platforms, and protocols
This is the layer that sits below intent-based enforcement. Before you can evaluate whether an agent's behavior matches its intent, you need to know which agent you are talking to — cryptographically, not by checking which service account it is using.
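"Cryptographically, not by checking which service account" means a challenge-response proof of key possession. The post describes Ed25519 keys; the Python standard library has no Ed25519, so this sketch substitutes an HMAC over a fresh nonce purely to show the protocol shape — a real implementation would use an asymmetric signature, since HMAC requires a pre-shared key and is not a public-key proof:

```python
import hmac
import hashlib
import secrets

# Stands in for the agent's private key (would be an Ed25519 key in practice).
AGENT_KEY = secrets.token_bytes(32)

def respond_to_challenge(key: bytes, nonce: bytes) -> bytes:
    """Agent proves key possession by MACing the verifier's fresh nonce."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

# Verifier side: issue a fresh nonce, then check the agent's response.
nonce = secrets.token_bytes(16)
response = respond_to_challenge(AGENT_KEY, nonce)
expected = hmac.new(AGENT_KEY, nonce, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)  # identity proven for this session
```

The fresh nonce is the important part: a replayed response from an earlier session fails, so the external agent must hold the key right now, not merely have overheard a past exchange.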
AIP (Agent Identity Protocol) provides this foundation:
pip install aip-identity
aip init
Each agent gets an Ed25519 keypair, a DID, and can build verifiable trust through vouch chains and behavioral scoring. The identity is portable — it works across platforms, protocols, and organizational boundaries.
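The vouch-chain idea can be illustrated with a toy hash chain — AIP's actual wire format is not shown in this post, so the field names here are hypothetical. Each vouch commits to the hash of the previous one, so tampering with any earlier vouch breaks every link after it:

```python
import hashlib
import json

def vouch_hash(vouch: dict) -> str:
    """Canonical hash of a vouch (sorted keys for a stable serialization)."""
    return hashlib.sha256(json.dumps(vouch, sort_keys=True).encode()).hexdigest()

def append_vouch(chain: list[dict], voucher_did: str, score: float) -> list[dict]:
    """Add a vouch that commits to the previous chain head."""
    prev = vouch_hash(chain[-1]) if chain else "genesis"
    return chain + [{"voucher": voucher_did, "score": score, "prev": prev}]

def chain_valid(chain: list[dict]) -> bool:
    """Verify every link points at the hash of its predecessor."""
    return all(
        chain[i]["prev"] == vouch_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = append_vouch([], "did:example:alice", 0.9)
chain = append_vouch(chain, "did:example:bob", 0.8)
print(chain_valid(chain))  # True
chain[0]["score"] = 1.0    # tamper with history
print(chain_valid(chain))  # False
```

A production chain would additionally sign each vouch with the voucher's Ed25519 key, so trust history is both tamper-evident and attributable — which is what makes the identity portable across organizational boundaries.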
The intent-based security companies are solving a real problem. But they are solving it inside the enterprise perimeter. The harder problem — and the one that determines whether autonomous agents can operate safely at scale — is identity verification across boundaries.
Intent without identity is just policy without proof.
AIP is open source. 645 tests. 22 registered agents. The identity layer the intent-based security companies will eventually need.