Forbes published a piece today arguing that AI agent security is repeating the tech industry's oldest mistake: building software-first security that is fast and lightweight but ultimately insufficient.
The specific claim: we spent 30 years learning that perimeter-based, software-only security does not scale. Now we are making the same bet with AI agents — deploying guardrails and prompt filters as the primary security layer while autonomous systems inherit human credentials and operate across enterprise infrastructure.
Meanwhile, Token Security just announced intent-based AI agent security. The core idea: two agents with identical permissions can behave very differently depending on what they are trying to accomplish. Static permissions and past behavior are not enough. You need to understand what the agent is designed to do and enforce access based on that declared purpose.
Token Security's CEO Itamar Apelblat: "Prompt filtering and guardrails were not designed to fully contain the security risks introduced by autonomous AI agents."
The Three Layers That Are Emerging
From the Forbes piece, the A2A consortium, and Token Security, a layered architecture is becoming visible:
Layer 1: Identity. The agent must cryptographically prove who it is. Not "which API key was used" but "which specific agent, with which specific authorization chain, performed this action." This is what did:aip addresses — Ed25519 key pairs, delegation chains, verifiable identity independent of the platform the agent runs on.
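As a minimal sketch of what "cryptographically prove who it is" means in practice (this is not the did:aip implementation, just an illustration using the widely used `cryptography` package), an Ed25519 key pair lets a verifier tie an action to one specific agent:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical agent identity: a fresh Ed25519 key pair.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The agent signs an action record; anyone holding the public key
# can verify which specific agent performed the action.
action = b"read:crm/contacts by agent-7f3a"
signature = private_key.sign(action)

public_key.verify(signature, action)  # passes silently if authentic

try:
    public_key.verify(signature, b"tampered action")
except InvalidSignature:
    print("forged or altered action rejected")
```

Without the private key, no other process can produce a valid signature, which is the property "which API key was used" never gives you.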
Layer 2: Intent and Permissions. Once you know who the agent is, you need to know what it is supposed to do. Token Security's intent-based model fits here: discovering agents, understanding their declared purpose, enforcing least privilege aligned to that intent. This layer breaks when there is no cryptographic identity underneath it — you cannot enforce intent-based permissions on an agent you cannot identify.
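A toy version of intent-based enforcement looks like this (the policy names and the `authorize` function are hypothetical illustrations, not Token Security's actual model):

```python
# Hypothetical mapping from a declared purpose to the actions it justifies.
INTENT_POLICIES = {
    "invoice-reconciliation": {"read:erp/invoices", "read:bank/statements"},
    "customer-support": {"read:crm/tickets", "write:crm/tickets"},
}

def authorize(declared_intent: str, action: str) -> bool:
    """Permit an action only if it serves the agent's declared intent."""
    allowed = INTENT_POLICIES.get(declared_intent, set())
    return action in allowed

# Two agents could hold identical raw permissions, but intent differs:
assert authorize("invoice-reconciliation", "read:erp/invoices")
assert not authorize("invoice-reconciliation", "write:crm/tickets")
```

The lookup is only meaningful if `declared_intent` is bound to a verified identity; a spoofed agent can declare any intent it likes.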
Layer 3: Behavioral Monitoring. Even with identity and intent, agents are non-deterministic. You need to observe what they actually do and compare it to what they claimed they would do. This is where behavioral trust scoring matters — AIP's Promise-Delivery Ratio tracks whether agents keep their commitments over time, with temporal decay so old good behavior does not mask recent drift.
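AIP's exact Promise-Delivery Ratio formula is not reproduced here, but a hypothetical sketch shows how temporal decay works: each kept or broken promise is weighted by a half-life, so recent behavior dominates the score.

```python
import math

def promise_delivery_ratio(events, now_days, half_life_days=30.0):
    """Hypothetical decayed kept-promise ratio.

    events: list of (timestamp_days, kept: bool).
    An event half_life_days old counts half as much as one made today.
    """
    num = den = 0.0
    for t, kept in events:
        weight = math.exp(-math.log(2) * (now_days - t) / half_life_days)
        num += weight * (1.0 if kept else 0.0)
        den += weight
    return num / den if den else 0.0

# Five promises kept 90 days ago, five broken today: a plain average
# would say 0.5, but decay weights the recent drift far more heavily.
events = [(0, True)] * 5 + [(90, False)] * 5
score = promise_delivery_ratio(events, now_days=90)
```

With a 30-day half-life the old kept promises carry weight 0.125 each, so the score lands near 0.11 rather than 0.5, which is exactly the "old good behavior does not mask recent drift" property.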
Why Software-Only Security Keeps Failing
The Forbes argument maps to what we have been seeing in the cross-protocol interop work. Four engines — Kanoniv, APS, AIP, and Network-AI — just completed mutual verification of signed decision artifacts for the same borderline trust scenario. The results:
- Three engines denied (trust score of 0.38, below their thresholds)
- One engine permitted (scope-only check, no trust gate)
- All four used different deny mechanisms
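The split can be illustrated with two toy policy functions, one trust-gated and one scope-only (the names and the 0.5 threshold are hypothetical, not any engine's actual API):

```python
def trust_gated_decision(trust_score: float, scope_ok: bool,
                         threshold: float = 0.5) -> str:
    """Deny when trust falls below threshold, even if the scope is valid."""
    if not scope_ok:
        return "deny:scope"
    if trust_score < threshold:
        return "deny:trust_below_threshold"
    return "permit"

def scope_only_decision(trust_score: float, scope_ok: bool) -> str:
    """Check scope only; no trust gate at all."""
    return "permit" if scope_ok else "deny:scope"

# Same facts, different verdicts: trust 0.38, scope valid.
print(trust_gated_decision(0.38, True))  # denies on trust
print(scope_only_decision(0.38, True))   # permits on scope
```

Both engines see identical inputs; the verdicts diverge because their trust models differ, not because one of them is broken.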
The divergence is not a bug. It is what happens when you have real security: engines with different trust models producing different verdicts based on the same facts. The alternative — one global permission layer that every agent and every framework shares — is exactly the "software-first" approach Forbes is warning about.
Hardware-rooted identity would solve part of this. But for agents that exist as software processes, the next best thing is cryptographic identity with behavioral verification. An Ed25519 key pair is not a hardware root, but it is verifiable, auditable, and cannot be impersonated without the private key.
The Gap Token Security Does Not Close
Intent-based security is a real advance over static permissions. But it still relies on the platform knowing which agent is executing. Token Security's five capabilities — discovering agents, understanding intent, enforcing least privilege, flagging out-of-bounds actions, lifecycle governance — all assume the platform can reliably identify the agent.
When agents cross framework boundaries, that assumption breaks. A VoltAgent agent calling an OpenHands sandbox calling a LangChain tool chain — who is the agent at each hop? Whose intent governs?
This is where cryptographic identity becomes the foundation layer. With AIP, each agent carries its own DID (decentralized identifier) backed by Ed25519 keys. The identity travels with the agent across frameworks, not with the platform it happens to be running on. Intent-based controls can then reference the agent's cryptographic identity rather than the platform's session token.
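A sketch of identity traveling with the request rather than the session (the registry dict stands in for did:aip resolution, and all names here are hypothetical):

```python
from dataclasses import dataclass

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class SignedRequest:
    agent_did: str    # identity rides with the request, not the platform session
    payload: bytes
    signature: bytes

# Hypothetical DID registry standing in for did:aip resolution.
agent_key = Ed25519PrivateKey.generate()
REGISTRY = {"did:aip:example-agent": agent_key.public_key()}

def sign_request(did: str, private_key, payload: bytes) -> SignedRequest:
    return SignedRequest(did, payload, private_key.sign(payload))

def verify_at_hop(req: SignedRequest) -> str:
    """Each framework hop resolves the DID and checks the signature."""
    REGISTRY[req.agent_did].verify(req.signature, req.payload)  # raises if forged
    return req.agent_did

req = sign_request("did:aip:example-agent", agent_key, b"call:sandbox/run")
assert verify_at_hop(req) == "did:aip:example-agent"
```

At every hop in the VoltAgent-to-OpenHands-to-LangChain chain, the verifier answers "who is the agent?" from the signature, so intent-based controls have a stable subject to attach to.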
What This Means
Forbes is right that the industry is repeating a 30-year-old pattern. Token Security is right that intent matters more than static permissions. The missing piece is the identity layer underneath both claims.
If you are building agents that interact with enterprise systems:
```shell
pip install aip-identity
aip init
```
22 agents already in the trust network. Cross-protocol verification with 4 engines. Behavioral trust scoring with temporal decay. The infrastructure exists.
The industry does not need another guardrail. It needs agents that can prove who they are.
Sources: Forbes Tech Council, Help Net Security / Token Security