Most authentication systems solve the wrong problem.
They verify identity at login — "are you who you say you are?" — then trust every action that follows. That worked fine when humans were the only ones taking actions. It breaks down fast when AI agents are involved.
The gap nobody talks about
An agent logs in with valid credentials. It has a valid session. It makes a request to transfer $50,000 to an external account. Every auth check passes. The action executes.
Was that supposed to happen? Nobody knows. There's no proof a human approved it. There's no record of what parameters were actually signed off on. There's no way to tell if the amount or recipient was
tampered with between when the action was initiated and when it executed.
This is the intent gap — the space between "authenticated" and "authorized to do this specific thing right now."
Cryptographic intent verification
FortSignal closes this gap. Before any sensitive action executes:
1. Your backend calls /challenge/start with the exact action parameters — action, amount, recipient
2. FortSignal hashes those exact values into a challenge
3. The user's hardware signs that challenge via WebAuthn (Face ID, Touch ID, security key)
4. /challenge/verify checks the signature and enforces your policy rules
5. FortSignal returns a decision, allow or deny, with a signed receipt
If anything changes between step 1 and step 4 — amount, recipient, anything — the hash won't match and it's a deny. Cryptographic proof, not just a checkbox.
For AI agents
Agents don't use WebAuthn — there's no human present. Instead, agents sign with an Ed25519 private key on the server. A human pre-approves a delegation scope from the dashboard — allowed actions, max amount
per action, allowed recipients, expiry. The agent operates autonomously within those bounds.
Every agent action is checked against:
- Valid Ed25519 signature
- Within delegation scope approved by a human
- Within policy constraints
Revoke a delegation instantly. The agent's next action is denied — no waiting for a token to expire.
Two separate layers
Intent fields (action, amount, recipient) are per-request — what gets cryptographically signed. Policy is persistent rules you configure once in your dashboard. Both layers must pass for an allow. A valid signature on a $1M transfer still gets denied if your policy caps actions at $5,000.
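The two-layer decision reduces to a short function. The policy fields here (maxAmount, allowedActions) are assumed names for illustration, not the real dashboard schema:

```javascript
// Layer 2: persistent policy, configured once.
const policy = { maxAmount: 5000, allowedActions: ["transfer", "refund"] };

// Layer 1 (the intent signature over per-request fields) is assumed
// to have been verified already and passed in as a boolean.
function decide(intent, signatureValid) {
  if (!signatureValid) return "deny";                           // bad or missing signature
  if (!policy.allowedActions.includes(intent.action)) return "deny"; // action not permitted
  if (intent.amount > policy.maxAmount) return "deny";          // over the policy cap
  return "allow";                                               // both layers passed
}

// A validly signed $1M transfer still fails the policy layer:
const bigTransfer = decide({ action: "transfer", amount: 1_000_000 }, true); // "deny"
const smallTransfer = decide({ action: "transfer", amount: 100 }, true);     // "allow"
```

Keeping the layers separate means a compromised signing path is still bounded by policy, and a policy change takes effect without re-signing anything.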
Why now
AI agents are being dropped into production apps right now. Developers don't have a good answer for "how do I make sure my agent doesn't do something it shouldn't?" Existing auth systems weren't built for this. WebAuthn alone doesn't solve it — you need parameter binding, policy enforcement, and agent delegation on top.
npm install @fortsignal/sdk
Full docs at fortsignal.com/docs. Patent pending on the parameter binding system.
Would love to hear how others are solving this problem — or if you're building something where this fits.
fortsignal.com