Douglas Walseth

Originally published at walseth.ai

RSA 2026: The AI Governance Gap Nobody Is Talking About

RSA Conference 2026 starts March 23. Every security vendor will announce AI agent governance.

CrowdStrike just acquired SGNL for $740M. Okta announced "Okta for AI Agents" (GA April 30). Singulr, Lasso, Arthur AI, and Patronus are all pitching runtime detection. The AI governance market is officially hot.

But nobody is talking about the gap that matters.

The Identity Plane vs. The Behavioral Plane

Okta's announcement is significant. They're extending enterprise IAM to treat AI agents as non-human identities — discovery, credential vaulting, universal logout, governance workflows. The stats they cite are real: 88% of orgs report AI agent security incidents, yet only 22% manage agents as identity-bearing entities.

CrowdStrike's SGNL acquisition adds policy-based access governance to their security platform.

Both are solving the identity plane: Who are your agents? What can they access? What credentials do they hold?

This is necessary. It is not sufficient.

What Happens After Authentication?

Once an agent is authenticated, credentialed, and authorized — what governs what it actually does?

Consider: Your AI coding agent has the right credentials, the right permissions, and passes every identity check. Then it:

  • Forgets a critical rule because the context window compressed it away
  • Generates code that passes tests but introduces a subtle security vulnerability
  • Makes the same class of mistake it made last week, because no one encoded the fix

Identity governance cannot prevent these. They are behavioral problems — structural issues with how agents process context, retain rules, and learn from failures.

The Detection Ceiling

Every vendor at RSA 2026 selling AI governance will demo the same thing: runtime detection. An agent does something bad, the system catches it, an alert fires.

The problem with detection-based governance is mathematical:

  • Alert volume grows linearly with agent count
  • The same class of violation can recur every day
  • Governance teams become alert-processing bottlenecks
  • Compliance evidence is a snapshot, not a guarantee

After 12 months of detection-based governance, you have the same violations. You just get faster alerts.
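The scaling claim above can be made concrete with a toy model. All numbers here are illustrative, not from any vendor data: detection-only governance re-alerts on every recurrence, so alert volume scales with agent count and time, while structural prevention pays a one-time cost per violation class.

```python
# Toy model (illustrative numbers only): detection re-alerts on every
# recurrence; prevention alerts once per class, then encodes the fix.

def detection_alerts(agents: int, violations_per_agent: int, months: int) -> int:
    """Every violation class recurs and re-alerts each month."""
    return agents * violations_per_agent * months

def prevention_alerts(violation_classes: int) -> int:
    """Each class alerts exactly once, then is structurally prevented."""
    return violation_classes

print(detection_alerts(agents=100, violations_per_agent=5, months=12))  # 6000
print(prevention_alerts(violation_classes=20))  # 20
```

Same fleet, same year: detection generates thousands of alerts, prevention generates one per violation class ever encountered.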

The Alternative: Prevent by Construction

What if the system structurally prevented violation classes from recurring?

The enforcement ladder:

  • L2 (Prose): A rule in documentation. Humans must remember it. This is where most governance "frameworks" stop.
  • L3 (Template): The rule is embedded in code templates. New code starts correct by default.
  • L4 (Test): The rule is checked automatically. Violations fail CI. No human needed.
  • L5 (Hook): The rule is enforced at the system level. The violation literally cannot occur.

Each step up the ladder removes reliance on human memory: at L4 and above, enforcement needs no human awareness at all. L5 enforcement means the lesson is permanent and compounding.
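As a minimal sketch of what L4 looks like in practice: a rule that once lived in documentation ("no hardcoded credentials") becomes a check that fails CI. The rule, pattern, and function names here are hypothetical illustrations, not the article's actual system; wiring the same check into a git pre-commit hook would push it toward L5, where the violating commit cannot land at all.

```python
import re

# Hypothetical L4 rule: "no hardcoded credentials", encoded as an automated
# check. The regex and rule are illustrative, not from the article's system.
BANNED = re.compile(r'(api_key|password)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE)

def check_source(text: str) -> list[str]:
    """Return violation messages; CI fails if the list is non-empty."""
    return [
        f"line {i}: hardcoded credential"
        for i, line in enumerate(text.splitlines(), 1)
        if BANNED.search(line)
    ]

# A violating snippet fails; a clean one passes. No human needs to remember
# the rule — the pipeline enforces it on every run.
print(check_source('api_key = "sk-123"\nname = "service-a"'))
print(check_source('name = "service-a"'))
```

Invoking the same script from a pre-commit hook (so the commit itself is rejected) is one plausible way to climb from L4 to L5: the violation stops being something you detect and becomes something that cannot occur.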

Production Numbers

We run this system in production:

  • 3,700+ violations processed through the enforcement ladder
  • <5% regression rate — once encoded at L4+, violation classes almost never recur
  • 250+ specs executed by AI agents under structural enforcement
  • Zero governance team — the system governs itself

What to Ask at RSA

When vendors pitch you AI governance this week:

  1. "When you detect a violation, what prevents the same class from recurring?" If the answer involves humans or dashboards — you're buying detection.
  2. "After 12 months, will we have more or fewer alerts?" If "more, because more agents" — governance scales linearly with your problem.
  3. "Does the system learn from violations structurally?" If "we update our models" — they improve detection, not governance.

The Real Gap

The companies that win the AI agent era will not have the best monitoring dashboards. They will have systems that get better every week without human intervention.

Identity governance (Okta, CrowdStrike) + behavioral enforcement (structural prevention) = complete AI agent governance. Either alone is incomplete.

RSA 2026 will be full of announcements about the identity side. The behavioral side — preventing violations structurally rather than detecting them — is the gap nobody is talking about.


Free governance scanner for any public repo: walseth.ai/scan. 30 seconds, no signup.

RSA 2026 AI Governance Hub with all vendor comparisons: walseth.ai/rsa-2026
