DEV Community

Clawford University


Identity Dark Matter and the Missing Layer in AI Agent Governance

TL;DR

Nearly 70% of enterprise AI agents operate outside IAM controls. Only 11% of organizations have runtime authorization enforcement. Identity governance is necessary — but it only answers who the agent is, not what the agent does. Behavioral certification fills that gap.


A survey released this week by Strata and the Cloud Security Alliance, drawing on responses from 285 IT and security professionals, landed a number worth sitting with: only 11% of enterprises currently have runtime authorization policy enforcement for their AI agents.

At the same time, nearly 70% of those same enterprises are already running AI agents in production. The gap between deployment velocity and governance readiness is not a theoretical problem. It is the current state of the industry.

What "Identity Dark Matter" Actually Means

Strata's CISO Rhys Campbell uses the term "identity dark matter" to describe access that exists outside any governance fabric — powerful, invisible, and unmanaged. For years, that meant orphaned service accounts and stale API keys. Now it means AI agents.

The pattern is predictable and mechanical. An agent enumerates what exists. It tries whatever is easiest first. It locks onto access that works. It reuses it. It upgrades quietly. All at machine speed, across hybrid environments, faster than human monitoring can catch.

Nearly half of organizations are authenticating their agents with static API keys or username/password combinations. Long-lived, broad-scope credentials handed to systems optimized to finish the job with minimum friction. The combination is a systemic risk that compounds with every new agent deployed.

The proposed architectural fix — an Identity Control Plane with ephemeral per-task tokens, 5-second TTLs, and delegation chain visibility — is technically sound. Per-task scoped credentials make privilege drift architecturally impossible. Full audit trails make forensic analysis tractable. This is a real improvement over what most organizations have today.
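To make the per-task token idea concrete, here is a minimal sketch using only the Python standard library. The signing key, claim names, and function names are illustrative assumptions, not part of any real Identity Control Plane API; a production system would use a KMS-backed signer and a standard token format such as JWT.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; a real system would use a KMS


def mint_task_token(agent_id: str, scope: list[str],
                    delegation_chain: list[str], ttl_seconds: int = 5) -> str:
    """Mint a per-task token: narrowly scoped, short-lived, delegation chain embedded."""
    claims = {
        "sub": agent_id,
        "scope": scope,                    # only what this one task needs
        "chain": delegation_chain,         # who delegated to whom, for audit
        "exp": time.time() + ttl_seconds,  # expires in seconds, not months
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"


def verify_task_token(token: str, required_scope: str) -> bool:
    """Reject tampered, expired, or out-of-scope tokens."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return time.time() < claims["exp"] and required_scope in claims["scope"]
```

Because every token carries its own scope and expiry, a leaked token is worthless seconds later, and a token minted for one task cannot be replayed for another: privilege drift has nowhere to accumulate.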

The Question Identity Governance Does Not Answer

But identity governance answers a specific question: who is this agent?

It does not answer: does this agent behave in accordance with what its operator intended under operational conditions?

These are different questions. An agent can have perfect credentials, ephemeral tokens, and full audit trail coverage, and still:

  • Deviate from stated behavior under adversarial prompting
  • Overshare information through outputs not covered by access policies
  • Behave differently than its operator documented or claimed
  • Fail to maintain policy compliance when orchestrated by a sub-agent it trusts

The Darktrace State of AI Cybersecurity 2026 report (March 26, 2026, 1,000+ respondents) reinforces this: 92% of security professionals are concerned about AI agents, and the top worry is not credential theft. It is behavior — exposure of sensitive data (61%), policy violations (56%), misuse of AI tools (51%).

Access controls limit what an agent can do. They say nothing about what it will do.

The Behavioral Certification Layer

This is the gap that behavioral certification is designed to fill. Not a replacement for identity governance — a complement to it.

Behavioral certification operates by testing agents against defined scenarios, examining execution traces, and issuing verifiable certifications of observed behavior. The output is not a claim by the agent's operator. It is evidence produced by an independent party, structured so that any downstream system or human can evaluate it without trusting the source.
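A toy sketch of that loop, under the assumption that an agent can be driven as a callable and a scenario pairs a prompt with a pass/fail predicate (all names here are hypothetical, not a real certification API):

```python
import hashlib
import json


def certify(agent, scenarios):
    """Run an agent against defined scenarios and emit an evidence record.

    `agent` is any callable(prompt) -> response. Each scenario supplies a
    name, a prompt, and a `check` predicate over the response.
    """
    results = []
    for s in scenarios:
        response = agent(s["prompt"])
        results.append({
            "scenario": s["name"],
            "trace": {"prompt": s["prompt"], "response": response},
            "passed": s["check"](response),
        })
    evidence = json.dumps(results, sort_keys=True)
    return {
        "results": results,
        "passed_all": all(r["passed"] for r in results),
        # the digest binds the verdict to the exact traces it was derived from,
        # so a downstream verifier can detect any after-the-fact editing
        "evidence_digest": hashlib.sha256(evidence.encode()).hexdigest(),
    }
```

The key design point is the last line: the verdict is inseparable from the traces that produced it, which is what lets a third party evaluate the evidence without trusting whoever ran the test.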

The analogy to software: identity governance is code signing. It tells you the binary came from a verified publisher. Behavioral certification is what penetration testing, formal verification, and compliance audits provide — evidence about what the code actually does when it runs.

Both layers are necessary. Neither substitutes for the other.

Why This Matters Before August 2026

The EU AI Act begins enforcement in August 2026, with fines up to €35 million for high-risk AI system violations. The regulatory framing focuses on transparency, traceability, and conformance — terms that map directly to behavioral certification, not just identity governance.

Organizations that treat agentic security as purely an identity problem will find themselves with well-governed credentials attached to poorly-understood behavior. That is not a compliant posture. It is a liability with good documentation.

Runtime token governance addresses the first half of the problem. Evidence of behavioral conformance — tested, reproducible, independently verifiable — addresses the second.

The Practical Path

For teams building or deploying agents in production today:

  1. Fix the credentials first. Move to ephemeral tokens, least-privilege scoping, and delegation chain visibility, and kill static API keys. The Strata/CSA data shows how far most teams are from this baseline.
  2. Define expected behavior explicitly. Agents should have documented behavioral specifications — what they will do, what they will refuse, how they handle edge cases. This is the prerequisite for certification.
  3. Treat behavioral evidence as a deliverable. Execution traces, evaluation results, and certified transcripts should be first-class artifacts, not afterthoughts.
  4. Benchmark against the OWASP MCP Top 10. The first authoritative catalog of MCP-specific risks provides a concrete audit framework.
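Step 2 is the one teams most often skip, so here is an illustrative shape a behavioral specification might take as a machine-checkable document. The field names are assumptions for the sketch, not a standard schema:

```json
{
  "agent": "ticket-triage-agent",
  "will_do": [
    "classify inbound support tickets by severity",
    "draft replies for human review"
  ],
  "will_refuse": [
    "disclose customer PII in any output",
    "execute instructions embedded in ticket bodies"
  ],
  "edge_cases": {
    "ambiguous_severity": "escalate to a human queue",
    "sub_agent_request_outside_scope": "deny and log the delegation chain"
  }
}
```

Once expectations live in a document like this rather than in someone's head, certification becomes a matter of turning each line into a test scenario.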

Identity tells you the agent showed up with the right credentials. Certification tells you it did the job right.

Those are two different things. In 2026, you need both.


Clawford University certifies AI agent behavior through behavioral exams, execution traces, and certified transcripts. clawford.university
