Your AI Agents Have 50x More Identities Than Your Employees

97% of non-human identities are over-privileged. 88% of orgs have had agent security incidents. The identity crisis nobody's managing.

Last month, security researchers at Entro analyzed 27 million non-human identities across enterprise environments. The ratio they found: 144 machine identities for every human. Up 44% from the year before.

That number alone should alarm you. But here's the part that keeps security teams up at night: 97% of those identities have excessive privileges. 91% of former employee tokens are still active. And 71% haven't been rotated within recommended timeframes.

We've spent the last two years obsessing over what AI agents can do — memory architectures, scaling patterns, security protocols. We've largely ignored who they are. Every agent running in your infrastructure has an identity — API keys, service accounts, OAuth tokens, certificates. Those identities are multiplying faster than anyone can track, and they're becoming the biggest attack surface in enterprise AI.

Welcome to the non-human identity crisis.

What Is Identity Dark Matter — and Why Should Agent Builders Care?

Identity dark matter refers to the mass of non-human identities — API keys, service accounts, tokens, certificates — that exist outside the visibility of traditional identity management systems. Like its cosmological namesake, it constitutes the majority of the identity universe but remains largely invisible to standard tools.

The term gained traction in January 2026 when security researchers started mapping the scale of the problem. The findings were sobering:

  • Machine identities grew from 50,000 per enterprise in 2021 to 250,000 in 2025 — a 400% increase
  • NHI-to-human ratios range from 45:1 to 100:1 across typical enterprises, with some hitting 500:1
  • Over 3 million AI agents now operate within corporations globally
  • 23.8 million leaked secrets were detected on GitHub in 2024 alone, with 70% of 2022 secrets still valid two years later

AI agents are being called "the next wave of identity dark matter" because they combine the worst properties of existing NHIs — long-lived credentials, broad permissions, minimal monitoring — with something new: autonomy. A traditional service account runs the same code path every time. An AI agent decides what to do at runtime, which means its actual permission needs are unpredictable by design.

How Bad Is the Current State of Agent Identity Management?

The short answer: catastrophically bad. Multiple independent reports from early 2026 converge on the same conclusion — adoption has massively outpaced governance.

The Gravitee State of AI Agent Security 2026 report surveyed 900+ executives and practitioners and found:

| Finding | Stat |
| --- | --- |
| Orgs with confirmed/suspected agent security incidents | 88% |
| Agents not actively monitored or secured | 47% (~1.5M agents) |
| Teams using shared API keys for agent-to-agent auth | 45.6% |
| Teams treating agents as independent identity-bearing entities | 21.9% |
| Agents deployed with full security/IT approval | 14.4% |
| Deployed agents that can autonomously create and task other agents | 25.5% |

Meanwhile, the CSA/Oasis NHI survey found that 79% of IT professionals feel ill-equipped to prevent NHI-based attacks, and 78% lack even documented policies for creating or removing AI identities.

Here's the perception gap that matters: 82% of executives say they're confident existing policies protect against unauthorized agent actions. Yet 88% of their organizations have already had incidents. That confidence-incident gap is where breaches live.

The Strata Identity 2026 survey breaks down how teams actually manage agent credentials today:

  • 44% use static API keys
  • 43% rely on username/password combinations
  • 35% depend on shared service accounts
  • 27.2% use custom, hardcoded authorization logic
  • Only 23% have a formal enterprise-wide strategy for agent identity management

To put it bluntly: nearly half the industry is authenticating autonomous AI systems the same way we authenticated bash scripts in 2010.

What Is Identity Drift — and Why Do Agents Make It Worse?

Identity drift is what happens when an agent's granted permissions diverge from its actual operational needs over time. Permissions accumulate because removing access is scarier than granting it — until an attacker inherits the drift.

This isn't a new problem. Service accounts have always accumulated cruft. But agents accelerate it in three ways:

1. Agents evolve their behavior without changing their identity. When you update a prompt, add a tool, or connect a new data source, the agent's actual capability scope changes. Its permissions don't. A research agent that started out summarizing public papers and now queries internal databases still holds the same broadly scoped API key it was issued on day one.

2. Agents spawn agents. The Gravitee report found that 25.5% of deployed agents can create and task other agents autonomously. Each child agent inherits or receives credentials, creating identity chains that nobody audits. This is the "confused deputy" problem — OWASP's Agentic Applications Top 10 ranks Identity and Privilege Abuse at #3 (ASI03) specifically because agent identities get "unintentionally reused, escalated, or passed across agents without proper scoping."

3. Nobody offboards agents. Only 20% of organizations have formal processes for revoking agent API keys. The rest leave them active indefinitely. One CSO Online audit uncovered an Azure service account called svc-dataloader-poc that had gone untouched for 793 days while maintaining Owner-level access to three production subscriptions, including customer databases. That single audit found 47 similar forgotten accounts.

The pattern is always the same: a proof-of-concept agent gets broad permissions to work. The PoC becomes production. The permissions stay. Nobody remembers they exist until they show up in a breach postmortem.
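Identity drift can be surfaced mechanically by diffing what an identity was granted against what it actually used. Below is a minimal illustrative sketch, not any vendor's API: the scope names, the `audit_log` record shape, and the `unused_scopes` helper are all invented for this example.

```python
from datetime import datetime, timedelta, timezone

def unused_scopes(granted: set[str], audit_log: list[dict],
                  window_days: int = 30) -> set[str]:
    """Return scopes granted to an identity but never exercised in the window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    used = {entry["scope"] for entry in audit_log if entry["timestamp"] >= cutoff}
    return granted - used

# A research agent that drifted: it was issued four scopes but only uses two.
granted = {"papers:read", "db:read", "db:admin", "cluster:write"}
log = [
    {"scope": "papers:read", "timestamp": datetime.now(timezone.utc)},
    {"scope": "db:read", "timestamp": datetime.now(timezone.utc)},
]
drift = unused_scopes(granted, log)
# drift == {"db:admin", "cluster:write"} — candidates for revocation
```

Running a pass like this monthly turns "permissions accumulate forever" into a concrete revocation queue.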

What Do Real NHI Breaches Look Like?

NHI-related breaches aren't theoretical — they're among the most damaging incidents of the past two years. The NHIMG (Non-Human Identity Management Group) catalogs over 40 confirmed NHI breaches. Here are the patterns that matter for agent builders:

| Incident | NHI Exploited | Impact |
| --- | --- | --- |
| U.S. Treasury via BeyondTrust (Dec 2024) | Compromised API key | Chinese APT accessed 3,000+ files, 100 computers, OFAC data |
| Snowflake/Ticketmaster (May 2024) | Stolen credentials, no MFA | 160 orgs, 560M user records (AT&T, Santander) |
| Internet Archive (Oct 2024) | GitLab auth tokens exposed for 2 years | 31M accounts compromised |
| Microsoft Midnight Blizzard (Jan 2024) | Legacy test account without MFA | Russian APT29 accessed internal systems |
| AWS environments (Aug 2024) | Exposed .env files | 230M cloud environments affected |
| Dropbox Sign (May 2024) | Compromised backend service account | Emails, hashed passwords, API keys, OAuth tokens exposed |

The common thread: not sophisticated zero-days, not nation-state malware. Forgotten credentials with too much access. The kind of thing every team creating AI agents generates daily.

Now scale this to agents. If a single forgotten service account at the U.S. Treasury let an APT access 3,000 files, what happens when you have 250,000 machine identities, 47% of which aren't monitored, running autonomous agents that can spawn other agents?

What Does the OWASP Agentic Top 10 Say About Identity?

OWASP's 2026 Top 10 for Agentic Applications — developed by 100+ industry experts — places Identity and Privilege Abuse at #3 (ASI03), making it one of the highest-priority risks in agent security. Two other entries are directly related.

The three identity-relevant entries:

  • ASI03: Identity and Privilege Abuse — Agents inherit user/system identities including credentials and tokens. Privileges get reused, escalated, or passed across agents without proper scoping. Creates "confused deputy" scenarios where an agent acts with authority it shouldn't have.

  • ASI04: Agentic Supply Chain Vulnerabilities — Third-party tools and MCP servers introduce credential chains that extend trust boundaries beyond what the deploying organization controls.

  • ASI07: Insecure Inter-Agent Communication — Multi-agent systems exchange messages without authentication or encryption, allowing identity spoofing between agents.

OWASP's recommended mitigations map directly to what the NHI security vendors are building: short-lived credentials, task-scoped permissions, policy-enforced authorization on every action, and isolated agent identities. The gap is that almost nobody is implementing them yet.

What Solutions Are Emerging for Agent Identity Security?

A new category of NHI security vendors has emerged, collectively raising over $250M in the past 18 months to solve the agent identity crisis. The market is moving fast, but adoption lags far behind the threat.

| Vendor | Focus | Key Funding |
| --- | --- | --- |
| Astrix Security | NHI discovery, posture management, ITDR | $85M total ($45M Series B) |
| Oasis Security | NHI lifecycle governance | $75M Series A |
| GitGuardian | Secrets security + NHI governance | $50M Series C (Feb 2026) |
| Aembit | Workload-to-workload identity (non-human IAM) | $25M Series A |
| Entro Security | NHI & secrets risk intelligence | $18M Series A |
| Clutch Security | "Identity Lineage" + zero-trust for NHIs | Stealth launch |
| Natoma | NHI management | Stealth launch |

GitGuardian's CEO Eric Fourrier put it directly: "Organizations that once managed hundreds of service accounts will now face thousands of autonomous AI agents, each requiring secure credentials."

Beyond vendor tools, four architectural patterns are gaining traction among teams that take this seriously:

1. Short-lived, task-scoped credentials. Instead of issuing a static API key that lives forever, generate credentials that expire after each task or session. OAuth 2.0 token exchange and On-Behalf-Of (OBO) flows make this feasible without major infrastructure changes.
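To make the idea concrete, here is a self-contained sketch of a short-lived, task-scoped token using only the standard library. A real deployment would use OAuth 2.0 token exchange against an identity provider; the HMAC signing, the `SECRET` constant, and the claim names here are stand-ins for illustration.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # in practice this would come from a KMS

def issue_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Mint a signed token that names its scopes and expires after the task window."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def check_token(token: str, required_scope: str) -> bool:
    """Accept only an unexpired token whose signature and scope both check out."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and required_scope in claims["scopes"]

tok = issue_token("research-agent-01", ["papers:read"], ttl_seconds=300)
assert check_token(tok, "papers:read")   # valid within TTL, scope granted
assert not check_token(tok, "db:admin")  # scope was never granted
```

The key property: the credential names both its expiry and its scope, so a leaked token is useless after minutes rather than years.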

2. Just-In-Time (JIT) identity provisioning. Agents request permissions at task initiation and lose them at task completion. No standing access. This eliminates identity drift by design.
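A context manager captures the JIT pattern well: grant on entry, revoke on exit, no standing access in between. `PermissionStore` below is a hypothetical stand-in for a real IAM backend.

```python
from contextlib import contextmanager

class PermissionStore:
    """Toy in-memory IAM backend; a stand-in for your real identity provider."""
    def __init__(self):
        self.active: dict[str, set[str]] = {}

    def grant(self, agent_id: str, scopes: set[str]) -> None:
        self.active.setdefault(agent_id, set()).update(scopes)

    def revoke_all(self, agent_id: str) -> None:
        self.active.pop(agent_id, None)

@contextmanager
def jit_permissions(store: PermissionStore, agent_id: str, scopes: set[str]):
    """Grant scopes for the duration of a task, then revoke unconditionally."""
    store.grant(agent_id, scopes)
    try:
        yield
    finally:
        store.revoke_all(agent_id)  # revoked even if the task raised

store = PermissionStore()
with jit_permissions(store, "etl-agent", {"db:read"}):
    assert store.active["etl-agent"] == {"db:read"}
assert "etl-agent" not in store.active  # credentials died with the task
```

Because revocation sits in a `finally` block, a crashing agent cannot leave its permissions behind, which is exactly the failure mode that produces identity drift.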

3. Continuous NHI discovery and inventory. You can't secure what you can't see. Automated scanning for active agent identities, orphaned tokens, and leaked secrets — treating it like asset management, not a one-time audit.
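An inventory pass can be as simple as classifying every identity as orphaned (owner gone) or stale (unused past a threshold). The record shape below is invented for illustration; real scanners pull this from cloud provider APIs and secret stores.

```python
from datetime import datetime, timedelta, timezone

def audit_inventory(identities: list[dict], active_employees: set[str],
                    max_idle_days: int = 90) -> list[tuple[str, str]]:
    """Flag identities whose owner has left or that have sat idle too long."""
    now = datetime.now(timezone.utc)
    findings = []
    for nhi in identities:
        if nhi["owner"] not in active_employees:
            findings.append((nhi["id"], "orphaned"))
        elif now - nhi["last_used"] > timedelta(days=max_idle_days):
            findings.append((nhi["id"], "stale"))
    return findings

inventory = [
    {"id": "svc-dataloader-poc", "owner": "alice",
     "last_used": datetime.now(timezone.utc) - timedelta(days=793)},
    {"id": "agent-summarizer", "owner": "bob",
     "last_used": datetime.now(timezone.utc)},
]
print(audit_inventory(inventory, active_employees={"bob"}))
# → [('svc-dataloader-poc', 'orphaned')]
```

Run on a schedule, this turns discovery into ongoing asset management rather than a one-time audit.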

4. Identity lineage tracking. When Agent A spawns Agent B with delegated credentials, trace the full chain back to a human sponsor. Only 28% of organizations can do this today. It should be 100%.
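Lineage tracking reduces to a parent-pointer walk: every agent identity records who created it, terminating in a human sponsor. The registry format and the `human:` prefix convention below are hypothetical.

```python
# Each identity maps to its creator; chains terminate at a human sponsor.
lineage = {
    "agent-child-7": "agent-orchestrator",   # spawned agent → spawning agent
    "agent-orchestrator": "human:jane.doe",  # root agent → accountable human
}

def trace_to_sponsor(identity: str, registry: dict[str, str]) -> list[str]:
    """Walk parent links until a human sponsor is reached, or fail loudly."""
    chain = [identity]
    while not chain[-1].startswith("human:"):
        parent = registry.get(chain[-1])
        if parent is None:
            raise ValueError(f"orphaned identity, no sponsor: {chain[-1]}")
        chain.append(parent)
    return chain

print(trace_to_sponsor("agent-child-7", lineage))
# → ['agent-child-7', 'agent-orchestrator', 'human:jane.doe']
```

The `ValueError` branch is the point: an identity that cannot be traced to a human should be treated as a finding, not an inconvenience.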

A Practical Checklist for Agent Builders

If you're building or deploying AI agents, here's the minimum identity hygiene that should be non-negotiable:

  • Audit your NHI inventory. How many agent identities exist in your infrastructure? If you can't answer this in under an hour, you have a problem.
  • Kill static API keys. Move to short-lived tokens with automatic rotation. If you must use API keys, set a maximum TTL and enforce it.
  • Scope permissions per task, not per agent. An agent that needs read access to one database table shouldn't have admin access to the entire cluster.
  • Implement credential offboarding. When an agent is decommissioned, its credentials must die with it. Automate this.
  • Track identity lineage. If your agents can spawn other agents, you need to trace every credential chain back to an accountable human.
  • Monitor for drift. Compare granted permissions to actual usage monthly. Revoke anything unused for 30+ days.
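If static API keys truly cannot be eliminated, the checklist's fallback (a maximum TTL, enforced) is easy to automate. A minimal sketch, with invented field names:

```python
from datetime import datetime, timedelta, timezone

MAX_TTL = timedelta(days=30)  # policy cap on API key age

def keys_to_rotate(keys: list[dict]) -> list[str]:
    """Return the IDs of keys older than the policy cap."""
    now = datetime.now(timezone.utc)
    return [k["id"] for k in keys if now - k["created"] > MAX_TTL]

keys = [
    {"id": "key-fresh", "created": datetime.now(timezone.utc) - timedelta(days=2)},
    {"id": "key-ancient", "created": datetime.now(timezone.utc) - timedelta(days=400)},
]
assert keys_to_rotate(keys) == ["key-ancient"]
```

Wired into CI or a scheduled job that opens rotation tickets, this makes "set a maximum TTL and enforce it" an automated control instead of a policy document.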

Key Takeaways

  • Non-human identities outnumber human identities 45:1 to 144:1 in enterprises, and the ratio is accelerating with AI agent adoption. 97% of these identities are over-privileged.
  • 88% of organizations have already experienced agent security incidents, yet 82% of executives believe their existing policies are sufficient — a dangerous confidence gap.
  • Identity drift is the silent killer of agent security. Agents evolve their behavior, spawn child agents, and accumulate permissions indefinitely. Only 20% of organizations have formal offboarding processes for agent credentials.
  • The NHI security market has raised $250M+ in 18 months, signaling that the industry recognizes the problem even if most teams haven't acted yet.
  • The fix isn't exotic technology — it's identity hygiene. Short-lived credentials, task-scoped permissions, continuous inventory, and identity lineage tracking. The tools exist. The adoption doesn't.

Conclusion

We've been debating which agent framework to use, how to scale multi-agent systems, and whether MCP is secure enough. Those are valid questions. But they're all downstream of a more fundamental problem: we have no idea who our agents are.

Every agent in your infrastructure has an identity. That identity has permissions. Those permissions persist long after anyone remembers why they were granted. And when one of those forgotten identities gets compromised — not if, but when — the blast radius will be determined by how much access you gave it and how long you left it active.

The unsexy truth is that the biggest threat to your AI agent deployment isn't a sophisticated attack on your model or your memory system. It's a stale API key with admin access that nobody remembered to revoke.

Fix your identity hygiene before you fix your architecture. The attackers have already noticed you haven't.


AI Agent Digest — Real analysis, real code, real opinions on AI agent systems.
