The Nexus Guard
45.6% of AI Agents Share Credentials. Only 21.9% Have Their Own Identity. The Math Does Not Work.

A 2026 survey of over 900 executives and practitioners just quantified what the agent identity space has been saying for months: the execution layer is wide open, and identity is the root cause.

The numbers from AGAT Software's enterprise AI agent security report:

  • 45.6% of technical teams rely on shared API keys for agent-to-agent authentication
  • Only 21.9% treat AI agents as independent, identity-bearing entities with their own access scopes
  • 25.5% of deployed agents can create and instruct other agents
  • Only 24.4% of organizations have full visibility into which agents communicate with each other
  • Only 14.4% send agents to production with full security or IT approval
  • The average organization manages 37 deployed agents

Read those numbers together. Nearly half of all agent-to-agent communication happens over shared credentials. A quarter of agents can spawn sub-agents. And less than a quarter of organizations can even see the communication happening.

This is not a guardrails problem. It is an identity architecture problem.

The Execution Layer Gap

AGAT's analysis draws a sharp line between model-layer security (which enterprises have addressed) and execution-layer security (which they have not). The model layer is about what the AI can think. The execution layer is about what it can do.

Every tool invocation — every API call, database write, workflow trigger — happens at the execution layer. And right now, most of those invocations are trusted by default. No risk scoring. No policy enforcement at the connector level. No audit trail attributing actions to specific agents.
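What connector-level enforcement could look like is a per-agent scope check plus an audit entry for every invocation. A minimal sketch (agent names, scope strings, and the function shape are all hypothetical, not any vendor's API):

```python
# Hypothetical connector-level policy gate: each tool invocation is
# checked against a per-agent scope and recorded before it runs.
AGENT_SCOPES = {
    "agent-billing": {"invoices:read", "invoices:write"},
    "agent-reporting": {"invoices:read"},
}

audit_log = []

def invoke_tool(agent_id: str, action: str, payload: dict) -> bool:
    """Allow the call only if the agent's scope covers the action;
    log the decision either way, attributed to a specific agent."""
    allowed = action in AGENT_SCOPES.get(agent_id, set())
    audit_log.append({"agent": agent_id, "action": action, "allowed": allowed})
    return allowed

invoke_tool("agent-reporting", "invoices:read", {})    # permitted
invoke_tool("agent-reporting", "invoices:write", {})   # denied, but logged
```

The point is not the twelve lines of Python; it is that the decision and the audit entry happen at the execution layer, per agent, before the call goes out.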

CrowdStrike and Cisco have both moved to address this at the execution layer specifically. Cisco's AI Defense expanded in February 2026 to add runtime protections against tool abuse at the MCP layer. These are not niche vendors. This is core enterprise infrastructure shifting because that is where the attacks are.

Why Shared Credentials Break Everything

The 45.6% shared API key number is the most damaging finding. Here is why:

When Agent A and Agent B share credentials, every action either one takes is attributed to the same identity. If Agent A spawns Agent B (which a quarter of agents can do), and Agent B makes a destructive API call, your SIEM sees a single identity performing a series of actions. You cannot tell which agent initiated the cascade, where the chain was compromised, or what the intended behavior was.
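The attribution collapse fits in a few lines. This toy log (identities and actions are made up) shows what a SIEM sees in each case:

```python
# With a shared API key, actions from different agents collapse into
# one identity; with per-agent identities they stay attributable.
def log_event(events, identity, action):
    events.append({"identity": identity, "action": action})

shared, per_agent = [], []

# Agent A spawns Agent B; both authenticate with the same key.
log_event(shared, "api-key-123", "spawn_agent")
log_event(shared, "api-key-123", "delete_records")  # which agent did this?

# Same sequence with per-agent identities.
log_event(per_agent, "did:example:agent-a", "spawn_agent")
log_event(per_agent, "did:example:agent-b", "delete_records")

shared_identities = {e["identity"] for e in shared}        # one identity
per_agent_identities = {e["identity"] for e in per_agent}  # two identities
```

In the shared-key log, the spawn and the destructive call are indistinguishable by actor; in the per-agent log, the cascade is reconstructable.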

Now add prompt injection. An attacker embeds instructions in a document. Agent A reads it, interprets the instruction as a task, and passes it to Agent B using shared credentials. Agent B executes using real access paths. No malware. No exploit code. Just text flowing through agents that all look like the same identity to your infrastructure.

The OWASP February 2026 Practical Guide for Secure MCP Server Development cataloged the confused deputy as a named threat class. The Meta incident proved it works in production.

The 21.9% Who Got It Right

The organizations that treat agents as first-class security principals — their own DID, their own credentials, their own audit trail — have a fundamentally cleaner security posture. AGAT's finding is clear: they can attribute actions, scope blast radius, and isolate a compromised agent without taking down entire workflows.

21.9% is not a lot. But it shows the path.

Token Security's CEO made the same argument this week: identity is the only control plane that spans every system an agent touches. Network controls are too coarse. Prompt filters are too weak. Platform assurances are not enough.

What Agent-First Identity Actually Looks Like

The minimum viable agent identity has four properties:

  1. Unique — every agent has its own identifier, not shared credentials
  2. Cryptographic — identity is bound to a keypair, not a username/password
  3. Verifiable — any other agent or system can verify identity without a central authority
  4. Auditable — every action is attributable to a specific identity with a signature chain
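All four properties fall out of an ordinary Ed25519 keypair. A sketch using Python's `cryptography` package (the DID encoding here is simplified hex for readability; real did:key uses multibase base58btc, and none of this is AIP's actual API):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# 1. Unique + 2. Cryptographic: each agent holds its own keypair.
private_key = Ed25519PrivateKey.generate()
public_raw = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Illustrative identifier derived from the public key.
did = "did:key:z" + public_raw.hex()

# 4. Auditable: the agent signs each action it takes.
action = b'{"tool": "crm.update", "record": "acct-42"}'
signature = private_key.sign(action)

# 3. Verifiable: anyone holding the public key can check the
# signature offline, with no central authority in the loop.
def verify(pub_key, sig, message):
    try:
        pub_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

ok = verify(private_key.public_key(), signature, action)          # True
tampered = verify(private_key.public_key(), signature, b"other")  # False
```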

This is what we built with AIP. Every agent gets a DID (decentralized identifier) backed by an Ed25519 keypair. Every action can be signed. Every interaction between agents starts with mutual cryptographic verification. The trust graph is observable, not assumed.

pip install aip-identity && aip init

One command. Own keypair. Own DID. No shared credentials. No credential rotation nightmares. Every action attributable.

The 45.6% sharing API keys could eliminate that entire attack surface by giving each agent its own identity. The 25.5% spawning sub-agents could establish verifiable delegation chains instead of passing credentials through. The 75.6% without communication visibility could observe the trust graph instead of guessing.
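A verifiable delegation chain can be sketched with the same Ed25519 primitives: instead of handing Agent B its API key, Agent A signs a statement granting a narrow scope to Agent B's public key, and a relying party verifies that signature rather than trusting a shared credential. (Field names and scopes here are illustrative, not AIP's actual token format.)

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw_pub(key):
    return key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

agent_a = Ed25519PrivateKey.generate()  # the spawning agent
agent_b = Ed25519PrivateKey.generate()  # the sub-agent it creates

# Agent A signs a delegation: "B may read invoices on my behalf."
grant = json.dumps({
    "delegator": raw_pub(agent_a).hex(),
    "delegate": raw_pub(agent_b).hex(),
    "scope": ["invoices:read"],
}, sort_keys=True).encode()
grant_sig = agent_a.sign(grant)

def delegation_valid(grant_bytes, sig, delegator_key):
    """A relying party checks the delegator's signature over the grant."""
    try:
        delegator_key.verify(sig, grant_bytes)
        return True
    except InvalidSignature:
        return False

valid = delegation_valid(grant, grant_sig, agent_a.public_key())
# A sub-agent cannot forge a grant it was never given:
forged = delegation_valid(grant, agent_b.sign(grant), agent_a.public_key())
```

The compromised-agent case becomes tractable: revoke or refuse one grant and one key, rather than rotating a shared credential that every agent in the chain depends on.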

The Confidence Gap

82% of executives report confidence that existing policies protect against unauthorized agent actions. But only 14.4% send agents to production with full security approval.

Stanford's Trustworthy AI Research Lab found that model-level guardrails alone fail: fine-tuning attacks bypassed Claude Haiku in 72% of cases and GPT-4o in 57%.

Policy confidence without identity infrastructure is a checkbox that does not check anything. The 82% are confident about the wrong layer.


Sources: AGAT Software, Token Security / PRSOL:CC, OWASP MCP Security Guide, Stanford Trustworthy AI

AIP is open source, MIT licensed, 651 tests, 5 cross-protocol verification engines. Identity for autonomous agents.
