Piyoosh Rai

Posted on • Originally published at Medium

The Air-Gapped Chronicles: The Agentic Ecosystem - When Your AI Agents Become Your Loudest Shadow Identities

An internal "productivity bot" with forgotten OAuth keys quietly exfiltrates your strategy. When agents become shadow identities, the air gap dies.


The security team found it in the OAuth audit they should have run six months earlier.

Identity: productivity-bot@company.com
Type: Service Account
Scopes: slack:read, slack:write, notion:read, jira:read, github:read, salesforce:read, drive.readonly
Created: 8 months ago
Created by: engineer-who-left-4-months-ago@company.com
Last activity: 2 hours ago
Total API calls: 2.4 million

Nobody on the current team knew what it did. The engineer who created it had left. The Slack integration still showed "Active." The OAuth token never expired.

What it actually did:

Every night at 2 AM:

  • Pulled all Slack messages from #product, #roadmap, #sales, #executive
  • Scraped Notion pages tagged "Strategy" or "Confidential"
  • Downloaded Jira epics marked "Revenue Impact"
  • Cloned private GitHub repos with customer implementation code
  • Exported Salesforce opportunity data for "Closed Won" deals
  • Uploaded everything to export-logs-backup.s3-us-west-2.amazonaws.com

That S3 bucket? Owned by a shell company. Controlled by a competitor.

Total exfiltrated: 340GB of product strategy, customer data, source code, and revenue forecasts.

Root cause: One OAuth token. One "productivity bot." Zero governance.
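Catching a credential like this doesn't require fancy tooling. Here's a minimal Python sketch that flags orphaned or stale tokens from an exported inventory. The field names (`created_by`, `created_at`) and the 90-day threshold are illustrative; a real audit would pull this data from your IdP's or OAuth provider's admin API:

```python
from datetime import datetime, timedelta

def find_orphaned_tokens(tokens, active_users, now=None):
    """Flag tokens whose creator has left or that exceed the rotation window.

    `tokens` is a list of dicts with illustrative field names; in practice
    this comes from your IdP's or OAuth provider's audit export.
    """
    now = now or datetime.now()
    findings = []
    for t in tokens:
        reasons = []
        if t["created_by"] not in active_users:
            reasons.append("creator is no longer an active user")
        age = now - datetime.fromisoformat(t["created_at"])
        if age > timedelta(days=90):
            reasons.append(f"token is {age.days} days old (90-day policy)")
        if reasons:
            findings.append((t["name"], reasons))
    return findings

# Illustrative inventory row, modeled on the incident above.
tokens = [{"name": "productivity-bot",
           "created_by": "departed-engineer@company.com",
           "created_at": "2024-01-10"}]
for name, reasons in find_orphaned_tokens(tokens, active_users={"alice@company.com"}):
    print(name, "->", reasons)
```

Run this weekly against your full token inventory and the productivity-bot above gets flagged twice over: departed creator, expired rotation window.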

The Agentic Identity Explosion

Here's what changed in the last 18 months:

2024: Companies had users, service accounts, and maybe some API keys.

2026: Companies have an ecosystem:

  • AI agents (Copilot, Agentforce, custom LLM workflows)
  • SaaS connectors (Zapier, Make, n8n workflows)
  • Workflow bots (Slack apps, Teams bots, productivity assistants)
  • RAG pipelines (document indexers, knowledge base crawlers)
  • Personal copilots (ChatGPT plugins, Claude projects with MCP access)

Every single one is a non-human identity with keys, tokens, and scopes.

Real inventory from a Series B SaaS company (150 employees):

  • Human users: 147
  • Service accounts (known): 23
  • OAuth integrations: 89
  • API keys (active): 127
  • AI agents (discovered in audit): 312

312 agents. Nobody knew all of them existed.

How the "Air Gap" Fails

Every CISO has heard of air-gapped systems: the gold standard for nuclear facilities, military networks, and classified systems.

The uncomfortable truth: True air gaps largely disappeared in the late 1990s when organizations began connecting industrial systems to enterprise software.

Now translate this to AI deployments:

The promise: "We'll run our LLM internally. Air-gapped from SaaS."

The reality (Week 4):

  • Engineer deploys "temporary" API proxy to hit OpenAI
  • Data pipeline connects internal LLM to Salesforce via OAuth
  • Slack bot wires the LLM to #general for "internal testing"

The air gap failed before production even started.

The Agentic Ecosystem Attack Surface

Attack Surface 1: Identity Sprawl

Every agent is a de facto service account with credentials. Each agent holds tokens; each token carries scopes; and at the company above, nobody had reviewed those permissions in over a year.

Attack Surface 2: Supply Chain Risk

Agents install packages, hit model hubs, and pull code from GitHub. At one company, an agent updating its own dependencies installed a malicious package that ran undetected for six weeks.

Attack Surface 3: Prompt Injection in Integrations

A competitor creates a fake "lead" in Salesforce with poisoned data containing system instructions. The sales agent reads it. Follows the injected instructions. Sends proposals with 90% discounts. CCs competitor on emails.

Attack Surface 4: The Blast Radius

Traditional breach: one user account compromised = that user's data.
Agentic breach: one agent token compromised = every system that agent touches.

One token = six systems compromised.
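The blast-radius math is just set membership over your grant inventory. A minimal sketch (the grant map is illustrative data, not a real API):

```python
def blast_radius(identity, grants):
    """Return the systems reachable if this identity's token leaks.

    `grants` maps identity -> set of connected systems; in practice you'd
    build it from your OAuth grant and integration inventory.
    """
    return grants.get(identity, set())

grants = {
    "alice@company.com": {"email", "drive"},
    "productivity-bot": {"slack", "notion", "jira",
                         "github", "salesforce", "drive"},
}

# A human account leaks two systems; the agent token leaks six.
print(len(blast_radius("alice@company.com", grants)))
print(len(blast_radius("productivity-bot", grants)))
```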

Architecture: Agentic Identity Guardrails

Layer 1: Inventory Every Agent

You can't secure what you don't know exists.
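What does an inventory actually look like in code? Here's a sketch of the aggregation step. The source tuples are illustrative; in practice you'd feed this from the Slack admin API, the GitHub App installation list, your IdP's OAuth grant report, and so on:

```python
def build_agent_inventory(*sources):
    """Merge per-platform agent lists into one deduplicated inventory.

    Each source yields (platform, agent_id, scopes) tuples. The API calls
    that produce them are omitted; this is only the aggregation step.
    """
    inventory = {}
    for source in sources:
        for platform, agent_id, scopes in source:
            key = (platform, agent_id)
            # Union scopes so repeated grants to the same agent collapse.
            inventory.setdefault(key, set()).update(scopes)
    return inventory

slack_apps = [("slack", "productivity-bot", ["channels:read", "chat:write"])]
github_apps = [("github", "ci-agent", ["repo:read"]),
               ("github", "ci-agent", ["repo:write"])]

inv = build_agent_inventory(slack_apps, github_apps)
print(f"{len(inv)} distinct agents")
```

The deduplication matters: the same agent often shows up in several audit exports, and counting it three times is as misleading as missing it entirely.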

Layer 2: Scope Permissions Like Human Identities

  • No permanent tokens (90-day max)
  • Channel/repo-specific scopes
  • Read-only by default
  • Monthly permission reviews
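These rules are mechanically checkable. A sketch of a per-agent policy check, with illustrative field and scope names:

```python
# Hypothetical set of scopes that count as "write" for policy purposes.
WRITE_SCOPES = {"slack:write", "repo:write", "drive.write"}

def violates_least_privilege(agent):
    """Return the policy violations for one agent record.

    Field names (`token_lifetime_days`, `last_review_days_ago`, etc.) are
    illustrative; map them from whatever your inventory exports.
    """
    violations = []
    if agent.get("token_lifetime_days", 0) > 90:
        violations.append("token lifetime exceeds 90-day max")
    if any(s in WRITE_SCOPES for s in agent["scopes"]) \
            and not agent.get("write_justification"):
        violations.append("write scope without documented justification")
    if agent.get("last_review_days_ago", 9999) > 31:
        violations.append("missed monthly permission review")
    return violations

risky = {"scopes": ["slack:write"], "token_lifetime_days": 365}
print(violates_least_privilege(risky))
```

An agent with read-only scopes, a 30-day token, and a recent review comes back clean; the productivity-bot pattern trips all three checks.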

Layer 3: Tiered Network Boundaries

TIER 1: READ-ONLY AGENTS (Lowest Risk)
TIER 2: WRITE-LIMITED AGENTS (Medium Risk)
TIER 3: DATA-ACCESS AGENTS (High Risk)
TIER 4: PRODUCTION AGENTS (Critical - CISO approval + kill switch)
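Tier assignment can be derived from what the inventory already tells you about each agent. A sketch, with illustrative inputs:

```python
def classify_tier(scopes, touches_customer_data=False, touches_prod=False):
    """Map an agent's capabilities to a risk tier (1-4, per the scheme above).

    Inputs are illustrative; derive them from your inventory and data
    classification, not from self-reported agent descriptions.
    """
    if touches_prod:
        return 4  # critical: CISO approval + kill switch required
    if touches_customer_data:
        return 3  # high risk: data-access agent
    if any(s.endswith(":write") or s.endswith(".write") for s in scopes):
        return 2  # medium risk: write-limited agent
    return 1      # lowest risk: read-only agent

print(classify_tier(["slack:read"]))                       # read-only
print(classify_tier(["repo:write"]))                       # write-limited
print(classify_tier([], touches_customer_data=True))       # data access
print(classify_tier([], touches_prod=True))                # production
```

The ordering matters: production access dominates everything else, so a read-only agent with a path into prod still lands in Tier 4.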

Metrics That Prove You're in Control

  1. Agent:Human Ratio - Healthy: < 3:1 / Critical: > 10:1
  2. Shadow Agent Discovery Rate - Healthy: < 5% / Critical: > 15%
  3. Least-Privilege Compliance - Healthy: > 90% / Critical: < 70%
  4. Permission Review Cadence - Healthy: 100% monthly / Critical: < 70%
  5. Agent-Originated Incidents - Healthy: 0/quarter / Critical: 3+
  6. Expired Creator Rate - Healthy: < 2% / Critical: > 10%
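These metrics fall out of the inventory with a few divisions. A sketch using the Series B company's numbers from earlier (the least-privilege-compliant count here is illustrative):

```python
def governance_metrics(humans, total_agents, shadow_agents, compliant_agents):
    """Compute the headline governance ratios from inventory counts."""
    return {
        "agent_human_ratio": round(total_agents / humans, 2),
        "shadow_rate_pct": round(100 * shadow_agents / total_agents, 1),
        "least_privilege_pct": round(100 * compliant_agents / total_agents, 1),
    }

# 147 humans, 312 agents, 89 shadow agents; compliant count is illustrative.
m = governance_metrics(humans=147, total_agents=312,
                       shadow_agents=89, compliant_agents=140)
print(m)
```

That company sits at a 2.12:1 agent:human ratio (healthy) but a 28.5% shadow rate (well past critical), which is exactly the profile where a productivity-bot incident happens.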

What I Learned After Auditing Agent Sprawl at Four Companies

The numbers are anonymised composites but reflect real ratios:

Company 1 (Series B SaaS): 147 employees, 312 agents, 89 shadow agents, one leaked customer data for 8 months.

Company 2 (Healthcare startup): 85 employees, 203 agents, 124 shadow agents (61%). HIPAA violation waiting to happen.

Company 3 (Fintech): 220 employees, 891 agents, 67% had write permissions, 89% accessed payment data.

Company 4 (Enterprise with governance): 1,200 employees, 2,100 agents, 3% shadow, 94% least-privilege. Zero incidents in 18 months.

The pattern: Agent sprawl is universal. Governance is rare. The companies with controls have zero breaches.


This is the final Air-Gapped Chronicles. The lesson: Treat AI identities with the same rigor you treat human identities. Because agents aren't tools. They're autonomous actors with credentials and the ability to cause multi-million dollar breaches while you sleep.

Originally published on Medium/Towards AI.
