Mohamed Waseem
Finding Ghost Agents: Detecting an AI Agent Running in Kubernetes With No Source Code

Last month, while running a scan against a client's production Kubernetes cluster, we found something that shouldn't exist.

A Python process. Active network connections to api.openai.com and a Pinecone index. Execution every 4 minutes, consistent with an agent loop. No deployment manifest. No pod spec. No configmap. No source code anywhere in the repository.

An AI agent — running in production — that no one on the team knew about.

We call it a GHOST agent. It exists at runtime. It doesn't exist anywhere in your inventory.


Why this happens

AI agents don't always get deployed the way software is supposed to get deployed. A developer runs a quick test in production because staging doesn't have the right data. A contractor drops a script on a node. An agent framework spins up a subprocess that outlives its parent. Someone deploys via kubectl exec and never creates a manifest.

The result: agents running in production with access to your APIs, your vector databases, your internal tools — with no paper trail.

Your SIEM doesn't catch it. Your code scanner doesn't catch it. Your supply chain security tool doesn't catch it, because there's nothing in the supply chain. There's no code to scan.


The detection problem

Every existing tool in the AI security space starts from something to scan:

  • Code scanners start from your source code
  • Config scanners start from your manifests and IaC
  • Package scanners start from your dependency files
  • Log analyzers start from what agents report about themselves

But a GHOST agent doesn't appear in any of those. It's running at the OS level, making network calls, consuming API credits, potentially reading from databases — and none of your existing tooling has a surface to grab onto.

The only place it's visible is at runtime: process table, network connections, kernel-level syscalls.


How we found it

The AgentDiscover Scanner uses a 4-layer approach that works from the runtime outward, not from code inward.

First, it scans your source code statically — finding every agent framework instantiation, every direct LLM API call, every HTTP client targeting an AI endpoint. That's the declared inventory: what your codebase says should exist.
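The static layer can be sketched as a naive pattern scan over the source tree. This is a minimal illustration, not the scanner's actual implementation: the signature list below is a tiny, hypothetical subset, and a real scanner would parse ASTs and cover many more frameworks rather than grep for a few names.

```python
import re
from pathlib import Path

# Illustrative signatures only — a real scanner uses AST parsing and a
# far larger, versioned pattern set.
AGENT_PATTERNS = {
    "openai_client": re.compile(r"OpenAI\s*\("),
    "anthropic_client": re.compile(r"Anthropic\s*\("),
    "ai_endpoint_url": re.compile(r"api\.(openai|anthropic)\.com|api\.pinecone\.io"),
}

def scan_declared_inventory(root: str) -> list[dict]:
    """Walk a source tree and record every file that instantiates an
    LLM client or hard-codes an AI endpoint URL — the 'declared' side."""
    findings = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in AGENT_PATTERNS.items():
            if pattern.search(text):
                findings.append({"file": str(path), "signal": name})
    return findings
```

The output of this pass is the "should exist" list that the later correlation step compares against.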

Simultaneously, it monitors live network connections — watching which processes are actively talking to OpenAI, Anthropic, Google AI, Pinecone, Weaviate, and other AI infrastructure right now. This is the runtime reality: what's actually making calls.
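On Linux, that runtime view can be built from `/proc/net/tcp` without installing anything. The parser below is a sketch of the first step only (IPv4, established connections); a real scanner would also resolve AI-provider hostnames to IP sets and map each socket's inode back to a PID via `/proc/<pid>/fd`.

```python
import socket
import struct

def parse_proc_net_tcp(text: str) -> list[tuple[str, int, int]]:
    """Parse /proc/net/tcp contents and return (remote_ip, remote_port,
    socket_inode) for every ESTABLISHED connection (state 01).
    IPv4 only; /proc/net/tcp6 has a different address encoding."""
    conns = []
    for line in text.splitlines()[1:]:  # first line is the header
        fields = line.split()
        if len(fields) < 10 or fields[3] != "01":  # 01 = ESTABLISHED
            continue
        rem_addr, rem_port = fields[2].split(":")
        # The kernel stores IPv4 addresses as little-endian hex.
        ip = socket.inet_ntoa(struct.pack("<I", int(rem_addr, 16)))
        conns.append((ip, int(rem_port, 16), int(fields[9])))
    return conns
```

From here, matching the remote IPs against resolved addresses for `api.openai.com`, `api.anthropic.com`, and the vector-DB endpoints gives the "actually talking to AI infrastructure" list (the IP in the test below is illustrative, not a real provider address).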

For Kubernetes environments, it checks the control plane. For clusters where we can instrument the kernel, we use eBPF via Tetragon for deep syscall visibility. For managed clusters (EKS, GKE, AKS) where you can't touch the kernel, we use the K8s API — watching the scheduler, resource events, and workload definitions.
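One cheap control-plane signal, independent of any particular tool: pods with no `ownerReferences` were created directly (e.g. `kubectl run` or a raw API call) rather than by a Deployment, ReplicaSet, or Job controller. A sketch over pod metadata as the K8s API returns it — a heuristic, not the scanner's logic:

```python
def find_unowned_pods(pods: list[dict]) -> list[str]:
    """Flag pods with no ownerReferences — created directly rather than
    by a controller, a common trail for manually spun-up workloads."""
    orphans = []
    for pod in pods:
        meta = pod.get("metadata", {})
        if not meta.get("ownerReferences"):
            orphans.append(f"{meta.get('namespace')}/{meta.get('name')}")
    return orphans
```

Static pods and some operators legitimately produce unowned pods, so this flags candidates for review rather than confirmed GHOSTs.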

Finally, it correlates: cross-reference what's running against what's declared. Anything that appears in runtime but has no match in the source code or K8s inventory — no deployment, no pod spec, no configmap, no service account — gets flagged as GHOST.
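At its core, the correlation step is a set difference. A sketch using the article's taxonomy — the label names match the report table, but the logic here is illustrative, not the tool's code:

```python
def classify_agents(declared: set[str], observed: set[str]) -> dict[str, set[str]]:
    """Cross-reference the declared inventory (static/K8s scan) against
    runtime observations (process + network monitoring)."""
    return {
        "CONFIRMED": declared & observed,  # in code and seen at runtime
        "UNKNOWN": declared - observed,    # in code, not yet seen running
        "GHOST": observed - declared,      # running with no declared source
    }
```

Anything landing in the GHOST bucket is, by construction, invisible to code-first tooling.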

In this case: one GHOST agent. Critical severity. Running for approximately 11 days before detection.


The output

🤖 Autonomous Agent Inventory

┏━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Classification ┃ Count ┃ Description                                                    ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ CONFIRMED      │ 2     │ Active — detected in code and observed at runtime              │
│ UNKNOWN        │ 3     │ Code found — not yet observed at runtime                       │
│ SHADOW AI      │ 0     │ Known app using AI — review for governance                     │
│ ZOMBIE         │ 0     │ Inactive — code exists but no recent runtime activity          │
│ GHOST          │ 1     │ ⚠ Critical — runtime activity with no source code (ungoverned) │
└────────────────┴───────┴────────────────────────────────────────────────────────────────┘

Risk Breakdown:
  ● Critical: 1
  ● High:     2
  ● Medium:   3
  ● Low:      0

✅ Scan complete — results saved to ./defendai-results

Alongside the GHOST finding, the scan also detected 2 unverified MCP servers — Model Context Protocol servers installed locally with filesystem and network fetch access, no publisher verification. Another vector most security tools have no visibility into.
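Inventorying locally installed MCP servers can start from the client config. The sketch below assumes a config shaped like Claude Desktop's `mcpServers` block (an assumption — other MCP clients use different files and schemas); it only enumerates entries and does not verify publishers:

```python
import json

def audit_mcp_config(config_text: str) -> list[dict]:
    """List MCP server entries from a client config. Every entry that
    launches a local command runs with the user's filesystem access,
    so each one deserves a governance review."""
    config = json.loads(config_text)
    findings = []
    for name, entry in config.get("mcpServers", {}).items():
        findings.append({
            "server": name,
            "command": entry.get("command"),
            "args": entry.get("args", []),
        })
    return findings
```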

Total scan time: 12 seconds. Nothing installed, nothing changed, no agents deployed.


What this means for your security posture

If you're running AI agents in production — and most engineering teams are, even if security doesn't know it — you have a GHOST problem. Not because your developers are doing anything malicious. Because the deployment hygiene that works for traditional software doesn't map cleanly onto agent workflows.

Agents get tested in production. They get hot-patched. They get spun up manually and left running. They spawn subprocesses. They persist across restarts via cron jobs that weren't in the original ticket.

The question isn't whether you have GHOST agents. The question is whether you know about them.


Try it

pip install agent-discover-scanner
agent-discover-scanner scan-all ~/projects --duration 10

The scanner is open source. It runs without installing anything new. For Kubernetes, point it at your cluster — eBPF via Tetragon for kernel-level depth, or K8s API monitor for managed clusters (EKS/GKE/AKS).

GitHub: github.com/defendai-tech/agent-discover-scanner

If you find a GHOST agent in your cluster, I'd genuinely like to hear about it. Not for marketing: I'm building the tooling, and real-world data on what these agents look like in the wild is what makes the detection better.


Mohamed — founder, DefendAI Tech Inc. Building agentic runtime security.
