What Claude Code captures from your system (and how to contain it)
In early March 2026, I noticed Claude Code behaving oddly with my shell environment. Sandbox settings weren't working as documented. I needed to understand what was actually being captured so I could prioritize containment.
So I ran a systematic audit covering shell environment capture, behavioral profiling, telemetry infrastructure, and controls that don't function as advertised.
I contained what I could and kept using the tool. The audit stayed private — useful for my own triage, not worth the drama of publishing.
Then Anthropic leaked their own source code. The Register ran the story. The information is public now anyway.
So here's the audit. Hopefully useful for others doing similar evaluation.
Shell Environment Capture
Claude Code captures your shell environment at startup — aliases, SSH configs, environment variables, paths. This gets bundled and transmitted.
If you have aliases pointing to internal hostnames, SSH configs with jump hosts, or environment variables with credentials paths... Claude Code sees them.
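To see what a startup snapshot of your shell has to work with, enumerate it yourself. A minimal sketch (assumed behavior, not Anthropic's exact code path; `DB_PASSWORD_FILE` is a hypothetical example variable):

```shell
#!/bin/sh
# Anything exported in your shell is visible to any child process --
# including a tool that snapshots its environment at startup.
export DB_PASSWORD_FILE=/run/secrets/db    # hypothetical sensitive var

# List variable NAMES that look credential-adjacent (names only, not values)
env | grep -E 'PASSWORD|TOKEN|SECRET' | cut -d= -f1

# Aliases and SSH host stanzas are the other usual leak surfaces
alias 2>/dev/null
grep -h '^Host ' ~/.ssh/config 2>/dev/null
```

Run this in your daily-driver shell and you're looking at roughly the inventory the tool gets for free.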
Finding 001 in the audit documents the mechanism. It's not subtle.
Behavioral Profiling
Beyond telemetry, Claude Code generates AI-classified behavioral profiles of your sessions. What you're trying to accomplish, your working patterns, satisfaction levels — all inferred and stored.
Your first prompt to each session? Captured verbatim.
Controls That Don't Work
The environment variable CLAUDE_CODE_DONT_INHERIT_ENV exists. From the name, you'd expect it to prevent environment inheritance.
It doesn't. The source shows it gates one code path but not the path that actually performs the capture. The control is decorative.
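You don't need source access to check a control like this. A canary test works against any tool: plant a uniquely named variable, set the control, and see whether the variable still reaches the child. A sketch, with `sh -c 'env'` standing in for the tool under test:

```shell
#!/bin/sh
# Canary technique: export a variable with a name nothing else uses,
# then inspect what a child process actually inherits.
export CCAUDIT_CANARY="canary-$$"

# Replace `sh -c 'env'` with the real tool plus whatever lets you dump
# or capture its inherited environment.
if sh -c 'env' | grep -q CCAUDIT_CANARY; then
  echo "canary inherited by child"
fi
```

If the canary survives with the control set, the control is decorative, whatever the docs say.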
The Irony
Yes, I used Claude (the model) to audit Claude Code (the application). It helped deobfuscate the binary, analyze the findings, and write the containment strategies.
Make of that what you will.
Mitigations
Lazy but effective: Block telemetry domains via /etc/hosts. Statsig, Sentry, GrowthBook, the Anthropic beacon endpoints. Doesn't stop everything, but reduces exposure.
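A sketch of the hosts-file approach. The hostnames below are placeholders, not the real endpoints; pull the actual list from your own DNS or traffic logs (or the audit repo) before blocking:

```shell
#!/bin/sh
# Generate null-route entries for telemetry hosts.
# PLACEHOLDER hostnames -- substitute the endpoints you observe in practice.
HOSTS="statsig.example.com sentry.example.com growthbook.example.com"
for h in $HOSTS; do
  printf '0.0.0.0 %s\n' "$h"
done
```

Append the output with `./block-telemetry.sh | sudo tee -a /etc/hosts`, and remember this only covers hostnames you know about; it does nothing against hard-coded IPs.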
Proper containment: Run Claude Code in a Docker container with a minimal user environment. The shell capture still happens, but it captures the container's empty shell, not your host system with decades of configs.
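The principle is that the snapshot can only capture what the process inherits. You can demo it with `env -i` before reaching for Docker (the container command at the end is a sketch; image, mounts, and install step are assumptions to adapt):

```shell
#!/bin/sh
# A child launched with an empty environment has nothing worth capturing.
export FAKE_CREDENTIAL="hunter2"    # stands in for your real host config

if env -i PATH=/usr/bin:/bin sh -c 'env' | grep -q FAKE_CREDENTIAL; then
  echo "leaked"
else
  echo "clean: child saw an empty environment"
fi

# Container version of the same idea (details are assumptions, adapt them):
#   docker run --rm -it --env-file /dev/null \
#     -v "$PWD":/work -w /work \
#     node:20-slim sh -c 'npx @anthropic-ai/claude-code'
```

`--env-file /dev/null` plus a slim image gives the tool a shell with no history, no aliases, and no host environment: the capture still runs, it just comes back empty.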
The audit repo includes both approaches with copy-paste configs.
The Full Audit
11 findings, documented with evidence and reproduction steps, at github.com/cepunkt/ccaudit-public:
- Shell snapshot exfiltration mechanism
- Behavioral profiling with AI classification
- Statsig/GrowthBook telemetry infrastructure
- Broken control documentation
- Binary analysis methodology (no leaked source needed)
- Practical mitigations
Why Anthropic
Not drama. Not "Anthropic bad."
Anthropic was unlucky that I use their tool as a daily driver. My stack includes other coding assistants — they'd likely show similar patterns under audit.
This is standard Silicon Valley practice. The user's data is the product. It's normalized across the industry.
Claude Code just happened to be the one I needed to contain for my own use. So it's the one I documented.
If you're evaluating any AI coding tool for sensitive environments, assume the defaults are hostile until proven otherwise. The audit shows what to look for and how to contain it.