dani.wam 🏴‍☠️ ⓦ

Every AI coding agent you use has already read your production secrets.

Not might have. Has.

If you've given any AI coding tool access to your filesystem — Cursor, Claude Code, Copilot, Codex, Windsurf — it has read your .env file. The one with your real Stripe live key. Your production database URL with actual credentials. Your AWS secret key. Your JWT signing secret.

It read them because that's what it does. AI agents scan your project directory to understand your codebase. Every file is context. .env is just another file.

This isn't a bug. It's the feature.

The agent didn't exploit a vulnerability. You gave it file access because without it, the agent is useless. It can't understand your code without reading your project. And your project includes .env.

So your STRIPE_SECRET_KEY=sk_live_4eC39HqLyjWDarjtT1zdp7dc got scooped up as context, packaged into an API request, and sent over the wire to the AI provider's servers. It's in their logs now. Maybe in their caches. You have no idea how long it persists or who has access to it on their end.

And this happens on every prompt. Every time you ask the agent a question, every time it re-indexes your project, every time it builds context for a response — your secrets go over the wire again.

The damage is real and permanent

A leaked sk_live_ Stripe key can process charges on your account. A leaked database URL gives direct access to user data — names, emails, payment info. A leaked AWS key can spin up resources on your bill or access S3 buckets with customer files.

And if you're in crypto — if your .env has private keys, wallet mnemonics, or RPC endpoints with auth tokens — a leak means irreversible loss of funds. No chargebacks. No customer support. Gone.

By the time you notice and rotate the key, the window has been open for weeks. Maybe months. You don't even know when the first read happened.

Everything you think protects you doesn't

".gitignore protects my .env" — .gitignore prevents git commits. AI agents don't read through git. They read from your filesystem. cat .env works regardless of .gitignore.
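You can verify the first point in a few lines of shell — git never enters the picture, so .gitignore never gets a say:

```shell
# Demo: .gitignore has no effect on plain file reads.
cd "$(mktemp -d)"
printf 'STRIPE_SECRET_KEY=sk_live_example\n' > .env
printf '.env\n' > .gitignore

# git will never commit .env, but any process can still read it:
cat .env   # → STRIPE_SECRET_KEY=sk_live_example
```

cat here stands in for any process with filesystem access — an AI agent's indexer included.
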

"I use a secret manager" — Vault, Infisical, and Doppler protect production infrastructure. Locally, your app still needs process.env.STRIPE_KEY to run, so the secret manager pulls the real value down to your machine. Now it's in a .env file on disk. Agent reads it.

"I run the agent in a sandbox" — If the agent can't read your files, it can't help you code. The sandbox kills the productivity gain that's the entire reason you're using AI.

"The AI provider says they don't use my data for training" — Maybe. But your secrets still sit in context windows, in API logs, in cache layers, on infrastructure you don't control. "We don't train on it" doesn't mean "it doesn't exist on our servers."

Nothing in the current toolchain addresses this. Your production servers have layers of security. Your laptop — where you actually write code every day — has none.

Cloak: make the .env file itself the defense

I kept looking for a tool that solves this. Nothing existed. So I built one.

Cloak does something simple: your .env file on disk always contains fake credentials. Not REDACTED — that breaks your code. Structurally valid fakes that look right and work with your linters and parsers:

What agents read from disk:

```
STRIPE_SECRET_KEY=sk_test_cloak_sandbox_000000000000
DATABASE_URL=postgres://dev:dev@localhost:5432/devdb
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
```

The agent reads these, understands your code structure, writes perfectly valid code against them. It doesn't know they're fake. It doesn't need to know.
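The "structurally valid" part is what makes this workable: anything that parses KEY=VALUE lines accepts the fakes without complaint. A quick check in POSIX shell — assuming your loader treats .env as plain assignments, which is how dotenv-style tooling generally works:

```shell
# Write a fake .env like the one above, then load it the dotenv way.
cd "$(mktemp -d)"
cat > .env <<'EOF'
STRIPE_SECRET_KEY=sk_test_cloak_sandbox_000000000000
DATABASE_URL=postgres://dev:dev@localhost:5432/devdb
EOF

# `set -a` exports every assignment made while sourcing the file.
set -a; . ./.env; set +a

echo "$DATABASE_URL"   # → postgres://dev:dev@localhost:5432/devdb
```

The values parse, the URL has a valid shape, and nothing downstream can tell they're decoys until a real network call is made.
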

When YOU open .env in VS Code or Cursor, the Cloak extension decrypts your vault and shows the real values. You edit normally. You save — the extension encrypts to the vault and writes fakes back to disk.

When your app needs to run, cloak run npm start injects real environment variables — gated behind Touch ID on Mac or a password on Linux/Windows. An AI agent can't provide a fingerprint. An AI agent can't type a password into an interactive prompt. That's the boundary.
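The injection pattern itself is ordinary per-process environment scoping. The sketch below fakes the decrypt-after-auth step with a hard-coded value (Cloak's real mechanism will differ), but it shows why the secret never needs to touch the file the agent reads:

```shell
# Hypothetical stand-in for the value cloak would decrypt after Touch ID.
REAL_KEY="sk_live_pretend_real_value"

# Inject only into the child process, like `VAR=... npm start`:
STRIPE_SECRET_KEY="$REAL_KEY" sh -c 'echo "child sees: $STRIPE_SECRET_KEY"'
# → child sees: sk_live_pretend_real_value

# The parent shell (and anything reading .env) never sees the real value:
echo "parent sees: ${STRIPE_SECRET_KEY:-<unset>}"
# → parent sees: <unset>
```

The secret lives only in the memory of the one process that needs it, for exactly as long as that process runs.
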

The recovery question

"What if I lose access to my vault?"

During setup, Cloak shows you a recovery key — CLOAK-8f2a-b9c1-d4e5-f6a7-b8c9-d0e1. Save it in your password manager or write it down on paper. If your system keychain gets wiped, this key restores everything.

No plaintext backup file sits on disk for agents to find. The recovery key lives in your brain or your 1Password. An AI agent can't read either.

What this means for you right now

Your .env file is exposed. Today. Right now. Every AI agent you've used has already read it.

You can fix this in 10 seconds:
```bash
# Install
curl -fsSL https://getcloak.dev/install.sh | sh

# Protect
cloak init
```

Or install the VS Code / Cursor extension — it detects unprotected .env files automatically and walks you through protection with one click.

Open source. MIT licensed. Zero cloud. Zero AI inside the tool. Your secrets never leave your machine.

getcloak.dev

I'm Dani — I've been building gaming and blockchain companies since 2007, focused on agentic-first gaming with crypto rails. I built Cloak because my own .env files had wallet keys and payment credentials that AI agents were reading every day. Find me on X as @dani_wam.
