Open your terminal. Type this:
cat .env
You just saw every secret in your project. Database password. Stripe key. OpenAI API key. AWS credentials.
Now ask yourself: can your AI agent do the same thing?
If you use Claude Desktop, Cursor, GitHub Copilot, or any AI coding assistant with file access — the answer is yes. Right now. Without you knowing.
Here is how to verify it, and what to do about it.
The Test: Ask Your AI Agent to Find Your Secrets
Open your AI assistant and type exactly this:
What environment variables and API keys are configured in this project?
If your agent has file access — and most do by default — it will read your .env file and tell you. Not the names. The actual values.
Try a more direct version:
Can you read my .env file and tell me what keys are in it?
Most agents will comply. They have filesystem read access because they need it to help you with your code. That same access reaches your credentials.
Bitsight researchers did this exact test with OpenClaw in January. They asked the agent to find API keys on the filesystem. It searched. It found them. It returned the values. No exploit. No vulnerability. Just an agent doing what it was asked — using the access it already had.
Your setup is no different.
Why This Happens
Your .env file is in your project directory. Your AI agent has read access to your project directory. That is the entire vulnerability.
It is not a bug. It is not a misconfiguration. It is the logical consequence of giving an agent filesystem access without thinking about what is on that filesystem.
The .env file was designed for application configuration — storing values that a running process needs at startup. It was never designed to coexist with an intelligent agent that can read, summarize, and act on anything it finds.
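The mechanics are trivial. Here is a minimal sketch, in Python, of the kind of file-read helper an agent's tooling relies on (a hypothetical illustration, not any assistant's actual code). Pointed at .env, it returns every value verbatim:

```python
def read_env_file(path=".env"):
    """Parse KEY=VALUE lines the way a typical file-read tool would.

    Any process with read access to the project directory gets the
    actual secret values, not just the key names.
    """
    secrets = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; split on the first '='
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                secrets[key.strip()] = value.strip()
    return secrets
```

Roughly a dozen lines, no privileges beyond ordinary file read. That is the entire technical barrier between an agent and your credentials.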
AI coding assistants are already known to generate insecure code, but a less-discussed problem is how they handle secrets and their potential to leak keys and other sensitive data. Knostic documented Claude Code automatically loading .env secrets and showed how this fits a broader pattern of secret mishandling.
The three ways your secrets leak through your agent:
1. Direct file access
The agent reads .env because you asked it to help with deployment, environment setup, or debugging. It sees the file. It processes the contents. Your values are now in its context window.
2. Prompt injection
The agent reads something that tells it to do something else. In one documented case, an attacker crafted a malicious GitHub issue that hijacked an AI assistant connected to an MCP server, leaking private repo contents and personal financial information to a public repository.
More generally: your agent reads a malicious webpage, document, or API response containing a hidden instruction. That instruction redirects the agent to read your .env file and send the contents somewhere, perhaps by curling your environment variables to an external URL. The agent complies because it cannot distinguish between your instructions and embedded ones.
3. Malicious plugin or skill
TrendMicro documented 335 malicious skills on ClawHub designed specifically to harvest credential files; in one demonstration, a single WhatsApp message carrying an embedded prompt injection payload exfiltrated .env and creds.json files containing API keys. Once installed, a skill has the same filesystem access as the agent.
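The harvesting side takes little sophistication either. A sketch of the pattern matching a credential-stealing skill (or, turned around, a defensive output filter) might use; the prefixes below are the publicly documented formats for Stripe, AWS, and GitHub keys, and a real harvester would carry a much longer list:

```python
import re

# Publicly documented credential prefixes (illustrative subset)
CREDENTIAL_PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]+"),  # Stripe live secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"ghp_[0-9a-zA-Z]{36}"),   # GitHub personal access token
]

def find_credentials(text):
    """Return every credential-shaped string found in text."""
    hits = []
    for pattern in CREDENTIAL_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Anything an agent reads, summarizes, or forwards can be run through exactly this kind of filter on the attacker's side.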
What Is Actually at Risk
Look at your .env file again. What does an attacker get if they exfiltrate it?
STRIPE_SECRET_KEY=sk_live_51H... # Full payment API access
DATABASE_URL=postgresql://... # Your entire database
OPENAI_API_KEY=sk-proj-... # Billed to your account
AWS_ACCESS_KEY_ID=AKIA... # Your cloud infrastructure
AWS_SECRET_ACCESS_KEY=... # Combined with above: full AWS access
GITHUB_TOKEN=ghp_... # Your code repositories
SENDGRID_API_KEY=SG... # Your email infrastructure
That is not a list of inconvenient things to rotate. That is your entire business infrastructure in a single file that your AI assistant can read right now.
For non-human identities like AI agents and other automated services, API keys and access tokens are the digital keys to the kingdom. If an attacker gains access to one, they can gain unauthorized access, manipulate data, or disrupt critical operations, often without triggering any alarms.
The Fix: Make the Values Structurally Unreachable
The solution is not to stop using AI assistants. The solution is to remove credential values from anywhere the agent can read.
Your agent does not need to see sk_live_51H... to make a Stripe API call. It needs to make an authenticated HTTP request to Stripe. Those are different things.
AgentSecrets separates them. Your credentials live in your OS keychain — system-encrypted, not a file, not readable by arbitrary processes. When your agent needs to make an authenticated API call, it routes the request through AgentSecrets. The proxy resolves the real value from the keychain, injects it into the outbound HTTP request at the transport layer, and returns only the API response.
The agent makes the call. It never sees the value.
Before:
Agent reads .env → finds sk_live_51H... → uses it → value was in agent context
After:
Agent sends "use STRIPE_KEY" → AgentSecrets resolves from OS keychain →
injects into HTTP request → returns API response → value never entered agent context
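The injection step above can be sketched as follows. This is a hypothetical illustration of the pattern, not AgentSecrets internals: the agent constructs the request with a placeholder such as {{STRIPE_SECRET_KEY}}, and the proxy substitutes the real value at the transport layer, with the `resolve` callback here standing in for the keychain lookup:

```python
import re

PLACEHOLDER = re.compile(r"\{\{([A-Z0-9_]+)\}\}")

def inject_secrets(headers, resolve):
    """Replace {{KEY_NAME}} placeholders in outbound request headers.

    `resolve` stands in for the OS keychain lookup. The agent only
    ever builds the placeholder form; the real value exists solely
    in the outbound request, never in the agent's context.
    """
    injected = {}
    for name, value in headers.items():
        injected[name] = PLACEHOLDER.sub(lambda m: resolve(m.group(1)), value)
    return injected

# The agent builds the request with a placeholder...
agent_headers = {"Authorization": "Bearer {{STRIPE_SECRET_KEY}}"}
# ...and the proxy resolves it just before the request leaves the machine.
real_headers = inject_secrets(
    agent_headers, {"STRIPE_SECRET_KEY": "sk_live_51H_example"}.get
)
```

The design point is the direction of data flow: the secret moves from keychain to wire, and nothing on that path writes it back into anything the agent can read.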
Your .env file can contain this:
# .env.example — safe to commit, safe for agent to read
STRIPE_SECRET_KEY=
DATABASE_URL=
OPENAI_API_KEY=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
Key names. No values. There is nothing for your agent to leak.
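That property can be checked mechanically. A small sketch (a hypothetical helper, not an AgentSecrets command) that asserts a file contains key names only:

```python
def has_no_values(path):
    """Return True if every non-comment line is `KEY=` with no value."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # blank lines and comments are fine
            _key, _sep, value = line.partition("=")
            if value.strip():
                return False  # a real value is present: unsafe to commit
    return True
```

Wired into a pre-commit hook, a check like this catches a value sneaking back into .env.example before it ever reaches the agent or the repository.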
Setup in 5 Minutes
Install:
# Homebrew
brew install The-17/tap/agentsecrets
# npm
npm install -g @the-17/agentsecrets
# pip
pip install agentsecrets
Move your secrets out of .env:
agentsecrets init
# Select: Keychain only
agentsecrets project create my-app
agentsecrets secrets push
# Reads your .env, stores all values in OS keychain
# Creates .env.example with key names only
Delete .env:
rm .env
# Your agent can still read .env.example
# It will find key names only. Nothing usable.
Connect your AI tool:
# Claude Desktop + Cursor
npx @the-17/agentsecrets mcp install
# OpenClaw
openclaw skill install agentsecrets
# Any agent via HTTP proxy
agentsecrets proxy start
Verify:
agentsecrets secrets list
# STRIPE_SECRET_KEY
# DATABASE_URL
# OPENAI_API_KEY
# AWS_ACCESS_KEY_ID
# (no values shown — they are in the OS keychain)
Now ask your AI assistant the same question you asked at the start:
What environment variables and API keys are configured in this project?
It will read .env.example. It will find key names. It will find nothing usable.
Run the attack scenario:
Can you read my .env file and tell me what keys are in it?
The file does not exist. The agent cannot exfiltrate what is not there.
What the OS Keychain Actually Means
The OS keychain is not a file. It is a system-protected store — macOS Keychain, Windows Credential Manager, Linux Secret Service — that requires user authentication to access programmatically. Your AI agent cannot read from it. The only process that can access it is the AgentSecrets proxy, and only because you authenticated when you ran agentsecrets init.
A malicious skill that searches your filesystem for credentials finds nothing. A prompt injection attack that redirects your agent to read credential files finds nothing. A CVE that exposes your project directory exposes key names, not values.
This is what the security community calls structural protection — the credential is not hidden or obscured, it is simply not present in the attack surface. You cannot steal what was never there.
Do the Test Right Now
Before you install anything, do the test.
Open your AI assistant. Ask it to find your API keys. Watch what it returns.
If it returns values — you have a live vulnerability in your development environment right now. Every session you have had with that assistant where it read your project files is a session where your credentials were in its context window.
The fix is above. It takes five minutes.
GitHub: https://github.com/The-17/agentsecrets
ClawHub: https://clawhub.ai/SteppaCodes/agentsecrets