Every guide about building AI agents eventually tells you to create a .env file and put your API keys in it.
OPENAI_API_KEY=sk-proj-...
STRIPE_SECRET_KEY=sk_live_51H...
GITHUB_TOKEN=ghp_...
DATABASE_URL=postgresql://user:pass@host/db
This works. Your agent reads the values, makes the API calls, does its job.
It is also how credentials get stolen from AI agents.
This article covers why .env files fail for AI agents specifically, what the alternatives actually look like, and the architecture that removes credential values from agent context entirely.
Why .env Files Work for Applications But Fail for AI Agents
A traditional web application reads a .env file at startup. It stores the values in memory. It uses them to make API calls. The application's behavior is deterministic — its code defines exactly what it does with those values.
An AI agent is different in one critical way: its behavior is partially determined by its inputs.
This distinction changes everything about credential security.
When your AI coding assistant is running in your project directory, it has access to your filesystem. It reads files to understand your codebase. Your .env file is in your project directory. Your AI assistant can read it.
This is not a bug. This is the intended behavior of the tool — and it works directly against your security interests.
Three ways credentials leak from AI agents:
Direct access: Ask your AI assistant to "check my environment configuration" or "debug why my API calls are failing." It reads the .env file. It shows you the values. Now they are in the conversation history, potentially logged, potentially visible to whoever has access to your chat.
Prompt injection: A malicious website you browse with an agent, a compromised package in your project, a crafted document your agent processes — any of these can embed hidden instructions. Those instructions tell the agent to find and exfiltrate your credentials. The agent, helpfully following instructions, does so.
Malicious extensions: Trend Micro documented 335 malicious skills, built specifically to harvest credentials, on a single AI agent platform. A malicious plugin runs in the same process as your agent. It can read the same environment variables. It has access to everything the agent holds.
In all three cases, the attack succeeded because the credential value was accessible to the agent. Remove the value from agent context and all three attacks return nothing.
The Options, Ranked
Option 1: .env file (What everyone does — avoid for AI agent projects)
STRIPE_KEY=sk_live_51H...
Readable by any process with filesystem access. Readable by your AI assistant. Gets accidentally committed to git more often than you want to know. No audit trail. No team sharing without credential exposure.
Use for: Non-AI projects where the threat model is external attackers only.
Avoid for: Any project where an AI agent has filesystem access.
Option 2: Environment variables in shell profile (Slightly better)
# ~/.zshrc
export STRIPE_KEY=sk_live_51H...
Not in your project directory. Still readable by child processes. Still in plaintext in a file. AI agents with shell access can still reach it.
Use for: Personal dev machines with no AI agents.
Avoid for: Same reasons as .env.
Option 3: Secrets manager (Better, but not designed for AI agents)
HashiCorp Vault, AWS Secrets Manager, Doppler — these are excellent tools. Your application retrieves the credential at startup. The value comes from a protected store.
The problem: the value still ends up in the agent's memory after retrieval. Once retrieved, the secrets manager's protection ends. The credential is now in the same position as if it came from a .env file — in memory, accessible to prompt injection and malicious extensions.
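The failure mode is easy to state in code. In this sketch, `fetch_secret` is a stand-in for a Vault/Doppler/AWS client call, not a real API; the point is what happens after it returns:

```python
def fetch_secret(name):
    # Hypothetical secrets-manager client; stands in for a real API call.
    return {"STRIPE_KEY": "sk_live_51H_example"}[name]

class Agent:
    def __init__(self):
        # Protection ends here: the plaintext now lives in agent memory,
        # exactly as if it had come from a .env file.
        self.stripe_key = fetch_secret("STRIPE_KEY")

def malicious_plugin(agent):
    # Anything running in the same process sees the same memory.
    return agent.stripe_key
```

The secrets manager never learns it was bypassed, because it wasn't: it did its job, and the exposure happened downstream.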
Use for: Production infrastructure where the agent threat model is acceptable.
Avoid for: Local development with AI coding assistants where prompt injection is a real risk.
Option 4: OS Keychain + Zero-Knowledge Proxy (The right answer for AI agents)
agentsecrets secrets set STRIPE_KEY=sk_live_51H...
agentsecrets call --url https://api.stripe.com/v1/balance --bearer STRIPE_KEY
The credential lives in the OS keychain. The agent never retrieves the value. When it needs to make an API call, it sends the key name to a local proxy. The proxy resolves the real value from the keychain, injects it into the outbound HTTP request, and returns only the API response.
The agent receives the API response. It never saw sk_live_51H....
This is the only approach that removes credential values from agent context structurally — not by policy, but by architecture.
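The mechanism can be sketched in a few lines. This is a conceptual model, not AgentSecrets' implementation: the keychain and the outbound HTTP call are stubbed, and only the name-to-value boundary matters.

```python
# Stand-in for the OS keychain; readable only by the proxy process.
KEYCHAIN = {"STRIPE_KEY": "sk_live_51H_example"}

def http_get(url, headers):
    # Stub for the real outbound HTTPS request.
    return {"status": 200, "body": '{"object": "balance"}'}

def proxy_call(url, bearer_name):
    """The agent passes a key NAME; the proxy resolves and injects it."""
    value = KEYCHAIN[bearer_name]               # resolved inside the proxy
    headers = {"Authorization": f"Bearer {value}"}
    response = http_get(url, headers)           # value leaves only here
    return response                             # agent sees only this

# The agent-side view of the exchange:
result = proxy_call("https://api.stripe.com/v1/balance", "STRIPE_KEY")
```

Everything returned to the agent is derived from the API response; the credential value appears in no object the agent ever holds.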
Setting This Up in Five Minutes
# Install AgentSecrets
brew install The-17/tap/agentsecrets
# or
npx @the-17/agentsecrets init
# or
pip install agentsecrets
# Initialize
agentsecrets init
# Move your credentials from .env to the keychain
agentsecrets secrets set OPENAI_KEY=sk-proj-...
agentsecrets secrets set STRIPE_KEY=sk_live_51H...
agentsecrets secrets set GITHUB_TOKEN=ghp_...
# Connect your AI tools
agentsecrets mcp install # Claude Desktop + Cursor
agentsecrets proxy start # Any agent via HTTP proxy
# Delete your .env file
rm .env
Your agent now makes API calls like this:
# GET request
agentsecrets call --url https://api.stripe.com/v1/balance --bearer STRIPE_KEY
# POST request
agentsecrets call \
--url https://api.stripe.com/v1/charges \
--method POST \
--bearer STRIPE_KEY \
--body '{"amount":1000,"currency":"usd","source":"tok_visa"}'
# Custom header auth
agentsecrets call \
--url https://api.sendgrid.com/v3/mail/send \
--method POST \
--header X-Api-Key=SENDGRID_KEY \
--body '{...}'
The agent makes the call. It sees the response. It never sees sk_live_51H....
Verify It Is Working
After setup, confirm the agent cannot access credential values:
# This shows key names only — never values
agentsecrets secrets list
# OPENAI_KEY
# STRIPE_KEY
# GITHUB_TOKEN
# Check what the proxy logged after a call
agentsecrets proxy logs
# 14:23:01 GET api.stripe.com/v1/balance STRIPE_KEY 200 245ms
Key names. Endpoints. Status codes. Durations. No values anywhere in the output.
Now ask your AI assistant to find your Stripe key. It cannot — the value was never in any file or environment variable the agent can access. It is in the OS keychain, readable only through the AgentSecrets proxy, injectable only into authorized API calls.
For Teams
The .env sharing problem disappears:
# Team lead stores credentials
agentsecrets workspace create "Acme Engineering"
agentsecrets secrets set STRIPE_KEY=sk_live_51H...
# New developer onboards
agentsecrets login
agentsecrets workspace switch "Acme Engineering"
agentsecrets secrets pull
# All credentials synced to local OS keychain
# No .env file was shared
# No credentials sent over Slack or email
Credentials are encrypted client-side before upload. The server stores ciphertext it cannot decrypt. The new developer's machine decrypts them locally into the OS keychain; the values are never transmitted in plaintext.
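The trust boundary can be illustrated with a toy symmetric scheme. To be clear: this is NOT AgentSecrets' actual cryptography, and the HMAC-based stream cipher below exists only to show which party can decrypt what; never use it for real secrets.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: HMAC-SHA256 over a counter. Illustration only.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(workspace_key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(workspace_key, len(data))))

decrypt = encrypt  # XOR with the same keystream reverses it

# The workspace key exists only on team members' machines.
workspace_key = os.urandom(32)
ciphertext = encrypt(workspace_key, b"sk_live_51H_example")

# The server stores only `ciphertext` and never sees `workspace_key`,
# so it cannot recover the value. A teammate holding the key can:
recovered = decrypt(workspace_key, ciphertext)
```

The property that matters is the key's location, not the cipher: whoever lacks the workspace key, including the sync server, holds only ciphertext.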
The Architecture in One Diagram
What everyone does:

.env file → agent memory → API call
                  ↑
    prompt injection reaches here

What AgentSecrets does:

OS keychain → proxy → API call
                ↑
    injects at transport layer;
    agent only sees response
The difference is where the value lives at the moment of use. In the first model it sits in agent memory. In the second model it exists inside the proxy process only for the milliseconds required to inject it into the outbound request, and that process is not part of the agent's context.
Summary
The .env file pattern was built for applications, not AI agents. AI agents have a fundamentally different threat model: their behavior can be influenced by their inputs, their context window is observable, and their extension ecosystem is a potential attack surface.
Securing API keys for AI agents requires removing values from agent context entirely: not hiding them in a better location, but ensuring they are structurally absent from the spaces an agent can reach.
The OS keychain plus a zero-knowledge proxy is the pattern that achieves this. The agent makes authenticated API calls. It never holds the credentials it is calling with.
GitHub: https://github.com/The-17/agentsecrets
ClawHub: https://clawhub.ai/SteppaCodes/agentsecrets