In a previous experiment, I tested 10 prompt injection attacks against CLAUDE.md defenses. One finding stood out: without protection, an attacker can make the AI agent display the contents of .env.
That means: as long as your API keys live in .env, a prompt injection is all it takes to steal them.
So where should you put your keys? Let's test the options.
## Why .env Is No Longer Safe
The old reasons .env was dangerous:
- Forgot to add it to `.gitignore`
- Keys leaked into shell history
- Keys appeared in log output
These all assumed human error. But in the vibe coding era, there's a new threat vector:
### AI Agents Execute Commands
Claude Code and Cursor execute shell commands locally. If a prompt injection succeeds:
```bash
# AI agent executes:
cat .env
# → All keys exposed

printenv | grep API
# → Environment variables readable too
```
The agent isn't malicious. But injected prompts can make it read any file or environment variable on your machine.
"Just Use Keychain" — Does It Actually Work?
macOS Keychain-based tools (like LLM Key Ring) retrieve API keys from the system keychain and inject them into child processes. Great idea for storage security. But look at the runtime architecture:
```
lkr exec -- claude-code
 └→ Retrieves key from Keychain
    └→ Injects as environment variable to child process
       └→ AI agent reads it via os.environ
```
The key ends up as an environment variable at runtime:
```bash
# Prompt injection attack:
printenv | grep API_KEY
# → Still readable
```
| What Keychain protects | Status |
|---|---|
| No `.env` file on disk | ✅ |
| No key in shell history | ✅ |
| Runtime env var readable by agent | ❌ |
If the key enters the process's environment, the AI agent can read it.
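For example, any script the agent runs inherits the parent process's environment, so a few lines are enough (an illustration only; the filter patterns here are arbitrary):

```python
import os

# Everything exported to the agent's process is readable from inside it
exposed = {k: v for k, v in os.environ.items()
           if any(s in k.upper() for s in ("API_KEY", "TOKEN", "SECRET"))}
for name, value in exposed.items():
    print(f"{name}={value}")
```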
## The Solution: Docker Proxy
Change the architecture. Don't give the AI agent the key at all.
```
Host OS (where AI agent runs)
├── API key → doesn't exist
├── .env → doesn't exist
├── Environment → no API keys
│
└── Docker Container (proxy server)
    ├── API key → lives only here
    └── Port 8080: receives requests
        → Injects key → forwards to OpenAI/Anthropic
```
The AI agent only knows http://localhost:8080. It never sees the key value.
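From the agent's side, a call through the proxy carries no credentials at all. A minimal sketch, assuming a proxy like the one below is listening on port 8080 (model name and payload are placeholders):

```python
import httpx

# No API key exists anywhere in this process; the proxy injects it upstream
resp = httpx.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
print(resp.status_code)
```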
## Attack Surface Comparison
| Attack | .env | Keychain (lkr) | Docker Proxy |
|---|---|---|---|
| `cat .env` | ❌ readable | ✅ no file | ✅ no file |
| `printenv` | ❌ readable | ❌ readable | ✅ no key |
| Process memory | ❌ same machine | ❌ same machine | ✅ container isolation |
| `.gitignore` mistake | ❌ committed | ✅ no file | ✅ no file |
Only the Docker proxy blocks all attack patterns.
## Implementation: 80-Line FastAPI Proxy
```python
import os

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

# Keys live only inside the container's environment, never on the host
API_KEYS = {
    "openai": os.environ.get("OPENAI_API_KEY", ""),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY", ""),
}

UPSTREAM = {
    "openai": "https://api.openai.com",
    "anthropic": "https://api.anthropic.com",
}


@app.api_route("/v1/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def proxy_openai(request: Request, path: str):
    return await _proxy(request, "openai", f"/v1/{path}")


@app.api_route("/anthropic/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def proxy_anthropic(request: Request, path: str):
    return await _proxy(request, "anthropic", f"/{path}")


async def _proxy(request: Request, provider: str, path: str):
    body = await request.body()
    # Drop whatever credentials the caller sent; the proxy owns the real key
    headers = {k: v for k, v in request.headers.items()
               if k.lower() not in {"host", "authorization", "x-api-key"}}
    if provider == "openai":
        headers["Authorization"] = f"Bearer {API_KEYS['openai']}"
    else:
        headers["x-api-key"] = API_KEYS["anthropic"]
    # No timeout: LLM responses routinely exceed httpx's 5-second default
    async with httpx.AsyncClient(timeout=None) as client:
        resp = await client.request(
            request.method, f"{UPSTREAM[provider]}{path}",
            headers=headers, content=body)
    # httpx has already decoded the body, so strip headers that no longer match it
    resp_headers = {k: v for k, v in resp.headers.items()
                    if k.lower() not in {"content-encoding", "content-length",
                                         "transfer-encoding"}}
    return StreamingResponse(
        iter([resp.content]),
        status_code=resp.status_code,
        headers=resp_headers)
```
Run with Docker Compose:
```yaml
services:
  api-proxy:
    build: .
    ports: ["8080:8080"]
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```
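The `build: .` line assumes a Dockerfile next to the compose file. A minimal sketch, assuming the proxy code above is saved as `main.py`:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir fastapi uvicorn httpx
COPY main.py .
# Keys arrive only via the container environment (see the compose file above)
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```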
Point your AI agent to http://localhost:8080/v1/chat/completions instead of https://api.openai.com/v1/chat/completions. The key never touches the host environment.
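With the official OpenAI Python SDK, that is just a `base_url` override; the `api_key` argument can be any placeholder, since the proxy strips it and injects the real key (a sketch, assuming the proxy above on port 8080):

```python
from openai import OpenAI

# The key value here is irrelevant: the proxy replaces it before forwarding
client = OpenAI(base_url="http://localhost:8080/v1", api_key="proxy-placeholder")

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(resp.choices[0].message.content)
```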
Note: This simplified proxy buffers the full response before returning it. For streaming API responses (SSE), you'll need an async streaming implementation. The proxy also adds a network hop of latency and becomes a single point of failure — acceptable for local development, but consider health checks for production use.
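If you do need streaming, one possible sketch (reusing `API_KEYS` and `UPSTREAM` from the proxy above) keeps the upstream connection open and relays raw chunks as they arrive, closing the HTTP client only after the caller has consumed the whole stream:

```python
@app.api_route("/v1/{path:path}", methods=["POST"])  # would replace the buffered route
async def proxy_openai_stream(request: Request, path: str):
    body = await request.body()
    headers = {k: v for k, v in request.headers.items()
               if k.lower() not in {"host", "authorization", "x-api-key"}}
    headers["Authorization"] = f"Bearer {API_KEYS['openai']}"

    # The client must outlive this handler; it is closed in the generator's
    # finally block once the last chunk has been forwarded
    client = httpx.AsyncClient(timeout=None)
    upstream = await client.send(
        client.build_request("POST", f"{UPSTREAM['openai']}/v1/{path}",
                             headers=headers, content=body),
        stream=True)

    async def relay():
        try:
            # aiter_raw() forwards bytes exactly as received, so the upstream
            # headers (content-encoding, content-length) stay valid for the caller
            async for chunk in upstream.aiter_raw():
                yield chunk
        finally:
            await upstream.aclose()
            await client.aclose()

    return StreamingResponse(relay(),
                             status_code=upstream.status_code,
                             headers=dict(upstream.headers))
```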
## The Takeaway
- `.env` is readable by any AI agent that can execute shell commands
- Keychain tools protect storage but not runtime — env vars are still exposed
- Docker proxy is the only pattern that keeps keys completely out of the agent's reach
Next time you set up a vibe coding environment, ask yourself: can my AI agent read my API keys right now? If the answer is yes (and it probably is), it's time to add a proxy.
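One quick way to answer that question is to check the same places an injected prompt would (a rough illustration; adjust the patterns to your own key names):

```python
import os
from pathlib import Path

# 1. Environment variables the agent's process would inherit
env_hits = [k for k in os.environ
            if any(s in k.upper() for s in ("API_KEY", "TOKEN", "SECRET"))]

# 2. .env files anywhere in the current project tree
env_files = [str(p) for p in Path(".").rglob(".env")]

if env_hits or env_files:
    print("Agent-readable secrets found:", env_hits, env_files)
else:
    print("Nothing obvious in the environment or on disk.")
```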
For the full defense-in-depth approach to MCP and AI agent security, including OWASP MCP Top 10 analysis and production workarounds:
📖 MCP Security in Practice: What OWASP Won't Tell You About AI Tool Integrations