Kazim Ali

Millions of Developers Are Feeding Their API Keys to AI — Are You One of Them?

You open a project in VS Code. Claude Code is active in the terminal. You click on a line in your .env file — maybe to copy a value, maybe just to check something. You highlight the line:

```
STRIPE_SECRET_KEY=sk_live_4xKj2...
```

That key is now in the AI's context window. It will be sent to Anthropic's servers on the next request. No warning appeared. No prompt asked for confirmation. It just happened.

This is not a theoretical vulnerability. It is the default behaviour of every AI coding assistant with file access — Claude Code, GitHub Copilot, Cursor, Codeium. The tools that make you faster are also the tools that can silently exfiltrate your production credentials.


The Three Attack Surfaces

1. File Reads

AI coding assistants operate inside your project directory. They are designed to read files to understand context — your codebase, your config, your dependencies. The problem is that the same directory where your source code lives is also where your secrets live.

Files that are routinely read and sent to AI APIs:

```
.env
.env.local
.env.production
service_account.json
credentials.json
*.pem
*.key
~/.ssh/id_rsa  (if project is in home directory)
```
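To see which of these files actually exist in a project tree, a quick `find` sweep works (a sketch; extend the patterns to match your stack):

```shell
# List common credential files under the current directory,
# skipping dependency folders. Patterns mirror the list above.
find . \
  \( -name ".env" -o -name ".env.*" \
     -o -name "service_account.json" -o -name "credentials.json" \
     -o -name "*.pem" -o -name "*.key" \) \
  -not -path "./node_modules/*" -print
```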

Claude Code, by default, will read any file you ask it to look at. If you ask it to "fix the database connection issue" and your database URL is in .env, a capable assistant will read .env to understand the connection string. Your credentials leave your machine.

.gitignore does not help here. .gitignore tells git what not to track. It says nothing to your AI assistant.

2. IDE Selection

This is the surface most developers don't think about. When you highlight text in your editor while an AI assistant has focus, that selected text becomes context. This is a feature — it's how you tell the AI "look at this specific code." It's also how you accidentally hand over a secret.

Common scenarios:

  • You're debugging. You highlight an error trace that includes an auth token in the request headers.
  • You're checking a config file. You select the wrong line.
  • You paste a curl command with -H "Authorization: Bearer sk_live_..." into the chat to ask why the request is failing.

There is no mechanical block for this surface. If text is in your editor and you select it, it can reach the AI.

3. Paste and Conversation

Developers routinely paste error messages into AI chat. Error messages are often more revealing than the code itself.

A Django 500 error might include:

```
django.db.utils.OperationalError: could not connect to server
  Connection string: postgres://admin:mypassword123@prod-db.us-east-1.rds.amazonaws.com/orders
```

A failed AWS SDK call might include your region, account ID, and role ARN. A Stripe webhook error might echo back the signing secret. You're asking the AI to help you fix the error — you paste the full trace — the secret is now in the conversation history, which is stored server-side.


Why This Is Different from a Git Leak

When a secret ends up in a git commit, the exposure is to whoever has access to the repository — usually your team, maybe the public if the repo is open. Bad, but bounded.

When a secret ends up in an AI context window, it is transmitted to a third-party API endpoint over HTTPS, processed by a model running on external infrastructure, potentially stored in conversation logs, and potentially used for model training depending on your plan and the provider's data retention policy.

Every major AI provider has a data use policy. Most enterprise plans have stronger protections. Most individual developer plans do not. If you are using a free tier or a standard subscription, assume your prompts are retained.

A git leak is a repository problem. An AI context leak is a network problem. It's the difference between leaving a key under your doormat and mailing a copy to a stranger.


The Fix

Defense operates on two layers: mechanical and instructional. You need both.

Mechanical Layer: Deny Rules in settings.json

Claude Code reads its permissions from .claude/settings.json at the project root. The permissions.deny block tells it which files it is not allowed to read, regardless of what you or the AI asks.

```json
{
  "permissions": {
    "deny": [
      "Read(.env)",
      "Read(.env.*)",
      "Read(service_account.json)",
      "Read(credentials.json)",
      "Read(**/*.pem)",
      "Read(**/*.key)",
      "Read(**/*.p12)",
      "Read(**/*.pfx)",
      "Read(**/secrets/**)"
    ]
  }
}
```

This file should be committed to the repository. It protects everyone on the team, not just you.
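Setting this up in a fresh project can be scripted; a minimal sketch (deny list abbreviated here, extend it as above):

```shell
# Create project-level Claude Code settings with a deny block.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": [
      "Read(.env)",
      "Read(.env.*)",
      "Read(**/*.pem)",
      "Read(**/*.key)"
    ]
  }
}
EOF
```

Run `git add .claude/settings.json` afterwards so the rule ships with the repo.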

.claudeignore at Project Root

Claude Code respects a .claudeignore file using the same syntax as .gitignore. This prevents the AI from indexing or referencing those paths during context building:

```
.env
.env.*
service_account.json
credentials.json
*.pem
*.key
*.p12
secrets/
```

This is your second line of defence. If a deny rule is misconfigured, .claudeignore may catch what slips through.
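You can sanity-check that your patterns actually cover the credential files in a tree. A rough sketch using Python's `fnmatch` (real ignore semantics are richer, e.g. directory rules, so treat this as an approximation; the `secrets/` entry is simplified to `secrets/*`):

```python
from fnmatch import fnmatch
from pathlib import Path

# Patterns from the .claudeignore above (directory rule simplified).
patterns = [".env", ".env.*", "service_account.json",
            "credentials.json", "*.pem", "*.key", "*.p12", "secrets/*"]

def is_ignored(path: str) -> bool:
    # Match against both the bare file name and the relative path.
    name = Path(path).name
    return any(fnmatch(name, p) or fnmatch(path, p) for p in patterns)

for f in [".env", "config/server.pem", "secrets/db.txt", "src/app.py"]:
    print(f, is_ignored(f))
```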

Instructional Layer: Global CLAUDE.md

Create or edit ~/.claude/CLAUDE.md — this file contains global instructions that apply to every Claude Code session, across all projects. Add this:

```markdown
## Security

If you detect an API key, secret, token, password, private key, or credential
in any file you have read or any text that has been shared with you, stop
immediately and warn the user before proceeding. Do not include credential
values in any response, code block, or suggestion. Ask the user to remove
the credential from context and rotate it.

Never read .env, .env.*, service_account.json, credentials.json, *.pem,
or *.key files unless the user has explicitly confirmed they understand
the credential will be transmitted to an external API.
```

This does not replace the deny rules. Instructions can be overridden by future prompts. Mechanical deny rules cannot.


The IDE Selection Gap

The deny rules and .claudeignore file protect against automated file reads. They do nothing when you manually select text and paste it into the chat.

This gap cannot be closed mechanically. No setting stops you from highlighting your .env file and asking "why is this connection string wrong?"

The only defence is awareness:

  • Before pasting an error trace, scan it for tokens, passwords, or connection strings.
  • Before selecting a block of config, check what's in the selection.
  • If you need the AI to help with a connection issue, replace the credential in the error message with a placeholder before pasting.

Treat the AI chat window the same way you treat a public Slack channel. Once it's in there, it's gone.
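A habit that helps: run traces through a small redaction filter before pasting. A sketch, with illustrative regexes that are nowhere near exhaustive:

```python
import re

def redact(text: str) -> str:
    """Replace likely credentials with a placeholder before sharing."""
    # Stripe live keys (illustrative; add patterns for your providers).
    text = re.sub(r"sk_live_[A-Za-z0-9]+", "sk_live_REDACTED", text)
    # Bearer tokens in Authorization headers.
    text = re.sub(r"Bearer\s+[A-Za-z0-9._\-]+", "Bearer REDACTED", text)
    # Passwords embedded in connection-string URLs (user:pass@host).
    text = re.sub(r"(://[^:/\s]+:)[^@\s]+(@)", r"\1REDACTED\2", text)
    return text

trace = "Connection string: postgres://admin:mypassword123@prod-db.example.com/orders"
print(redact(trace))
# → Connection string: postgres://admin:REDACTED@prod-db.example.com/orders
```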


Five Things to Do Right Now

  • Add a permissions.deny block to .claude/settings.json in every project that has secrets. Copy the JSON block above. Commit it.
  • Create a .claudeignore at the project root listing .env, service_account.json, and any other credential files. Commit it.
  • Edit ~/.claude/CLAUDE.md to add the global security instruction block. This applies across all your projects immediately.
  • Rotate any key that has appeared in an AI chat session — including ones you're not sure about. If there's a chance it was in context, treat it as compromised.
  • Check your AI provider's data retention policy for the plan you are on. If you are on a free or individual plan with no data processing agreement, assume prompts are retained. Upgrade or adjust accordingly.

The tools are not going away. They are too useful. But "useful" and "safe by default" are not the same thing — and right now, the defaults are not in your favour. Spend ten minutes on this. The cost of a rotated key is an afternoon. The cost of a leaked production secret is significantly higher.


Want to go further? The fixes above reduce the risk of an AI reading your .env file. The deeper fix is to not have a .env file at all. Tools like direnv can load secrets into your shell session on directory entry, fetching them from a vault or password manager, so no secret value is ever written to disk in your project directory. No file, no attack surface.
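For illustration, a `.envrc` for direnv that pulls values from the `pass` password manager (the entry names are hypothetical; substitute your vault's CLI):

```shell
# .envrc: sourced by direnv on entering the directory (after `direnv allow`).
# Secret values are fetched at load time and never written to project files.
export STRIPE_SECRET_KEY="$(pass show myproject/stripe-secret)"
export DATABASE_URL="$(pass show myproject/database-url)"
```

Run `direnv allow` once after editing; direnv refuses to load an unapproved `.envrc`.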
