DEV Community

Jonathan Fishner

OneCLI vs Manual Key Management: A Security Comparison

Most developers manage API keys for AI agents the same way they manage keys for any other application: environment variables, .env files, config files, or (worse) hardcoded strings. These methods work, but they share a critical flaw when applied to AI agents - the agent process holds the raw credential in memory.

This post breaks down the risks of each approach and shows how OneCLI eliminates entire categories of credential exposure.

The approaches

1. Hardcoded keys

The key is written directly in source code.

```python
from openai import OpenAI

client = OpenAI(api_key="sk-abc123...")
```

Risks:

  • Keys committed to version control (including git history, even after deletion)
  • Keys visible to anyone with repository access
  • Keys in plaintext on disk
  • No rotation without code changes and redeployment
  • Keys in agent process memory

Severity: Critical. This is universally considered the worst practice, yet it still appears in tutorials, prototypes, and production code.
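The repository-access risk is easy to demonstrate: anyone who can read the source tree can find the keys with a one-line pattern match, which is essentially what GitHub's secret scanning automates. A minimal sketch; the regex is an illustrative approximation of OpenAI's `sk-` key format, not a production scanning rule:

```python
import re
from pathlib import Path

# Illustrative pattern for OpenAI-style secret keys; real scanners
# use provider-specific rules and cover git history, not just the worktree.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def scan_for_keys(root: str) -> list[tuple[str, str]]:
    """Return (filename, matched key) pairs for every hit under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for match in KEY_PATTERN.finditer(path.read_text(errors="ignore")):
            hits.append((str(path), match.group()))
    return hits
```

Note that scanning the worktree is the easy half; keys deleted from the code remain recoverable from git history (`git log -p`) until the history itself is rewritten.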

2. Environment variables

Keys are set in the shell environment or container runtime.

```bash
export OPENAI_API_KEY="sk-abc123..."
```

Risks:

  • Visible via /proc/&lt;pid&gt;/environ on Linux
  • Inherited by child processes (including any subprocess the agent spawns)
  • Often logged by orchestration tools (Kubernetes events, Docker inspect, CI/CD logs)
  • In agent process memory once read
  • No scoping - the agent can use the key for any endpoint

Severity: High. Better than hardcoding, but the key is still accessible to the agent and anything it spawns.
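The child-process bullet deserves emphasis, because AI agents spawn subprocesses constantly: shell tool calls, linters, interpreters. A minimal sketch of the leak:

```python
import os
import subprocess
import sys

# Simulate the agent holding a key in its environment.
os.environ["OPENAI_API_KEY"] = "sk-abc123-demo"

# Any child process the agent spawns inherits the environment
# and can read the key without any special privileges.
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OPENAI_API_KEY'])"],
    capture_output=True,
    text=True,
).stdout.strip()

print(leaked)  # the child prints the parent's key: sk-abc123-demo
```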

3. .env files

Keys stored in a .env file, loaded at startup.

```
OPENAI_API_KEY=sk-abc123...
STRIPE_SECRET_KEY=sk_live_...
```

Risks:

  • File on disk in plaintext (or base64, which is not encryption)
  • Accidentally committed to git (.gitignore helps, but mistakes happen - and they are permanent in git history)
  • Readable by any process with filesystem access
  • All keys in one file, so a single file read exposes everything
  • Keys in agent process memory after loading

Severity: High. The .gitignore guard rail fails regularly enough that GitHub has a dedicated secret scanning feature to catch it.
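The "single file read exposes everything" bullet is worth making concrete: no API call, no privilege escalation, just one `open()`. A deliberately naive parser (python-dotenv handles quoting and comments properly) shows how little code it takes:

```python
def read_dotenv(path: str) -> dict[str, str]:
    """Naive .env parse: one file read exposes every credential at once."""
    secrets = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                secrets[key.strip()] = value.strip()
    return secrets
```

Any process with read access to the file, such as a compromised dependency or a misconfigured backup job, gets the full credential set in one shot.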

4. Config files (JSON, YAML, TOML)

Keys embedded in application configuration.

```yaml
credentials:
  openai: "sk-abc123..."
  stripe: "sk_live_..."
```

Risks:

  • Same as .env files: plaintext on disk, risk of git commit, readable by processes
  • Often deployed alongside application code
  • Config files are frequently copied, backed up, or synced to places where secrets should not exist

Severity: High. Functionally equivalent to .env files with slightly different ergonomics.

5. Secret manager SDK (AWS Secrets Manager, GCP Secret Manager, etc.)

Keys fetched at runtime from a cloud secret manager.

```python
import boto3

# get_secret_value returns a dict; the secret itself is under "SecretString"
key = boto3.client("secretsmanager").get_secret_value(SecretId="openai-key")["SecretString"]
```

Risks:

  • Requires code changes to integrate the SDK
  • The agent still receives and holds the raw credential in memory
  • If the agent is compromised, it can call the secret manager to fetch additional secrets (if IAM permissions allow)
  • Cloud-specific, not portable

Severity: Medium. Significantly better for secret storage and rotation, but the agent still possesses the raw key after retrieval.

Risk matrix

| Risk Category | Hardcoded | Env Vars | .env Files | Config Files | Secret Manager SDK | OneCLI |
|---|---|---|---|---|---|---|
| Key in source control | Yes | No | Likely | Likely | No | No |
| Key on disk in plaintext | Yes | No | Yes | Yes | No | No (AES-256-GCM) |
| Key in agent process memory | Yes | Yes | Yes | Yes | Yes | No |
| Key visible to child processes | Yes | Yes | No | No | Yes | No |
| Key extractable via prompt injection | Yes | Yes | Yes | Yes | Yes | No |
| Requires agent code changes | No | No | No | No | Yes | No |
| Credential scoped to specific APIs | No | No | No | No | No | Yes |
| Audit log of credential usage | No | No | No | No | Partial | Yes |
| Rotation without agent restart | No | No | No | No | Possible | Yes |

How OneCLI eliminates these risks

Key never in agent memory. OneCLI operates as a transparent HTTPS proxy. The agent sends requests with a placeholder key. OneCLI intercepts the request, replaces the placeholder with the real credential (decrypted from its AES-256-GCM encrypted store), and forwards the request. The real key exists only in the proxy's memory for the duration of the request.
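OneCLI's internals aren't shown in this post, but the substitution step it describes can be sketched in a few lines. The names below (`VAULT`, `PLACEHOLDER`, `substitute_credential`) are illustrative assumptions, not OneCLI's API; the point is that the real key is resolved inside the proxy and never enters the agent:

```python
# Hypothetical in-memory vault; per the description above, OneCLI's store
# is AES-256-GCM encrypted on disk and decrypted only inside the proxy.
VAULT = {"api.openai.com": "sk-real-key"}
PLACEHOLDER = "Bearer onecli-placeholder"

def substitute_credential(host: str, headers: dict[str, str]) -> dict[str, str]:
    """Swap the agent's placeholder token for the real credential.

    The real key exists only here, for the duration of the request;
    the agent process never sees it.
    """
    if headers.get("Authorization") == PLACEHOLDER and host in VAULT:
        headers = dict(headers)  # never mutate the agent's view of the request
        headers["Authorization"] = f"Bearer {VAULT[host]}"
    return headers
```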

No code changes. The agent uses a standard HTTPS_PROXY environment variable. Any HTTP client in any language respects this. No SDK, no API calls, no wrapper libraries.
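This works because HTTPS_PROXY is a de facto standard honored by virtually every HTTP stack, including Python's standard library, requests, and curl. A quick check using only the stdlib (the proxy address is an assumed example):

```python
import os
import urllib.request

# Point all HTTPS traffic at a local proxy (address is illustrative).
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:4141"

# Standard proxy discovery picks it up -- no SDK, no code change.
proxies = urllib.request.getproxies()
print(proxies["https"])  # http://127.0.0.1:4141
```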

Credential scoping. Each credential in OneCLI is bound to specific host and path patterns. An OpenAI key only works for api.openai.com. A Stripe key only works for api.stripe.com. Even if an attacker gains control of the agent, they cannot use a credential outside its defined scope.
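The scoping rule can be sketched as a host/path check evaluated before any credential substitution happens. The glob-style patterns below are an illustrative format, not OneCLI's actual configuration syntax:

```python
from fnmatch import fnmatch

# Illustrative scope table: each credential is bound to host and path patterns.
SCOPES = {
    "openai-key": {"host": "api.openai.com", "paths": ["/v1/*"]},
    "stripe-key": {"host": "api.stripe.com", "paths": ["/v1/charges*", "/v1/customers*"]},
}

def in_scope(credential: str, host: str, path: str) -> bool:
    """Allow a credential only for its bound host and path patterns."""
    scope = SCOPES.get(credential)
    if scope is None or host != scope["host"]:
        return False
    return any(fnmatch(path, pattern) for pattern in scope["paths"])
```

With a check like this in front of substitution, a stolen agent session cannot redirect a credential to an attacker-controlled host or an unapproved endpoint.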

Audit logging. Every proxied request is logged with the agent identity, destination, timestamp, and status. You know exactly which agent used which credential and when.

Rotation without restarts. Update a credential in the OneCLI vault and it takes effect immediately. No redeployment, no agent restart, no config file changes.

The prompt injection factor

The risk matrix above highlights one row that separates OneCLI from every other approach: "Key extractable via prompt injection."

AI agents are uniquely vulnerable to prompt injection - an attacker manipulates the input to make the LLM execute unintended actions. If the agent holds a raw API key (in an environment variable, in its config, in memory from a secret manager call), a successful prompt injection can instruct the agent to exfiltrate that key.

With OneCLI, there is nothing to exfiltrate. The agent holds a proxy authentication token that is useless outside the proxy. The real credentials never enter the agent's address space.

This is not a theoretical concern. Prompt injection is the most actively researched attack vector against LLM-based applications, and credential theft is one of the highest-impact outcomes.

When manual management is acceptable

OneCLI adds value specifically for AI agent workloads. For traditional applications where the process is fully trusted and not executing LLM-generated actions:

  • Environment variables are fine for development.
  • Cloud secret managers are appropriate for production.
  • Config files should be avoided for secrets in all cases.
  • Hardcoded keys should be avoided in all cases.

The decision point is straightforward: if your process runs untrusted or semi-trusted code (LLM tool calls, plugins, user-influenced execution paths), the process should not hold raw credentials.


Get started with OneCLI at onecli.sh. One Docker container, five minutes to set up.
