Every developer building with AI agents has solved the credentials problem the same way.
You store your API keys somewhere — a .env file, a secrets manager, an environment variable. You provision those credentials to your agent at startup. The agent retrieves them, holds them, uses them to make API calls.
This works. Until it doesn't.
The problem is not where you store the credentials. The problem is the model itself — the agent as consumer. An agent that consumes credentials is an agent that holds credential values. And an agent that holds credential values is an agent that can be made to expose them.
There is a different model. One where the agent never holds values at all, where the agent is not a consumer of credentials but an operator of the entire credential lifecycle.
This article shows you both models side by side. What they look like in practice, why the difference matters, and what it means for the agents you are building or building with.
The Consumer Model: How Everyone Does It Today
In the consumer model, the credential flow looks like this:
Human provisions credentials
↓
Secrets manager stores values
↓
Agent retrieves values at startup
↓
Agent holds values in memory
↓
Agent uses values to make API calls
The agent is a passive recipient. A human sets up the credentials, the agent collects them, the agent uses them.
This is how HashiCorp Vault works with agents. How AWS Secrets Manager works. How Doppler works. How 1Password works. How .env files work. How every approach to AI agent credential management works today — the agent is the consumer at the end of the chain.
Here is what this looks like in practice. Your agent starts up:
# The consumer model
import os
from dotenv import load_dotenv
load_dotenv()
stripe_key = os.getenv("STRIPE_KEY") # sk_live_51H... now in memory
openai_key = os.getenv("OPENAI_KEY") # sk-proj-... now in memory
# Agent now holds both values
# For the entire session
# In plaintext
# In memory
# Accessible to anything that can influence the agent
The agent now holds sk_live_51H... in memory. It will hold it for the entire session. Every prompt injection attack, every malicious plugin, every compromised dependency, every CVE that exposes the agent's memory, all of them have a path to your Stripe key.
This is not a hypothetical. Check Point Research published CVE-2026-21852 in February — API key exfiltration through malicious project configs in AI coding tools. Bitsight asked an AI agent to find API keys on the filesystem. It did. TrendMicro documented 335 malicious OpenClaw skills designed specifically to harvest credentials from agent memory and config files.
Every one of these attacks worked because the agent held the value. The attack surface is the value in agent memory. You cannot patch your way out of that. You can only remove the value from agent memory entirely.
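The failure mode is easy to demonstrate. In the consumer model, any code that runs inside the agent's process can enumerate the loaded secrets. A minimal sketch (the variable names and the fake key value are illustrative):

```python
import os

# Simulate what load_dotenv() does: values land in process-wide state
os.environ["STRIPE_KEY"] = "sk_live_example_value"

def malicious_plugin() -> dict:
    # Any dependency, plugin, or injected tool call sharing the process
    # can read every secret the agent loaded at startup
    return {k: v for k, v in os.environ.items() if k.endswith("_KEY")}

leaked = malicious_plugin()
# The plugin now holds the live key, with no exploit required
assert leaked["STRIPE_KEY"] == "sk_live_example_value"
```

Nothing here is a vulnerability in the usual sense. It is the intended behavior of process-wide state, which is exactly the point: the exposure is structural, not a bug to patch.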
The Operator Model: What AgentSecrets Built
In the operator model, the credential flow looks fundamentally different:
Agent checks its own status
↓
Agent detects credential drift
↓
Agent pulls latest from encrypted cloud
↓
Agent lists available key names
↓
Agent routes API call through proxy
↓
Proxy resolves value from OS keychain
↓
Proxy injects into HTTP request
↓
Agent receives API response only
The agent never holds a value. Not at startup. Not in memory. Not at any step in the workflow.
Here is what this looks like in practice:
# The operator model — an AI agent running this autonomously
agentsecrets status
# Logged in as: developer@company.com
# Workspace: Acme Engineering
# Project: payments-service
# Last pull: 3 hours ago
agentsecrets secrets diff
# OUT OF SYNC: STRIPE_KEY (remote is newer)
# LOCAL ONLY: PAYSTACK_TEST_KEY
agentsecrets secrets pull
# Synced 2 secrets from cloud to OS keychain
agentsecrets secrets list
# STRIPE_KEY
# OPENAI_KEY
# DATABASE_URL
# PAYSTACK_KEY
# (names only — no values at any point)
agentsecrets call \
--url https://api.stripe.com/v1/balance \
--bearer STRIPE_KEY
# {"object":"balance","available":[{"amount":420000,"currency":"usd"}]}
agentsecrets proxy logs --last 5
# 14:23:01 GET api.stripe.com/v1/balance STRIPE_KEY 200 245ms
The agent ran the complete credentials workflow. It checked its own context. It detected that a secret was out of sync. It pulled the latest version. It listed available keys by name. It made an authenticated API call. It audited what it did.
At no point did it see sk_live_51H....
A prompt injection attack redirecting this agent to exfiltrate credentials gets: a key name. A malicious plugin searching memory finds: nothing. A CVE exposing the agent's context reveals: nothing usable.
You cannot steal what was never there.
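The boundary is easy to see in a sketch. Everything the agent emits is a key name; the value only appears on the proxy side, outside the agent's context. This is illustrative code, not the actual AgentSecrets internals, and the dict stands in for the OS keychain the proxy reads from:

```python
# Stand-in for the OS keychain on the proxy side (illustrative)
KEYCHAIN = {"STRIPE_KEY": "sk_live_51H_example"}

def agent_request(url: str, key_name: str) -> dict:
    # The agent's entire contribution: a URL and a key NAME
    return {"url": url, "bearer": key_name}

def proxy_inject(request: dict) -> dict:
    # The proxy resolves the value and injects it at the transport layer;
    # the resulting header never flows back into the agent's context
    value = KEYCHAIN[request["bearer"]]
    return {"url": request["url"],
            "headers": {"Authorization": f"Bearer {value}"}}

outbound = proxy_inject(
    agent_request("https://api.stripe.com/v1/balance", "STRIPE_KEY"))
```

Compromising the agent gets you `agent_request`'s output: a URL and the string "STRIPE_KEY". The injected header exists only in `outbound`, on the far side of the boundary.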
Why This Is Infrastructure, Not a Tool
The distinction between consumer and operator is not just about security. It is about what role the agent plays in your system architecture.
A consumer tool handles one moment in the credentials lifecycle — the moment of use. You still manage everything else: provisioning, rotation, sharing, auditing, team access.
Infrastructure handles the entire lifecycle — and makes the agent a participant in managing it.
AgentSecrets is infrastructure because the agent can:
Manage its own context:
agentsecrets workspace switch production
agentsecrets project use payments-service
agentsecrets status
Detect and resolve drift without human intervention:
agentsecrets secrets diff # detects what's out of sync
agentsecrets secrets pull # resolves it
agentsecrets secrets push # syncs local changes to cloud
Handle team credential workflows:
agentsecrets workspace invite newdev@company.com
agentsecrets workspace list
agentsecrets project list
Audit its own actions:
agentsecrets proxy logs --secret STRIPE_KEY
agentsecrets proxy logs --last 50
Operate across environments:
agentsecrets workspace switch staging
agentsecrets secrets pull
# run deployment
agentsecrets workspace switch production
agentsecrets secrets pull
# run deployment
agentsecrets proxy logs # audit both
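An agent driving these commands programmatically only needs to parse the diff output shown earlier. A minimal sketch, assuming the agentsecrets CLI is on PATH and emits the output format shown above (the wrapper function names are my own):

```python
import subprocess

def needs_pull(diff_output: str) -> bool:
    # True if `agentsecrets secrets diff` reported anything out of sync
    # (assumes lines formatted like "OUT OF SYNC: STRIPE_KEY (remote is newer)")
    return any(line.startswith("OUT OF SYNC")
               for line in diff_output.splitlines())

def reconcile() -> None:
    # Detect drift and resolve it without human intervention
    diff = subprocess.run(
        ["agentsecrets", "secrets", "diff"],
        capture_output=True, text=True,
    ).stdout
    if needs_pull(diff):
        subprocess.run(["agentsecrets", "secrets", "pull"], check=True)
```

Note that the loop operates entirely on key names and sync status; there is no value for the wrapper to mishandle.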
This is what infrastructure looks like. The agent is not waiting for a human to provision credentials before it can work. It manages the provisioning itself. It detects when something is wrong and fixes it. It operates across workspaces and projects. It generates its own audit trail.
No human touches the credential workflow unless something is genuinely broken.
The Zero-Knowledge Guarantee
What makes the operator model secure is not policy. It is architecture.
In the consumer model, zero-knowledge is a promise. The vault promises not to log your credentials. The secrets manager promises not to expose them. These promises can be broken by bugs, breaches, misconfiguration.
In the operator model, zero-knowledge is structural. The agent cannot see credential values because the system is not designed to show them. There is no code path that puts sk_live_51H... into the agent's context. The log struct has no value field. The secrets list command returns names. The diff command shows sync status. The pull command writes to the OS keychain, not to agent memory.
AgentSecrets encryption stack:
- Key exchange: X25519 (NaCl SealedBox)
- Secret encryption: AES-256-GCM
- Key derivation: Argon2id
- Key storage: OS keychain — macOS Keychain, Windows Credential Manager, Linux Secret Service
- Cloud server: Stores encrypted blobs only — structurally cannot decrypt
- Proxy injection: Happens at transport layer, outside agent context
- Audit log: No value field in the struct — impossible to accidentally log a value
The zero-knowledge guarantee holds at every step because every step was designed around it. Not because of a policy that says "don't log values." Because the architecture makes logging values impossible.
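The audit-log point is worth making concrete. If the record type has no field for a value, no code path can log one; the guarantee lives in the type, not in discipline. A sketch (the field names are illustrative, not the actual AgentSecrets struct):

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class AuditEntry:
    # Everything needed to audit a call -- and no slot for a secret value,
    # so "accidentally logging a value" has nowhere to put one
    timestamp: str
    method: str
    endpoint: str
    key_name: str      # the NAME used, never the value
    status: int
    latency_ms: int

entry = AuditEntry("14:23:01", "GET", "api.stripe.com/v1/balance",
                   "STRIPE_KEY", 200, 245)
field_names = {f.name for f in fields(AuditEntry)}
```

A policy says "don't log values" and hopes every contributor complies. A struct like this makes compliance the only representable state.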
What This Means for Your Team
The operator model changes how teams work with AI agents.
In the consumer model, credential sharing is a human problem. Someone has to send the .env file. Someone has to add the new developer to the secrets manager. Someone has to make sure staging and production have the right keys. Someone has to audit who used what.
In the operator model, the agent handles it:
# New developer joins the team
agentsecrets login
agentsecrets workspace switch "Acme Engineering"
agentsecrets project use payments-service
agentsecrets secrets pull
# Ready. Agent has everything it needs.
# No .env file was shared. No Slack message with credentials.
# No one typed a secret into a chat window.
The credentials were encrypted client-side before upload. The server stores ciphertext it cannot decrypt. The new developer's agent pulled the secrets to their local OS keychain. The entire onboarding happened without any credential value being transmitted in plaintext anywhere.
This is what zero-knowledge team credential sharing looks like. Not a promise that the server won't look at your secrets. A guarantee that the server cannot.
The Comparison
| | Consumer Model | Operator Model (AgentSecrets) |
|---|---|---|
| Agent role | Receives credentials | Manages credentials |
| Value in agent memory | Yes — at startup | Never |
| Prompt injection risk | High — value is retrievable | None — value was never there |
| Malicious plugin risk | High | None |
| Drift detection | Manual | Agent runs secrets diff autonomously |
| Team onboarding | Human provisions credentials | Agent pulls from encrypted workspace |
| Audit trail | Depends on the tool | Built-in, key names only, no value field |
| Zero-knowledge | Promise | Architecture |
| Setup time | Hours to days | 5 minutes |
Getting Started
# Install
brew install The-17/tap/agentsecrets
# or
npx @the-17/agentsecrets init
# or
pip install agentsecrets
# Initialize
agentsecrets init
# Create a project
agentsecrets project create my-app
# Store credentials — values go to OS keychain
agentsecrets secrets set STRIPE_KEY=sk_live_51H...
agentsecrets secrets set OPENAI_KEY=sk-proj-...
# Connect your AI tool
agentsecrets mcp install # Claude Desktop + Cursor
openclaw skill install agentsecrets # OpenClaw
agentsecrets proxy start # Any agent via HTTP proxy
From this point your agent operates the credential workflow. It checks status. It detects drift. It pulls. It calls. It audits. It never sees a value.
The Category
What AgentSecrets is building is a new category: zero-knowledge secrets infrastructure for AI agents.
Not a secrets manager — those were built for humans to provision credentials to applications. Not a vault — those protect at rest but not in context. Not a proxy — that is one layer of what this is.
Infrastructure. The layer that sits between your AI agents and every API they call, that manages the complete credential lifecycle, that makes the agent an operator instead of a consumer, that maintains zero-knowledge guarantees from keychain to transport layer to audit log.
Every other secrets tool was built before AI agents existed. They are retrofitting a human-centric model onto a system where the agent is doing the work.
AgentSecrets was built for the agentic era. The agent operates it. The agent never sees it.
GitHub: https://github.com/The-17/agentsecrets
ClawHub: https://clawhub.ai/SteppaCodes/agentsecrets
Comments
This is a distinction I haven't seen anyone else articulate clearly — consumer vs operator. Most agent security discussions stop at "use a secrets manager" and call it done.
The part that resonates is the attack surface analysis. An agent that holds credential values is one prompt injection away from leaking them, and no amount of guardrails on the LLM side fully eliminates that risk. Moving the agent to an operator role where it triggers credential operations without seeing values is a fundamentally different security posture.
Curious about the latency tradeoff in practice. In my experience, every layer of indirection in the auth flow adds response time that compounds when agents are making dozens of API calls per task. Have you found that to be a real issue, or is it negligible?
In our testing the proxy adds 1 to 3ms per call for local keychain resolution. The keychain lookup is fast because it's a local OS operation, not a network round trip.
The HTTP forwarding adds roughly what you'd expect from an extra localhost hop, which on most machines is sub-millisecond.
For an agent making 50 API calls per task that's an overhead of maybe 100 to 200ms total across the entire task. Against the actual API response times, Stripe averaging 200 to 400ms, OpenAI 500ms to several seconds, the proxy overhead is noise.
Where it would compound is if you're running the proxy remotely rather than locally, or if you're making thousands of calls per session. For the local deployment model AgentSecrets is built around, the tradeoff is negligible in practice.
We're watching this closely as we build out the persistent proxy mode; keeping the proxy process warm eliminates the startup overhead on repeated calls.
Good instinct to probe this, though. Latency is exactly where zero-knowledge guarantees tend to fall apart at scale.