When AgentSecrets showed up on Hacker News, the discussion centered on one thing: the proxy. A local HTTP server that injects credentials at the transport layer so AI agents never see API key values. People found it interesting. Some compared it to similar tools. The conversation stayed narrow.
That framing missed most of what AgentSecrets actually is.
The proxy is one feature. AgentSecrets is a complete credential infrastructure — zero-knowledge cloud sync, team workspaces, encrypted secret sharing, multi-environment support, MCP integration, audit logging, and a CLI that works across every language and agent framework. It is the first secrets manager built from the ground up for the reality that AI agents are now part of your team.
This article is the full picture.
Why Existing Tools Do Not Solve the Agent Problem
Before getting into what AgentSecrets does, it is worth being precise about what existing tools do not do, because the gap is architectural, not cosmetic.
HashiCorp Vault, AWS Secrets Manager, Doppler, 1Password — these are all vault-style tools. The model is: secrets live in a secure store, your application retrieves them at runtime, your application uses them. That model made sense when "your application" was code you wrote and trusted.
AI agents break this model in one specific way: agents can be manipulated.
When your application code retrieves a Stripe key from Vault, the key enters your application's memory. That is fine because your application cannot be prompt-injected. It cannot be redirected by a malicious instruction in a webpage it reads. It cannot be tricked by a rogue plugin. It does exactly what you programmed it to do.
An AI agent can be all of these things. It processes external inputs constantly — webpages, documents, API responses, emails, plugin outputs. Any of those inputs can contain a malicious instruction that redirects the agent's behavior. And if the agent holds credential values when that happens, the attacker has your credentials.
The threat model for AI agents is fundamentally different. The security solution needs to be fundamentally different too.
Vault-style tools protect credentials at rest. AgentSecrets protects credentials from an agent that should never have held them at all.
The Full Infrastructure
Layer 1: Zero-Knowledge Cloud Sync
AgentSecrets is not a local-only tool. Your credentials are encrypted client-side using X25519 key exchange and AES-256-GCM before they ever leave your machine. The encrypted blobs are stored in the cloud. Your encryption key lives in your OS keychain — macOS Keychain, Linux Secret Service, Windows Credential Manager.
The server stores ciphertext it cannot decrypt. Not "we promise we do not look": the server structurally cannot decrypt, by design. That is what zero-knowledge means.
agentsecrets init
agentsecrets secrets set STRIPE_KEY=sk_live_51H...
agentsecrets secrets push
Your key is encrypted on your machine, uploaded as ciphertext, and retrievable only by someone with your keychain key. That is the foundation everything else builds on.
Layer 2: Team Workspaces
This is the feature that makes AgentSecrets a team infrastructure.
A workspace is a shared environment. You create it, invite teammates, and they get access to the projects inside it. Credentials are shared through the zero-knowledge encrypted layer, nobody shares values directly, nobody emails .env files, nobody pastes keys into Slack.
# Create your team workspace
agentsecrets workspace create "Acme Engineering"
# Invite everyone
agentsecrets workspace invite alice@acme.com
agentsecrets workspace invite bob@acme.com
agentsecrets workspace invite carol@acme.com
# New developer onboards
agentsecrets login
agentsecrets workspace switch "Acme Engineering"
agentsecrets project use payments-service
agentsecrets secrets pull
# Ready to work. Nobody shared a single value.
No "can someone send me the production keys" Slack message. No .env.example that is actually a .env with real values that someone forgot to scrub. No spreadsheet of credentials living in someone's personal Google Drive. The new developer authenticates, joins the workspace, and pulls secrets.
And every AI agent that any teammate runs gets access to the credentials it needs through the proxy, without ever seeing the values.
Layer 3: Projects
Within a workspace, projects partition credentials by service, environment, or whatever makes sense for your architecture.
agentsecrets project create payments-service
agentsecrets project create auth-service
agentsecrets project create data-pipeline
agentsecrets project create infra
Each project has its own secret set. Your payments agent has access to STRIPE_KEY and PAYSTACK_KEY. Your auth service has JWT_SECRET and OAUTH_CLIENT_SECRET. They do not bleed into each other. The agent working on your data pipeline cannot accidentally reach your payment credentials because it is in a different project.
This is the principle of least privilege applied to AI agents, working at the team level rather than just the individual level.
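As a toy illustration (hypothetical data and helper names, not the real AgentSecrets storage model), project partitioning amounts to namespacing: a lookup scoped to one project simply has no path to another project's keys.

```python
# Toy illustration of project-scoped secrets. The data and the
# `resolve` helper are hypothetical; only the scoping idea is real.
PROJECTS = {
    "payments-service": {"STRIPE_KEY": "sk_live_example", "PAYSTACK_KEY": "psk_example"},
    "auth-service": {"JWT_SECRET": "jwt_example", "OAUTH_CLIENT_SECRET": "oauth_example"},
    "data-pipeline": {"WAREHOUSE_DSN": "dsn_example"},
}

def resolve(project: str, name: str) -> str:
    # A lookup only searches the active project's namespace, so an agent
    # scoped to data-pipeline structurally cannot reach payments credentials.
    return PROJECTS[project][name]
```

The cross-project failure mode is a plain `KeyError`, not a policy decision made at call time: the credential is simply not reachable from that scope.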
Layer 4: The Credential Proxy
AgentSecrets runs a local HTTP proxy on localhost:8765. When an agent needs to make an authenticated API call, it routes the request through the proxy with injection headers. The proxy resolves the real credential from the OS keychain, injects it into the outbound HTTP request at the transport layer, and returns only the response.
Agent: "call Stripe with STRIPE_KEY"
Proxy: resolves sk_live_51H... from OS keychain
injects Authorization: Bearer sk_live_51H...
forwards to api.stripe.com
returns {"balance": {...}}
Agent: sees the balance
never saw sk_live_51H...
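The flow above can be sketched in a few lines. This is an illustrative model of transport-layer injection, not the actual proxy code: the `KEYCHAIN` dict and the `send` stub stand in for the OS keychain and the real HTTP client.

```python
# Conceptual sketch of transport-layer injection (illustrative only).
KEYCHAIN = {"STRIPE_KEY": "sk_live_51H_example"}  # stands in for the OS keychain

def send(url: str, headers: dict) -> dict:
    # Stub for the real outbound HTTPS call.
    return {"status": 200, "body": '{"balance": {}}'}

def proxied_call(url: str, bearer_name: str) -> dict:
    value = KEYCHAIN[bearer_name]                    # resolved inside the proxy only
    headers = {"Authorization": f"Bearer {value}"}   # injected at the transport layer
    return send(url, headers)                        # only the response crosses back

result = proxied_call("https://api.stripe.com/v1/balance", "STRIPE_KEY")
```

The caller holds `result`, never `value`: the credential exists only inside `proxied_call`'s frame.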
Six authentication styles cover the API patterns you are likely to encounter:
# Bearer (Stripe, OpenAI, GitHub, most modern APIs)
agentsecrets call --url https://api.stripe.com/v1/balance --bearer STRIPE_KEY
# Custom header (SendGrid, Twilio, API Gateway)
agentsecrets call --url https://api.sendgrid.com/v3/mail/send --header X-Api-Key=SENDGRID_KEY
# Query parameter (Google Maps, weather APIs)
agentsecrets call --url "https://maps.googleapis.com/maps/api/geocode/json" --query key=GMAP_KEY
# Basic auth (Jira, legacy REST APIs)
agentsecrets call --url https://jira.example.com/rest/api/2/issue --basic JIRA_CREDS
# JSON body injection
agentsecrets call --url https://api.example.com/auth --body-field client_secret=SECRET
# Form field injection
agentsecrets call --url https://oauth.example.com/token --form-field api_key=KEY
Security built into the proxy layer:
- Session token — generated at startup, required on every request, blocks rogue processes on the same machine from using the proxy
- Zero-trust domain allowlist — every outbound request must target a domain explicitly authorized in your workspace allowlist. Unauthorized domains are blocked before credential resolution happens. A prompt injection attack that tries to redirect an authenticated call to an attacker-controlled server hits a wall.
- Response body redaction — if an external API echoes back the injected credential in its response body, the proxy automatically replaces the value with [REDACTED_BY_AGENTSECRETS] before the response reaches the agent. This closes the credential echo exfiltration attack path.
- SSRF protection — blocks requests to private IP ranges and non-HTTPS targets
- Redirect stripping — auth headers are not forwarded on HTTP redirects
- Uniform errors — identical error responses whether a secret exists or not, preventing enumeration

Layer 5: Environment Variable Injection
For tools that manage their own HTTP calls and need credentials as environment variables — the Stripe CLI, Node servers, Django apps — the env command wraps the process and injects credentials from the OS keychain at spawn time:
agentsecrets env -- stripe mcp
agentsecrets env -- node server.js
agentsecrets env -- npm run dev
agentsecrets env -- python manage.py runserver
The values exist only in child process memory for the duration of the process. Nothing is written to disk. When the process exits, they are gone.
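A minimal sketch of the same idea, assuming nothing beyond the Python standard library: the secret is merged into the child's environment at spawn time, lives only in the child's memory, and never touches disk.

```python
import os
import subprocess
import sys

# Sketch of spawn-time environment injection (illustrative, not the
# agentsecrets implementation): the secret exists only in the child's env.
def run_with_secrets(cmd: list, secrets: dict) -> subprocess.CompletedProcess:
    env = {**os.environ, **secrets}  # a child-only copy; the parent env is untouched
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

result = run_with_secrets(
    [sys.executable, "-c", "import os; print(os.environ['STRIPE_KEY'])"],
    {"STRIPE_KEY": "sk_test_injected"},
)
print(result.stdout.strip())  # the child saw the value; the parent env never held it
```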
Layer 6: MCP Integration
AgentSecrets ships as a first-class MCP server. One command auto-configures Claude Desktop and Cursor:
npx @the-17/agentsecrets mcp install
Your claude_desktop_config.json goes from this:
{
"mcpServers": {
"stripe": {
"env": { "STRIPE_SECRET_KEY": "sk_live_51H..." }
}
}
}
To this:
{
"mcpServers": {
"agentsecrets": {
"command": "/usr/local/bin/agentsecrets",
"args": ["mcp", "serve"]
}
}
}
No credential values in any config file. Nothing to steal from your project directory. Nothing for CVE-2026-21852 — the Check Point vulnerability that harvests keys from AI tool config files — to find.
Layer 7: The Workflow File
agentsecrets init creates .agent/workflows/api-call.md — a workflow instruction file that teaches any AI assistant how to use AgentSecrets automatically. Not just Claude. Not just Cursor. Any AI tool that supports workflow or instruction files picks it up and knows how to route authenticated calls through AgentSecrets without you having to explain it.
Layer 8: Audit Logging
Every authenticated call is logged locally in JSONL format. Key names only. The log struct has no value field — it is structurally impossible to accidentally log a credential value.
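A sketch of what a value-free entry looks like. The field names here are assumptions mirroring the columns in the log output; the point being illustrated is that the struct has no field that could hold a credential value.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative log entry (field names assumed): there is simply no
# field in which a credential value could land.
@dataclass
class LogEntry:
    time: str
    method: str
    target: str
    secret: str       # the key NAME, e.g. "STRIPE_KEY", never the value
    status: int
    duration_ms: int

entry = LogEntry("01:15:00", "GET", "https://api.stripe.com/v1/balance",
                 "STRIPE_KEY", 200, 245)
line = json.dumps(asdict(entry))  # one JSONL line
```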
agentsecrets proxy logs
agentsecrets proxy logs --secret STRIPE_KEY
agentsecrets proxy logs --last 50
Time Method Target Secret Status Duration
01:15:00 GET https://api.stripe.com/v1/balance STRIPE_KEY 200 245ms
01:31:45 POST https://api.stripe.com/v1/charges STRIPE_KEY 200 412ms
02:14:03 POST https://api.openai.com/v1/chat/... OPENAI_KEY 200 1203ms
If a malicious skill tries to use your Stripe key against an unexpected endpoint, the log shows it. If your agent hits a rate limit, the log tells you why. If something fails at 2am, the log tells you exactly what your agent was doing.
Layer 9: Secrets Sync and Diff
agentsecrets secrets push # Upload local .env to cloud (encrypted)
agentsecrets secrets pull # Download cloud secrets to .env
agentsecrets secrets diff # See what's out of sync between local and cloud
The diff command is particularly useful for teams — it shows exactly which keys exist locally but not in the cloud, which exist in the cloud but not locally, and which values have diverged. No more "why is staging broken" debugging sessions that turn out to be a stale local secret.
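The three-way classification a diff needs can be sketched with set operations (a hypothetical helper, not the agentsecrets internals):

```python
# Sketch of the three-way secret diff: local-only, cloud-only, diverged.
def diff_secrets(local: dict, cloud: dict) -> dict:
    return {
        "local_only": sorted(local.keys() - cloud.keys()),
        "cloud_only": sorted(cloud.keys() - local.keys()),
        "diverged": sorted(k for k in local.keys() & cloud.keys()
                           if local[k] != cloud[k]),
    }

d = diff_secrets(
    {"STRIPE_KEY": "sk_1", "OPENAI_KEY": "sk_a"},
    {"STRIPE_KEY": "sk_2", "SENDGRID_KEY": "sg_1"},
)
```

In a zero-knowledge design, divergence on the cloud side would be detected against ciphertext or digests rather than plaintext values; the comparison above is the plaintext-local simplification.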
Layer 10: Agent Identity
In a single-agent setup, the audit log is enough. In a multi-agent setup — billing tool, research agent, publishing workflow, new integration being tested — the critical question is which agent made a specific call. Without named identities, the log cannot answer that.
AgentSecrets supports three identity levels. Anonymous calls continue to work as before. Declared identity adds a name with one line:
client = AgentSecrets(agent_id="billing-tool")
Issued identity uses a signed token the proxy verifies cryptographically on every call:
agentsecrets agent token issue "billing-tool"
# → agt_ws01hxyz_4kR9mNpQ...
client = AgentSecrets(agent_token="agt_ws01hxyz_4kR9mNpQ...")
Every log entry now carries the agent name, the identity level, and for issued identity, the specific token that authenticated the call. You can filter the log by agent, find anonymous calls across the fleet, and revoke individual tokens without touching anything else.
agentsecrets log list --agent billing-tool
agentsecrets log list --identity anonymous # find the gaps in coverage
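A toy sketch of issue-and-verify for a signed token. The real agt_ token format and signing scheme are not documented here; this only shows the shape of the guarantee: a token the proxy can verify locally on every call.

```python
import hashlib
import hmac
import secrets as pysecrets

# Toy signed-token scheme (format and algorithm are assumptions).
SIGNING_KEY = pysecrets.token_bytes(32)  # held by the proxy, never by agents

def issue(agent_id: str) -> str:
    nonce = pysecrets.token_hex(8)
    payload = f"{agent_id}.{nonce}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"agt_{payload}.{sig}"

def verify(token: str):
    # Returns the agent id if the signature checks out, else None.
    payload, sig = token[len("agt_"):].rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload.split(".")[0] if hmac.compare_digest(sig, expected) else None

token = issue("billing-tool")
```

Revocation in such a scheme is a server-side denylist keyed by token, which is why one token can be pulled without touching anything else.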
Layer 11: The Governance Log
The standard audit log records what happened. The governance log records what happened and what the system was permitted to do at the exact moment it happened.
Every log entry now captures the domain allowlist as it existed at the time of the call, not as it exists today. If the allowlist changes after an incident, every historical entry still reflects the exact policy state at time of execution. If something needs to be investigated six months from now, the data is there.
agentsecrets log summary --since 7d # aggregate statistics across the workspace
agentsecrets log export --format csv --since 30d # export for compliance review
agentsecrets log detail <id> # full record including allowlist snapshot
The full entry includes agent identity, identity level, credential reference, target URL, status, and the allowlist snapshot — everything needed to reconstruct what an agent did and what it was authorized to do when it did it.
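The snapshot semantics can be sketched directly (illustrative code, not the governance log implementation): each entry stores its own copy of the allowlist, so later policy edits cannot rewrite history.

```python
import copy

# Sketch of point-in-time policy capture: the entry freezes a copy of
# the allowlist as it existed when the call was made.
allowlist = ["api.stripe.com", "api.openai.com"]
log = []

def record_call(target: str, secret_name: str) -> None:
    log.append({
        "target": target,
        "secret": secret_name,
        "allowlist_snapshot": copy.deepcopy(allowlist),  # frozen at call time
    })

record_call("https://api.stripe.com/v1/balance", "STRIPE_KEY")
allowlist.remove("api.openai.com")  # policy tightened after the fact
```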
Who AgentSecrets Is For
The solo developer using Claude Desktop or Cursor
You have API keys in your MCP config files. Those files are in your project directory. Your AI assistant reads your project directory. You are one malicious repository away from credential exfiltration — Check Point proved this with CVE-2026-21852.
Five minutes with AgentSecrets removes every credential from every config file. Your AI assistant has full API access. You have an audit trail. Nothing is exposed.
The team building AI-powered products
Your team is building agents that make authenticated API calls. Right now those credentials live in .env files on individual developer machines, shared via Slack or email, out of sync across environments, readable by every AI tool everyone on the team runs.
AgentSecrets gives you a shared encrypted credential store, workspace-based access control, and a proxy that ensures no agent on any developer machine ever holds a credential value. New developers onboard without credential sharing. Offboarding removes access without credential rotation.
The security-conscious developer who has read the breach reports
You saw the ClawHavoc campaign. You read the Cisco proof-of-concept. You know CVE-2026-26326 is not the last OpenClaw vulnerability and that every future one will also try to harvest credentials.
You understand that the fix is not patching each exploit. It is making credentials structurally unavailable to the agent layer. AgentSecrets is that fix.
Building on AgentSecrets
The layers above describe what AgentSecrets does for you directly. The Python SDK is what lets you embed those guarantees into anything you build.
A LangChain tool built on the SDK makes authenticated API calls without credential values entering the agent's context. An MCP server built on the SDK ships with zero-knowledge credential management before the first tool is written. Every user who installs what you build inherits the guarantee without knowing AgentSecrets exists.
from agentsecrets import AgentSecrets

client = AgentSecrets()
response = client.call(
    "https://api.stripe.com/v1/balance",
    bearer="STRIPE_KEY"
)
The Zero-Knowledge MCP template at github.com/The-17/zero-knowledge-mcp is a working MCP server built on the SDK with GitHub tools already implemented. Clone it, replace the tools with your own, and publish. The credential infrastructure is in place before you write a line of business logic.
The Honest Comparison
| | AgentSecrets | HashiCorp Vault | AWS Secrets Manager | Doppler |
|---|---|---|---|---|
| Agent never sees values | ✅ Proxy injects | ❌ Agent retrieves | ❌ Agent retrieves | ❌ Agent retrieves |
| Prompt injection protection | ✅ Structural | ❌ | ❌ | ❌ |
| Zero-knowledge server | ✅ | ❌ | ❌ | ❌ |
| Team workspaces | ✅ Built-in | ⚠️ Complex | ⚠️ IAM roles | ✅ |
| OS keychain storage | ✅ | ❌ | ❌ | ❌ |
| MCP / AI native | ✅ First-class | ❌ | ❌ | ❌ |
| Setup time | ⚡ 1 minute | ⏱️ Hours | ⏱️ 30+ min | ⏱️ 10 min |
| Secret rotation | ❌ Coming soon | ✅ | ✅ | ✅ |
| Enterprise SSO | ❌ Coming soon | ✅ | ✅ | ✅ |
| Free | ✅ | ✅ OSS | ⚠️ AWS costs | ⚠️ Limited |
The gaps are real. AgentSecrets does not have secret rotation yet. It does not have enterprise SSO. For production server-side workloads at scale, Vault or AWS Secrets Manager remain the right answer.
But for the specific problem of AI agents that can be prompt-injected, manipulated by malicious plugins, or compromised through config file CVEs — AgentSecrets is the only tool that solves it architecturally. Traditional vaults protect credentials at rest. Once an agent retrieves a credential to use it, that credential is in the agent's memory and vulnerable. AgentSecrets never gives the agent the credential. That is a different security model entirely.
The Security Model in Full
Encryption: X25519 (key exchange) + AES-256-GCM (symmetric) + Argon2id (KDF)
Key storage: OS keychain — system-encrypted, requires user auth to access
Cloud: Zero-knowledge — server stores ciphertext only, structurally cannot decrypt
Proxy: Session token + SSRF protection + redirect stripping + uniform errors
Audit: JSONL local log — key names only, no value field in struct
Three layers of protection against the specific threat model of AI agents:
- Credentials never written to disk as plaintext — OS keychain only
- Credentials never given to the agent — proxy injects at transport layer
- Every credential use logged — key names, endpoints, status, duration
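The proxy-side checks described above can be sketched as simplified, illustrative logic (not the production implementation): scheme enforcement, private-IP blocking, allowlist matching, and response redaction.

```python
import ipaddress
from urllib.parse import urlparse

# Simplified sketch of proxy-side request checks and response redaction.
ALLOWLIST = {"api.stripe.com"}

def check_target(url: str) -> str:
    parts = urlparse(url)
    if parts.scheme != "https":
        return "blocked: non-HTTPS"
    host = parts.hostname or ""
    try:
        if ipaddress.ip_address(host).is_private:
            return "blocked: private IP"        # SSRF protection
    except ValueError:
        pass                                     # a hostname, not a literal IP
    if host not in ALLOWLIST:
        return "blocked: not in allowlist"       # zero-trust domain allowlist
    return "ok"

def redact(body: str, secret_value: str) -> str:
    # Response body redaction: scrub any echoed credential before
    # the response reaches the agent.
    return body.replace(secret_value, "[REDACTED_BY_AGENTSECRETS]")
```

Note that blocking happens before credential resolution, so a rejected request never causes a keychain read at all.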
Getting Started
# Install
npx @the-17/agentsecrets # npx (no install required)
brew install The-17/tap/agentsecrets # Homebrew (macOS)
pip install agentsecrets-cli # pip (Python)
go install github.com/The-17/agentsecrets/cmd/agentsecrets@latest # Go
# Setup
agentsecrets init
agentsecrets project create my-app
agentsecrets secrets set STRIPE_KEY=sk_live_...
agentsecrets secrets set OPENAI_KEY=sk-proj-...
# Connect your AI tool
npx @the-17/agentsecrets mcp install # Claude Desktop + Cursor
agentsecrets proxy start # Any agent via HTTP
openclaw skill install agentsecrets # OpenClaw
# Check your status
agentsecrets status
The HN discussion framed AgentSecrets as a proxy. The proxy is one layer of eleven. The complete infrastructure covers credential encryption, team workspaces, project partitioning, transport-layer injection, environment variable injection, MCP integration, workflow files, audit logging, secrets sync and diff, agent identity, and governance logs — all of it in one binary, under a minute to set up.
Your agent makes the call. It never holds the key.
The full architecture is at agentsecrets.theseventeen.co. The engineering deep dive is in the Building AgentSecrets series. The repository is at github.com/The-17/agentsecrets. The OpenClaw skill is at clawhub.ai/SteppaCodes/agentsecrets.
Top comments (2)
The proxy-as-platform model has untapped potential. There's so much that can be done at this layer that most people are trying to prompt around. The popular use case for the proxy is attaching agents to models they were not developed for (pointing Claude Code at OpenAI, violating the AUP). Using the proxy as deterministic middleware is genius. This is a PERFECT use of this layer; can't wait to test it out. I moved my keys into the keychain and Ansible, but they're still accessible to the agent if it scratches the surface of things, and recognizing this is eye-opening.
Exactly. And you've already done the hard part by moving keys to the keychain. The gap is that the agent still has a path to them through environment variables or config files that read from the keychain at startup. AgentSecrets closes that gap by making the agent never be the one resolving the value at all. The proxy resolves, injects into the outbound HTTP request, and returns only the response. The agent stays blind to the value the entire time.
The deterministic middleware framing is spot on. There's a lot of non-determinism in what agents do but credential injection at the transport layer is one place where you want zero ambiguity about what happened and what was exposed. The audit log is built around that — key names, endpoints, status codes, no values anywhere in the struct.
Would love your feedback once you test it, you'll hit edge cases I haven't thought of given your setup with Ansible.