DEV Community

Cover image for OneCLI vs HashiCorp Vault: Why AI Agents Need a Different Approach
Jonathan Fishner
Jonathan Fishner Subscriber

Posted on

OneCLI vs HashiCorp Vault: Why AI Agents Need a Different Approach

OneCLI vs HashiCorp Vault: why AI agents need a different approach

HashiCorp Vault is one of the most respected tools in infrastructure security. It handles secrets rotation, dynamic credentials, encryption as a service, and access policies at massive scale. If you are running a traditional microservices architecture, Vault is a proven choice.

But AI agents are not traditional microservices. They introduce a fundamentally different trust model, and that changes the requirements for credential management.

This post explains why OneCLI exists alongside Vault - not as a replacement, but as a purpose-built layer for the specific problem of giving AI agents access to external services without exposing raw secrets.

The core problem with AI agents

When you deploy an AI agent (whether it is a LangChain pipeline, an AutoGPT instance, or a custom orchestration layer), you typically need it to call external APIs: OpenAI, Stripe, GitHub, Slack, databases, internal services. The standard approach is to pass API keys through environment variables or config files.

This creates a problem. The agent process has direct access to the raw credential. If the agent is compromised through prompt injection, a malicious plugin, or a supply chain attack on one of its dependencies, the attacker can exfiltrate every key the agent has access to.

Vault does not solve this by itself. Vault is a secret store - it hands the secret to the requesting process, and from that point the process holds the raw credential in memory. The threat model assumes the requesting process is trusted. AI agents, by their nature, run untrusted or semi-trusted code (LLM-generated tool calls, third-party plugins, user-provided prompts that influence execution).

How OneCLI takes a different approach

OneCLI never hands the raw credential to the agent. Instead, it acts as a transparent HTTPS proxy:

  1. The agent makes a normal HTTP request with a placeholder key.
  2. The request routes through OneCLI (via standard HTTPS_PROXY environment variable).
  3. OneCLI authenticates the agent using a Proxy-Authorization header (a scoped, low-privilege token).
  4. OneCLI matches the request's host and path to a stored credential.
  5. The real credential is decrypted from the vault (AES-256-GCM), injected into the request header, and the request is forwarded to the destination.

The agent never sees the real key. It is never in the agent's memory, never in its logs, never extractable through prompt injection.

Feature comparison

Capability HashiCorp Vault OneCLI
Primary purpose General secret management AI agent credential injection
Agent code changes required Yes - must integrate Vault SDK or API No - uses standard HTTPS_PROXY
Credential exposure to agent Yes - agent receives raw secret No - proxy injects at request time
Credential scoping Policy-based (path ACLs) Host/path pattern matching per credential
Dynamic secrets Yes (databases, cloud IAM, PKI) No (static credential injection)
Secret rotation Built-in Update in vault, agents unaffected
Encryption at rest Shamir/auto-unseal AES-256-GCM
Setup complexity High (cluster, unseal, policies, auth backends) Low (Docker Compose (gateway + PostgreSQL))
Self-hosted Yes Yes
Open source Yes (BSL since 1.14+) Yes (Apache 2.0)
Audit logging Yes Yes (all proxied requests)
Infrastructure overhead Consul/Raft cluster, HA setup Docker Compose (gateway + PostgreSQL)
Learning curve Steep (HCL policies, auth methods, secret engines) Minimal (add credentials, set proxy env var)
Language/framework support SDKs for major languages Any language (HTTP proxy is universal)
Enterprise features Namespaces, Sentinel, replication Cloud dashboard, team management
Price Free (OSS) / Paid (Enterprise) Free (OSS) / Paid (Cloud)

Where Vault excels

Vault is the better choice when you need:

  • Dynamic database credentials that are created on demand and automatically revoked.
  • PKI certificate issuance for service mesh or internal TLS.
  • Encryption as a service (transit secret engine) for application-level encryption without managing keys in app code.
  • Multi-datacenter secret replication across large infrastructure.
  • Compliance frameworks that specifically require Vault's audit and policy model.

These are capabilities OneCLI does not attempt to replicate. Vault is a general-purpose secret management platform; OneCLI is a focused tool for a specific use case.

Where OneCLI excels

OneCLI is the better choice when you need:

  • Zero-code credential management for AI agents. No SDK integration, no Vault API calls. Set an environment variable and the agent works.
  • Credential isolation from untrusted processes. The agent never holds the raw secret, which matters when the process runs LLM-generated code.
  • Fast setup for developer and small-team environments. Docker Compose with gateway and PostgreSQL, ready in minutes.
  • Host/path scoped credentials. Each credential is locked to specific API endpoints, so even if an agent's proxy token is compromised, it can only reach the services you have explicitly allowed.

Using Vault and OneCLI together

The strongest architecture for security-conscious teams combines both:

  1. Vault stores and rotates your master credentials, issues dynamic secrets, and manages your PKI.
  2. OneCLI pulls credentials from Vault (via planned integrations) and acts as the injection proxy for AI agents.

This gives you Vault's secret lifecycle management without exposing raw credentials to agent processes. Vault handles the "store and rotate" layer. OneCLI handles the "inject without exposing" layer.

This integration is on the OneCLI roadmap. Today, you can manually sync credentials from Vault into OneCLI's encrypted store. Native Vault backend support will allow OneCLI to fetch credentials directly from Vault at request time.

When to use what

Use Vault alone if you have no AI agents and need enterprise secret management for traditional services.

Use OneCLI alone if you are a small team running AI agents and want the simplest path to keeping credentials out of agent memory.

Use both together if you are running AI agents at scale and want Vault's secret lifecycle management combined with OneCLI's agent-specific credential isolation.

Summary

Vault and OneCLI solve different problems with some overlap. Vault is about storing and managing secrets across your infrastructure. OneCLI is about ensuring AI agents can use credentials without ever possessing them. The proxy-based injection model is what makes the difference - it is not a pattern Vault was designed for, and retrofitting it onto Vault would mean building most of what OneCLI already provides.

If you are giving API keys to AI agents today, the question is not whether to replace Vault. It is whether your agents should hold raw credentials at all.


Learn more at onecli.sh or read the docs.

Top comments (0)