Self-Hosting OneCLI in 5 Minutes with Docker
OneCLI is an open-source credential vault and gateway for AI agents. It sits between your agents and the APIs they call, injecting real credentials at the proxy layer so agents never see your secrets.
This guide walks you through running OneCLI locally with Docker, adding your first credential, connecting an agent, and verifying it works end to end.
Prerequisites
You'll need:
- Docker installed and running (Docker Desktop on macOS/Windows, or Docker Engine on Linux)
- An API key you want to protect (we'll use an OpenAI key in this example, but any API key works)
- An agent or script that makes API calls (we'll provide a test script)
That's it. No Kubernetes, no external database, no cloud account.
Step 1: Start OneCLI
Clone the repo and start OneCLI with Docker Compose:
git clone https://github.com/onecli/onecli.git
cd onecli
docker compose -f docker/docker-compose.yml up
This starts two services:
- Port 10254: The web dashboard (where you manage credentials)
- Port 10255: The gateway (where agents send requests)
Docker Compose handles PostgreSQL, the gateway, and the dashboard together. Data is persisted via Docker volumes.
Verify it's running by opening http://localhost:10254 in your browser.
Step 2: Initial setup
Open your browser and navigate to http://localhost:10254. You'll see the OneCLI dashboard setup screen.
Create an admin account. This is local to your OneCLI instance - no external authentication service is involved. Your credentials are stored in the PostgreSQL database inside the container.
After logging in, you'll see the main dashboard with three sections:
- Credentials: Your encrypted API keys and secrets
- Agents: Agent identities and their access permissions
- Logs: Audit trail of all proxied requests
Step 3: Create an agent identity
Before adding credentials, create an agent identity. This is how OneCLI authenticates and authorizes agents that connect to the proxy.
- Go to Agents in the dashboard
- Click Create Agent
- Give it a name (e.g., "my-test-agent")
- Copy the generated agent token - you'll need it for the proxy test in Step 6
The agent token is used in the Proxy-Authorization header. Different agents can have different tokens and different access to credentials.
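If you're curious what that header looks like on the wire, here's a sketch in Python. It assumes the Basic scheme with the agent token as the username and an empty password (matching the curl example in Step 6); the token value below is made up:

```python
import base64

def proxy_auth_header(agent_token: str) -> str:
    # Basic scheme: base64-encode "token:" (token as username, empty password)
    encoded = base64.b64encode(f"{agent_token}:".encode()).decode()
    return f"Basic {encoded}"

# Hypothetical token for illustration only
print(proxy_auth_header("oc_abc123"))
```

Send the resulting value in the Proxy-Authorization header of every request that goes through the gateway.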
Step 4: Add your first credential
Now, store an API key in OneCLI's encrypted vault.
- Go to Credentials in the dashboard
- Click Add Credential
- Fill in the fields:
- Name: "OpenAI API Key" (for your reference)
- Host pattern: api.openai.com
- Path pattern: /v1/*
- Header name: Authorization
- Header value format: Bearer {secret} (OneCLI will inject your key where {secret} appears)
- Secret: Paste your actual OpenAI API key (e.g., sk-proj-abc123...)
- Under Access, grant access to the agent you created in Step 3
- Click Save
The credential is now encrypted with AES-256-GCM and stored in the vault. The plaintext key exists only in the dashboard's memory during this form submission - once saved, it's encrypted.
The host and path patterns tell OneCLI when to inject this credential. Any request to api.openai.com/v1/* from the authorized agent will get the real API key injected into the Authorization header.
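Conceptually, the matching and injection step works something like this sketch. This is illustrative Python, not OneCLI's actual code - the function and field names here are invented:

```python
from fnmatch import fnmatch

def inject_credential(request_headers: dict, host: str, path: str,
                      cred: dict) -> dict:
    """Return headers with the real secret injected when host/path match."""
    if host == cred["host_pattern"] and fnmatch(path, cred["path_pattern"]):
        headers = dict(request_headers)
        # Substitute the stored secret into the configured header format
        headers[cred["header_name"]] = cred["header_format"].replace(
            "{secret}", cred["secret"])
        return headers
    # No match: the request passes through untouched
    return request_headers

# Illustrative credential mirroring the Step 4 configuration
cred = {
    "host_pattern": "api.openai.com",
    "path_pattern": "/v1/*",
    "header_name": "Authorization",
    "header_format": "Bearer {secret}",
    "secret": "sk-proj-real-key",
}
out = inject_credential({"Authorization": "Bearer placeholder"},
                        "api.openai.com", "/v1/models", cred)
print(out["Authorization"])
```

Requests to other hosts, or from agents without access to the credential, pass through with their placeholder intact.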
Step 5: Install the CA certificate
OneCLI works by intercepting HTTPS traffic, which requires the agent to trust OneCLI's local CA certificate.
Download the CA cert from the dashboard:
- Go to Settings in the dashboard
- Click Download CA Certificate
- Save the file (e.g., onecli-ca.pem)
For a quick test, you can pass this cert directly to your HTTP client. For production use, you'd install it in the system trust store or the agent's container trust store.
Step 6: Test with curl
Before connecting a real agent, verify the proxy works with a simple curl command:
curl -x http://localhost:10255 \
--proxy-header "Proxy-Authorization: Basic $(echo -n 'YOUR_AGENT_TOKEN:' | base64)" \
--cacert onecli-ca.pem \
-H "Authorization: Bearer placeholder" \
https://api.openai.com/v1/models
Let's break this down:
- -x http://localhost:10255 - route through OneCLI's proxy
- --proxy-header "Proxy-Authorization: ..." - authenticate as the agent you created, using the agent token from Step 3
- --cacert onecli-ca.pem - trust OneCLI's CA certificate
- -H "Authorization: Bearer placeholder" - the placeholder key (OneCLI replaces this)
- https://api.openai.com/v1/models - a simple OpenAI endpoint that lists available models
If everything is configured correctly, you'll get back a JSON response listing OpenAI models. The placeholder value was replaced with your real API key by OneCLI before the request reached OpenAI.
Check the Logs section in the dashboard - you should see the request logged with the agent identity, target host, and timestamp.
Step 7: Connect a real agent
Now connect an actual AI agent. The setup is the same regardless of the framework - set environment variables and the agent's HTTP calls will route through OneCLI.
Python agent (LangChain, custom, etc.)
export HTTPS_PROXY=http://localhost:10255
export SSL_CERT_FILE=/path/to/onecli-ca.pem      # for httpx-based clients such as the OpenAI SDK
export REQUESTS_CA_BUNDLE=/path/to/onecli-ca.pem # for requests-based clients
export OPENAI_API_KEY=placeholder
from openai import OpenAI
# The client uses HTTPS_PROXY automatically
client = OpenAI(api_key="placeholder")
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(response.choices[0].message.content)
The OpenAI SDK respects the HTTPS_PROXY environment variable. The api_key="placeholder" is replaced by OneCLI before the request hits OpenAI's servers.
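You can sanity-check that the proxy setting is visible to Python without making any network calls. urllib.request.getproxies() reads the same environment variables that requests and httpx-based clients consult:

```python
import os
import urllib.request

os.environ["HTTPS_PROXY"] = "http://localhost:10255"

# getproxies() discovers proxy settings from the environment
proxies = urllib.request.getproxies()
print(proxies["https"])
```

If this prints your OneCLI proxy URL, any well-behaved HTTP client launched from the same environment will route through it.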
Docker-based agent
If your agent runs in Docker, pass the proxy config at container startup:
docker run -d \
--name my-agent \
-e HTTPS_PROXY=http://host.docker.internal:10255 \
-e SSL_CERT_FILE=/etc/ssl/certs/onecli-ca.pem \
-e OPENAI_API_KEY=placeholder \
-v /path/to/onecli-ca.pem:/etc/ssl/certs/onecli-ca.pem \
my-agent-image
Note: host.docker.internal resolves to the host machine from inside a Docker container on macOS and Windows. On Linux, add --add-host=host.docker.internal:host-gateway to the docker run command, or use --network host or the host's IP address.
n8n
In n8n, set the proxy in the environment:
docker run -d \
--name n8n \
-e HTTPS_PROXY=http://host.docker.internal:10255 \
-e NODE_EXTRA_CA_CERTS=/etc/ssl/certs/onecli-ca.pem \
-v /path/to/onecli-ca.pem:/etc/ssl/certs/onecli-ca.pem \
n8nio/n8n
Then configure your n8n credentials with placeholder values. OneCLI handles the rest.
Verifying it works
After your agent makes a few API calls, check the following:
Dashboard Logs: Each proxied request should appear with the agent name, target host, path, and timestamp. This is your audit trail.
Agent logs: If you inspect the agent's own logs or environment, you should only see "placeholder" - never the real API key.
Credential injection: The API calls succeed, which means OneCLI is correctly injecting the real credentials. If you see authentication errors, double-check the host pattern, path pattern, and header format in your credential configuration.
Tips for production deployment
The Docker quickstart above is fine for development and testing. For production, consider these adjustments:
Customize Docker Compose
The quickstart already uses docker/docker-compose.yml, which runs the gateway, dashboard, and PostgreSQL together. For production, customize the compose file with your own SECRET_ENCRYPTION_KEY and DATABASE_URL.
Set an encryption key
By default, OneCLI auto-generates an encryption key. For production, set it explicitly via the SECRET_ENCRYPTION_KEY environment variable. This allows you to manage the key separately (e.g., in your cloud provider's KMS).
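One way to generate a strong random value is with openssl. This assumes OneCLI accepts an arbitrary base64 string as the key - check the docs for the exact format it expects:

```shell
# Generate 32 random bytes, base64-encoded (44 characters)
openssl rand -base64 32
```

Pass the output via the SECRET_ENCRYPTION_KEY environment variable in your compose file or secrets manager.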
Restrict dashboard access
The web dashboard should not be exposed to the internet. Bind it to localhost by changing the port mapping in docker-compose.yml to 127.0.0.1:10254:10254, or put it behind a VPN.
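In docker-compose.yml, the localhost binding looks something like this (the service name here is a guess - match it to the one in OneCLI's shipped compose file):

```yaml
services:
  dashboard:
    ports:
      - "127.0.0.1:10254:10254" # dashboard reachable only from the host itself
```

With this mapping, the dashboard port is not reachable from other machines on the network.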
Use an external database
The default Docker Compose setup includes PostgreSQL. For production workloads, point DATABASE_URL to your own managed PostgreSQL instance for proper backups, replication, and monitoring.
Enable TLS for the proxy port
If agents connect to OneCLI over a network (not localhost), enable TLS on the proxy port itself to protect the Proxy-Authorization header in transit. See the docs for TLS configuration options.
Next steps
You now have OneCLI running locally with a credential secured and an agent connected. From here you can:
- Add more credentials for Stripe, GitHub, AWS, or any API that uses header-based authentication
- Create more agents, each with its own identity and scoped access to specific credentials
- Try the cloud version at app.onecli.sh if you'd rather not self-host
For detailed configuration options, API reference, and advanced deployment patterns, visit the documentation.
OneCLI is an open-source credential vault and gateway for AI agents. Give your agents access, not your secrets.