The Feature That Doesn't Exist
I'm building something that no secrets manager has.
A proxy layer that lets AI make authenticated API calls without seeing credentials.
Let me explain how and why.
The Current State (Broken)
Scenario 1: AI Needs to Call an API
You: "Use Stripe to process a payment"
ChatGPT: "I'll need your Stripe secret key to do that"
You: "sk_live_abc123xyz789..."
ChatGPT: makes API call
ChatGPT: now has your key in chat logs
Problems:
- Key in chat logs
- Key potentially in training data
- Key in OpenAI's systems
- Insecure
Scenario 2: You Try to Be Secure
You: "Use Stripe to process a payment"
ChatGPT: "I'll need your Stripe secret key"
You: "I'm not comfortable sharing that"
ChatGPT: "I understand. You'll need to make the API call manually then."
Problems:
- Can't use AI for automation
- Back to manual workflows
- Defeats the purpose of AI assistance
The Gap
Secrets managers help you store credentials.
They don't help you USE credentials with AI. That's the problem I'm solving.
The Solution: Proxy Layer
How It Works
1. AI Makes Request with Variable Reference
curl -X POST https://api.stripe.com/v1/customers \
-H "Authorization: Bearer $STRIPE_SECRET_KEY" \
-d "email=customer@example.com"
2. Proxy Intercepts
func (p *Proxy) handleRequest(req *http.Request) (*http.Response, error) {
	// Parse Authorization header
	auth := req.Header.Get("Authorization")

	// Check for a variable reference
	if strings.Contains(auth, "$") {
		// Extract variable name, e.g. "STRIPE_SECRET_KEY"
		varName := extractVariable(auth)

		// Fetch actual secret (encrypted, from keyring)
		secret, err := p.secrets.Get(varName)
		if err != nil {
			return nil, err
		}

		// Inject it into the request
		req.Header.Set("Authorization",
			strings.Replace(auth, "$"+varName, secret, 1))
	}

	// Forward the modified request and return the response to the caller
	return p.client.Do(req)
}
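The extractVariable helper isn't shown above; here's a minimal sketch of one plausible shape. I'm assuming placeholders are all-caps names prefixed with $, and returning the name without the leading $ so the "$"+varName replacement works:

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholderRe matches an all-caps $NAME reference inside a header value.
var placeholderRe = regexp.MustCompile(`\$([A-Z][A-Z0-9_]*)`)

// extractVariable returns the variable name without the leading "$",
// or "" if no placeholder is present.
func extractVariable(headerValue string) string {
	m := placeholderRe.FindStringSubmatch(headerValue)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(extractVariable("Bearer $STRIPE_SECRET_KEY")) // STRIPE_SECRET_KEY
}
```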
3. API Receives Authenticated Request
The external API receives a normal, authenticated request.
4. Response Returns to AI
AI gets the response but never sees the secret.
Use Cases
1. AI-Powered Deployments
You: "Deploy to Railway"
Claude:
1. Reads deployment config
2. POST https://backboard.railway.app/graphql
Authorization: Bearer $RAILWAY_TOKEN
3. Proxy injects actual token
4. Deployment succeeds
Claude: "✓ Deployed to production"
2. Payment Processing
You: "Charge customer $50"
ChatGPT:
1. POST https://api.stripe.com/v1/charges
Authorization: Bearer $STRIPE_SECRET_KEY
amount=5000&currency=usd&customer=cus_abc
2. Proxy injects secret key
3. Charge succeeds
ChatGPT: "✓ Charged $50 to customer"
3. Infrastructure Automation
You: "Scale to 5 instances"
Copilot suggests:
aws autoscaling set-desired-capacity \
--auto-scaling-group-name my-asg \
--desired-capacity 5 \
--region us-east-1
With credentials from $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY
Proxy injects credentials
Command succeeds
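The AWS case substitutes two credentials at once, so the substitution has to generalize beyond a single Authorization header. A tiny sketch of expanding every placeholder in a string (expandPlaceholders and the lookup callback are my names for illustration, not the project's API):

```go
package main

import (
	"fmt"
	"regexp"
)

// expandPlaceholders replaces every $NAME reference in s using the
// lookup function, leaving unknown names untouched.
func expandPlaceholders(s string, lookup func(string) (string, bool)) string {
	re := regexp.MustCompile(`\$([A-Z][A-Z0-9_]*)`)
	return re.ReplaceAllStringFunc(s, func(m string) string {
		if v, ok := lookup(m[1:]); ok { // m[1:] strips the leading "$"
			return v
		}
		return m
	})
}

func main() {
	// Demo values only; real secrets would come from the encrypted store.
	vault := map[string]string{
		"AWS_ACCESS_KEY_ID":     "AKIA_DEMO",
		"AWS_SECRET_ACCESS_KEY": "SECRET_DEMO",
	}
	lookup := func(name string) (string, bool) {
		v, ok := vault[name]
		return v, ok
	}

	cmd := "env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws autoscaling set-desired-capacity --desired-capacity 5"
	fmt.Println(expandPlaceholders(cmd, lookup))
}
```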
https://github.com/The-17/agentsecrets
Questions?
What APIs would you want AI to call securely?
What concerns do you have about this approach?
Drop a comment 👇