Three weeks ago I got tired of pasting API keys into .env files every time I spun up a new AI agent. GitHub, Linear, Stripe, Notion, Slack, Vercel — each agent ended up with god-mode credentials to half my stack, with no approval flow, no audit trail, no revocation story.
I went looking for a tool to fix this. Secrets managers like Vault and 1Password store secrets well but don't model agents, approvals, or agent-initiated tool requests. So I built one. It's called AgentKey, and it launched today. Here's the interesting stuff under the hood.
The model: zero-access by default
Every agent starts with zero access. To use a tool, it starts with a plain HTTP call:
GET /api/tools
Authorization: Bearer {agent_key}
The response lists the catalog and this agent's access status per tool. If it needs something it doesn't have:
POST /api/tools/{tool_id}/request
{ "reason": "Need to open PRs on behalf of the user" }
A human approves once in a dashboard. From that moment, the agent can fetch the credential on demand:
GET /api/tools/{tool_id}/credentials
— and only at the moment of fetch. The agent never stores the credential; it's vended fresh each call, rate-limited, and logged.
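The agent-side loop can be sketched like this — note the response shape and field names (`status`, etc.) are my assumptions, not necessarily AgentKey's actual API:

```typescript
// Assumed shape of the GET /api/tools response; real field names may differ.
type CatalogEntry = { id: string; name: string; status: "granted" | "pending" | "none" };

// Pure helper: which catalog entries still need an access request?
export function toolsToRequest(catalog: CatalogEntry[]): string[] {
  return catalog.filter((t) => t.status === "none").map((t) => t.id);
}

// Sketch of the agent-side loop: list tools, request whatever is missing,
// then wait for a human to approve in the dashboard.
export async function ensureAccess(baseUrl: string, agentKey: string, reason: string) {
  const headers = { Authorization: `Bearer ${agentKey}` };
  const res = await fetch(`${baseUrl}/api/tools`, { headers });
  const catalog: CatalogEntry[] = await res.json();
  for (const id of toolsToRequest(catalog)) {
    await fetch(`${baseUrl}/api/tools/${id}/request`, {
      method: "POST",
      headers: { ...headers, "Content-Type": "application/json" },
      body: JSON.stringify({ reason }),
    });
  }
}
```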
Encryption: AES-256-GCM with per-record IV
Naive encryption schemes reuse the IV — and reusing an IV under the same key breaks GCM's confidentiality and authentication guarantees. AgentKey generates a fresh 12-byte IV for every secret using crypto.randomBytes(12), prepends it and the 16-byte GCM auth tag to the ciphertext, and stores the result base64url-encoded:
// app/src/lib/crypto.ts
import crypto from "node:crypto";

// (Signature assumed — the post shows only the function body.)
export function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  const tag = cipher.getAuthTag();
  // iv + tag + ciphertext, using Buffer's built-in base64url encoding
  return Buffer.concat([iv, tag, ciphertext]).toString("base64url");
}
Decryption pulls the IV and tag off the front, calls setAuthTag() before final(), and fails cleanly on tamper. Every secret in the database has its own IV — compromise of one doesn't weaken the others.
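A decryption counterpart, sketched to match the layout above (helper names are mine; the encrypt side is repeated so the sketch is self-contained):

```typescript
import crypto from "node:crypto";

// Encrypt: fresh 12-byte IV per secret, stored as iv + tag + ciphertext.
export function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64url");
}

// Decrypt: split off the 12-byte IV and 16-byte tag, and set the tag
// before final() so tampering throws instead of returning garbage.
export function decryptSecret(encoded: string, key: Buffer): string {
  const blob = Buffer.from(encoded, "base64url");
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, blob.subarray(0, 12));
  decipher.setAuthTag(blob.subarray(12, 28));
  return Buffer.concat([
    decipher.update(blob.subarray(28)),
    decipher.final(),
  ]).toString("utf8");
}
```

Flipping a single ciphertext byte makes final() throw, which is exactly the "fails cleanly on tamper" behavior described above.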
Timing-safe API key verification
Agent API keys are SHA-256 hashed at rest. Verification uses crypto.timingSafeEqual() to prevent timing-based key recovery:
import crypto from "node:crypto";

export function hashAgentApiKey(key: string): string {
  return crypto.createHash("sha256").update(key).digest("hex");
}

export function verifyAgentApiKey(provided: string, stored: string): boolean {
  const providedHash = hashAgentApiKey(provided);
  return crypto.timingSafeEqual(
    Buffer.from(providedHash, "hex"),
    Buffer.from(stored, "hex"),
  );
}
This is table-stakes but it's worth saying: if you're comparing hashes with ===, a patient attacker can recover the key one byte at a time.
Append-only audit log
Every action — request submitted, approval granted, credential vended, grant revoked — goes into an audit log that allows neither UPDATE nor DELETE at the schema level. This is enforced in Drizzle's type system and would need a destructive migration to break. Audit entries link an actor (agent, human, system) to a target (tool, grant, secret) with signed metadata.
The trick I like: the fetch of a credential is an audited event. If an agent's key is compromised and an attacker starts fetching tools, you see it in the log the moment they do — they can't just pull the secret and go silent.
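I don't know AgentKey's exact Drizzle setup, but one way to get "append-only at the type level" is to export only an insert path from the audit module — so no caller can even express an UPDATE or DELETE:

```typescript
// Sketch (not AgentKey's code): the audit module exposes append + read only.
type Actor = { kind: "agent" | "human" | "system"; id: string };
type Target = { kind: "tool" | "grant" | "secret"; id: string };
export type AuditEntry = {
  actor: Actor;
  target: Target;
  action: "request" | "approve" | "vend" | "revoke";
  at: Date;
};

const entries: AuditEntry[] = []; // stands in for the Postgres table

export function appendAudit(entry: AuditEntry): void {
  entries.push(entry);
}

// Readers get a readonly view; mutating it is a compile error.
export function readAudit(): readonly AuditEntry[] {
  return entries;
}
```

The database-side guarantee still matters (a compromised process bypasses TypeScript), but the module boundary keeps application code honest.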
The wild part: agent-driven catalog
This is the one I'm least sure about. When an agent hits a tool that isn't in the catalog, it can submit a suggestion:
POST /api/tools/suggest
{ "name": "Linear", "url": "https://linear.app", "reason": "..." }
Multiple agents can back the same suggestion. The admin sees aggregated demand — "Onboarding-Agent + 2 others want Linear" — not one-off tickets. You approve once, the whole fleet gets access.
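The aggregation itself is simple — group suggestions by tool and count distinct backing agents. The shapes below are my assumptions about the payload, not AgentKey's schema:

```typescript
type Suggestion = { agentId: string; name: string; url: string; reason: string };

// Group duplicate suggestions by URL and count distinct backing agents,
// so the admin sees "N agents want X" instead of N separate tickets.
export function aggregateSuggestions(
  suggestions: Suggestion[],
): { name: string; backers: number }[] {
  const byUrl = new Map<string, { name: string; agents: Set<string> }>();
  for (const s of suggestions) {
    const key = s.url.toLowerCase();
    const entry = byUrl.get(key) ?? { name: s.name, agents: new Set<string>() };
    entry.agents.add(s.agentId);
    byUrl.set(key, entry);
  }
  return [...byUrl.values()].map((e) => ({ name: e.name, backers: e.agents.size }));
}
```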
The bet: in a world where you have 50 agents doing 50 jobs, the catalog should reflect what they actually need, not what an admin guessed they'd need a month ago.
Stack
- Next.js 16 (App Router, Server Components)
- Drizzle ORM + Neon Postgres
- Upstash Redis — 4-tier rate limiting (agent reads, requests, credential fetches, admin ops)
- Clerk — human auth (org members manage the dashboard)
- Vercel — hosting, with AI Gateway for one AI feature below
- AES-256-GCM via node:crypto — no third-party crypto lib
Total: ~26,000 lines of TypeScript, 87 commits, single developer.
The one AI feature
I tried hard to not pile AI on AI. One feature earned its keep: paste a product's docs URL, and the admin gets a streaming setup guide — markdown instructions for creating an API key, scopes to select, where to find it. Streamed via SSE, powered by Vercel AI Gateway. Cold-start pain is real when your "tool catalog" is empty, so this makes first-add fast.
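Consuming a stream like that client-side mostly comes down to splitting `data:` lines out of each chunk. A minimal parser, independent of whatever payload format AgentKey actually uses:

```typescript
// Extract the payloads of `data:` lines from a chunk of an SSE stream.
// Real streams can split an event across chunks; a production parser
// would buffer incomplete lines before parsing.
export function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trimStart())
    .filter((data) => data !== "[DONE]");
}
```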
The license choice: BSL 1.1 → Apache 2.0 in 2030
This is the one I debate with myself. I went with BSL 1.1 — source-available, self-hostable, permissive for everyone except someone running it as a competing managed service. On April 1, 2030, it auto-converts to Apache 2.0.
Why: I want people to trust the code, read it, modify it, self-host it. I don't want AWS launching "Amazon AgentKey" six months in. 2030 gives enough runway to figure out what this is. Then it's truly open.
I know BSL has vocal critics. I'm open to being wrong.
What's rough (honest)
- No pre-seeded integrations. You build your catalog from your docs URLs. The AI setup guide helps, but cold start is still real.
- No RBAC in V1. All org members are full admins. Fine for small teams, won't fly enterprise.
- Shared credential rotation is manual. Admin updates the secret, agents fetch the new one on next call. Automatic rotation + secrets-manager integration is on the roadmap.
- MCP is not a first-class primitive yet. It probably should be.
Try it
agentkey.dev — managed (free forever) or self-hosted (docker-compose; Neon + Upstash + Clerk marketplace integrations handle the infra).
If you're building AI agents and you've been ignoring the credential problem, this is the nudge to stop. If you've solved it a different way — Vault pattern, 1Password SDK, custom — I genuinely want to hear how.
Launched on Product Hunt today: AgentKey on PH. Feedback, roasts, and war stories welcome.