How a forgotten UX pattern from 2019 could solve the biggest unsolved problem in AI agent security.
TL;DR: LinkAuth lets your AI agent ask users for credentials (API keys, passwords, OAuth tokens) through a simple link, without any callback server, custom portal or custom SDK. The user clicks, enters credentials, and the agent receives them end-to-end encrypted.
The server in the middle never sees a thing.
There’s a moment every AI agent developer hits. A wall. A wall so absurd, so embarrassingly mundane, that nobody talks about it at conferences, because admitting it isn’t trivial to solve doesn’t fit the current AI hype cycle.
Your agent can write code. It can summarize legal documents. It can plan a trip to Tokyo with connecting flights and restaurant recommendations. But ask it to check your email?
It freezes. Because it can’t log in.
Not “can’t” as in lacking intelligence. “Can’t” as in there is no good way to give an AI agent your password without doing something deeply stupid.
And yet, here we are, in 2026, building autonomous agents that are supposed to run for hours, days, weeks on our behalf. Agents that book flights, manage cloud infrastructure, send invoices. Every single one of them needs credentials. OAuth tokens. API keys. Passwords.
How are we solving this today?
We’re not. We’re duct-taping it.
Everyone is building their own proxies, injection hacks, and one-off custom solutions of some sort.
The Five Flavors of “This Is Fine”
Let’s be honest about the current state of affairs. Every approach to giving AI agents credentials has a fatal flaw:
1. Tokens in environment variables. The developer’s comfort food. Easy, familiar, and about as secure as writing your PIN on a sticky note in a locked room. Good luck asking a non-technical user to “just set OPENAI_API_KEY in their terminal.”
2. OAuth with callbacks. The gold standard for web apps and completely useless for an agent running in a Docker container, a cron job, or a Telegram bot. OAuth assumes your application has a web server with a publicly reachable callback URL. Most agents don’t. Many shouldn’t.
3. Managed platforms like Composio. They don’t just handle auth, they take over tool execution, sandboxing, session management, everything. You wanted to solve a login problem and ended up outsourcing your entire agent to someone else’s infrastructure. Credentials included.
4. Auth0’s Device Flow. Clever. But it’s designed for authentication, not for handing arbitrary credentials to an agent. It doesn’t support API keys, passwords, or certificates. And it’s not self-hostable.
5. Local credential managers. Great if your agent runs on the same machine as the user. Most don’t. The whole point of autonomous agents is that they run somewhere else.
Five approaches. Five dead ends.
And the gap between what AI agents need to do and what they can securely access keeps growing wider every week.
What If Your Agent Could Just… Ask?
Picture this.
Your AI agent needs your OpenAI API key to complete a task. Instead of crashing, asking you to edit a config file, or requiring a callback server, it simply says:
I need your OpenAI API key. Please open this link and enter it: https://broker.example.com/connect/XKCD-4271 and verify that the code XKCD-4271 matches.
You click the link. You see the code. It matches. You paste your API key. Done.
Thirty seconds. No callback server, no SDK, no vendor lock-in.
And here’s the part that matters:
The broker that facilitated this exchange never saw your API key.
Not in transit. Not in storage. Not ever. The broker only delegates.
Sound too good to be true? It’s not. It’s cryptography.
Introducing LinkAuth: A Zero-Knowledge Credential Broker for AI Agents
LinkAuth is an open-source credential broker built on a simple insight: the Device Authorization Flow (RFC 8628), that pattern your smart TV uses when it asks you to visit a URL and enter a code, is almost the perfect UX for AI agents requesting credentials.
Almost. Because smart TVs don’t need zero-knowledge encryption. AI agents do.
Here’s why: when you enter a credential on a website, you’re trusting that server. For a TV streaming login, that’s fine. For an AI agent’s credential broker, a server that potentially handles thousands of API keys, passwords, and OAuth tokens from different users, “just trust the server” isn’t good enough.
So LinkAuth adds a critical layer:
Hybrid encryption with the agent’s public key
The Flow in 60 Seconds
- Agent generates an RSA keypair. Private key stays local.
- Agent asks the broker: “I need credentials.” -> Sends its public key. <- Receives a URL + human-readable code (like “XKCD-4271”).
- Agent shows the URL and code to the user (via chat, console, whatever).
- User opens the URL in their browser. -> Sees the code, confirms it matches. -> Enters credentials (API key, password, or starts OAuth).
- The browser encrypts the credentials with the agent’s public key. (RSA-OAEP + AES-256-GCM hybrid encryption, all client-side)
- Encrypted blob is sent to the broker. Broker stores ciphertext.
- Agent polls the broker, retrieves the encrypted blob, decrypts locally.
- Agent has the credentials.
Broker never saw them.
That’s it. The entire security model rests on one elegant principle:
The broker is a mailbox, not a vault.
It holds an encrypted letter it can’t read, waiting for the only entity with the private key to pick it up.
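The hybrid-encryption round trip described in the flow above can be sketched end to end in a few lines of Python with the `cryptography` package. This is a self-contained illustration of the pattern (RSA-OAEP key wrapping plus AES-256-GCM), not LinkAuth’s actual code or wire format; variable names and the in-memory “broker” are assumptions for the demo.

```python
# Sketch of the LinkAuth-style hybrid encryption round trip.
# Illustrative only: names and structure are not LinkAuth's actual code.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. Agent generates an RSA keypair; the private key never leaves the agent.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 2. "Browser" side: encrypt the credential with a fresh AES-256 key,
#    then wrap that AES key with the agent's RSA public key (RSA-OAEP).
credential = b"sk-test-1234567890"
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, credential, None)
wrapped_key = public_key.encrypt(aes_key, OAEP)

# 3. The broker only ever stores (wrapped_key, nonce, ciphertext) -
#    ciphertext it has no key for.

# 4. Agent side: unwrap the AES key with the private key, decrypt locally.
recovered_key = private_key.decrypt(wrapped_key, OAEP)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == credential
```

Note how every session gets a fresh AES key and a fresh keypair: there is nothing server-side that could decrypt more than zero sessions.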
But Wait - What About OAuth?
Good question. OAuth is the elephant in the room.
For direct credentials (API keys, passwords, certificates), LinkAuth achieves true zero-knowledge. The browser encrypts everything client-side with Web Crypto API before anything leaves the page. The broker literally cannot see what the user entered.
For OAuth flows (Google, GitHub, Slack), there’s an inherent limitation: the broker must act as the OAuth confidential client. It receives the OAuth tokens from the provider’s callback, encrypts them with the agent’s public key, and stores the ciphertext. The broker briefly sees the tokens in memory.
This isn’t a flaw in LinkAuth; it’s a fundamental constraint of OAuth itself. The tokens have to go somewhere before they can be encrypted. LinkAuth minimizes this exposure window and is transparent about the trade-off.
The honest security posture:
- Form credentials (API keys, passwords) -> True zero-knowledge
- OAuth tokens -> Encrypted at rest, briefly visible in broker memory
We document this openly. Because security through obscurity isn’t security at all.
Why Not Just Build Another SDK?
Here’s where LinkAuth diverges from everything else on the market.
LinkAuth has no SDK requirement. None. Zero. The entire protocol is plain HTTP. Any language, any framework, any runtime that can make HTTP requests and do RSA decryption can be a LinkAuth agent. That’s Python, Go, Rust, JavaScript, no-code tools like n8n, Make, or Zapier, even a bash script with curl and openssl - literally anything.
This is a deliberate design choice. The AI agent ecosystem is fragmented across dozens of frameworks: LangChain, CrewAI, AutoGen, custom solutions. Requiring an SDK means picking winners and losers. Requiring HTTP means everyone wins.
# Create a session - that's it, that's the "SDK"
curl -X POST https://broker.example.com/v1/sessions \
  -H "Content-Type: application/json" \
  -d '{"public_key": "…", "template": "openai"}'
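The rest of the agent side is just a polling loop, which any HTTP client can drive. A minimal sketch follows; the `/v1/sessions/{id}/credential` path and the `status`/`encrypted_blob` fields are illustrative assumptions, not LinkAuth’s documented API, and the HTTP call is injected as a plain function so the loop itself stays framework-free:

```python
# Minimal polling loop for retrieving the encrypted credential blob.
# `fetch` stands in for any HTTP client (requests, urllib, curl via
# subprocess): it takes a URL and returns a parsed JSON dict. The path
# and field names here are assumptions, not LinkAuth's documented API.
import time

def poll_for_blob(fetch, session_id, interval=2.0, timeout=300.0):
    """Poll the broker until the user has submitted credentials.

    Returns the ciphertext blob (still encrypted - decrypt locally),
    or raises TimeoutError if the session window closes first.
    """
    url = f"https://broker.example.com/v1/sessions/{session_id}/credential"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = fetch(url)
        if resp.get("status") == "completed":
            return resp["encrypted_blob"]
        time.sleep(interval)
    raise TimeoutError("user did not submit credentials in time")

# Demo with a stub transport that "completes" on the second poll:
responses = iter([
    {"status": "pending"},
    {"status": "completed", "encrypted_blob": "ZW5jcnlwdGVk"},
])
blob = poll_for_blob(lambda url: next(responses), "XKCD-4271", interval=0.0)
assert blob == "ZW5jcnlwdGVk"
```

Injecting the transport is the whole point: swap the lambda for `requests.get(url).json()` and the same loop runs anywhere.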
The IETF Connection
This isn’t just open-source software chasing GitHub stars. LinkAuth is built on established IETF standards:
| RFC | What it provides |
| --- | --- |
| RFC 8628 | Device Authorization Grant - the URL + code UX |
| RFC 8017 | RSA-OAEP encryption for key wrapping |
| RFC 5116 | AES-256-GCM authenticated encryption |
| RFC 9457 | Problem Details for HTTP API errors |
| RFC 7636 | PKCE for OAuth security |
| RFC 6797 | HSTS for transport security |
And here’s where it gets interesting. The IETF is actively working on standards for AI agent authentication. Multiple drafts are circulating:
draft-klrc-aiagent-auth (March 2026) AI Agent Authentication and Authorization. The freshest draft, backed by authors from AWS, Zscaler, and Ping Identity. Builds on the WIMSE (Workload Identity in Multi-System Environments) framework.
draft-rosenberg-oauth-aauth AAuth: Agentic Authorization for OAuth 2.1. Active, focused on agents that interact via phone or text channels.
LinkAuth isn’t waiting for these standards to be finalized. It’s providing running code - which, in IETF culture, is the strongest argument you can make. The plan is to bring LinkAuth to an IETF Hackathon, demonstrate the protocol, and potentially submit an Internet-Draft: Zero-Knowledge Credential Brokering for Autonomous Agents.
And it’s not only for agents: it works for your n8n workflows as well.
If you like, join me at the IETF 126 Hackathon in Vienna on July 18–19.
Under the Hood: Security That Doesn’t Ask You to Trust Anyone
Let’s talk about what happens when things go wrong. Because they will. Always will. It’s Murphy’s Law…
Scenario: The broker’s database gets breached.
Result: The attacker gets… encrypted blobs. Ciphertext that requires each individual agent’s private RSA key to decrypt. Every session uses a different keypair. There is no master key. There is no “decrypt everything” button. The breach is, for all practical purposes, useless.
Scenario: Someone intercepts the network traffic.
Result: Even without TLS, the credentials are encrypted with the agent’s public key before leaving the browser. An eavesdropper sees ciphertext, not credentials. (TLS is still required in production to protect the integrity of the JavaScript that performs the encryption, but the encryption itself provides defense in depth.)
Scenario: An agent is compromised.
Result: The attacker gets access to that agent’s private key and can decrypt that agent’s credentials. This is the same threat model as any application that holds secrets in memory and it’s explicitly scoped. Compromising one agent doesn’t compromise any other agent or any other user.
| Scenario | What the attacker gets |
| --- | --- |
| Database breach | Useless ciphertext |
| Network interception | Useless ciphertext (even without TLS) |
| Broker server compromise | Useless ciphertext (for form credentials) |
| Agent compromise | That agent’s credentials only |
The security model isn’t “trust us.” It’s “you don’t have to.”
Try It in Two Minutes
Really. Two minutes.
Prerequisites: Python 3.11+ and uv (or pip).
Start the Broker:
git clone https://github.com/LinkAuth/LinkAuth.git
cd linkauth && uv sync --all-extras
# Terminal 1
PYTHONPATH=src python -m uvicorn broker.main:app --port 8080
Run the Agent Simulation:
# Terminal 2
python examples/agent_simulation.py
Type anything. The mock LLM pretends you asked for your emails. The agent requests credentials, shows you a link, you open it in your browser, enter any test credentials, and the agent decrypts them locally. The broker never saw a thing.
The simulation includes a full roundtrip: session creation, hybrid encryption in the browser, polling, decryption. It’s the entire protocol in a single script. A working proof of concept you can touch, break, and rebuild.
What LinkAuth Is Not
Transparency matters. So let’s be clear about what LinkAuth doesn’t do:
It’s not an identity provider.
LinkAuth doesn’t authenticate users. It doesn’t issue JWTs. It doesn’t manage user accounts.
It’s a credential relay - a secure pipe between a user who has credentials and an agent that needs them.
It’s not a secrets manager.
Credentials are ephemeral. Sessions expire in 5–15 minutes. Ciphertext is deleted after the agent retrieves it. If you need long-term secret storage, that’s a different tool. LinkAuth handles the initial handoff.
It’s not production-hardened yet.
This is v0.1. Early development. The architecture is sound, the cryptography is standard, but it hasn’t been through a formal security audit. We’re building in the open because that’s how trust is earned.
The Bigger Picture: Why This Matters Now
We’re at an inflection point. AI agents are moving from demos to production. From “look what GPT can do” to “this agent manages my AWS infrastructure while I sleep.”
And every single one of these production agents will need credentials. Not toy credentials. Real API keys connected to real money. Real OAuth tokens with access to real email, real calendars, real codebases for a variety of users.
The question isn’t whether we need a standard for agent credential exchange. The question is whether that standard will be:
(a) A proprietary protocol controlled by whichever platform moves fastest
or
(b) An open, IETF-backed standard built on running code and rough consensus.
LinkAuth is a bet on (b).
It’s a bet that the developer community, the people actually building these agents, would rather have an open protocol they can self-host, inspect, and extend, than a managed service they have to trust and pay for.
It’s a bet that the IETF process, slow as it is, produces better standards than any single company racing to lock in customers.
And it’s a bet that zero-knowledge architecture isn’t a nice-to-have. It’s the baseline. Because in a world where AI agents handle our most sensitive credentials, “the server never sees your password” shouldn’t be a feature. It should be a requirement.
Get Involved
LinkAuth is open-source under the Apache 2.0 license.
GitHub: github.com/LinkAuth/LinkAuth
Try it: Clone, run the broker, run the simulation. Two minutes.
Contribute: The DAO pattern makes it easy to add new storage backends. The template system makes it easy to add new credential types. OAuth provider support is actively being built.
IETF: I am preparing for hackathon participation and an Internet-Draft submission. If you care about open standards for AI agent auth, join the OAuth Working Group and WIMSE mailing list.
The best time to shape a standard is before it’s finalized. This is that time.
LinkAuth is in early development (v0.1). The architecture is stable, the cryptography follows established RFCs, and the protocol is designed for extensibility. Stars, issues, and pull requests are welcome. So is honest criticism; that’s how open source gets better.
