I kept running into the same uncomfortable gap when I thought about “real” AI agents: they only become useful when they can touch your tools (email, repos, Slack), but I never wanted those OAuth tokens anywhere near a model context, a log line, or a browser bundle I don’t fully control. So I built Tether around a simple rule I actually believe: the agent can ask; the platform decides; the secrets stay server-side.
For me, the Token Vault angle isn’t buzzword bingo. It’s the shape of the problem Auth0 is trying to solve. I send people through normal OAuth consent via Auth0 for GitHub, Google (Gmail and Calendar), and Slack. The messy part (codes, refresh tokens, rotation) gets handled behind the SPA. I encrypt what has to sit in my database and only decrypt it inside Edge Functions when a mission and policy say the action is allowed. When I demo MCP or a REST client, I’m not handing anyone a Gmail token. I’m handing the system a user JWT and a mission id, and either the call goes out or it gets blocked and logged. That felt like the honest version of “authorized to act.”
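To make that concrete, here's a minimal sketch of the gate I'm describing, with illustrative names (`Mission`, `decryptToken`, `executeToolCall` are stand-ins, not Tether's real API): the caller supplies a user identity and a mission, policy is checked first, and the provider token is decrypted only inside the handler's scope and never returned.

```typescript
// Hypothetical sketch: the client hands over a user id and a mission,
// and only ever gets back a result or a denial -- never the token.

type Mission = { id: string; allowedActions: string[] };

// Toy "vault": in the real system this is an encrypted DB column,
// decrypted only inside an Edge Function with a server-held key.
const vault: Record<string, string> = { "user-1:github": "enc:gho_example" };

function decryptToken(ciphertext: string): string {
  // Stand-in for real decryption (e.g. AES-GCM with a server-side key).
  return ciphertext.replace(/^enc:/, "");
}

function policyAllows(mission: Mission, action: string): boolean {
  return mission.allowedActions.includes(action);
}

function executeToolCall(
  userId: string,
  mission: Mission,
  provider: string,
  action: string,
): { ok: boolean; reason?: string } {
  if (!policyAllows(mission, action)) {
    // Blocked calls are the ones you log and show the user.
    return { ok: false, reason: `policy denied ${action} for mission ${mission.id}` };
  }
  const ciphertext = vault[`${userId}:${provider}`];
  if (!ciphertext) return { ok: false, reason: "no connected account" };
  const token = decryptToken(ciphertext); // token exists only in this scope
  // ...call the provider API with `token` here; return only the result...
  return { ok: true };
}
```

The point of the shape: the decrypted token is a local variable inside the server-side handler, so it can't leak into a model context, a log line, or a browser bundle unless you put it there on purpose.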
What I’m proudest of is how it changes the story for the person using it. They see what the mission will and won’t do, they approve on purpose, and if something is scary (delete repo, bulk export), they hit step-up instead of the model “just trying.” I’m not claiming we solved agent safety in one repo, but I am arguing for a pattern I’d actually ship: identity and consent live where humans already trust them, and the agent never becomes the custodian of third-party keys.
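The step-up decision itself is simple to state: a sketch, assuming Auth0-style `amr` claims and an auth timestamp (the action names and the five-minute freshness window here are illustrative, not Tether's actual policy).

```typescript
// Hypothetical step-up gate: risky actions only run if the user completed
// MFA recently; everything else passes through to normal policy checks.

const RISKY_ACTIONS = new Set(["repo.delete", "mail.bulk_export"]);

type AuthContext = {
  amr: string[]; // authentication methods from the token, e.g. ["pwd", "mfa"]
  authTime: number; // seconds since epoch when the user last authenticated
};

const MAX_STEP_UP_AGE_SECONDS = 300; // assumed window: MFA within 5 minutes

function needsStepUp(action: string, ctx: AuthContext, now: number): boolean {
  if (!RISKY_ACTIONS.has(action)) return false;
  const hasMfa = ctx.amr.includes("mfa");
  const fresh = now - ctx.authTime <= MAX_STEP_UP_AGE_SECONDS;
  return !(hasMfa && fresh);
}
```

When `needsStepUp` returns true, the mission pauses and the human re-authenticates; the model never gets to "just try" the scary action.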
If this resonates with the Auth0 community, I hope it’s as a practical reference: Token Vault thinking, for me, means custody and refresh belong in the identity layer, and every tool call still needs runtime authorization, not a one-time “connect and pray.”