How to Add Authorization to Your AI Agent (LangChain, CrewAI, OpenAI Agents, and More)

Sanjeev Kumar

Tags: ai, security, python, typescript
Series: AI Agent Security
Canonical URL: https://grantex.dev/for/langchain


AI agents are booking flights, sending emails, and moving money. Most of them run on all-or-nothing API keys.

This is where the web was before OAuth 2.0 — and it's exactly as dangerous as it sounds.

The Problem

When you connect an AI agent to a real service — Stripe, Gmail, Salesforce — you typically give it an API key with full access. The agent can do anything the key allows. There's no scoping ("read emails but don't send"), no audit trail ("what did the agent do?"), and no revocation ("stop this agent NOW").

This was fine when agents were demos. It's not fine when they're in production.

What We Need

The web solved this 15 years ago with OAuth 2.0: users grant scoped, revocable access to applications. But OAuth was designed for human users clicking consent buttons. Agents are different:

  • Agents spawn sub-agents — a travel agent delegates to a flight booker and a hotel booker
  • Agents operate autonomously — there's no human in the loop for every API call
  • Agents chain actions — one agent's output becomes another agent's input

We need OAuth-level security with agent-native primitives.

Enter Grantex

Grantex is an open authorization protocol (Apache 2.0) for AI agents. The core flow:

  1. A human approves a scoped, time-limited grant for an agent
  2. The agent receives a signed JWT it presents to any service
  3. Services verify offline via JWKS — no Grantex account needed
  4. Every action is logged in an append-only, hash-chained audit trail
  5. The human can revoke access instantly — effective in < 1 second
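Step 4's append-only, hash-chained audit trail is worth unpacking. A minimal sketch in plain Python — an illustration of the idea, not Grantex's actual implementation: each entry embeds the hash of the previous entry, so editing any record breaks every link after it.

```python
import hashlib
import json

def append_entry(log, action):
    """Append an action, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"action": action, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps({"action": action, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; a tampered entry invalidates the chain."""
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"action": entry["action"], "prev_hash": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "email:read user_alice")
append_entry(log, "docs:write report.md")
print(verify_chain(log))        # True
log[0]["action"] = "money:transfer"  # tamper with history
print(verify_chain(log))        # False
```

Because each hash covers the previous hash, an auditor only needs the final entry to detect tampering anywhere earlier in the log.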

Adding Authorization to Your Agent

LangChain

```bash
npm install @grantex/langchain @grantex/sdk
```
```typescript
import { Grantex } from '@grantex/sdk';
import { GrantexToolkit } from '@grantex/langchain';
import { createToolCallingAgent } from 'langchain/agents';

const gx = new Grantex({ apiKey: process.env.GRANTEX_API_KEY });

// 1. Register your agent
const agentRecord = await gx.agents.register({
  name: 'research-agent',
  scopes: ['search:read', 'docs:write'],
});

// 2. Create an authorization request
const auth = await gx.authorize({
  agentId: agentRecord.id,
  userId: 'user_alice',
  scopes: ['search:read', 'docs:write'],
});
// User approves at auth.consentUrl

// 3. Exchange the approval code for a JWT
// (`code` is delivered to your callback after the user approves)
const { grantToken } = await gx.tokens.exchange({
  code,
  agentId: agentRecord.id,
});

// 4. Wrap your LangChain tools
const toolkit = new GrantexToolkit({
  client: gx,
  grantToken,
  tools: [searchTool, docsTool],
});

// Tools now verify scopes before executing
const agent = createToolCallingAgent({ llm, tools: toolkit.tools });
```

What this gives you: Every tool call checks that the agent's JWT has the required scopes. If the agent tries to call a tool it doesn't have permission for, it gets a clear error — not a silent failure.
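The scope check itself is conceptually simple. Here's a sketch in plain Python with hypothetical names (not the Grantex SDK): the wrapper compares a tool's required scopes against the grant's scopes and raises a descriptive error instead of failing silently.

```python
class ScopeError(PermissionError):
    pass

def scoped_tool(fn, required_scopes, granted_scopes):
    """Wrap a tool so it verifies scopes before executing."""
    def wrapper(*args, **kwargs):
        missing = set(required_scopes) - set(granted_scopes)
        if missing:
            raise ScopeError(
                f"tool '{fn.__name__}' requires scopes {sorted(missing)} "
                f"not present in the grant"
            )
        return fn(*args, **kwargs)
    return wrapper

def send_email(to):
    return f"sent to {to}"

granted = ["search:read"]  # grant does NOT include email:send
guarded = scoped_tool(send_email, ["email:send"], granted)

try:
    guarded("alice@example.com")
except ScopeError as e:
    print(e)  # tool 'send_email' requires scopes ['email:send'] not present in the grant
```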

Full LangChain guide →

CrewAI

```bash
pip install grantex-crewai grantex
```
```python
import os

from grantex import Grantex
from grantex_crewai import GrantexCrewTools

gx = Grantex(api_key=os.environ["GRANTEX_API_KEY"])

# Each crew member gets its own scoped token
researcher = GrantexCrewTools(
    client=gx,
    grant_token=researcher_token,  # "search:read" only
    tools=[search_tool],
)

writer = GrantexCrewTools(
    client=gx,
    grant_token=writer_token,  # "docs:write" only
    tools=[write_tool],
)
```

The researcher can search but not write. The writer can write but not search. Revoke any crew member without stopping the whole crew.
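Per-member revocation can be modeled as each wrapper checking its token's revocation status before every call. A pure-Python sketch of the isolation property (in production the check is against the JWT's actual revocation status, not an in-process set):

```python
revoked = set()

def revocable_tool(fn, token):
    """Wrap a tool so every call checks whether its grant is revoked."""
    def wrapper(*args, **kwargs):
        if token in revoked:
            raise PermissionError(f"grant {token} has been revoked")
        return fn(*args, **kwargs)
    return wrapper

search = revocable_tool(lambda q: f"results for {q}", "researcher_token")
write = revocable_tool(lambda t: f"wrote {t}", "writer_token")

print(search("agents"))          # results for agents
revoked.add("researcher_token")  # revoke only the researcher
print(write("report.md"))        # wrote report.md -- the writer keeps working
```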

Full CrewAI guide →

OpenAI Agents SDK

```bash
pip install grantex-openai-agents grantex
```
```python
from agents import Agent  # OpenAI Agents SDK

from grantex_openai_agents import GrantexTools

# gx = Grantex(api_key=...) as in the CrewAI example above
tools = GrantexTools(
    client=gx,
    grant_token=token,
    tools=[search_tool, email_tool],
)

agent = Agent(name="assistant", tools=tools.wrapped)
```

Full OpenAI Agents guide →

Google ADK

```bash
pip install grantex-adk grantex
```
```python
from google.adk.agents import Agent

from grantex_adk import GrantexADKTools

tools = GrantexADKTools(
    client=gx,
    grant_token=token,
    tools=[calendar_tool],
)

agent = Agent(model="gemini-2.0-flash", tools=tools.wrapped)
```

Full Google ADK guide →

Vercel AI SDK

```bash
npm install @grantex/vercel-ai @grantex/sdk
```
```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createGrantexTools } from '@grantex/vercel-ai';

const tools = createGrantexTools({
  client: gx,
  grantToken: token,
  tools: { search: searchTool, email: emailTool },
});

const result = await generateText({
  model: openai('gpt-4o'),
  tools,
  prompt: 'Book the cheapest flight to NYC',
});
```

Edge-compatible. Works with streaming. TypeScript-first.

Full Vercel AI guide →

Protecting Your API (Service Side)

If you're building the API that agents call, you need to verify their tokens.
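Verification is standard JWT machinery: check the signature, then check the scopes. A self-contained sketch using HS256 and only the standard library — real Grantex tokens are verified against the issuer's public keys via JWKS (asymmetric algorithms), so treat this as an illustration of the signature-plus-scope check, not the production path:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, secret: bytes, required_scope: str) -> dict:
    """Reject tampered tokens, then enforce the required scope."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if required_scope not in claims.get("scopes", []):
        raise PermissionError(f"missing scope {required_scope}")
    return claims

secret = b"demo-secret"
token = sign_token({"sub": "agent:research", "scopes": ["email:read"]}, secret)
claims = verify_token(token, secret, "email:read")
print(claims["sub"])  # agent:research
```

The middleware below does exactly this for you, with key fetching and caching handled.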

Express.js

```bash
npm install @grantex/express
```
```typescript
import { grantexMiddleware, requireScopes } from '@grantex/express';

app.use('/api', grantexMiddleware({
  jwksUrl: 'https://your-auth/.well-known/jwks.json',
}));

app.get('/api/emails', requireScopes(['email:read']), (req, res) => {
  console.log(req.grantex.agent);  // agent DID
  console.log(req.grantex.scopes); // ['email:read']
  res.json({ emails: [] });
});
```

Sub-millisecond overhead. JWKS cached locally. Express guide →

FastAPI

```bash
pip install grantex-fastapi
```
```python
from fastapi import Depends, FastAPI

from grantex_fastapi import GrantexAuth, GrantContext

app = FastAPI()
auth = GrantexAuth(jwks_url="https://your-auth/.well-known/jwks.json")

@app.get("/api/emails")
async def list_emails(
    grant: GrantContext = Depends(auth.require_scopes(["email:read"]))
):
    return {"emails": []}
```

Async-native. Pydantic models. OpenAPI integration. FastAPI guide →

Beyond Scoping: What Else You Get

Delegation chains — A travel agent can delegate narrower permissions to a flight-booking sub-agent. The delegation depth is tracked in the JWT. Revoking the parent cascades to all children.
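The attenuation rule behind delegation can be sketched in a few lines — a child grant may only carry a subset of the parent's scopes, and depth increments per hop. The dict structure here is illustrative, not the actual JWT claim layout:

```python
def delegate(parent, child_agent, scopes):
    """Issue a child grant whose scopes must be a subset of the parent's."""
    if not set(scopes) <= set(parent["scopes"]):
        raise PermissionError("child scopes exceed parent grant")
    return {
        "agent": child_agent,
        "scopes": scopes,
        "depth": parent["depth"] + 1,
        "parent": parent["agent"],
    }

travel = {"agent": "travel-agent", "scopes": ["flights:book", "hotels:book"],
          "depth": 0, "parent": None}
flights = delegate(travel, "flight-booker", ["flights:book"])
print(flights["depth"])  # 1

try:
    delegate(flights, "payments-bot", ["money:transfer"])  # not in parent's scopes
except PermissionError as e:
    print(e)  # child scopes exceed parent grant
```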

Budget controls — Set spending limits per agent. When the budget hits a threshold (50%, 80%), you get an alert. When it's exhausted, the agent is automatically cut off.
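The threshold logic is straightforward. A sketch of the decision an enforcement point makes on each spend check (the 50%/80% thresholds match the alerts described above; everything else is illustrative):

```python
def check_budget(spent, limit, alert_thresholds=(0.5, 0.8)):
    """Return (allowed, alerts) for the current spend against the limit."""
    if spent >= limit:
        return False, ["budget exhausted: agent cut off"]
    alerts = [f"budget {int(t * 100)}% reached"
              for t in alert_thresholds if spent >= t * limit]
    return True, alerts

print(check_budget(40, 100))   # (True, [])
print(check_budget(85, 100))   # (True, ['budget 50% reached', 'budget 80% reached'])
print(check_budget(100, 100))  # (False, ['budget exhausted: agent cut off'])
```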

Real-time event streaming — SSE and WebSocket streams for grant creation, token issuance, budget alerts. Build dashboards and monitoring.

Policy engine — Pluggable authorization backends: builtin rules, OPA, or Cedar. Sync policies from git.

End-user permission dashboard — Users can view and revoke agent access from a self-serve dashboard. Embed it in your app.

Getting Started

```bash
# TypeScript / Node.js
npm install @grantex/sdk

# Python
pip install grantex

# Go
go get github.com/mishrasanjeev/grantex-go
```

The protocol spec is public and frozen at v1.0. Everything is Apache 2.0. No vendor lock-in — verification is offline via JWKS.


What authorization challenges are you hitting with AI agents? I'd love to hear about your use cases in the comments.
