LangChain is great at building agents that act. Search the web. Write files. Send emails. Call APIs. The framework handles orchestration beautifully.
What it doesn't handle: authorization.
Your agent might be able to send an email, delete a file, and call a payment API. That doesn't mean it should — on every invocation, for every user, in every context.
This is the gap. And it's the one that causes incidents.
## The Problem with Tool-Level Permissions
The typical LangChain pattern is to control what an agent can do by choosing which tools you give it. Don't want it deleting files? Don't add the `delete_file` tool.
That works until:
- A user delegates a narrow task ("just check my inbox") but your tool has broad capabilities ("read, reply, forward, delete")
- Two users have the same agent with different trust levels
- An agent is mid-task and encounters a situation outside what was intended
- You need to prove, after the fact, what the agent was authorized to do
Tool-level permissions are coarse. They answer "what can this agent ever do" — not "what is this agent authorized to do right now, for this user, for this task."
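To see why that's coarse, here's a toy model of tool-level gating in plain Python (the tool bodies are placeholders, not a real email API). The permission decision is made once, at wiring time, and is identical for everyone:

```python
# A toy model of tool-level gating: the permission decision happens at wiring time.
def read_inbox(query: str) -> str:
    return f"Results for '{query}'"

def delete_email(email_id: str) -> str:
    return f"Deleted {email_id}"

ALL_TOOLS = {"read_inbox": read_inbox, "delete_email": delete_email}

# The only "permission system" is which tools you hand the agent.
agent_tools = {name: ALL_TOOLS[name] for name in ["read_inbox"]}

# Same answer for every user, every task, every invocation:
print("delete_email" in agent_tools)  # False — but read_inbox is all-or-nothing too
```

There's no per-user, per-task, or per-invocation dimension anywhere in that dictionary, and no record of what the agent tried but couldn't do.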
## What Scope Verification Adds
Scope verification sits between your agent's intent and its action. Before it does anything consequential, it checks:
> "Was this specific action included in what the delegating user actually authorized?"

- If yes → signed permit, proceed.
- If no → denied, stop, log it.
Every check is logged. You get an audit trail of not just what the agent could do, but what it actually did and whether each action was authorized.
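The semantics are simple enough to sketch in a few lines. This is **not** the ScopeGate API — just a toy in-memory model of the grant → verify → audit loop, so the mechanics are concrete before we wire up the real client:

```python
import datetime

class ToyScopeGate:
    """In-memory illustration of grant -> verify -> audit. Not the real client."""

    def __init__(self):
        self.grants = {}  # grant_id -> set of allowed actions
        self.log = []     # every verify() decision, permitted or not

    def issue(self, grant_id: str, allowed_actions: list[str]) -> None:
        self.grants[grant_id] = set(allowed_actions)

    def verify(self, grant_id: str, requested_action: str) -> dict:
        permitted = requested_action in self.grants.get(grant_id, set())
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": requested_action,
            "permitted": permitted,
            "reason": "" if permitted else "action_not_in_scope",
        }
        self.log.append(entry)  # denied attempts go on the record too
        return entry

gate = ToyScopeGate()
gate.issue("g1", ["read_email", "create_draft"])
print(gate.verify("g1", "read_email")["permitted"])   # True
print(gate.verify("g1", "send_email")["permitted"])   # False
print(len(gate.log))                                  # 2 — the denial is logged
```

The point of the real service is that the grant store, the signing, and the log live outside your agent process, so the agent can't talk itself out of a denial.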
Here's what that looks like in a LangChain workflow.
## Setup
```shell
pip install langchain langchain-openai scopegate-client
```
```python
import os
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.tools import tool
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from scopegate_client import ScopeGateClient

sg = ScopeGateClient(api_key=os.environ["SCOPEGATE_KEY"])
```
## Step 1 — Issue a Grant When the User Delegates a Task
When your user kicks off an agent session, issue a grant that defines exactly what they're authorizing:
```python
def start_agent_session(user_id: str, task: str, allowed_actions: list[str]) -> str:
    """Issue a scoped grant when a user starts a task."""
    grant = sg.issue(
        delegator_id=user_id,
        agent_id="email-assistant",
        allowed_actions=allowed_actions,
        ttl_minutes=60,
        context={"task": task}
    )
    return grant["grant_id"]

# User says: "Check my inbox and draft replies to anything urgent"
grant_id = start_agent_session(
    user_id="alice",
    task="check inbox and draft replies",
    allowed_actions=["read_email", "create_draft"]
    # NOT "send_email", "delete_email", "forward_email"
)
```
The grant captures delegated intent at the moment of delegation. Not what the agent could do — what this user, for this task, actually authorized.
## Step 2 — Wrap Your LangChain Tools with Scope Checks
Now add a thin verification wrapper around any tool that has side effects:
```python
import functools

def scoped_tool(action_name: str, grant_id: str):
    """Decorator factory that adds scope verification to a LangChain tool."""
    def decorator(func):
        @functools.wraps(func)  # preserve name, docstring, and signature so @tool can infer the schema
        def wrapper(*args, **kwargs):
            result = sg.verify(
                grant_id=grant_id,
                agent_id="email-assistant",
                requested_action=action_name
            )
            if not result["permitted"]:
                return f"❌ Action '{action_name}' is not in scope for this task. Reason: {result['reason']}"
            # Permitted — proceed with the actual tool
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Your actual tools, with scope verification baked in
def make_email_tools(grant_id: str):
    @tool
    @scoped_tool("read_email", grant_id)
    def read_inbox(query: str) -> str:
        """Search and read emails from the inbox."""
        # ... your actual Gmail/Outlook API call here
        return f"Found 3 urgent emails matching '{query}'"

    @tool
    @scoped_tool("create_draft", grant_id)
    def create_draft(to: str, subject: str, body: str) -> str:
        """Create a draft email reply."""
        # ... your actual draft creation logic here
        return f"Draft created to {to}"

    @tool
    @scoped_tool("send_email", grant_id)  # This will be denied for alice's task
    def send_email(to: str, subject: str, body: str) -> str:
        """Send an email immediately."""
        # ... your actual send logic here
        return f"Email sent to {to}"

    return [read_inbox, create_draft, send_email]
```
## Step 3 — Build and Run the Agent
```python
def build_email_agent(grant_id: str) -> AgentExecutor:
    tools = make_email_tools(grant_id)
    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are an email assistant. Help the user manage their inbox efficiently."),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    agent = create_openai_tools_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run it
grant_id = start_agent_session(
    user_id="alice",
    task="check inbox and draft replies",
    allowed_actions=["read_email", "create_draft"]
)
agent = build_email_agent(grant_id)
result = agent.invoke({
    "input": "Check my inbox for anything urgent and draft replies where needed"
})
```
When the agent tries to call `send_email` (which it might, if it decides that's helpful), it gets:
```
❌ Action 'send_email' is not in scope for this task. Reason: action_not_in_scope
```
The agent sees this, understands it can only draft, and adjusts. Alice's emails don't go anywhere without her explicit approval.
## What the Audit Trail Looks Like

Every `verify()` call is logged on ScopeGate's side. You can pull the audit log for any grant:
```python
# After the session, pull the audit trail
audit = sg.audit(grant_id=grant_id)
for entry in audit["entries"]:
    print(f"{entry['timestamp']} | {entry['action']} | {'✅' if entry['permitted'] else '❌'} | {entry.get('reason', '')}")
```
```
2026-05-05T14:02:13Z | read_email   | ✅ |
2026-05-05T14:02:18Z | create_draft | ✅ |
2026-05-05T14:02:31Z | send_email   | ❌ | action_not_in_scope
2026-05-05T14:02:35Z | create_draft | ✅ |
```
This is what enterprise compliance teams are asking for. Not "what did the agent have access to" — but "what did it actually try to do, and was each action authorized?"
## Why This Pattern Matters at Scale
When you're running agents for a single user on a single task, permissions feel manageable. When you're running agents for thousands of users across dozens of task types, things get complicated fast.
- User A's "email assistant" is authorized to send. User B's is not.
- The CRM agent in the sales department can write records. The marketing department's can only read.
- An agent mid-task encounters a situation requiring elevated permissions — it should stop and ask, not improvise.
Scope verification makes this tractable. You issue grants with intent at delegation time. The verification layer enforces them automatically. You get an audit trail without instrumenting every tool individually.
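The multi-user case reduces to a simple idea: same agent code, same tools, different grants. A hypothetical sketch (the grant contents and user IDs here are illustrative, not from a real deployment):

```python
# Same agent, same tools — the grant is the only thing that differs per user.
GRANTS = {
    "alice": ["read_email", "create_draft"],                # drafts only
    "bob":   ["read_email", "create_draft", "send_email"],  # trusted to send
}

def authorized(user_id: str, action: str) -> bool:
    """Toy per-user scope check; unknown users get nothing."""
    return action in GRANTS.get(user_id, [])

print(authorized("alice", "send_email"))  # False → the agent stops and asks
print(authorized("bob", "send_email"))    # True  → the agent proceeds
```

With grants issued per delegation rather than per deployment, "User A can send, User B can't" stops being a fork in your agent code and becomes a difference in data.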
## The One-Line Version
If you don't want the decorator pattern, the core check is just:
```python
result = sg.verify(grant_id=grant_id, agent_id="email-assistant", requested_action="send_email")
if not result["permitted"]:
    raise PermissionError(f"Out of scope: {result['reason']}")
```
Drop that before any consequential operation. You're done.
## Getting Started
```shell
pip install scopegate-client
```

```python
from scopegate_client import ScopeGateClient

sg = ScopeGateClient(api_key="your-key")
```
Starter plan is free — 1,000 verifications included, no credit card required.
👉 scopegate.ai — get your API key in 30 seconds, drop it into your LangChain agent, and ship with confidence.
If you found this useful, I'm publishing weekly on AI agent architecture, safety patterns, and the infrastructure layer the agentic era needs. Follow along.