The 5-Minute Guide to Runtime Security for LangChain Agents
LangChain makes it easy to build powerful AI agents. It does not make it easy to secure them.
This guide shows you how to add runtime security to any LangChain agent in under 5 minutes — enforcing policies before execution and logging every decision with a tamper-evident audit trail.
Why LangChain Agents Need Runtime Security
LangChain gives your agent access to tools. Tools have consequences — they call APIs, write to databases, send emails, process payments.
The agent decides when and how to use those tools based on what the LLM outputs. That output is probabilistic. It can be manipulated (prompt injection). It can drift (long conversations). It can misinterpret your instructions.
You need a layer that evaluates every tool call before execution — deterministically, not probabilistically.
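To make "deterministic, not probabilistic" concrete, here is a minimal sketch of such a gate in plain Python. The rule shape and field names are illustrative only, not AgentGuard's actual schema:

```python
def evaluate(tool_name: str, params: dict, rules: list[dict]) -> dict:
    """Return the first matching rule's decision, else allow.

    Pure function of its inputs: the same tool call against the same
    rules always yields the same decision. No LLM in the loop.
    """
    for rule in rules:
        match = rule["match"]
        if match["tool"] != tool_name:
            continue
        param, op, value = match["param"], match["op"], match["value"]
        if op == "greaterThan" and params.get(param, 0) > value:
            return {"decision": rule["action"], "ruleId": rule["id"]}
    return {"decision": "allow", "ruleId": None}

# Illustrative rule: block payments over $500
rules = [{"id": "large-payment-gate",
          "action": "block",
          "match": {"tool": "process_payment", "param": "amount",
                    "op": "greaterThan", "value": 500}}]

print(evaluate("process_payment", {"amount": 900}, rules))
# {'decision': 'block', 'ruleId': 'large-payment-gate'}
```

The point is that the check runs before the tool does, and its output depends only on the call and the rules, never on model sampling.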
Quick Setup
Install
pip install agentguard-tech langchain langchain-openai
Get your API key
# Free tier — 10,000 evaluations/month
# Get your key at agentguard.tech
export AGENTGUARD_API_KEY="ag_live_your_key_here"
Wrap your agent
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from agentguard import AgentGuard
# Your existing tools
@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    # your implementation
    return f"Email sent to {to}"

@tool
def process_payment(amount: float, account_id: str) -> str:
    """Process a payment."""
    # your implementation
    return f"Payment of ${amount} processed"
# Wrap with AgentGuard
guard = AgentGuard(
    api_key="ag_live_your_key_here",  # better: read from AGENTGUARD_API_KEY
    policy="./policy.yaml"  # or inline dict
)
# Your existing agent setup
llm = ChatOpenAI(model="gpt-4o")
tools = [send_email, process_payment]

# The agent prompt must include an agent_scratchpad placeholder
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_functions_agent(llm, tools, prompt)

# Guard intercepts every tool call before execution
executor = AgentExecutor(
    agent=agent,
    tools=guard.wrap_tools(tools),  # one line change
    verbose=True
)
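If you are curious what a wrapper like `wrap_tools` has to do under the hood, a hand-rolled version looks roughly like this. This is an assumption about the mechanism, not AgentGuard's source; the `check` function stands in for the real policy evaluation:

```python
from typing import Callable

def guard_tool(fn: Callable[..., str],
               check: Callable[[str, dict], str]) -> Callable[..., str]:
    """Wrap a tool function so a policy check runs before every call."""
    def guarded(**kwargs) -> str:
        decision = check(fn.__name__, kwargs)
        if decision != "allow":
            # The real tool is never invoked; the agent sees the reason.
            return f"BLOCKED: {decision}"
        return fn(**kwargs)
    return guarded

# Stand-in for the policy engine (illustrative, hard-coded rule)
def check(tool: str, params: dict) -> str:
    if tool == "send_email" and "@yourcompany.com" not in params.get("to", ""):
        return "External email sending not permitted"
    return "allow"

def send_email(to: str, subject: str = "", body: str = "") -> str:
    return f"Email sent to {to}"

safe_send = guard_tool(send_email, check)
print(safe_send(to="bob@gmail.com"))        # BLOCKED: External email sending not permitted
print(safe_send(to="ann@yourcompany.com"))  # Email sent to ann@yourcompany.com
```

The key property: the guard sits between the agent's decision and the tool's side effect, so a blocked call never runs.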
Define your policy
# policy.yaml
id: my-agent-policy
version: 1.0.0
rules:
  # Block emails to external domains
  - id: internal-email-only
    action: block
    match:
      tool: send_email
      param.to:
        notContains: "@yourcompany.com"
    reason: "External email sending not permitted"

  # Require human approval for large payments
  - id: large-payment-gate
    action: require_approval
    match:
      tool: process_payment
      param.amount:
        greaterThan: 500

  # Rate limit all tool calls
  - id: rate-limit
    action: rate_limit
    limit: 100
    window: 3600  # per hour

default: allow
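The setup code above mentions an inline dict as an alternative to the YAML file. The same policy in dict form would look like this; the field names mirror the YAML, though whether AgentGuard accepts exactly this shape is an assumption based on the `policy=` comment:

```python
# Inline-dict form of policy.yaml (schema mirrored from the YAML above)
policy = {
    "id": "my-agent-policy",
    "version": "1.0.0",
    "rules": [
        {"id": "internal-email-only", "action": "block",
         "match": {"tool": "send_email",
                   "param.to": {"notContains": "@yourcompany.com"}},
         "reason": "External email sending not permitted"},
        {"id": "large-payment-gate", "action": "require_approval",
         "match": {"tool": "process_payment",
                   "param.amount": {"greaterThan": 500}}},
        {"id": "rate-limit", "action": "rate_limit",
         "limit": 100, "window": 3600},
    ],
    "default": "allow",
}
# guard = AgentGuard(api_key="...", policy=policy)
print(len(policy["rules"]))  # 3
```

The file form is still preferable in production, since it can be version-controlled and reviewed in PRs.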
Run it
result = executor.invoke({
    "input": "Send the Q1 report to the team and process the monthly subscription payment of $299"
})
# Every tool call is now:
# 1. Evaluated against your policy (before execution)
# 2. Logged with full context, decision, risk score
# 3. Allowed, blocked, or escalated based on your rules
What the Audit Log Looks Like
{
  "eventId": "evt_01HZ9XK2B",
  "timestamp": "2026-03-01T14:23:01.847Z",
  "agentId": "my-langchain-agent",
  "tool": "process_payment",
  "params": { "amount": 299, "account_id": "acc_abc123" },
  "decision": "allow",
  "riskScore": 28,
  "matchedRuleId": null,
  "policyId": "my-agent-policy-v1.0.0",
  "durationMs": 0.49,
  "prevHash": "sha256:a3f9b2...",
  "hash": "sha256:7c8d9e..."
}
Every event. Hash-chained. Tamper-evident. Designed to support the record-keeping requirements of EU AI Act Article 12.
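A hash chain is something you can verify yourself. AgentGuard's exact canonicalization is not documented here, so the scheme below, SHA-256 over the previous hash plus the JSON-serialized event body, is an assumption that illustrates the property:

```python
import hashlib
import json

def event_hash(prev_hash: str, body: dict) -> str:
    """Hash the previous link plus a canonical JSON form of the event body."""
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256((prev_hash + canonical).encode()).hexdigest()
    return f"sha256:{digest}"

def verify_chain(events: list[dict]) -> bool:
    """Recompute every link; editing any event breaks all later hashes."""
    for i, event in enumerate(events):
        body = {k: v for k, v in event.items() if k not in ("hash", "prevHash")}
        expected_prev = events[i - 1]["hash"] if i > 0 else "sha256:genesis"
        if event["prevHash"] != expected_prev:
            return False
        if event["hash"] != event_hash(event["prevHash"], body):
            return False
    return True

# Build a two-event chain, then tamper with it
e1 = {"eventId": "evt_1", "tool": "send_email", "decision": "allow",
      "prevHash": "sha256:genesis"}
e1["hash"] = event_hash(e1["prevHash"],
                        {k: v for k, v in e1.items() if k not in ("hash", "prevHash")})
e2 = {"eventId": "evt_2", "tool": "process_payment", "decision": "block",
      "prevHash": e1["hash"]}
e2["hash"] = event_hash(e2["prevHash"],
                        {k: v for k, v in e2.items() if k not in ("hash", "prevHash")})

print(verify_chain([e1, e2]))  # True
e1["decision"] = "block"       # tamper with an earlier event
print(verify_chain([e1, e2]))  # False
```

That is what "tamper-evident" means in practice: you cannot quietly rewrite history without invalidating every hash that follows.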
What Happens When a Rule Triggers
# Tool call blocked by policy:
{
  "result": "block",
  "matchedRuleId": "internal-email-only",
  "riskScore": 85,
  "reason": "External email sending not permitted",
  "durationMs": 0.52
}
# The tool is never called. The agent receives the block reason
# and can handle it gracefully or escalate to the user.
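One simple pattern for "handle it gracefully": since the agent receives the block reason as the tool's output, the calling code can detect it and escalate. A sketch, assuming the reason comes back prefixed in the tool output (the prefix and handler are illustrative, not an AgentGuard API):

```python
def handle_tool_output(output: str) -> str:
    """Route blocked calls to a human instead of silently failing."""
    if output.startswith("BLOCKED:"):
        reason = output.removeprefix("BLOCKED:").strip()
        # In production: open a ticket, ping Slack, or pause the run
        return f"Escalated to a human reviewer: {reason}"
    return output

print(handle_tool_output("BLOCKED: External email sending not permitted"))
print(handle_tool_output("Email sent to ann@yourcompany.com"))
```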
Production Checklist
- [ ] Policy file version-controlled in your repo
- [ ] Policy reviewed in PRs (treat it like IAM policy)
- [ ] Alerts configured for blocked actions (Slack, PagerDuty)
- [ ] Audit retention set to match your compliance requirements
- [ ] Rate limits configured per agent type
- [ ] Approval gates set for high-risk actions
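For the alerting item above, a minimal Slack incoming-webhook sender looks like this. The webhook URL is a placeholder, and the event fields assume the block-event shape shown earlier:

```python
import json
from urllib import request

def block_alert_payload(event: dict) -> dict:
    """Format an audit event as a Slack incoming-webhook message."""
    return {
        "text": (
            f":no_entry: Blocked tool call `{event['tool']}` "
            f"(rule: {event['matchedRuleId']}, risk: {event['riskScore']})\n"
            f"Reason: {event['reason']}"
        )
    }

def send_alert(webhook_url: str, event: dict) -> None:
    """POST the alert to Slack; add retries and timeouts in production."""
    data = json.dumps(block_alert_payload(event)).encode()
    req = request.Request(webhook_url, data=data,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

payload = block_alert_payload({
    "tool": "send_email", "matchedRuleId": "internal-email-only",
    "riskScore": 85, "reason": "External email sending not permitted",
})
print(payload["text"])
```

Wire this into whatever consumes your audit log so every `"result": "block"` event pages someone.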
Next Steps
- AgentGuard docs
- Policy template library
- Free API key — 10,000 evaluations/month, no card required
Questions? Drop them in the comments. Building something interesting with LangChain? I'd love to hear about it.
Follow The Bot Club for more practical AI agent security guides.