DEV Community

The Bot Club

Posted on • Originally published at agentguard.tech

Add Security Guardrails to LangChain in 5 Minutes

LangChain makes it ridiculously easy to build AI agents that use tools. Connect an LLM to a file system, a database, a shell — and suddenly your agent can do things.

That's the magic. It's also the problem.

Every tool call your LangChain agent makes is a potential attack surface. Prompt injection can trick your agent into reading sensitive files, executing arbitrary commands, or exfiltrating data through tool calls. And by default, LangChain doesn't have a security layer between the LLM's decision and the tool's execution.
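To make the risk concrete, here's a deliberately naive sketch (a hypothetical tool, not from any real project) of why an unguarded tool call is dangerous: the tool executes whatever string the model hands it, with nothing in between.

```python
# Hypothetical, deliberately naive shell tool -- whatever string the model
# produces is passed straight through, with no security layer in between.
def shell_exec(command: str) -> str:
    # A real tool would call subprocess.run(command, shell=True);
    # here we only echo, to show the command arrives unfiltered.
    return f"would execute: {command}"

# A prompt injection hidden in scraped content can steer the model into
# emitting a destructive command, and the tool runs it verbatim.
print(shell_exec("rm -rf /"))  # would execute: rm -rf /
```

The model's decision and the tool's execution are fused into one step; a guard layer's job is to split them apart.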

AgentGuard fixes that. It sits between your agent and its tools, evaluating every action in real time and blocking anything dangerous — before it executes.

Here's how to add it to your LangChain project in under 5 minutes.


Step 1: Install the SDK

TypeScript / Node.js:

npm install @the-bot-club/agentguard

Python:

pip install agentguard-tech

No heavy dependencies, no config files.


Step 2: Get Your API Key

Head to agentguard.tech and sign up. The free tier gives you 100,000 events per month — more than enough for development and most production workloads.

Grab your API key from the dashboard. Set it as an environment variable:

export AG_API_KEY="ag_live_your_key_here"

Step 3: Add the Callback Handler (TypeScript)

AgentGuard integrates with LangChain through a callback handler. This hooks into LangChain's lifecycle events — specifically tool calls — and evaluates them against security policies before they execute.

import { AgentGuardCallbackHandler } from '@the-bot-club/agentguard/integrations/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIToolsAgent } from 'langchain/agents';
import { pull } from 'langchain/hub';
import type { ChatPromptTemplate } from '@langchain/core/prompts';

const agentGuardHandler = new AgentGuardCallbackHandler({
  apiKey: process.env.AG_API_KEY,
});

const llm = new ChatOpenAI({ model: 'gpt-4o' });
const prompt = await pull<ChatPromptTemplate>('hwchase17/openai-tools-agent');

const agent = await createOpenAIToolsAgent({
  llm,
  tools: yourTools, // your array of LangChain tools
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: yourTools,
  callbacks: [agentGuardHandler],
});

// Every tool call now passes through AgentGuard
const result = await executor.invoke({
  input: 'Summarize the contents of /etc/passwd',
});

One callback. That's the entire integration.


Step 4: Add the Callback Handler (Python)

import os

from agentguard.integrations.langchain import AgentGuardCallbackHandler
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain import hub

# Read the key you set in Step 2 instead of hardcoding it
ag_handler = AgentGuardCallbackHandler(api_key=os.environ["AG_API_KEY"])

llm = ChatOpenAI(model="gpt-4o")
prompt = hub.pull("hwchase17/openai-tools-agent")

agent = create_openai_tools_agent(llm, your_tools, prompt)  # your_tools: your list of LangChain tools

executor = AgentExecutor(
    agent=agent,
    tools=your_tools,
    callbacks=[ag_handler],
)

result = executor.invoke({
    "input": "Delete all files in the home directory"
})

What Happens When a Dangerous Action Is Blocked?

Let's say a prompt injection attack convinces your agent to run rm -rf /. With AgentGuard:

🛡️ AgentGuard Evaluation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Tool:        shell_exec
  Input:       rm -rf /
  Risk Score:  0.98 (CRITICAL)
  Action:      ❌ BLOCKED
  Reason:      Destructive file system operation detected.
               Command attempts recursive forced deletion
               at root level.
  Policy:      default/no-destructive-fs
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The tool call never executes. Your agent receives a blocked response and handles it gracefully.
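To build intuition for what a policy like default/no-destructive-fs is checking, here's a stand-in in plain Python. This is an illustration of the general idea only — not AgentGuard's actual implementation, whose policy engine is not shown in this post:

```python
import re

# Illustrative pattern for "recursive forced deletion" shell commands
# (rm -rf, rm -fr, and flag-combined variants). Not the real policy.
DESTRUCTIVE_FS = re.compile(r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b")

def evaluate(tool: str, tool_input: str) -> dict:
    """Toy evaluation: block destructive filesystem ops on the shell tool."""
    if tool == "shell_exec" and DESTRUCTIVE_FS.search(tool_input):
        return {"action": "block", "risk": 0.98,
                "reason": "Destructive file system operation detected."}
    return {"action": "allow", "risk": 0.0, "reason": ""}

print(evaluate("shell_exec", "rm -rf /"))  # {'action': 'block', ...}
print(evaluate("shell_exec", "ls -la"))    # {'action': 'allow', ...}
```

Real guardrails combine many such checks (plus model-based scoring) per policy, but the contract is the same: the verdict comes back before the tool runs.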

Other examples AgentGuard catches:

  • Data exfiltration: Agent tries to POST sensitive files to an external URL
  • Privilege escalation: Agent attempts to modify system config or credentials
  • SQL injection: Agent passes unsanitized input to a database tool
  • Path traversal: Agent reads files outside its intended working directory
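For the path-traversal case in particular, the underlying check is easy to picture: resolve the requested path and refuse anything that escapes the working directory. A minimal sketch (again an illustration, not AgentGuard's code; assumes Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

def is_path_traversal(workdir: str, requested: str) -> bool:
    """Return True if `requested` escapes `workdir` after normalization."""
    base = Path(workdir).resolve()
    target = (base / requested).resolve()
    return not target.is_relative_to(base)

print(is_path_traversal("/srv/agent", "../../etc/passwd"))  # True
print(is_path_traversal("/srv/agent", "data/report.txt"))   # False
```

Resolving first matters: naive string checks miss `..` sequences buried mid-path.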

Monitor Everything in the Dashboard

Every evaluation shows up in real time at app.agentguard.tech:

  • Live event stream — every tool call with risk scores
  • Threat analytics — attack attempt patterns over time
  • Policy management — create and tune security policies
  • Audit trail — full history for compliance and debugging

This isn't just security — it's observability. You finally see what your agents are actually doing in production.


Beyond LangChain

AgentGuard ships with integrations for:

  • CrewAI — guard multi-agent workflows
  • AutoGen — Microsoft's agent framework
  • OpenAI Agents SDK — native tool calling integration
  • Vercel AI SDK — for Next.js and edge deployments
  • Express/Fastify middleware — protect API endpoints
  • Generic SDK — wrap any tool call with guard.evaluate()

Same API key, same dashboard, same policies — across your entire agent stack.
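If you're not on one of those frameworks, the generic pattern is simple: gate every tool call behind an evaluate step. The sketch below uses a stand-in Guard class with an assumed evaluate() signature — only the method name comes from the list above, so the real SDK's interface may differ:

```python
from typing import Callable

class Guard:
    """Stand-in for the SDK client -- the actual API may differ."""
    def evaluate(self, tool: str, tool_input: str) -> bool:
        # Toy policy: block obviously destructive shell input.
        return "rm -rf" not in tool_input

def guarded_call(guard: Guard, tool: str,
                 fn: Callable[[str], str], tool_input: str) -> str:
    # The one rule: no tool runs unless the guard approves the call first.
    if not guard.evaluate(tool, tool_input):
        return "[blocked by policy]"
    return fn(tool_input)

guard = Guard()
run = lambda cmd: f"ran: {cmd}"
print(guarded_call(guard, "shell_exec", run, "ls -la"))    # ran: ls -la
print(guarded_call(guard, "shell_exec", run, "rm -rf /"))  # [blocked by policy]
```

Whatever the framework, the wrapper shape is the same: evaluate, then execute or refuse.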


Recap

  1. ✅ Install the SDK (one package)
  2. ✅ Grab a free API key (100K events/month)
  3. ✅ Add a callback handler (3 lines of code)
  4. ✅ Real-time security on every tool call

Your LangChain agent is now guarded. Dangerous actions get blocked. Everything gets logged.


Get Started

Your agents are powerful. Make sure they're safe.
