How to Block File Reads in Langchain Agents
Blocking File Read Actions in Langchain Agents with SafeClaw
When you run a Langchain agent with file system tools, the agent can read any file it decides to access. SafeClaw lets you gate those reads with a deny-by-default policy, so only approved file paths execute.
Why Block File Reads in Langchain
Langchain agents using file system tools (such as the built-in ReadFileTool or custom file readers) can access sensitive files if the LLM is manipulated or confused. A user prompt like "read the config file" might cause the agent to read /etc/passwd or database credentials. SafeClaw enforces a policy layer that denies all file reads by default, then allows only specific paths you define.
Integration Pattern for Langchain
SafeClaw gates your Langchain tool calls by wrapping each tool's func method in a policy check. Here's the setup:
Step 1: Install SafeClaw
npx @authensor/safeclaw
Step 2: Create Your Langchain Agent with File Tools
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { DynamicTool } from "langchain/tools";
import * as fs from "fs";

// DynamicTool wraps a plain async function; the base Tool class is abstract
const readFileTool = new DynamicTool({
  name: "read_file",
  description: "Read the contents of a file",
  func: async (filePath: string) => fs.readFileSync(filePath, "utf-8"),
});

const agent = await initializeAgentExecutorWithOptions(
  [readFileTool],
  new ChatOpenAI({ modelName: "gpt-4" }),
  { agentType: "openai-functions" }
);
Step 3: Wrap Tool Execution with SafeClaw
import { SafeClaw } from "@authensor/safeclaw";
const safeClaw = new SafeClaw({
  policyYaml: fs.readFileSync("./safeclaw-policy.yaml", "utf-8"),
  apiKey: process.env.SAFECLAW_API_KEY,
});

// Wrap the tool's func method
const originalFunc = readFileTool.func;
readFileTool.func = async (filePath: string) => {
  const decision = await safeClaw.evaluate({
    action: "file_read",
    resource: filePath,
    context: {
      tool: "read_file",
      timestamp: new Date().toISOString(),
    },
  });
  if (decision.allowed === false) {
    throw new Error(
      `SafeClaw denied file read: ${filePath}. Reason: ${decision.reason}`
    );
  }
  if (decision.state === "require_approval") {
    throw new Error(
      `File read requires approval: ${filePath}. Contact administrator.`
    );
  }
  return originalFunc(filePath);
};
Step 4: Run the Agent
const result = await agent.call({
  input: "Read the contents of /app/data/users.json",
});
YAML Policy for File Read Gating
SafeClaw uses deny-by-default action gating. Create safeclaw-policy.yaml:
version: "1.0"
metadata:
  name: "langchain-file-read-policy"
  description: "Gate file reads in Langchain agents"
actions:
  file_read:
    default: deny
    rules:
      - resource: "/app/data/*.json"
        state: allow
        conditions:
          - key: "tool"
            operator: "equals"
            value: "read_file"
      - resource: "/app/logs/*.log"
        state: allow
        conditions:
          - key: "tool"
            operator: "equals"
            value: "read_file"
      - resource: "/etc/passwd"
        state: deny
        reason: "System files not accessible"
      - resource: "/app/secrets/*"
        state: require_approval
        reason: "Sensitive configuration requires approval"
      - resource: "**"
        state: deny
        reason: "File read not in allowed list"
This policy:
- Denies all file reads by default (`default: deny`)
- Allows reads from `/app/data/*.json` and `/app/logs/*.log`
- Explicitly denies `/etc/passwd`
- Requires approval for `/app/secrets/*`
- Denies everything else with a catch-all rule
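Rules are evaluated first-match-wins on top of a deny default. As a rough mental model (a sketch of the general technique, not SafeClaw's actual matcher), evaluation looks like this:

```typescript
// First-match-wins policy evaluator sketch. The glob translation below
// (* within a path segment, ** across segments) is an assumption about
// SafeClaw's matcher semantics, shown for illustration only.
type State = "allow" | "deny" | "require_approval";

interface Rule { resource: string; state: State; }

const rules: Rule[] = [
  { resource: "/app/data/*.json", state: "allow" },
  { resource: "/app/logs/*.log", state: "allow" },
  { resource: "/etc/passwd", state: "deny" },
  { resource: "/app/secrets/*", state: "require_approval" },
  { resource: "**", state: "deny" }, // catch-all
];

function globToRegExp(glob: string): RegExp {
  const pattern = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder for **
    .replace(/\*/g, "[^/]*")              // * stays within one segment
    .replace(/\u0000/g, ".*");            // ** crosses segments
  return new RegExp(`^${pattern}$`);
}

function evaluate(path: string): State {
  for (const rule of rules) {
    if (globToRegExp(rule.resource).test(path)) return rule.state;
  }
  return "deny"; // default: deny
}

console.log(evaluate("/app/data/users.json"));  // "allow"
console.log(evaluate("/etc/passwd"));           // "deny"
console.log(evaluate("/app/secrets/db.env"));   // "require_approval"
```

Because the catch-all `**` rule sits last, any path that reaches it without matching an earlier rule is denied, which is why rule order matters.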
Before and After Behavior
Before SafeClaw
// Agent receives prompt
const input = "Read /etc/passwd and tell me the usernames";
// Agent calls read_file with /etc/passwd
const result = await agent.call({ input });
// Output: root:x:0:0:...
// Sensitive file exposed
After SafeClaw
// Agent receives same prompt
const input = "Read /etc/passwd and tell me the usernames";
// Agent calls read_file with /etc/passwd
const result = await agent.call({ input });
// SafeClaw evaluates the action
// Policy matches: /etc/passwd -> deny
// Exception thrown: "SafeClaw denied file read: /etc/passwd. Reason: System files not accessible"
// Agent receives error and cannot proceed
Allowed File Read
// Agent receives prompt
const input = "Read /app/data/users.json and count the entries";
// Agent calls read_file with /app/data/users.json
const result = await agent.call({ input });
// SafeClaw evaluates the action
// Policy matches: /app/data/*.json -> allow
// File read executes normally
// Agent processes the JSON and returns count
Handling Approval States
For files requiring approval, you can implement a queue:
import { SafeClaw } from "@authensor/safeclaw";
const safeClaw = new SafeClaw({
  policyYaml: fs.readFileSync("./safeclaw-policy.yaml", "utf-8"),
  apiKey: process.env.SAFECLAW_API_KEY,
});

const approvalQueue: Array<{
  action: string;
  resource: string;
  requestId: string;
  timestamp: string;
}> = [];

readFileTool.func = async (filePath: string) => {
  const decision = await safeClaw.evaluate({
    action: "file_read",
    resource: filePath,
    context: {
      tool: "read_file",
      timestamp: new Date().toISOString(),
    },
  });
  if (decision.state === "require_approval") {
    const requestId = `req_${Date.now()}`;
    approvalQueue.push({
      action: "file_read",
      resource: filePath,
      requestId,
      timestamp: new Date().toISOString(),
    });
    throw new Error(
      `Approval required for ${filePath}. Request ID: ${requestId}`
    );
  }
  if (decision.allowed === false) {
    throw new Error(`SafeClaw denied file read: ${filePath}`);
  }
  return originalFunc(filePath);
};
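How queued requests get resolved is up to your application. Here is a minimal in-memory sketch (the `approveRequest` helper and `approvedResources` set are illustrative names, not part of SafeClaw's API):

```typescript
// Illustrative in-memory approval store; a real system would persist the
// queue and notify the requester. All names here are assumptions.
interface ApprovalRequest {
  action: string;
  resource: string;
  requestId: string;
  timestamp: string;
}

const approvedResources = new Set<string>();

const approvalQueue: ApprovalRequest[] = [
  {
    action: "file_read",
    resource: "/app/secrets/db.env",
    requestId: "req_1",
    timestamp: new Date().toISOString(),
  },
];

// An administrator approves a pending request by ID: remove it from the
// queue and remember its resource so a retried tool call can proceed.
function approveRequest(requestId: string): boolean {
  const index = approvalQueue.findIndex((r) => r.requestId === requestId);
  if (index === -1) return false;
  const [request] = approvalQueue.splice(index, 1);
  approvedResources.add(request.resource);
  return true;
}

approveRequest("req_1");
console.log(approvedResources.has("/app/secrets/db.env")); // true
console.log(approvalQueue.length); // 0
```

Your wrapped tool could then consult `approvedResources` before calling `safeClaw.evaluate()` again, letting the agent retry a previously blocked read.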
Policy Evaluation Performance
SafeClaw evaluates policies in sub-millisecond time. The SHA-256 hash chain audit trail logs every decision:
const decision = await safeClaw.evaluate({
  action: "file_read",
  resource: "/app/data/users.json",
  context: { tool: "read_file" },
});
console.log(decision);
// {
//   allowed: true,
//   state: "allow",
//   reason: "Matched rule: /app/data/*.json",
//   evaluationTimeMs: 0.23,
//   auditHash: "sha256:a3f4b2c1d5e6f708..."
// }
Each decision is hashed and chained, creating an immutable audit trail of all file read attempts.
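The chaining technique itself can be sketched in a few lines (an illustration of the general approach, not SafeClaw's internal record format): each entry's hash covers the previous hash, so altering any earlier decision invalidates every later hash.

```typescript
import { createHash } from "node:crypto";

// Hash-chain sketch: hash(n) = sha256(hash(n-1) + record(n)).
interface ChainEntry {
  decision: string; // serialized decision record
  hash: string;     // sha256 of previous hash + this record
}

function appendToChain(chain: ChainEntry[], decision: string): ChainEntry[] {
  const previousHash =
    chain.length > 0 ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(previousHash + decision)
    .digest("hex");
  return [...chain, { decision, hash }];
}

// Recompute every hash from the start; any mismatch means tampering.
function verifyChain(chain: ChainEntry[]): boolean {
  let previousHash = "genesis";
  for (const entry of chain) {
    const expected = createHash("sha256")
      .update(previousHash + entry.decision)
      .digest("hex");
    if (expected !== entry.hash) return false;
    previousHash = entry.hash;
  }
  return true;
}

let chain: ChainEntry[] = [];
chain = appendToChain(chain, JSON.stringify({ resource: "/app/data/users.json", state: "allow" }));
chain = appendToChain(chain, JSON.stringify({ resource: "/etc/passwd", state: "deny" }));

// Tampering with an earlier record breaks verification of the copy:
const tampered = chain.map((e) => ({ ...e }));
tampered[0].decision = "forged";
console.log(verifyChain(chain), verifyChain(tampered)); // true false
```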
Testing Your Policy
Create a test file to verify your policy blocks and allows correctly:
import { SafeClaw } from "@authensor/safeclaw";
const safeClaw = new SafeClaw({
  policyYaml: fs.readFileSync("./safeclaw-policy.yaml", "utf-8"),
  apiKey: process.env.SAFECLAW_API_KEY,
});

async function testPolicy() {
  // Should allow
  const allowed = await safeClaw.evaluate({
    action: "file_read",
    resource: "/app/data/users.json",
    context: { tool: "read_file" },
  });
  console.assert(allowed.allowed === true, "Should allow /app/data/users.json");

  // Should deny
  const denied = await safeClaw.evaluate({
    action: "file_read",
    resource: "/etc/passwd",
    context: { tool: "read_file" },
  });
  console.assert(denied.allowed === false, "Should deny /etc/passwd");

  // Should require approval
  const approval = await safeClaw.evaluate({
    action: "file_read",
    resource: "/app/secrets/db.env",
    context: { tool: "read_file" },
  });
  console.assert(
    approval.state === "require_approval",
    "Should require approval for /app/secrets/db.env"
  );
}

testPolicy().catch(console.error);
Common Patterns
Allow Multiple File Extensions
actions:
  file_read:
    default: deny
    rules:
      - resource: "/app/data/*.{json,csv,txt}"
        state: allow
Allow Specific Directories Only
actions:
  file_read:
    default: deny
    rules:
      - resource: "/app/data/**"
        state: allow
      - resource: "/app/logs/**"
        state: allow
Deny Sensitive Patterns
actions:
  file_read:
    default: deny
    rules:
      - resource: "**/.env*"
        state: deny
        reason: "Environment files blocked"
      - resource: "**/secret*"
        state: deny
        reason: "Secret files blocked"
      - resource: "/app/data/**"
        state: allow
Integration Checklist
- Install SafeClaw with `npx @authensor/safeclaw`
- Get a free API key at safeclaw.onrender.com
- Write your `safeclaw-policy.yaml` with deny-by-default rules
- Wrap your Langchain tool's `func` method with `safeClaw.evaluate()`
- Test allowed and denied paths before deploying
- Monitor audit hashes for compliance tracking
SafeClaw ships with zero third-party dependencies and is written in TypeScript strict mode, which reduces supply chain exposure and catches type errors in your agent integration at compile time.