You let your AI coding agent loose on a refactor. Twenty minutes later it's done. You ship. But did you check what it actually did while you weren't watching?
Most developers don't. And that's a problem.
What AI agents can actually do
Modern AI coding agents aren't just writing code. They're running shell commands, reading files, making network requests, and writing to your filesystem. They have, in effect, the same permissions you do.
Think about what that means:
- Your agent can read `.env` files
- Your agent can run `rm -rf` on anything it has access to
- Your agent can `curl` data to an external server
- Your agent can write to `/etc/passwd`, `.ssh/authorized_keys`, or any other sensitive path
These aren't theoretical threats. They're tool calls that real agents make during normal operation — often by accident, sometimes because a bad prompt led them there.
The near-miss that prompted this
I was using OpenClaw to refactor some API routes. Midway through, it read my .env file.
It wasn't malicious. It was probably looking for environment variable names to reference in the code. But it had no business touching credentials. And I had no idea it happened until I read the logs afterward.
That got me thinking: there's no equivalent of a firewall for AI agent tool calls. No way to say "you can write code, but you can't touch credentials." No way to enforce that. Just vibes and hope.
ClawWall
ClawWall is a policy firewall for AI agents. It intercepts every tool call before it runs and decides: allow it, deny it, or pause and ask you.
```
npm install -g clawwall
clawwall start
CLAWWALL_ENABLED=true openclaw
```
How it works
ClawWall integrates with OpenClaw's before-tool-call hook. Every action your agent wants to take — write a file, run a command, browse a URL — hits ClawWall's policy engine first.
```
Agent → before-tool-call hook → POST /policy/check → ClawWall daemon
                                                           ↓
                                    allow (ms) ←  Rule Engine  → deny (ms)
                                                           ↓
                                                    ask → Dashboard
```
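To make the flow concrete, here is a minimal sketch of what the hook side of that handshake could look like. The port, payload shape, and response shape are my assumptions for illustration, not ClawWall's documented API; the transport is injected so the sketch stays testable without a running daemon.

```typescript
type Decision = "allow" | "deny" | "ask";

interface ToolCall {
  tool: string;   // e.g. "exec", "write", "read", "browse"
  target: string; // command line, file path, or URL
}

// Hypothetical before-tool-call hook body: serialize the pending tool call,
// POST it to the local policy daemon, and return its verdict.
async function checkPolicy(
  call: ToolCall,
  post: (url: string, body: string) => Promise<string> // injectable transport
): Promise<Decision> {
  const raw = await post(
    "http://127.0.0.1:8787/policy/check", // hypothetical daemon address
    JSON.stringify(call)
  );
  const { decision } = JSON.parse(raw) as { decision: Decision };
  return decision;
}
```

The agent only proceeds when the returned decision is `allow`; everything else short-circuits the tool call before it touches your system.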
ALLOW and DENY decisions return in under a millisecond, so normal operations see effectively no added latency.
ASK decisions pause the agent and surface in a dashboard where you click Allow or Deny. The agent waits until you do.
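An ASK presumably blocks the hook until a human resolves the check. One way to sketch that pause (the `/policy/pending` endpoint, check IDs, and status values are all assumptions, not ClawWall's real interface):

```typescript
// Hypothetical polling loop: hold the tool call until someone clicks
// Allow or Deny in the dashboard.
async function awaitHumanDecision(
  checkId: string,
  poll: (url: string) => Promise<string>, // injectable transport
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms)),
  intervalMs = 1000
): Promise<"allow" | "deny"> {
  for (;;) {
    const raw = await poll(`http://127.0.0.1:8787/policy/pending/${checkId}`);
    const { status } = JSON.parse(raw) as { status: "pending" | "allow" | "deny" };
    if (status !== "pending") return status; // a human decided
    await sleep(intervalMs);                 // still waiting on the dashboard
  }
}
```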
Six rules, active by default
No configuration needed. These fire on install:
| Rule | Decision | What it catches |
|---|---|---|
| `dangerous_command` | DENY | `rm -rf`, `mkfs`, `shutdown`, `dd` |
| `credential_read` | DENY | `.env`, `.aws/credentials`, `id_rsa` |
| `exfiltration` | DENY | `curl -d`, `wget --post`, `nc -e` |
| `sensitive_write` | DENY | `.env`, `.ssh/`, `/etc/passwd` |
| `outside_workspace` | DENY | Paths outside your project directory |
| `internal_network` | ASK | `localhost`, `127.x`, `192.168.x` |
The hard-block rules have no override. Your agent cannot talk its way past them, no matter how the prompt is constructed.
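To show the kind of matching that table implies, here is a toy version of such a rule engine. The regexes and structure are mine, not ClawWall's actual rules; a real implementation would presumably also canonicalize paths and parse argv rather than pattern-match raw strings.

```typescript
type Verdict = { decision: "allow" | "deny" | "ask"; rule?: string };

// Illustrative patterns mirroring a few of the default rules above.
const denyRules: Array<[string, RegExp]> = [
  ["dangerous_command", /\brm\s+-rf\b|\bmkfs\b|\bshutdown\b|\bdd\b/],
  ["credential_read", /\.env\b|\.aws\/credentials|id_rsa/],
  ["exfiltration", /\bcurl\b.*\s-d\b|\bwget\b.*--post|\bnc\b.*\s-e\b/],
];

const askRules: Array<[string, RegExp]> = [
  ["internal_network", /\blocalhost\b|\b127\.\d+|\b192\.168\./],
];

// Deny rules are checked first so a hard block can never be downgraded
// to an ask; anything unmatched falls through to allow.
function evaluate(target: string): Verdict {
  for (const [rule, re] of denyRules)
    if (re.test(target)) return { decision: "deny", rule };
  for (const [rule, re] of askRules)
    if (re.test(target)) return { decision: "ask", rule };
  return { decision: "allow" };
}
```

Ordering deny before ask is the design point worth copying: an agent prompt can rephrase a command endlessly, but it cannot reorder the rule evaluation.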
What the dashboard looks like
```
ALLOW 847   DENY 12   ASK 3

LIVE FEED
09:41:03  ✓  write   src/api/routes.ts    allow
09:41:05  ✗  read    .env                 deny   credential_read
09:41:07  ✓  exec    npm test             allow
09:41:09  ✗  exec    rm -rf /tmp/build    deny   dangerous_command
09:41:11  ?  browse  localhost:5173       ask    internal_network
```
Why not just trust the agent?
Modern models are pretty good. But "generally careful" isn't a security posture.
Consider:
- Prompt injection: A malicious string in a file your agent reads could redirect its behavior
- Model drift: The model that's careful today might behave differently after a version update
- Edge cases: Agents do unexpected things in long, complex sessions
- Least privilege: You wouldn't give a new employee root access because they seem trustworthy
The point isn't that AI agents are malicious. It's that they're powerful and operate at machine speed. Without a firewall, you're betting that none of their tool calls are wrong.
Get started
```
npm install -g clawwall
clawwall start
CLAWWALL_ENABLED=true openclaw
```
Or with curl:
```
curl -fsSL https://clawwall.dev/install.sh | bash
```
What's the sketchiest thing you've seen an AI agent try to do? Drop it in the comments.