John Kearney
Clawdbot Leaked 1.5 Million API Keys. Here Is What I Built to Stop It Happening to You.

Clawdbot has leaked over 1.5 million API keys in under a month.

That number is not hypothetical. AI coding agents run shell commands, write files, and make network requests with zero oversight. They operate with your permissions. If an agent can read your .env file and make a network request, your keys are one bad action away from being exposed.

The people most at risk are non-technical users who just want AI help with their code. They do not know what the agent is doing under the hood. They should not have to.

Why Monitoring and Sandboxing Are Not Enough

Monitoring watches actions after they execute. It tells you what happened. It does not prevent it. "Your agent read your AWS credentials 10 minutes ago" is a useful log entry, but the data already left.

Sandboxing (Docker, containers) draws a boundary. Everything inside the boundary gets equal access. The container cannot distinguish a safe log write from a credential read in the same directory.

Neither approach stops the action before it happens.

Action-Level Gating

SafeClaw sits between your AI agent and every action it tries to take. Nothing executes until it clears your policy. Deny-by-default. If the control plane is unreachable, everything is blocked.

The agent wants to read /etc/hosts? Policy says deny. Blocked. Wants to write to ~/projects/app.ts? Policy says allow. Proceeds. Wants to run a shell command with sudo? Policy says require approval. You decide.

How It Works

Deny-by-default. The default state is: nothing runs. You build up permissions from zero, not lock down from full access.

Conditional rules. Each rule has a condition and an effect:

```
condition: type=file_write AND path starts with /etc
effect: DENY

condition: type=shell_exec AND command contains sudo
effect: REQUIRE_APPROVAL

condition: type=file_write AND path starts with ~/projects
effect: ALLOW
```

Rules are evaluated top-to-bottom. First match wins.

Three effects:

  • ALLOW — action proceeds normally
  • DENY — action is blocked and logged
  • REQUIRE_APPROVAL — action pauses, you decide in the dashboard
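
The evaluation model above can be sketched in a few lines of TypeScript. The names here (`ActionRequest`, `Rule`, `evaluatePolicy`) are illustrative, not SafeClaw's actual API — this just shows first-match-wins evaluation with deny-by-default:

```typescript
type Effect = "ALLOW" | "DENY" | "REQUIRE_APPROVAL";

interface ActionRequest {
  type: "file_read" | "file_write" | "shell_exec" | "network_request";
  path?: string;
  command?: string;
}

interface Rule {
  condition: (action: ActionRequest) => boolean;
  effect: Effect;
}

// The three rules from the example policy above.
const rules: Rule[] = [
  { condition: a => a.type === "file_write" && (a.path ?? "").startsWith("/etc"), effect: "DENY" },
  { condition: a => a.type === "shell_exec" && (a.command ?? "").includes("sudo"), effect: "REQUIRE_APPROVAL" },
  { condition: a => a.type === "file_write" && (a.path ?? "").startsWith("~/projects"), effect: "ALLOW" },
];

// Rules are checked top-to-bottom; the first matching rule decides.
// No match means DENY: permissions are built up from zero.
function evaluatePolicy(action: ActionRequest, policy: Rule[]): Effect {
  for (const rule of policy) {
    if (rule.condition(action)) return rule.effect;
  }
  return "DENY"; // deny-by-default
}
```

Note that the final `return "DENY"` is what makes the model deny-by-default: an action the policy says nothing about never runs.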

Simulation Mode

Toggle simulation on, run your agent normally, and every action gets evaluated but never blocked. The dashboard shows:

  • Green: would be allowed
  • Red: would be denied
  • Yellow: would require approval

Run for a day. Review the results. Tune your rules. When the results look right, switch to enforcement. No guessing.
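
Conceptually, simulation mode is the same evaluation with enforcement switched off. A minimal sketch (names are illustrative, and `check` here is a stand-in for the full rule engine): the verdict is computed and logged, but the action runs regardless.

```typescript
type Verdict = "ALLOW" | "DENY" | "REQUIRE_APPROVAL";

interface SimEntry {
  action: string;
  verdict: Verdict; // shown green / red / yellow in the dashboard
  at: string;
}

const simulationLog: SimEntry[] = [];

// Stand-in policy check; the real evaluator applies your full rule list.
function check(action: string): Verdict {
  if (action.startsWith("file_write /etc")) return "DENY";
  if (action.includes("sudo")) return "REQUIRE_APPROVAL";
  return "ALLOW";
}

function simulate<T>(action: string, run: () => T): T {
  // Record what enforcement *would* have done...
  simulationLog.push({ action, verdict: check(action), at: new Date().toISOString() });
  // ...then let the action proceed. Simulation never blocks.
  return run();
}
```

The point of the sketch: the log you review before flipping to enforcement is produced by exactly the rules that will later do the blocking, so there is no gap between what you tested and what you enforce.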

The Audit Trail

Every action, allowed or denied, gets recorded with:

  • The action request (what the agent tried)
  • The policy decision
  • Timestamp
  • A SHA-256 hash of the previous entry

Alter any entry and the chain breaks. The entire history is verifiable.
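
A hash-chained log of this shape is straightforward to sketch. This is an assumption about the structure, not SafeClaw's actual implementation, using Node's built-in `crypto` module: each entry stores the SHA-256 of the previous entry, so editing any record invalidates everything after it.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  request: string;   // what the agent tried
  decision: string;  // the policy decision
  timestamp: string;
  prevHash: string;  // SHA-256 of the previous entry
}

const GENESIS = "0".repeat(64); // placeholder hash for the first entry

function hashEntry(entry: AuditEntry): string {
  return createHash("sha256").update(JSON.stringify(entry)).digest("hex");
}

function append(chain: AuditEntry[], request: string, decision: string): void {
  const prevHash = chain.length > 0 ? hashEntry(chain[chain.length - 1]) : GENESIS;
  chain.push({ request, decision, timestamp: new Date().toISOString(), prevHash });
}

// Walk the chain and recompute every link; any tampered entry
// breaks the hash of the entry that follows it.
function verify(chain: AuditEntry[]): boolean {
  return chain.every((entry, i) =>
    entry.prevHash === (i === 0 ? GENESIS : hashEntry(chain[i - 1]))
  );
}
```

Verification is O(n) over the log, and nothing about it is secret: anyone holding the log can confirm it has not been rewritten.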

Validation

Before SafeClaw, I shipped Authensor for OpenClaw as a marketplace item. It hit 300 downloads in a couple of days. That confirmed the demand for this kind of tool.

What You Get

  • The entire client is 100% open source. The Authensor control plane is hosted and only sees action metadata, never your keys or data.
  • 446 tests. TypeScript, strict mode.
  • Works with Claude and OpenAI out of the box.
  • Browser dashboard with setup wizard. No config files, no CLI expertise required.
  • Free tier with renewable 7-day keys, no credit card.

Try It

```
npx @authensor/safeclaw
```

Browser opens. Dashboard loads. The setup wizard walks you through creating your first policy.

GitHub: github.com/AUTHENSOR/SafeClaw

Request a free token through the setup wizard.


Built over 4 months by an independent developer. Just one person who saw 1.5 million keys leak and built the thing that should have existed already.
