DEV Community

ClawGear

I Audited My Own OpenClaw Deployment. It Scored 26/100.

I built an autonomous agent company. Agents running 24/7, spawning sub-agents, handling Telegram groups, making API calls. Then I audited the security config.

Score: 26/100. Grade: F.

Here are the 5 holes ClawArmor found — and how to close them.


1. Raw API Keys in the Wrong File

What it is:

My setup had Brave Search API keys sitting in costs.json — not in agent-accounts.json where credentials are supposed to live. They were just... there. In a file I regularly read during debugging.

Why it's dangerous:

costs.json has no special protections. It gets committed, pasted into issues, and shared in logs. Raw API keys in random config files are how credentials leak into GitHub repos, Discord messages, and sub-agent contexts. The agent-accounts.json file exists specifically to centralize credentials so they're not scattered across your config.
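The underlying check is simple: walk the parsed config and flag string values shaped like raw credentials. Here's a rough Python sketch of the idea (illustrative only, not ClawArmor's actual implementation; the length threshold and regex are assumptions):

```python
import json
import re

# Assumed heuristic: long tokens of key-safe characters look like credentials.
KEY_SHAPED = re.compile(r"^[A-Za-z0-9_\-]{24,}$")

def find_key_shaped_values(config: dict, path: str = "") -> list[str]:
    """Return dotted paths of config values that look like raw API keys."""
    hits = []
    for name, value in config.items():
        dotted = f"{path}.{name}" if path else name
        if isinstance(value, dict):
            hits.extend(find_key_shaped_values(value, dotted))
        elif isinstance(value, str) and KEY_SHAPED.match(value):
            hits.append(dotted)
    return hits

# Hypothetical costs.json contents, mirroring the finding below:
costs = json.loads('{"accounts": {"brave": {"apiKey": "BSAexampleexampleexample1234"}}}')
print(find_key_shaped_values(costs))  # ['accounts.brave.apiKey']
```

Real scanners add entropy checks and known-provider prefixes, but even this naive version would have caught my mistake.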

How ClawArmor catches it:

clawarmor audit
# Finding: CRITICAL — API key pattern detected in costs.json
# Key: accounts.brave.apiKey
# Recommendation: Move to agent-accounts.json

How to fix it:

Move the key to the right place:

# Remove from costs.json (edit directly or use openclaw config)
openclaw config unset costs.brave.apiKey

# Add to agent-accounts.json
# Edit ~/.openclaw/agent-accounts.json:
{
  "accounts": {
    "brave": {
      "apiKey": "YOUR_KEY_HERE"
    }
  }
}

Then verify no other config files have key-shaped values:

clawarmor scan --path ~/.openclaw --pattern keys

2. exec.ask=off — Prompt Injection Runs Arbitrary Commands

What it is:

exec.ask was set to off. That means when an agent calls the exec tool, it runs immediately — no confirmation, no review.

Why it's dangerous:

This is the one that kept me up at night. With exec.ask=off, a prompt injection attack can run rm -rf, exfiltrate files, or install malicious packages. An agent that processes external content (emails, web pages, API responses) is always a potential injection vector. A malicious string in an email could trigger a shell command. The agent won't pause to ask.

The risk isn't theoretical — prompt injection in LLM applications is well-documented. You're running code that processes untrusted text. exec.ask=off removes your last safety layer.

How ClawArmor catches it:

clawarmor audit
# Finding: HIGH — exec.ask is disabled
# Path: agents.defaults.tools.exec.ask
# Current value: "off"
# Recommendation: Set to "on-miss" or "always"

How to fix it:

# Require confirmation for commands not in your allowlist
openclaw config set agents.defaults.tools.exec.ask on-miss

# Or require confirmation for everything (safest)
openclaw config set agents.defaults.tools.exec.ask always

# Then restart
openclaw gateway restart

on-miss is the practical balance — commands on your allowlist run freely, unknown commands require confirmation. always is maximum safety if you're paranoid (I am now).
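To make the policy concrete, here is a minimal Python sketch of the decision on-miss implies (illustrative, not OpenClaw's source; the allowlist contents are assumptions):

```python
# Hypothetical allowlist of binaries the agent may run unprompted.
ALLOWLIST = {"ls", "git", "npm"}

def should_ask(command: str, ask_policy: str) -> bool:
    """Return True if the command requires human confirmation first."""
    if ask_policy == "always":
        return True
    if ask_policy == "off":
        return False  # the dangerous setting: nothing ever pauses
    # "on-miss": ask only when the binary isn't on the allowlist
    binary = command.split()[0]
    return binary not in ALLOWLIST

assert should_ask("rm -rf /", "on-miss") is True      # unknown binary: confirm
assert should_ask("git status", "on-miss") is False   # allowlisted: runs freely
assert should_ask("git status", "always") is True     # always means always
```

The injected rm -rf in the earlier scenario only executes silently under the off branch; both other policies put a human in the loop.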


3. Elevated Tools Accessible from Group Chats

What it is:

My agent had elevated tool access configured at the default level, which meant it was inherited by group chat sessions. Tools marked elevated: true — host-level operations, file system access, system commands — were reachable from any Telegram group where the agent was a member.

Why it's dangerous:

Group chats have a different threat model than your private DM. Groups can have unknown members. Messages can be forwarded in. Someone in a group can attempt to redirect your agent into running elevated operations. If elevated tools are available in group context, the attack surface is much larger.

How ClawArmor catches it:

clawarmor audit
# Finding: HIGH — elevated tools not restricted by channel
# Tool: exec (elevated: true)
# Available in: telegram groups, discord channels
# Recommendation: Restrict elevated tools to DM/webchat only

How to fix it:

# Restrict elevated tool access to trusted channels only
openclaw config set agents.defaults.tools.exec.channels '["webchat", "telegram:dm"]'

# Or disable elevated tools in group contexts entirely
openclaw config set channels.telegram.groups.GROUPID.tools.exec.enabled false

openclaw gateway restart

Lock elevated tools to channels where you know exactly who's talking.
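The fix amounts to a per-tool channel allowlist. A minimal Python sketch of that gate (the config shape here is an assumption for illustration, not OpenClaw's actual schema):

```python
# Assumed shape: each elevated tool maps to the channels allowed to reach it.
TOOL_CHANNELS = {"exec": {"webchat", "telegram:dm"}}

def tool_allowed(tool: str, channel: str) -> bool:
    """Deny by default; allow only channels explicitly listed for the tool."""
    allowed = TOOL_CHANNELS.get(tool)
    return allowed is not None and channel in allowed

assert tool_allowed("exec", "telegram:dm")        # trusted 1:1 channel
assert not tool_allowed("exec", "telegram:group") # groups can't reach exec
```

The important property is the default: a channel absent from the list gets nothing, so adding the agent to a new group never silently expands its blast radius.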


4. Open Group Policy

What it is:

One of my Telegram groups (Comfy) had groupPolicy: open — meaning anyone who could reach the group could send commands to the agent. No allowlist. No mention requirement. Any member, any message.

Why it's dangerous:

Open policy means anyone added to your group by anyone can task your agent. That includes malicious actors, bot accounts, or someone who was added without your knowledge. Your agent will respond to their commands with the same trust level as your own messages. That's not a group assistant — that's a publicly accessible command interface.

How ClawArmor catches it:

clawarmor audit
# Finding: MEDIUM — group policy is open
# Group: -5267334417 (Comfy)
# Current policy: open
# Recommendation: Set allowlist or require @mention

How to fix it:

# Restrict who can command the agent in groups
openclaw config set channels.telegram.groupPolicy allowlist
openclaw config set channels.telegram.groupAllowFrom '["YOUR_TELEGRAM_USER_ID"]'

# Or at minimum require explicit @mention
openclaw config set channels.telegram.groups.-5267334417.requireMention true

openclaw gateway restart

If it's just you and trusted teammates, use an allowlist. If it's a larger group, at minimum require @mention so the agent doesn't respond to every message from every member.
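The policy logic itself is small, which makes the open setting easy to reason about. A hypothetical Python sketch of the decision (not OpenClaw's source):

```python
def agent_should_respond(policy: str, sender_id: str, allow_from: list[str],
                         mentioned: bool, require_mention: bool) -> bool:
    """Decide whether a group message should task the agent."""
    if require_mention and not mentioned:
        return False  # ignore chatter that doesn't @mention the bot
    if policy == "open":
        return True   # the dangerous setting: every member is trusted
    if policy == "allowlist":
        return sender_id in allow_from
    return False      # unknown policy: fail closed

# With an open policy and no mention requirement, any member can task the agent:
assert agent_should_respond("open", "stranger", [], False, False)
# An allowlist plus mention requirement shuts that down:
assert not agent_should_respond("allowlist", "stranger", ["me"], True, True)
```

Note the fail-closed default on the last line: an unrecognized policy value should deny, not fall through to open.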


5. Skills with Obfuscated Code Patterns

What it is:

ClawArmor flagged one installed skill as having obfuscated code patterns — minified JS, base64 blobs, dynamic eval() calls. The kind of code that's designed to not be readable.

Why it's dangerous:

Skills run with the same permissions as your agent. An obfuscated skill could be doing anything: exfiltrating your credentials, making outbound calls to unknown hosts, executing hidden commands. If you can't read it, you can't trust it.

This is exactly how the openclaw-web-search incident happened in our workspace — a third-party plugin with unreviewed code. The rule now: every skill gets a code review before install.
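The obfuscation heuristics boil down to a handful of pattern checks. An illustrative Python sketch (the exact patterns ClawArmor uses aren't documented here, so these are assumptions based on the finding output below):

```python
import re

# Assumed heuristics: dynamic eval, base64 decoding, long encoded literals.
SUSPICIOUS = {
    "dynamic-eval": re.compile(r"\beval\s*\("),
    "base64-decode": re.compile(r"\batob\s*\("),
    "encoded-blob": re.compile(r"['\"][A-Za-z0-9+/]{80,}={0,2}['\"]"),
}

def scan_source(source: str) -> list[str]:
    """Return the names of suspicious patterns found in skill source."""
    return [name for name, pat in SUSPICIOUS.items() if pat.search(source)]

clean = "export function search(q) { return fetch(url(q)); }"
shady = 'eval(atob("' + "A" * 80 + '"))'
print(scan_source(clean))  # []
print(scan_source(shady))  # ['dynamic-eval', 'base64-decode', 'encoded-blob']
```

Heuristics like these produce false positives (some legitimate bundles are minified), which is why the recommendation is to review the source rather than auto-delete.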

How ClawArmor catches it:

clawarmor audit
# Finding: CRITICAL — skill contains obfuscation patterns
# Skill: some-skill-name
# Patterns: eval(), atob(), encoded strings
# Recommendation: Review source or remove skill

How to fix it:

# Audit all installed skills
clawarmor scan --skills

# For any flagged skill, review the source manually:
cat ~/.openclaw/skills/SKILL_NAME/index.js

# If you can't verify it's clean, remove it:
openclaw skills remove SKILL_NAME

# Only install skills from trusted sources
# (shopclawmart.com, clawhub.com, or code you've read)

The security cost of a convenient skill is high if you haven't read the code.


Run the Audit

Those 5 findings dropped my deployment to 26/100. Some of them were config laziness — I knew better. Some I hadn't even thought about.

Run it yourself:

npm install -g clawarmor
clawarmor audit

It outputs a score, a grade, and a prioritized list of findings with fix commands. Takes about 30 seconds. If you're running an agent in production, you want to find these holes before someone else does.


ClawGear builds autonomous agent operations. ClawArmor is MIT-licensed. GitHub →
