TL;DR: ClawSec is a native OpenClaw skill that turns your agent into a personal recon operator. Type `/clawsec scanme.nmap.org` in Telegram, and ~30 seconds later you get a full AI-analyzed attack-surface report – with CVE-aware risk scoring, WHOIS, and subdomain enumeration – right in your chat. No terminals. No copy-pasting. No grep-fu.
The Problem
If you've done any penetration testing or played HackTheBox, you know the loop:
- Run `nmap`, wait
- Copy the output
- Paste into ChatGPT: "what does this mean?"
- Run `whois`, copy, paste again
- Manually write a report
- Repeat for every target

It's tedious. It's slow. And honestly, it's the part nobody enjoys. The fun is the exploitation – not parsing nmap XML at midnight.
When I saw OpenClaw, I immediately thought: this is the missing orchestration layer. What if I could just message my agent, have it run all the tools, feed the output to an LLM, and get back a structured security report – right in Telegram?
That's ClawSec.
What is ClawSec?
ClawSec is a native OpenClaw skill that converts any OpenClaw instance into an offensive reconnaissance assistant. It implements Closed-Loop Recon Orchestration: the entire cycle from natural language trigger to actionable intelligence report runs automatically, without manual intervention.
You → Telegram → OpenClaw → scope_guard.py → recon.py → LLM analysis → Report → Telegram
In practice:
```
/clawsec scanme.nmap.org
```

```
🦞 ClawSec Report | Vertex Coders LLC
──────────────────────────────────
Target: scanme.nmap.org | quick scan | 2026-04-23T21:36:00Z

🔍 Executive Summary
Target exposes 2 open ports, both running EOL software with known CVEs.
OpenSSH 6.6.1 and Apache 2.4.7 are both flagged Critical – immediate
patching or decommission recommended. Attack surface: HIGH.

🔍 Open Ports & Services
| Port | Service | Version            | Risk     | Reason                          |
|------|---------|--------------------|----------|---------------------------------|
| 22   | ssh     | OpenSSH 6.6.1      | Critical | OpenSSH <= 6.6 (CVE-laden, EOL) |
| 80   | http    | Apache httpd 2.4.7 | Critical | Apache httpd <= 2.4.29          |

🎯 Key Findings
- CVE-laden OpenSSH 6.6.1: multiple auth bypass and memory corruption CVEs
- Apache 2.4.7: predates the CVE-2021-41773 fix – path traversal risk
- SSH banner not suppressed: version info leaked

🔧 Next Steps
- Test path traversal: curl http://scanme.nmap.org/cgi-bin/.%2e/etc/passwd
- Enumerate SSH keys / check for weak ciphers: ssh-audit scanme.nmap.org
──────────────────────────────────
⚠️ For authorized use only | github.com/Denisijcu/clawsec
```
One Telegram message. One AI-analyzed report. Zero manual work.
Architecture
ClawSec is a native OpenClaw skill β a folder with a SKILL.md manifest plus Python scripts:
```
~/.openclaw/skills/clawsec/
├── SKILL.md                  # Agent workflow – the brain
├── recon.py                  # Recon engine (nmap XML + whois + subdomains)
├── scope_guard.py            # Target validation
├── tests/
│   ├── test_scope_guard.py   # 19 unit tests
│   └── test_risk.py          # 11 unit tests
└── wordlists/
    └── subdomains-top200.txt # 200-entry bundled wordlist
```
SKILL.md – The Brain
The SKILL.md is where OpenClaw magic happens. It's a markdown file with YAML frontmatter that tells the agent:
- When to activate – trigger keywords: `scan`, `recon`, `nmap`, `hackthebox`, `htb`, `ctf target`
- What binaries it needs – `python3`, `nmap`, `whois`
- The 5-step workflow – parse request → scope guard → run recon → LLM analysis → send report

The key insight: skills are operational instructions, not code. You write a playbook in plain English/markdown, and the agent executes it. No SDK. No compilation.
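To make that concrete, here is a purely illustrative sketch of what such a frontmatter block could look like. I'm not reproducing ClawSec's actual SKILL.md here, and every field name below is an assumption, not OpenClaw's documented schema:

```yaml
---
# Hypothetical SKILL.md frontmatter – field names are illustrative only.
name: clawsec
description: >
  Offensive recon assistant. Activate when the user asks to scan,
  fingerprint, or enumerate a target (keywords: scan, recon, nmap,
  hackthebox, htb, ctf target).
requires:
  - python3
  - nmap
  - whois
---
```

The markdown body after the frontmatter would then carry the 5-step workflow as plain-English instructions.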
recon.py – The Engine (v0.2.0)
Three modules, pure Python stdlib, no heavy dependencies:
Module 1: Nmap with XML parsing
```python
SCAN_PROFILES = {
    "quick": ["-sV", "-T4", "--open", "-F"],
    "full": ["-sV", "-sC", "-T3", "--open", "-p-"],
    "stealth": ["-sS", "-sV", "-T2", "--open", "-F"],
}
```
We parse nmap's `-oX -` (XML to stdout) instead of running regexes over the text output. This gives us `product`, `version`, `cpe`, `extrainfo`, scripts, OS matches, and hostnames – everything the LLM needs for deep analysis.
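As a minimal sketch (not ClawSec's actual parser), pulling service details out of nmap's XML with the stdlib looks roughly like this; `parse_nmap_xml` and the embedded sample are illustrative:

```python
import xml.etree.ElementTree as ET

# Trimmed-down example of what `nmap -sV -oX -` emits for one open port.
SAMPLE = """<nmaprun>
  <host>
    <ports>
      <port protocol="tcp" portid="22">
        <state state="open"/>
        <service name="ssh" product="OpenSSH" version="6.6.1"
                 extrainfo="Ubuntu Linux; protocol 2.0">
          <cpe>cpe:/a:openbsd:openssh:6.6.1</cpe>
        </service>
      </port>
    </ports>
  </host>
</nmaprun>"""

def parse_nmap_xml(xml_text):
    """Return one dict per open port, with version and CPE data intact."""
    root = ET.fromstring(xml_text)
    results = []
    for port in root.iter("port"):
        state = port.find("state")
        if state is None or state.get("state") != "open":
            continue  # skip filtered/closed ports
        svc = port.find("service")
        results.append({
            "port": int(port.get("portid")),
            "service": svc.get("name", "") if svc is not None else "",
            "product": svc.get("product", "") if svc is not None else "",
            "version": svc.get("version", "") if svc is not None else "",
            "cpe": [c.text for c in svc.findall("cpe")] if svc is not None else [],
        })
    return results
```

The CPE strings drop straight into the LLM prompt, which is what enables CVE correlation without any regex guesswork.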
Module 2: WHOIS with TLD fallback
Standard whois often returns malformed data for .org, .io, .dev, .ai domains. ClawSec retries with the authoritative registrar server:
```python
WHOIS_SERVERS = {
    "org": "whois.publicinterestregistry.org",
    "io": "whois.nic.io",
    "dev": "whois.nic.google",
    "ai": "whois.nic.ai",
    ...
}
```
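ClawSec's exact retry logic isn't reproduced here, but the fallback idea is simple: WHOIS is just a line of text over TCP port 43, so you can ask the authoritative server directly. A hedged sketch (function names are mine, not recon.py's):

```python
import socket

# Subset of the TLD → authoritative-server map shown above.
WHOIS_SERVERS = {
    "org": "whois.publicinterestregistry.org",
    "io": "whois.nic.io",
    "dev": "whois.nic.google",
    "ai": "whois.nic.ai",
}

def pick_whois_server(domain, default="whois.iana.org"):
    """Choose the authoritative server for the domain's TLD, if we know one."""
    tld = domain.rsplit(".", 1)[-1].lower()
    return WHOIS_SERVERS.get(tld, default)

def whois_query(domain, timeout=10):
    """Speak the WHOIS protocol directly: send the domain, read until EOF."""
    server = pick_whois_server(domain)
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")
```

Querying the registry's own server sidesteps the referral chain that makes the stock `whois` client return malformed data for some TLDs.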
Module 3: Subdomain enumeration
Auto-detects the best available method:
| Priority | Method                  | When                                |
|----------|-------------------------|-------------------------------------|
| 1        | `subfinder` (passive)   | If installed on `$PATH`             |
| 2        | `amass enum -passive`   | If installed and `subfinder` is not |
| 3        | Built-in DNS bruteforce | Always available                    |

A bundled 200-entry wordlist is included. Bring your own with `--wordlist`.
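The built-in fallback (priority 3) needs nothing beyond the stdlib. A minimal sketch of the idea – names, defaults, and the injectable `resolver` parameter are my own, not recon.py's:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def brute_subdomains(domain, words, resolver=socket.gethostbyname, workers=20):
    """Try an A-record lookup for each wordlist entry; keep the names that resolve."""
    def resolve(word):
        fqdn = f"{word}.{domain}"
        try:
            resolver(fqdn)  # raises socket.gaierror if the name doesn't exist
            return fqdn
        except socket.gaierror:
            return None

    # DNS lookups are I/O-bound, so a thread pool gives an easy speedup.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = list(pool.map(resolve, words))
    return [h for h in hits if h]
```

Passing the resolver in as a parameter keeps the function unit-testable without real DNS traffic, which matters if you want CI like ClawSec's.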
Version-aware risk scoring
This is the part that makes the reports actually useful. Every open port gets scored using three signals:
1. Known-bad version patterns → Critical, with a human-readable reason:
   - OpenSSH ≤ 6.6 (CVE-laden, EOL)
   - Apache httpd ≤ 2.4.29
   - vsftpd 2.3.4 backdoor (CVE-2011-2523)
   - PHP 5.x (EOL since 2018)
   - Samba 3.x (EternalBlue range)
   - MySQL/MariaDB ≤ 5.5
   - IIS ≤ 7.x
2. Port category → High/Medium/Low (RDP, SMB, FTP, MSSQL, Redis → High; SSH, HTTP → Medium)
3. Everything else → Info

scope_guard.py – The Safety Layer
Runs before every scan and returns `ALLOWED` or `BLOCKED`:

```
$ python3 scope_guard.py 192.168.1.1
BLOCKED
$ python3 scope_guard.py scanme.nmap.org
ALLOWED
$ python3 scope_guard.py --allow-lab 10.10.11.42
ALLOWED
```
Default blocks: RFC1918, loopback, link-local, multicast, cloud metadata endpoints (169.254.169.254, metadata.google.internal), .local/.internal domains.
The `--allow-lab` flag opens only well-known lab ranges for HTB/Offsec without disabling the rest of the guard:

- `10.10.0.0/16` – HackTheBox classic
- `10.129.0.0/16` – HackTheBox Enterprise
- `10.11.0.0/16` – Offsec/OSCP labs

Metadata endpoints and loopback are hardcoded always-blocked – `--allow-lab` cannot override them.
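The IP-side checks are straightforward with Python's `ipaddress` module. This is a minimal re-creation of the idea under the ranges listed above, not scope_guard.py's actual code (the real script also handles hostnames like `.local`/`.internal` domains, which this sketch skips):

```python
import ipaddress

# Hardcoded always-blocked: cloud metadata and loopback. --allow-lab can never open these.
HARD_BLOCKED = {ipaddress.ip_address("169.254.169.254")}
LAB_RANGES = [ipaddress.ip_network(n)
              for n in ("10.10.0.0/16", "10.129.0.0/16", "10.11.0.0/16")]

def check_ip(target, allow_lab=False):
    """Return 'ALLOWED' or 'BLOCKED' for a raw IP target (hostnames not handled here)."""
    ip = ipaddress.ip_address(target)
    if ip in HARD_BLOCKED or ip.is_loopback:
        return "BLOCKED"                      # non-overridable
    if allow_lab and any(ip in net for net in LAB_RANGES):
        return "ALLOWED"                      # lab carve-out only
    if ip.is_private or ip.is_link_local or ip.is_multicast:
        return "BLOCKED"                      # RFC1918, link-local, multicast
    return "ALLOWED"
```

Ordering is the whole design: the hard blocks run first, the lab carve-out second, the general private-range blocks last, so `--allow-lab` widens scope without ever touching the non-negotiables.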
Setup (Kali / Ubuntu 24.04)
```
git clone https://github.com/Denisijcu/clawsec.git
cd clawsec
bash setup_vm.sh
```
The script handles everything: nmap, whois, python3, Node.js 20, pnpm, OpenClaw installation, skill copy to ~/.openclaw/skills/clawsec/, allowlist seed, and self-tests.
Then onboard OpenClaw:
```
openclaw onboard         # set API key + connect Telegram/Discord
openclaw gateway start
```
Usage
From any connected channel:
```
/clawsec scanme.nmap.org                                  # quick scan
/clawsec example.com full                                 # all 65535 ports
/clawsec 10.10.11.42 quick --htb                          # HTB machine
/clawsec example.com stealth --modules ports,whois,subdomains
```
Or run the scripts directly for debugging:
```
python3 ~/.openclaw/skills/clawsec/scope_guard.py scanme.nmap.org
python3 ~/.openclaw/skills/clawsec/recon.py --target scanme.nmap.org --scan quick
cat /tmp/clawsec_results.json
```
Tests & CI
30 unit tests across two suites:
```
$ python3 tests/test_scope_guard.py   # 19 tests – scope validation
Ran 19 tests in 0.021s – OK
$ python3 tests/test_risk.py          # 11 tests – version risk scoring
Ran 11 tests in 0.001s – OK
```
CI runs on push/PR via GitHub Actions across Python 3.11, 3.12, and 3.13 with nmap + whois installed, including CLI smoke tests.
What I Learned Building This
1. The SKILL.md description field is everything.
OpenClaw uses it to decide whether to activate the skill. Vague description = skill gets ignored. I spent more time on 5 lines of frontmatter than on the entire scope_guard implementation.
2. Parse XML, not text.
Our first version parsed nmap's text output with regex. It worked until it didn't – edge cases everywhere. Switching to `-oX -` (XML to stdout) made the parser bulletproof and gave us CPE data for free, which the LLM uses for CVE correlation.
3. Scope guard is not optional.
If your skill executes shell commands based on user input, you need a validation layer. Especially via Telegram, where anyone can message your bot. The `--allow-lab` flag was the cleanest solution for the HTB use case – open only what you need, keep everything else locked.
4. WHOIS is a mess.
The default whois command returns garbage for .org, .io, .dev, and .ai domains. The TLD fallback to authoritative servers fixed it cleanly.
5. The VM constraint is a feature.
Isolated environment = reproducible results, clean snapshots, and no risk to your main machine. Build your recon tools in a VM. Always.
Roadmap
The MVP covers the full recon-to-report loop. What's next:
V1.0 – Async Engine

- Celery + Redis task queue for parallel scans without blocking
- LangChain ReAct agent for intelligent tool chaining
- PostgreSQL for scan history and long-term context

V2.0 – Intelligence Platform

- Automatic tool chaining: LLM decides the next step based on findings (port 80 found → auto-run `ffuf`; SMB found → auto-run `enum4linux`)
- ChromaDB RAG: "What did you find on this host last week?"
- PDF/JSON report export directly from Telegram chat
Repo
GitHub: github.com/Denisijcu/clawsec
Built by Denis @ Vertex Coders LLC – Miami, FL
AI Automation & Cybersecurity | HTB Creator Program
Submitted to the DEV Community OpenClaw in Action Challenge – April 2026

"Automate the tedious recon. Focus on the exploitation." 🦞
