Executive Summary
The approach outlined for decoy-hunter is solid and necessary in environments where all-ports-open deception techniques attempt to confuse attackers and automated scanners. The original proposal presents the problem clearly and offers a coherent technical design. I endorse this approach and submit this consolidated contribution to reinforce the proposal, clarify the architecture, document operational controls, and provide a reproducible validation mindset for professional adoption.
- Motivation
Defensive deception techniques that make every TCP port appear open—implemented via iptables redirects, portspoof-like tools, or static banner emulators—seek to waste an adversary’s time and to cause false positives in automated scanning. However, many practical deployments are superficial and leave detectable artifacts:
Banners change between scans without consistent protocol behavior.
Static banner emulators lack stateful protocol logic.
Fake services commonly reply identically to any input; real services do not behave that way.
Mismatched protocols across ports (e.g., SSH banners returned on HTTP ports) betray deception.
These weaknesses provide a practical opportunity for a counter-deception methodology that validates protocol behavior rather than trusting banners alone.
- Technical Objective
decoy-hunter is conceived as a counter-deception framework with two primary objectives:
Detect genuine, exploitable services hidden among high-noise deception layers.
Provide defenders with a pragmatic tool to evaluate the effectiveness of their deception deployment and to identify misconfigurations that create false assurance.
The tool is designed for authorized security assessments: red teams operating with permission, internal security validation, and joint offense-defense exercises.
- Design Principles
decoy-hunter’s design rests on five core principles:
Realistic Probing. Use legitimate client-style requests derived from a service probe database (e.g., nmap-service-probes), not ad-hoc strings. Examples: HTTP GET/HEAD with valid User-Agent, TLS ClientHello with SNI and ALPN, SMTP EHLO sequence, FTP USER/STAT, Redis PING, etc.
Stateful Protocol Validation. Verify protocol state transitions and expected behaviors (e.g., SSH key-exchange sequence, TLS handshake completion and certificate properties, SMTP command sequences and reply codes). Banner matching alone is insufficient.
Traffic Obfuscation (Anti-Detection). Use randomized delays, low concurrency defaults, varied request patterns and non-repeating sequences so traffic resembles legitimate client activity and reduces detection by simplistic scanner signatures.
Data Minimization & Auditability. Default to not persisting response bodies. When payload capture is enabled it must be explicitly opted in and recorded under governance controls. Audit logs and exported reports should be signed to preserve integrity.
Artifact Integrity. Maintain cryptographic integrity of the probes database and released artifacts (for example, sign the nmap-service-probes file). Verify signatures at runtime before using probes.
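As a minimal sketch of the artifact-integrity principle, the check below pins the probes database to a known SHA-256 digest. A real deployment would verify an ECDSA P-256 signature as recommended later in this document; the function name and the hash-pinning approach here are simplifying assumptions for illustration.

```python
import hashlib
from pathlib import Path

def verify_probes_db(path: str, expected_sha256: str) -> bool:
    """Refuse to use a probes database whose digest does not match the pin.

    Simplified stand-in for signature verification: compares the file's
    SHA-256 digest against a trusted, pinned value.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

The orchestrator would call this at startup and abort (per the operational flow below) when it returns `False`.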
- Components & Architecture
A modular architecture keeps responsibilities separated and facilitates review and adoption:
decoy_hunter.py — CLI entrypoint and orchestration: argument parsing, execution policies, and reporting.
probes.py — Probe implementation: sends requests, enforces jitter and concurrency, handles retries and timeouts. Implements async I/O (asyncio) for scalability.
service_probes_parser.py — Parses the probe database (nmap-service-probes), performs signature verification, and normalizes probe definitions.
fingerprints.py — Passive fingerprinting module: JA3/JA3S hashes for TLS, TCP/IP stack attributes (TTL, window sizes) and optional heuristics for OS/stack inference.
scoring & assessment module — Aggregates probe results and computes a confidence score with an explanation vector (banner match, handshake success, multi-probe consistency, certificate validity, etc.).
audit/forensics export — Produces signed JSON reports and optional PCAPs for evidence; supports encryption at rest for captured artifacts.
testbed directory (recommended) — Docker compose setup to reproduce evaluation scenarios (real services, simple decoys, stateful fake services, IDS sensor).
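The jitter-and-concurrency behavior attributed to probes.py can be sketched with asyncio as follows. The `run_probes` helper and its signature are assumptions for illustration, not the project's actual API; the semaphore caps in-flight probes and the randomized sleep spreads traffic out so it resembles client activity rather than a scanner burst.

```python
import asyncio
import random

async def run_probes(probe, targets, concurrency=8, jitter=(0.3, 1.5)):
    """Run probe coroutines with bounded concurrency and randomized delays.

    `probe` is any coroutine function taking a (host, port) target. Results
    are returned in target order (asyncio.gather preserves ordering).
    """
    sem = asyncio.Semaphore(concurrency)

    async def guarded(target):
        async with sem:
            # Anti-fingerprint delay: randomized within the jitter window.
            await asyncio.sleep(random.uniform(*jitter))
            return await probe(target)

    return await asyncio.gather(*(guarded(t) for t in targets))
```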
- How It Works — Operational Flow
Initialization & Integrity Check. Verify the signature of the probe database before parsing. Abort if verification fails.
Targeting & Configuration. Accept host(s), port ranges, TCP/UDP mode, concurrency, and evasion settings (timing, stealth). By default, concurrency and timing are conservative.
Probing Stage. Execute realistic probes according to the parsed probe definitions. For TLS endpoints, perform ClientHello and examine server responses. For SSH endpoints, attempt controlled handshake progress to validate KEX behavior without authenticating.
Stateful Validation. Confirm that responses progress through expected protocol states and that malformed or variant inputs elicit protocol-consistent error handling.
Cross-Probe Correlation. Compare responses across different probe types and ports on the same host for coherence (e.g., identical banners on multiple unrelated ports is suspicious).
Scoring & Classification. Assign classification labels: REAL, FAKE, AMBIGUOUS. Provide a confidence score (0.0–1.0) and its rationale decomposition.
Reporting & Evidence. Emit console output and optional signed JSON report. If enabled, export PCAP and signed artifacts for audit.
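The scoring and classification step might look like the sketch below, which folds probe evidence into a weighted score and maps it to the three labels. The weight values and thresholds are illustrative assumptions; a real deployment would calibrate them against the testbed described later.

```python
from dataclasses import dataclass

@dataclass
class ProbeEvidence:
    banner_match: bool            # banner consistent with claimed service
    handshake_ok: bool            # protocol handshake completed (TLS/SSH/...)
    multi_probe_consistent: bool  # coherent story across probe types
    cert_valid: bool = True       # only meaningful for TLS endpoints

# Illustrative weights summing to 1.0; not calibrated values.
WEIGHTS = {"banner_match": 0.2, "handshake_ok": 0.4,
           "multi_probe_consistent": 0.3, "cert_valid": 0.1}

def score(ev: ProbeEvidence) -> tuple[float, str]:
    """Aggregate evidence into a confidence score and a REAL/FAKE/AMBIGUOUS label."""
    s = round(sum(w for name, w in WEIGHTS.items() if getattr(ev, name)), 2)
    if s >= 0.7:
        return s, "REAL"
    if s <= 0.3:
        return s, "FAKE"
    return s, "AMBIGUOUS"
```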
- Example CLI & Sample Output
Example invocations (illustrative):
```shell
# Scan top 10k TCP ports
python3 decoy_hunter.py 192.168.1.10

# Include UDP scanning (slower)
python3 decoy_hunter.py target.com --udp --concurrency 5

# Custom ports
python3 decoy_hunter.py 10.0.0.5 --ports 22,80,443,8080,1234
```
Sample output:
```
[REAL] 22/tcp open ssh → SSH-2.0-OpenSSH_8.9p1 (confidence: 0.92)
[FAKE] 8080/tcp open http → SSH-2.0-OpenSSH_8.9p1 ← 🚩 protocol mismatch (confidence: 0.12)
[REAL] 443/tcp open https → HTTP/1.1 200 OK (confidence: 0.88)
```
An SSH banner returned on port 8080 is an immediate indicator of deceptive configuration and is flagged accordingly.
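The mismatch heuristic behind that flag can be sketched as a banner classifier. The port-to-protocol map and pattern set below are illustrative assumptions covering a few well-known services; the SSH pattern follows the RFC 4253 identification string format.

```python
import re

# Illustrative mapping of well-known ports to the protocol expected there.
EXPECTED = {22: "ssh", 25: "smtp", 80: "http", 443: "tls", 8080: "http"}

BANNER_PATTERNS = {
    "ssh": re.compile(rb"^SSH-\d\.\d-"),    # RFC 4253 identification string
    "http": re.compile(rb"^HTTP/\d\.\d "),  # HTTP response status line
    "smtp": re.compile(rb"^220[ -]"),       # SMTP greeting reply code
}

def classify_banner(port: int, banner: bytes) -> str:
    """Flag a banner that belongs to a different protocol than the port implies."""
    expected = EXPECTED.get(port)
    if expected is None:
        return "UNKNOWN-PORT"
    for proto, pattern in BANNER_PATTERNS.items():
        if pattern.match(banner):
            return "MATCH" if proto == expected else f"MISMATCH:{proto}"
    return "NO-KNOWN-BANNER"
```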
- Evaluation & Validation Strategy
A robust validation approach is essential to prove efficacy and to quantify false positives/negatives:
Testbed: Create a reproducible environment (Docker compose recommended) including:
Real service containers: OpenSSH, nginx, SMTP server, Redis.
Simple decoy containers: static banner servers returning a variety of banners on many ports.
Stateful decoy containers: emulate partial protocol progress for TLS/SSH to challenge stateful checks.
IDS sensor: Suricata/Zeek to measure detection patterns and to evaluate stealth settings.
Metrics to collect:
Precision and Recall for detection of real services.
Mean time per host and scan duration under stealth settings.
IDS activation rate: number of triggered alerts per scan configuration.
False positive / false negative counts per decoy category.
Test methodology: run systematic experiments varying:
Decoy sophistication (static → stateful partial → stateful near-complete).
Timing/concurrency profiles.
Middlebox presence (load balancer, reverse proxy, NAT).
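For the precision/recall metric above, a minimal evaluation helper could look like this, treating REAL as the positive class over (predicted, actual) label pairs collected from testbed runs. The function name and input shape are assumptions for illustration.

```python
def precision_recall(results):
    """Compute precision/recall for REAL-service detection.

    `results` is an iterable of (predicted, actual) labels, each "REAL"
    or "FAKE"; REAL is the positive class.
    """
    tp = sum(1 for p, a in results if p == "REAL" and a == "REAL")
    fp = sum(1 for p, a in results if p == "REAL" and a == "FAKE")
    fn = sum(1 for p, a in results if p == "FAKE" and a == "REAL")
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```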
- Security, Privacy & Governance
decoy-hunter is intended for authorized use only. Operational controls and governance must be enforced:
Authorization & Evidence. Require documented authorization prior to scans. Record the authorization token reference or signed authorization artifact in audit records.
Artifact Integrity. Sign probe databases and release artifacts using approved cryptographic primitives (ECDSA P-256 / SHA-256 recommended). Verify signatures at runtime.
TLS Certificate Validation. Validate certificate chains and, where feasible, check revocation status (OCSP / CRL). Mark endpoints with certificate validation failures as ambiguous and include in the rationale.
Data Minimization. Do not persist response bodies by default. If payloads are captured, apply encryption at rest and limit retention periods.
Auditing & Signed Evidence. Export signed JSON reports and optional PCAPs for forensic review. Maintain chain-of-custody metadata where required by policy.
Operational Safety. Avoid aggressive probing modes by default; do not perform destructive or intrusive actions unless explicitly authorized and documented.
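A signed-report sketch, using the standard library only: it canonicalizes the JSON payload and attaches an HMAC-SHA-256 tag. This substitutes a shared-key MAC for the asymmetric ECDSA signature the governance section recommends, purely to keep the example self-contained; the function names are assumptions.

```python
import hashlib
import hmac
import json

def sign_report(report: dict, key: bytes) -> dict:
    """Attach an integrity tag over canonical JSON (HMAC-SHA-256)."""
    payload = json.dumps(report, sort_keys=True, separators=(",", ":")).encode()
    return {"report": report,
            "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify_report(signed: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(signed["report"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["sig"], expected)
```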
- Contribution Statement
I value the original design and approach of decoy-hunter. After a technical review, I contributed this consolidated document to clarify architecture, operational controls, and validation methodology so that the project can be adopted responsibly by professional teams. My contribution emphasizes integrity of artifacts, protocol-aware validation, evidence capture under governance, and reproducible testing.
- Implementation Notes (Operational Defaults)
The following operational defaults are recommended for a conservative, professional posture:
Default concurrency: low (e.g., 8).
Default jitter between probes: randomized within a conservative window (e.g., 0.3–1.5s).
Default payload persistence: disabled (opt-in via explicit flag).
Require explicit interactive acknowledgment or an authorization token for production scans.
Validate probes database signature prior to use.
These defaults reduce the risk of accidental disruption and increase the likelihood that scans remain within acceptable detection bounds in a controlled authorized engagement.
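The recommended defaults above could be encoded as an immutable configuration object, so that relaxing any of them requires an explicit, reviewable override. The class and field names are assumptions, not the project's actual configuration surface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScanDefaults:
    """Conservative operational defaults from the recommendations above."""
    concurrency: int = 8
    jitter_s: tuple = (0.3, 1.5)       # randomized delay window between probes
    persist_payloads: bool = False     # payload capture is opt-in only
    require_authorization: bool = True  # token or interactive acknowledgment
    verify_probes_signature: bool = True
```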
- Final Remarks
Deception can be an effective defensive tool, but bad deception can create a false sense of security. Practical evaluation requires tools that validate behavior rather than banners alone. decoy-hunter is a pragmatic counter-deception framework that pairs realistic probing with stateful protocol validation, traffic obfuscation, evidence integrity, and a reproducible validation strategy.
I appreciate the original proposal and endorse this consolidation as a step toward a professional, auditable, and useful counter-deception capability. Real security demands resilience and verifiability — not artifice.