Subdomain Enumeration in 2026: Tools, Techniques, and What Actually Works
Disclosure: Parts of this article were drafted with AI assistance.
Every successful bug bounty starts the same way: you know nothing about the target. The program hands you a scope like *.example.com and expects you to find vulnerabilities before professional red teamers do.
The first question is always: what's actually running under that wildcard?
Subdomain enumeration is how you answer it. And in 2026, the landscape of tools and techniques has evolved — some approaches that dominated five years ago have become noise, while others have quietly become essential. This is what actually works.
Why Subdomain Recon Matters
Before diving into tools: why does subdomain enumeration deserve this much attention?
Because most companies have terrible hygiene on secondary infrastructure. The main domain — example.com — gets penetration tested, audited, and hardened. The forgotten legacy-api.example.com running an old Express app gets none of that.
In bug bounty terms, subdomains are where the real findings live:
- Forgotten staging servers with debug endpoints
- Development environments with relaxed authentication
- Admin panels not intended for external access
- Misconfigured cloud storage (assets.example.com pointing to a public S3 bucket)
- Subdomain takeover opportunities (dangling CNAMEs)
A thorough subdomain sweep is how you find the soft underbelly of a hardened target.
Passive Enumeration: No Packets to the Target
Passive enumeration collects subdomain data from public sources without touching the target's servers. This is stealthy — it doesn't trigger WAF alerts or IDS logs — and it's often surprisingly productive.
Certificate Transparency Logs
Every TLS certificate issued by a trusted CA is logged to a public CT log. This means every subdomain that's ever had HTTPS (which in 2026 is almost all of them) is permanently discoverable.
Tools to query CT logs:
- crt.sh — free, public UI and JSON API. Search %.example.com
- certspotter — streams new cert issuances in real time
- subfinder — includes CT log sources by default
A single curl query to crt.sh gives you a historical map of everything the target has ever SSL'd:
curl -s "https://crt.sh/?q=%.example.com&output=json" | \
jq -r '.[].name_value' | \
sort -u | \
grep -v '\*'
That one command has found subdomains professional teams missed.
Shodan and Censys
Shodan and Censys scan the entire internet continuously and index what they find — including TLS certificates, HTTP headers, and server banners.
# Install shodan CLI
pip install shodan
shodan init YOUR_API_KEY
# Find all subdomains in Shodan's database
shodan search --fields hostnames "ssl.cert.subject.cn:example.com" | tr ',' '\n' | sort -u
The free Censys UI lets you run: parsed.names: example.com and see every cert containing the target domain. This catches IPs that serve multiple vhosts — something DNS-only tools miss entirely.
SecurityTrails, VirusTotal, and Passive DNS
These aggregate historical DNS data from their own resolvers:
- SecurityTrails — 50 free queries/month, rich historical data
- VirusTotal (https://www.virustotal.com/vtapi/v2/domain/report?domain=example.com&apikey=...)
- HackerTarget (https://hackertarget.com/find-dns-host-records/) — free, no key needed
- OWASP Amass in passive mode aggregates all of these automatically
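These APIs are easy to script directly. A minimal sketch against HackerTarget's free hostsearch endpoint (no API key required; example.com is a placeholder target):

```shell
# Query HackerTarget's free hostsearch API. Output is "hostname,ip" CSV,
# so cut keeps only the hostname column.
curl -s "https://api.hackertarget.com/hostsearch/?q=example.com" \
  | cut -d, -f1 | sort -u
```

Pipe the result into the same deduplicated file as your other passive sources.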
GitHub Dorking
This is chronically underused. Developers leak internal subdomain references in:
- Hardcoded API endpoints in JavaScript
- Environment variable examples in READMEs
- Terraform/CloudFormation configs in public repos
- CI/CD pipeline configs
Search GitHub's code search for "staging.example.com" or "internal.example.com" — you'll be surprised what surfaces. Google dorks scoped to GitHub work too:
site:github.com "example.com" ext:env
site:github.com "example.com" inurl:config
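Search indexes miss a lot, so it's also worth cloning a target's public repos and grepping them locally. A rough sketch — the ./cloned-repos path and the regex are illustrative:

```shell
# Search every cloned repo for hostnames under the target domain.
# -r recurses, -h drops filenames, -o prints only the matched hostname.
grep -rhoE '[a-zA-Z0-9_-]+(\.[a-zA-Z0-9_-]+)*\.example\.com' ./cloned-repos/ \
  | sort -u
```

This catches hostnames buried in Terraform state examples, minified JS bundles, and old commits that web search never indexes.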
Active Enumeration: The DNS Brute Force
Active enumeration sends queries to DNS servers to discover subdomains that aren't in any public database. You're essentially guessing names and checking if they resolve.
subfinder — The Industry Standard
subfinder by ProjectDiscovery is what most serious hunters use as their primary tool. It combines passive sources (CT logs, APIs) with a simple, fast interface.
# Install
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
# Run
subfinder -d example.com -o subdomains.txt
# With API keys configured (~/.config/subfinder/provider-config.yaml)
subfinder -d example.com -all -o subdomains.txt
Configure API keys for SecurityTrails, Shodan, VirusTotal, etc. in ~/.config/subfinder/provider-config.yaml and subfinder will query all of them automatically. The -all flag enables every configured source.
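The provider config is plain YAML: each source name maps to a list of keys. A sketch of the shape — the values are placeholders, and you should check subfinder's documentation for the exact source names your version supports:

```yaml
# ~/.config/subfinder/provider-config.yaml (values are placeholders)
securitytrails:
  - YOUR_SECURITYTRAILS_KEY
shodan:
  - YOUR_SHODAN_KEY
virustotal:
  - YOUR_VIRUSTOTAL_KEY
```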
Amass — Deep Recon When You Have Time
amass is the most comprehensive tool but much slower. Use it for high-value targets when you have time to run it overnight.
# Passive mode only (slow but thorough)
amass enum -passive -d example.com -o amass-passive.txt
# Active mode with brute force
amass enum -active -d example.com -brute -o amass-active.txt
Amass builds a graph database of the target's DNS topology. This helps with visualizing the attack surface beyond just a flat list of subdomains.
DNS Brute Force with Custom Wordlists
For pure brute force, puredns paired with massdns is fast and accurate.
Wordlists that actually matter in 2026:
- SecLists/Discovery/DNS/ — start with subdomains-top1million-110000.txt
- n0kovo_subdomains — 3M entries, generated from real CT log data
- commonspeak2 — built from actual Google BigQuery DNS traffic data
# Fast resolution with puredns
puredns bruteforce wordlist.txt example.com -r resolvers.txt -o resolved.txt
Use public DNS resolvers (8.8.8.8, 1.1.1.1, etc.) — never brute-force through the target's own nameservers, which is noisy and can trigger rate limits.
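puredns expects a resolvers.txt file. A minimal one built from well-known public resolvers (Google, Cloudflare, Quad9) gets you started; for serious brute forcing, swap in a larger community-validated list:

```shell
# Write a small resolvers.txt of major public DNS servers.
# Larger validated resolver lists make brute forcing much faster.
printf '8.8.8.8\n8.8.4.4\n1.1.1.1\n1.0.0.1\n9.9.9.9\n' > resolvers.txt
wc -l < resolvers.txt
```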
Permutation and Mutation
Once you have a base list of confirmed subdomains, you can generate likely variants using permutation tools.
alterx is the current best-in-class:
echo "api.example.com" | alterx | puredns resolve -r resolvers.txt
alterx generates variations like api-v2.example.com, api-staging.example.com, api-internal.example.com based on real-world naming patterns. Run it against every discovered subdomain and you often surface 10-30% more results.
The Techniques Most People Miss
CSP Header Mining
Many sites have a Content-Security-Policy header that lists approved domains for loading scripts, images, and fonts. These approved domains are often internal infrastructure accidentally listed publicly:
curl -s -I https://example.com | grep -i content-security-policy
A CSP like default-src 'self' cdn.example.com api.example.com analytics.internal.example.com has just handed you three subdomains worth investigating.
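Extracting those hostnames can be scripted. A sketch that parses a captured header value — the sample mirrors the one above; in practice, feed it the output of the curl command:

```shell
# Pull target-domain hostnames out of a CSP header value
csp="default-src 'self' cdn.example.com api.example.com analytics.internal.example.com"
echo "$csp" \
  | grep -oE '[a-zA-Z0-9_-]+(\.[a-zA-Z0-9_-]+)*\.example\.com' \
  | sort -u
```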
JavaScript Source Analysis
Modern web apps load dozens of JavaScript files that contain hardcoded API endpoints, internal hostnames, and environment-specific URLs that the developers never thought about securing:
# Download the page, extract all JS URLs, then grep for domain references
# Download the page, extract absolute JS URLs, then grep for domain references
# (\K drops the src=" prefix so xargs receives a bare URL; relative
# script paths would need the origin prepended first)
curl -s https://example.com | grep -oP 'src="\Khttps?://[^"]*\.js[^"]*' | \
xargs -I{} curl -s {} | \
grep -oP '[a-zA-Z0-9-]+\.example\.com' | sort -u
Tools like gau (Get All URLs) can pull historical JavaScript URLs from Wayback Machine and then you can grep the archived content.
robots.txt, sitemap.xml, and security.txt
These often contain references to subdomains:
- https://example.com/robots.txt — frequently lists admin and staging paths
- https://example.com/sitemap.xml — can reference staging subdomains
- https://example.com/.well-known/security.txt — sometimes lists internal contact endpoints
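Checking all three is a short loop. A sketch, with example.com as the placeholder target and a grep that keeps only in-scope hostnames:

```shell
# Fetch the common metadata files and extract any target-domain hostnames
for path in robots.txt sitemap.xml .well-known/security.txt; do
  curl -s "https://example.com/$path"
done | grep -oE '[a-zA-Z0-9_-]+(\.[a-zA-Z0-9_-]+)*\.example\.com' | sort -u
```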
httpx: Turning Subdomains Into a Live Attack Surface Map
A list of subdomains is meaningless if most of them don't resolve or don't serve HTTP. httpx filters your list to only alive hosts and enriches each with status codes, titles, and technology fingerprints:
cat subdomains.txt | httpx -sc -title -tech-detect -o live-subdomains.txt
Output looks like:
https://admin.example.com [200] [Admin Panel] [React,nginx]
https://legacy-api.example.com [200] [Broken References] [Express 3.x]
https://staging.example.com [200] [Staging - example.com] [WordPress 5.2]
That Express 3.x entry should trigger an instinct. Legacy versions of Express have well-known vulnerabilities. That's your next click.
What Actually Works vs. What's Overrated
Works well in 2026:
- Certificate transparency (crt.sh, subfinder) — consistently surfaces 80%+ of subdomains
- httpx for live host filtering — essential for reducing noise
- CSP header mining — underused, frequently productive
- GitHub dorking — finds what automated tools can't
Overrated or declining:
- Pure wordlist brute force without permutation — diminishing returns as companies move to random-suffix naming
- Shodan alone — better as a complement to CT-based approaches than a primary source
- Zone transfer attempts (dig AXFR) — still worth trying but almost never succeed on external-facing nameservers anymore
Time sinks to avoid:
- Running Amass in active mode on every target — save it for high-value programs
- Manually browsing every discovered subdomain — use httpx to filter first
Building a Simple Pipeline
Here's the workflow that consistently delivers results:
#!/bin/bash
TARGET="example.com"
# Step 1: Passive enumeration
subfinder -d $TARGET -all -silent > subs-passive.txt
curl -s "https://crt.sh/?q=%.$TARGET&output=json" | jq -r '.[].name_value' | \
sort -u | grep -v '\*' >> subs-passive.txt
sort -u subs-passive.txt -o subs-passive.txt
# Step 2: Permutation
cat subs-passive.txt | alterx -silent | \
puredns resolve -r resolvers.txt -q >> subs-passive.txt
# Step 3: DNS resolution (filter to live)
puredns resolve subs-passive.txt -r resolvers.txt -o subs-resolved.txt
# Step 4: HTTP probing
cat subs-resolved.txt | httpx -sc -title -tech-detect -o live-hosts.txt
echo "Done. $(wc -l < live-hosts.txt) live hosts found."
Run this against a real target and you'll have a prioritized, live attack surface map in under an hour.
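With live-hosts.txt in hand, a quick triage pass surfaces the hosts worth opening first. A sketch — the sample httpx lines and the keyword list are illustrative; tune both to the target:

```shell
# Sample httpx output (illustrative); in the pipeline above this file
# comes from step 4
cat > live-hosts.txt <<'EOF'
https://admin.example.com [200] [Admin Panel] [React,nginx]
https://legacy-api.example.com [200] [Legacy API] [Express]
https://www.example.com [200] [Home] [nginx]
EOF

# Flag hosts whose title or tech fingerprint suggests legacy or
# high-value software
grep -iE 'express|wordpress|jenkins|struts|admin|staging' live-hosts.txt
```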
From Subdomains to Bug Bounty Findings
Finding subdomains is reconnaissance, not a finding. What you do next determines whether you get paid:
Check for subdomain takeover: If a subdomain resolves but the backend (S3, Heroku, Fastly, GitHub Pages) has been deprovisioned, you can often claim it. Tools like nuclei with the takeovers template scan for this automatically.
Look for authentication differences: Staging environments often have weaker or missing auth. Try accessing admin functions that the production site gates properly.
Check for outdated software: httpx's -tech-detect tells you the stack. An old Struts or Jenkins version on a forgotten subdomain is more valuable than a well-patched main app.
Check for information disclosure: Development subdomains often expose stack traces, debug endpoints, or verbose error messages. /api/debug, /.env, /phpinfo.php, and similar paths are worth trying.
Map the API surface: Every subdomain that returns JSON is a potential API target. Look for IDOR opportunities — can you access other users' data by swapping numeric IDs?
Staying Ethical and In-Scope
Always check the bug bounty program's scope definition before testing any subdomain you discover. Many programs define scope as specific subdomains rather than *, and testing out-of-scope hosts — even if discoverable — can get you banned.
The safe rule: if a subdomain isn't explicitly in scope or clearly implied by a wildcard scope, ask the program's security team before testing it. Most programs have a triager who can clarify quickly.
Subdomain enumeration isn't glamorous work. It's systematic, methodical, and sometimes tedious. But it's also where most successful bug bounty hunters find their consistent edge — not in exotic attack techniques, but in mapping more of the attack surface than anyone else before they start looking.
The target's main domain is probably locked down. Something in that subdomain list won't be.
Want more bug bounty methodology? I've been writing about XSS patterns, IDOR vulnerabilities, and security header auditing — all in the kai_learner series.
All tools mentioned are open-source and intended for authorized security testing only. Never test systems you don't have explicit permission to test.