A private bug bounty program I am on had a scope increase recently.
Many subdomains were extended in the form:
amass does not seem to have a feature that handles this sort of search, so I generated that list in bash/zsh:
for i in $(cat mylist.txt); do echo "${i}target.com" >> potential-targets.txt; done
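For the single-target case, sed can append the domain to every line in one pass, which avoids spawning a loop at all. A minimal sketch, assuming mylist.txt holds one prefix per line (the sample prefixes below are hypothetical):

```shell
# Hypothetical prefix list, stand-in for your real mylist.txt
printf 'dev.\nadmin.\n' > mylist.txt

# Append target.com to the end of every line in a single pass
sed 's/$/target.com/' mylist.txt > potential-targets.txt
```
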
This is for only one target. If you have multiple targets listed in targets.txt, do:
for i in $(cat targets.txt); do for j in $(cat mylist.txt); do echo "$j$i" >> potential-targets.txt; done; done
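The same cross product can be written with nested while-read loops, which read line by line and so are not subject to the word-splitting surprises of for i in $(cat ...). A sketch, with hypothetical sample inputs standing in for your real lists:

```shell
# Hypothetical sample inputs; replace with your real lists
printf 'dev.\napi.\n' > mylist.txt
printf 'target.com\nexample.org\n' > targets.txt

# For each target, emit every prefix+target combination
while read -r target; do
  while read -r prefix; do
    printf '%s%s\n' "$prefix" "$target"
  done < mylist.txt
done < targets.txt > potential-targets.txt
```
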
This can yield a big file. To compress it after creation:
gzip -9 potential-targets.txt
To zip them during creation:
for i in $(cat targets.txt); do for j in $(cat mylist.txt); do echo "$j$i" | gzip -9 >> potential-targets.txt.gz; done; done
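Piping each line through its own gzip process works (concatenated gzip streams decompress fine), but it forks one gzip per name, which is slow on large lists. A sketch that pipes the whole loop's output through a single gzip instead (sample inputs below are hypothetical):

```shell
# Hypothetical sample inputs; replace with your real lists
printf 'dev.\napi.\n' > mylist.txt
printf 'target.com\n' > targets.txt

# One gzip process for the entire stream, not one per line
for i in $(cat targets.txt); do
  for j in $(cat mylist.txt); do
    echo "$j$i"
  done
done | gzip -9 > potential-targets.txt.gz
```
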
Once you've created and zipped the list of targets, let's say we want to find all of the potentials that yield actual pages with content. Fire up httpx:
zcat potential-targets.txt.gz | \
    httpx -no-fallback -o httpx.txt
Because this can take a long time, you might consider lowering the timeout, or using more or fewer threads:
zcat potential-targets.txt.gz | \
    httpx -no-fallback -o httpx.txt -threads 200 -timeout 1
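One way to make a very long run restartable is to split the candidate list into fixed-size chunks and probe one chunk at a time; if a run dies, you only redo the current chunk. A sketch (the chunk size and file names here are arbitrary):

```shell
# Hypothetical candidate list, stand-in for the real one
printf 'a.target.com\nb.target.com\nc.target.com\n' > potential-targets.txt

# Break the list into fixed-size chunks named chunk-aa, chunk-ab, ...
split -l 2 potential-targets.txt chunk-

# Then probe each chunk separately, e.g.:
# for f in chunk-*; do
#   httpx -l "$f" -no-fallback -o "httpx-$f.txt" -threads 200 -timeout 1
# done
```
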
Let's say you didn't zip potential-targets. That's OK: httpx has an option to read directly from a file. Everyone online is so obsessed with piping output because they think it is cool, but encouraging people to merely copy and paste commands breeds sloppy thinking and approach.
httpx -l potential-targets.txt -no-fallback \
    -o httpx.txt -threads 200 -timeout 1
This will run for a long time. I was running it with the default timeout and 8 threads on 300,000+ possible names and it had not completed after several hours. I'm running it now on a much larger namespace, and even though it is taking a long time, it is already yielding new names that did not show up in amass.
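To pull out just the names httpx found that amass missed, comm on sorted copies of the two outputs does the diff in one shot. A sketch, assuming your amass results are in a file like amass.txt (both file names and contents below are hypothetical):

```shell
# Hypothetical result files for illustration
printf 'a.target.com\nb.target.com\nc.target.com\n' > httpx.txt
printf 'a.target.com\n' > amass.txt

# comm requires sorted input; -23 keeps lines unique to the first file
sort -u httpx.txt > httpx.sorted
sort -u amass.txt > amass.sorted
comm -23 httpx.sorted amass.sorted > new-names.txt
```
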
If this helps you at all, or you want to lurk for other content or whatever, here is my Twitter