DEV Community

Trumpiter

🧭 Selecting the Right Bug Bounty Targets & Reconnaissance

🎯 Target Prioritization

Not all targets are created equal. Prioritization helps you allocate your limited time and resources efficiently.

  • Risk vs. Effort

    • Detail: This involves assessing the potential security impact of a vulnerability on an endpoint versus the amount of time and effort required to find and exploit it.
    • Endpoints by potential impact:
      • IDOR (Insecure Direct Object Reference): Vulnerabilities where an attacker can access or modify resources they shouldn't have access to by changing an identifier (e.g., userID=123 to userID=124). High impact if it exposes sensitive data or allows unauthorized actions.
      • Auth Bypass (Authentication Bypass): Flaws that allow attackers to circumvent login mechanisms or access restricted functionalities without proper credentials. Critical impact.
      • Data Leaks: Unintentional exposure of sensitive information (e.g., PII, API keys, proprietary code). Impact varies based on data sensitivity.
    • Balance: The goal is to find a sweet spot. A very high-impact target that might take weeks to crack might be less efficient than several medium-impact targets found more quickly, or vice-versa depending on your strategy.
  • Breadth vs. Depth

    • Detail: This is a strategic decision.
      • Breadth: Covering a wide range of targets or functionalities superficially to find many lower-severity issues or low-hanging fruit. This can be good for building a reputation or consistent payouts.
      • Depth: Focusing intensely on a single application or feature to uncover complex, critical vulnerabilities that others might miss. This can lead to higher individual payouts but might be more time-consuming.
  • Business Logic Importance

    • Detail: Business logic vulnerabilities are flaws in the design and implementation of an application's rules and workflows. These are often unique to the application and not discoverable by generic scanners.
    • Focus Areas:
      • Payments: Any flaw here can have direct financial impact.
      • User Data: Unauthorized access or modification of user information is a high-impact area.
      • Privileged Actions: Functions restricted to administrators or specific user roles (e.g., user management, configuration changes).
  • Historical Findings

    • Detail: If the program has public disclosures (e.g., on HackerOne or Bugcrowd), reviewing them can be invaluable.
    • Benefits:
      • Avoid Reinventing the Wheel: See what has already been found and fixed.
      • Identify Untouched Areas: Notice patterns in findings or areas that seem to have received less attention, which could be fruitful hunting grounds.
      • Understand Common Weaknesses: Get a feel for the types of vulnerabilities the target has been susceptible to in the past.
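Since IDOR sits at the top of the impact list, here is a minimal Python sketch of the first step of probing for it: generating neighboring-ID variants of a request URL. The URL and parameter name are hypothetical; real testing also needs an authorized session and a baseline response to compare against.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def idor_variants(url, param, offsets=(-1, 1, 100)):
    # Build tampered copies of the URL by shifting a numeric ID parameter.
    parts = urlparse(url)
    query = parse_qs(parts.query)
    base = int(query[param][0])
    variants = []
    for off in offsets:
        tampered = dict(query, **{param: [str(base + off)]})
        variants.append(urlunparse(parts._replace(query=urlencode(tampered, doseq=True))))
    return variants

# Hypothetical endpoint: shift userID=123 to 122, 124, and 223.
print(idor_variants("https://example.com/api/orders?userID=123", "userID"))
```

Each variant is then replayed with your own session; identical response structure containing another user's data is the IDOR signal.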

πŸ”¬ Testing Techniques

These are the methods used to actively probe targets for vulnerabilities.

  • Manual Interaction

    • Detail: This involves using the application as an end-user would, but with a security mindset.
    • Proxy Capture: Tools like Burp Suite or OWASP ZAP are used to intercept and inspect all HTTP/S requests and responses between your browser and the target application.
    • Tweak Parameters: Modify parameters in captured requests (e.g., change values, add special characters, test different data types) to see how the application responds.
    • Chain Dependent Calls: Test sequences of actions that rely on each other (e.g., creating an item, then modifying it, then deleting it). Flaws can emerge in the interactions between these steps, revealing broken business logic (e.g., being able to modify an item after it's "deleted" from the UI but not the backend).
  • Parameterized Fuzzing

    • Detail: Fuzzing is an automated software testing technique that involves providing invalid, unexpected, or random data as input to a program. Parameterized fuzzing focuses this on specific HTTP parameters or API endpoints.
    • Custom Wordlists: Instead of generic fuzzing lists, create or use wordlists tailored to the target (e.g., common parameter names, known technologies, business-specific terms).
    • Tools:
      • Kiterunner: A tool specifically designed for API/endpoint content discovery and fuzzing.
      • FFUF (Fuzz Faster U Fool): A fast web fuzzer used for discovering hidden directories, files, and parameters by brute-forcing with wordlists.
    • Goals:
      • Forced Browse: Discovering resources not directly linked from the application.
      • Parameter Fuzzing: Uncovering hidden or unhandled parameters that might lead to vulnerabilities like SQL injection, XSS, or command injection.
  • Chaining Tools

    • Detail: Combining the output of one tool as the input for another to create more powerful and comprehensive testing workflows.
    • Examples:
      • Amass β†’ FFUF: Amass is a powerful tool for subdomain enumeration and network mapping. Its output (list of subdomains) can be fed into FFUF to perform directory/file fuzzing on each discovered subdomain.
      • Shodan Results β†’ Pinpoint Unusual Protocols/Admin Panels: Shodan is a search engine for internet-connected devices. If Shodan reveals open services on non-standard ports or specific server banners, you can investigate these for unusual protocols (e.g., FTP, SSH, databases) or hidden admin panels.
  • Interactive Analysis

    • Detail: Using features within security proxies to intelligently test for specific vulnerabilities.
    • Burp Suite's Scanner/Intruder "Smart" Mode:
      • Scanner: Automated vulnerability scanning that can be configured for different levels of intensity and types of checks.
      • Intruder: A highly configurable tool for automating custom attacks. "Smart" mode implies using its features to generate context-aware or specialized payloads.
      • Edge-Case Inputs: Testing with inputs that developers might not have anticipated:
        • Long Strings: Can cause buffer overflows or denial of service.
        • Null Bytes (%00): Can terminate strings prematurely in some languages, leading to path traversal or other issues.
        • Unicode Characters: Can sometimes bypass filters or cause unexpected behavior in how data is processed or rendered.
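The edge-case inputs above can be collected into a small seed list for Intruder or FFUF payload sets. A stdlib-only sketch; the specific payloads are illustrative, not an exhaustive fuzz corpus:

```python
def edge_case_payloads(base="test"):
    # Seed list of the edge cases discussed above; extend per target.
    return [
        "A" * 10000,                  # long string: buffer/DoS handling
        base + "%00",                 # URL-encoded null byte
        base + "\x00",                # raw null byte: premature termination
        base + "\u202e\u0000\ufffd",  # unicode controls: filter/normalization quirks
    ]

for p in edge_case_payloads():
    print(repr(p[:24]))
```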

πŸ“‹ Reporting & Validation

A clear and concise report is crucial for getting your vulnerability validated and rewarded.

  • Proof of Concept (PoC)

    • Detail: Provide the minimum, unambiguous steps needed for the security team to reproduce the vulnerability.
    • Minimal Steps:
      1. Request (curl/raw HTTP): The exact HTTP request that triggers the vulnerability. curl commands are often preferred because they are easily runnable; alternatively, include the raw HTTP request text.
      2. Modified Payload: Clearly indicate what part of the request was modified (e.g., specific parameter, header) and what the malicious payload was.
      3. Successful Response or UI Change: Show the evidence of the vulnerability (e.g., the HTTP response containing leaked data, a screenshot of the UI change, error messages indicating successful exploitation).
  • Environment Details

    • Detail: Contextual information that can help the vendor reproduce the issue, especially if it's environment-specific.
    • List: Application version (if known), browser type and version, operating system, any specific tokens (e.g., session cookies, CSRF tokens) or cookies used during testing.
  • Screenshots / Logs

    • Detail: Visual and textual evidence to support your claim.
    • Capture Before/After with Timestamps: Show the state of the application before the exploit and the result after. Timestamps help correlate with server logs.
    • Include Raw HTTP Snippets: Relevant portions of requests and responses.
  • Suggested Fix

    • Detail: Offering a potential solution demonstrates your understanding and can be helpful to the vendor.
    • References:
      • OWASP (Open Web Application Security Project) Guidelines: Refer to specific OWASP cheatsheets or recommendations (e.g., for input validation, IDOR prevention).
      • Common Mitigation Patterns:
        • Strict ID Validation: Ensuring users can only access objects they are authorized for.
        • Input Sanitization: Cleaning user-supplied input to prevent injection attacks (XSS, SQLi, etc.).
        • Proper CORS (Cross-Origin Resource Sharing) Config: Preventing unauthorized cross-domain requests.
  • Re-Test After Patch

    • Detail: Once the vendor claims to have fixed the issue, verify their solution.
    • Confirmation: Ensure the original vulnerability is no longer exploitable.
    • Regression Testing: Check that the fix hasn't inadvertently broken legitimate functionality or introduced new vulnerabilities.
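The PoC structure above can be templated so every report follows the same Request / Payload / Result order. A sketch; the section layout and sample values are just one possible convention:

```python
def build_poc(request_raw, payload_note, evidence):
    # Assemble the three minimal PoC steps into one markdown snippet.
    return "\n".join([
        "## Proof of Concept",
        "### 1. Request",
        "```",
        request_raw.strip(),
        "```",
        "### 2. Modified Payload",
        payload_note,
        "### 3. Result",
        evidence,
    ])

# Hypothetical IDOR report body.
poc = build_poc(
    'curl -s "https://example.com/api/items?id=124" -H "Cookie: session=REDACTED"',
    "Changed `id` from 123 (own item) to 124 (another user's item).",
    "Response contained another user's order data (screenshot attached).",
)
print(poc)
```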

πŸ”„ Iterative Recon & Monitoring

Reconnaissance is not a one-time activity; it's an ongoing process as applications and infrastructure evolve.

  • Passive Monitoring

    • Detail: Continuously gathering information without actively probing the target.
    • Tools:
      • Subfinder + Amass: Tools for discovering subdomains.
      • Passive Sources:
        • crt.sh: A website that logs SSL certificates, often revealing new subdomains when certificates are issued.
        • Certificate Transparency (CT) Logs: Public logs of all issued SSL/TLS certificates. Monitoring these can uncover new assets.
    • Goal: Catch new subdomains as they come online.
  • Alerting

    • Detail: Setting up automated notifications for changes in the target's attack surface.
    • Triggers: New technologies detected (e.g., a new JavaScript library, a different web server), new API endpoints appearing.
    • Automation Examples:
      • GitHub Actions Workflows: Custom scripts run on a schedule or triggered by events.
      • CI/CD (Continuous Integration/Continuous Deployment) Logs: If accessible, these might indicate new deployments or features.
  • Periodic Re-Scans

    • Detail: Regularly re-running your reconnaissance and scanning tools against high-value domains.
    • Schedule: E.g., weekly scans to detect changes in assets, open ports, or web technologies.
  • Changelog & Patch Notes

    • Detail: Monitoring official communications from the vendor.
    • Purpose: New features often introduce new code and, potentially, new vulnerabilities. Changelogs can highlight areas to focus on.
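As a sketch of consuming CT-log data in a monitoring pipeline, here is a parser that turns crt.sh-style JSON into a deduplicated subdomain list. The sample payload below is fabricated; real data comes from crt.sh's JSON output endpoint:

```python
import json

# Fabricated sample mimicking crt.sh JSON output.
sample = json.dumps([
    {"name_value": "api.example.com\nwww.api.example.com"},
    {"name_value": "*.staging.example.com"},
])

def ct_subdomains(raw):
    # Extract unique hostnames from CT-log JSON, stripping wildcard prefixes.
    names = set()
    for entry in json.loads(raw):
        for n in entry["name_value"].splitlines():
            names.add(n.lstrip("*.").lower())
    return sorted(names)

print(ct_subdomains(sample))
```

Diffing each run's output against the previous one is what surfaces new subdomains as they come online.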

πŸš€ Continuous Improvement

Refining your process and skills over time.

  • Post-Mortem

    • Detail: After submitting a bug report (successful or not), review your process.
    • Analysis: What techniques worked well? Where was time wasted? Was the reward commensurate with the effort? This helps optimize future hunting.
  • Knowledge Sharing (Personal Wiki)

    • Detail: Maintain a personal database of information gathered about specific vendors or targets.
    • Content:
      • Default Headers: Common headers used by the target's applications.
      • Known WAF (Web Application Firewall) Fingerprints: How to identify the WAF in use and potentially bypass it.
      • Quirks: Any unusual behaviors or configurations specific to a target.
  • Skill Growth

    • Detail: Actively working to improve your technical abilities.
    • Practice Labs:
      • OWASP Juice Shop: An intentionally insecure web application for learning and practicing web hacking.
      • HackTheBox: A platform with vulnerable machines and challenges.
    • CTFs (Capture The Flag Competitions): Challenges focused on API security and business logic flaws can be particularly relevant.

🎯 Purpose of Reconnaissance

The core reasons why reconnaissance is performed.

  • Information Gathering: Collect data on:
    • Subdomains: e.g., dev.example.com, api.example.com.
    • Open Ports: Network ports listening for connections (e.g., 80 for HTTP, 443 for HTTPS, 22 for SSH).
    • Hidden Directories: Web directories not directly linked from the site.
    • Services: Software running on open ports (e.g., web servers, databases, mail servers).
  • Attack Surface Mapping: Identify all potential entry points and externally accessible assets an attacker could target. A larger, well-mapped attack surface increases the chances of finding vulnerabilities.
  • Understanding Infrastructure: Gain insights into:
    • Server Details: Operating systems, web server software (Apache, Nginx), etc.
    • Hosting Environments: Cloud providers (AWS, Azure, GCP), on-premise data centers.
    • WAF Implementations: Identify if a Web Application Firewall is in use and potentially what type, which can affect testing strategies.

πŸ“ Important Notes

Key takeaways and cautions for reconnaissance.

  • Quality Over Quantity: Simply running many automated tools can generate a lot of noise, duplicate findings (already reported by others), or low-severity issues. Focus on meaningful discovery.
  • Focus on High-Impact Bugs: Top bug bounty hunters often prioritize finding significant vulnerabilities (e.g., Remote Code Execution, SQL Injection, serious business logic flaws) that have a greater impact and are often overlooked by purely automated approaches.
  • Tailored Approach: Reconnaissance strategies should not be one-size-fits-all. Adapt your methods based on the specific target, the scope of the engagement (bug bounty vs. penetration test), and the time available.

πŸ“Œ Recon Based on Scope

How the nature of the engagement influences reconnaissance.

  • Bug Bounty Programs

    • Broad Scope: Often include *.company.com or even all assets owned by the company. This provides a large area to explore.
    • Time Flexibility: Researchers can usually report vulnerabilities whenever they find them, without strict deadlines (unless it's a time-limited event).
    • Asset Discovery: Actively finding new, previously unknown assets (including those from newly acquired companies) is often encouraged and can lead to unique findings.
  • Penetration Testing

    • Defined Scope: Typically limited to specific domains, applications, or IP ranges explicitly agreed upon beforehand.
    • Time-Bound: Conducted over a predetermined period (e.g., 1 to 3 weeks). Efficiency is key.
    • Restricted Recon: Certain activities might be explicitly out of scope, such as extensive subdomain enumeration on unrelated parent domains or crawling internet archives for historical data, to keep the focus tight.
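Scope differences like these are worth encoding up front so automation never strays out of bounds. A minimal wildcard filter; the patterns here are hypothetical:

```python
from fnmatch import fnmatch

IN_SCOPE = ["*.company.com", "company.com"]
OUT_OF_SCOPE = ["it.company.com"]  # explicit exclusions trump wildcards

def in_scope(host):
    # True if host matches an in-scope pattern and no exclusion.
    if any(fnmatch(host, p) for p in OUT_OF_SCOPE):
        return False
    return any(fnmatch(host, p) for p in IN_SCOPE)

print([h for h in ["api.company.com", "it.company.com", "evil.com"] if in_scope(h)])
```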

πŸ” Reconnaissance Process

A more granular breakdown of reconnaissance steps, often forming a repeatable methodology.

  • 1. Asset Discovery (Initial broad steps to find company-owned entities)

    • Acquisitions:
      • Purpose: When a company acquires another, the acquired company's assets often become in-scope. Identifying these can reveal new targets.
      • Platforms:
        • Crunchbase: Database of companies, investors, and funding.
        • Tracxn: Platform for tracking startups and private companies.
        • Owler: Business information and competitive insights.
    • WHOIS Data:
      • Purpose: Domain registration information (registrant name, organization, contact email, nameservers) can link different domains owned by the same entity.
    • Reverse WHOIS:
      • Purpose: Use registrant information (like an email or name) found from a WHOIS lookup to find other domains registered by that same entity.
  • 1. Identify the Root Domain (Reiteration/Alternative Starting Point)

    • Detail: Start with the main, publicly known domain (e.g., company.com). This forms the basis for subsequent subdomain discovery.
  • 2. Research Acquisitions and Company History (Reiteration)

    • Detail: Understand company growth, mergers, and acquisitions. This can reveal related domains or older, potentially less secure, infrastructure.
    • Tools: Crunchbase, Wikipedia, Google searches.
  • 3. Perform Reverse WHOIS Lookup (Reiteration)

    • Tools: Whoxy.com, or general Google searches for registrant emails/names.
  • 4. Analyze Technologies and Analytics

    • Purpose: Identify the software stack (programming languages, frameworks, web servers, analytics tools, etc.) used by the target. Knowing the technology helps in tailoring attacks (e.g., specific exploits for a known version of a CMS).
    • Tools/Extensions:
      • Wappalyzer: Browser extension and tool to identify technologies on websites.
      • BuiltWith: Website and tool providing technology profiles of websites.
  • 5. Enumerate Subdomains

    • Purpose: Find all subdomains associated with the root domains (e.g., blog.company.com, api.company.com, dev.company.com).
    • Tools:
      • Amass: Comprehensive tool for active and passive subdomain enumeration, network mapping.
      • Sublist3r: Passive subdomain enumeration tool using search engines and third-party services.
      • DNSDumpster: Web-based tool for DNS reconnaissance.
      • MassDNS: A high-performance DNS resolver often used for brute-forcing subdomains with large wordlists.
    • Wordlists:
      • JHaddix's all.txt: A popular and comprehensive wordlist for subdomain brute-forcing.
  • 6. Gather ASN Information

    • Purpose: An Autonomous System Number (ASN) is a unique number assigned to an Autonomous System (AS): a collection of IP networks, operated by one or more network operators, with a single clearly defined external routing policy. Identifying a target's ASNs can reveal entire IP ranges they own.
    • Benefit: Helps map the organization's network presence.
  • 7. Conduct Port Scanning

    • Purpose: Identify open network ports on discovered subdomains and IP addresses. Each open port might be running a service that could be an entry point.
    • Tools:
      • Nmap (Network Mapper): A powerful open-source tool for network discovery and security auditing, including port scanning, OS detection, and service version detection.
  • 8. Document Findings

    • Purpose: Take visual snapshots of web applications and document anything unusual or noteworthy. This helps in later analysis, prioritization, and reporting.
    • Tools:
      • Eyewitness: Takes screenshots of websites, provides some server header information, and can identify default credentials.
      • Aquatone: Similar to Eyewitness, used for visual inspection of websites across a large number of hosts.
  • 9. Check for Subdomain Takeovers

    • Purpose: A subdomain takeover occurs when a subdomain (e.g., info.company.com) has a DNS record (e.g., CNAME) pointing to a third-party service (e.g., GitHub Pages, Heroku, S3 bucket), but the service is no longer configured or the account has been deleted. An attacker can then claim this orphaned service endpoint and host malicious content on the legitimate subdomain.
  • 2. ASN and IP Analysis (More Detail/Consolidation)

    • Autonomous System Numbers (ASNs): (As described above)
    • Tools:
      • Hurricane Electric BGP Toolkit (he.net): Web-based tool to explore BGP (Border Gateway Protocol) information and ASNs.
      • amass intel -asn <ASN>: Amass can use ASN to find associated CIDRs/IP ranges.
      • asnmap: Tool to map ASNs to IP ranges.
    • Reverse DNS:
      • Purpose: Look up domain names associated with a given IP address.
      • Tools: hakrevdns can perform reverse DNS lookups on a list of IPs.
  • 3. SSL/TLS Certificate Analysis

    • Certificate Transparency Logs: (As described above – monitoring new certs).
    • Tools:
      • TLSX: A fast SSL/TLS data gathering and analysis tool. It can extract Subject Alternative Names (SANs) and Common Names (CNs) from certificates, which often list other related domains and subdomains.
    • Fingerprinting:
      • JARM: An active TCP fingerprinting tool for identifying SSL/TLS server applications.
      • JA3/JA3S: A method for creating SSL/TLS client (JA3) and server (JA3S) fingerprints based on the parameters of the SSL/TLS handshake. These can help identify specific client applications or malware, or group similar server configurations.
  • 4. Shodan Search

    • Purpose: Search engine for internet-connected devices.
    • Device Enumeration: Find servers, IoT devices, webcams, industrial control systems, etc., related to the target using specific queries (e.g., org name, IP range, port numbers).
    • Vulnerability Identification: Can reveal exposed services, default credentials, misconfigurations, or known vulnerabilities based on software banners.
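The subdomain takeover check (step 9 above) usually boils down to matching a CNAME target against known service fingerprints. A toy sketch: the fingerprint strings here are illustrative and go stale quickly; real, maintained lists live in projects like can-i-take-over-xyz:

```python
# Illustrative fingerprints only; real lists are larger and change over time.
TAKEOVER_SIGNS = {
    "github.io": "There isn't a GitHub Pages site here",
    "herokuapp.com": "No such app",
}

def takeover_candidate(cname, body):
    # Flag a subdomain whose CNAME points at a third-party service
    # and whose response carries that service's "unclaimed" message.
    for service, sign in TAKEOVER_SIGNS.items():
        if cname.endswith(service) and sign in body:
            return service
    return None

print(takeover_candidate("docs.example.github.io",
                         "404: There isn't a GitHub Pages site here."))
```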

GOAL: Find Every Possible Target & Attack Vector

This section defines what constitutes an attack vector in different contexts.

  • An Injection Attack Vector is the unique combination of:

    • HTTP Verb: GET, POST, PUT, DELETE, PATCH, etc. The method used for the request.
    • Domain:Port: The target host and port (e.g., example.com:443).
    • Endpoint: The specific path of the URL (e.g., /api/v1/users).
    • Injection Point: The specific parameter, header, cookie, or part of the URL path where malicious input is injected.
    • Significance: Each unique combination is a distinct place to test for vulnerabilities like SQL Injection, XSS, Command Injection, etc.
  • A Logic Attack Vector typically takes one of the following four forms:

    • Overly Complex Mechanism: A feature or workflow with many steps, conditions, or dependencies. Complexity increases the chance of oversight and flaws.
    • Database Query Using ID From HTTP Request: Any endpoint that retrieves data based on an ID supplied in the request (e.g., /items?id=123) is a potential IDOR vector.
    • Granular Access Controls: Systems with many roles, permissions, or fine-grained access rules. The more complex the rules, the harder they are to implement and enforce correctly, potentially leading to privilege escalation or authorization bypasses.
    • "Hacky" Implementations: Code or features that seem rushed, poorly designed, or like workarounds. These often cut corners on security.
  • Ebb & Flow:

    • Concept: Your bug hunting process should be iterative.
    • Methodology:
      1. Follow the recon methodology to identify 3-5 promising attack vectors on a target URL.
      2. Spend focused time testing these vectors.
      3. If you get stuck or results diminish, "put a pin" (pause and remember) in those vectors.
      4. Return to an earlier stage of recon, try new tools/techniques to expand knowledge of the attack surface.
      5. Choose 3-5 new attack vectors.
      6. Repeat.
    • Goal: Maintain momentum, avoid burnout on a single path, and continuously expand coverage.
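The injection attack vector definition above is just a cartesian product, which makes it easy to enumerate exactly what needs testing. A sketch with hypothetical host, endpoints, and parameters:

```python
from itertools import product

def injection_vectors(host, endpoints, params,
                      verbs=("GET", "POST", "PUT", "DELETE")):
    # Each (verb, host, endpoint, injection point) tuple is one
    # distinct place to test, per the definition above.
    return list(product(verbs, [host], endpoints, params))

vectors = injection_vectors("example.com:443", ["/api/v1/users"], ["id", "role"])
print(len(vectors))  # 4 verbs x 1 endpoint x 2 params = 8
```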

# Core Recon Workflow

A structured workflow, particularly useful for programs with broad scopes or when starting with just a company name.

  • Finding Apex Domains

    • Summary: For programs with "Wide Open Scope" (any asset owned by the company), you first need to find the main (apex) domains (e.g., company.com, anotherproduct.com) before you can find subdomains.
    • Example Programs: US Department of Defense (DoD), Tesla.
    • Input: Company Name
    • Techniques:
      • Web Scraping:
        • Tools: Shodan, DNS Dumpster, Reverse WHOIS (viewdns.info).
        • Amass Intel Module: amass intel -org 'Company Name' can find domains associated with an organization.
        • Creativity: Think of less obvious public places where a company might list domains (e.g., marketing materials, job postings, partner pages, legal documents). The goal is to find domains other researchers might miss.
      • Google Dorking: Using advanced Google search operators (intitle:, intext:, site:, filetype:, etc.) to find specific information or domains. This can uncover websites hosted on domains not easily found by searching for the company name directly.
      • Cloud IP Ranges:
        • Method: Scan IP ranges belonging to cloud providers (AWS, Azure, GCP) where the company might host assets. Extract SSL certificate data from responding IPs and look for certificates issued to domains containing the company name or related keywords.
        • Note: This can be time-consuming and data-intensive.
      • Autonomous System Number (ASN):
        • Method: If a company hosts its own infrastructure (on-premise), it will likely have registered IP address ranges with an ISP, which are assigned an ASN. Query public resources (e.g., BGP tools) for ASNs associated with the company to find its IP ranges and potentially apex domains resolving within those ranges.
      • Acquisitions & Mergers:
        • Method: Monitor tech news, financial news, and sites like Crunchbase for M&A activity. Domains of acquired companies often become in-scope.
      • LinkedIn + GitHub:
        • Method:
          1. Use LinkedIn to find developers/engineers working for the target company.
          2. Try to find their personal GitHub accounts (if public).
          3. Search their public repositories for code snippets containing domains or keywords related to the target company. Developers sometimes test company code or use company assets in personal projects.
      • Marketing & Favicon:
        • Tracking Cookies: If you find a website using the same unique tracking cookie ID (e.g., Google Analytics ID, Hubspot ID) as known company sites, it might also belong to the company.
        • Favicon Hashing: Calculate the hash (e.g., MD5, MMH3) of a known company favicon. Search for this hash in search engines like Shodan, Censys, or specialized favicon search tools. Sites using the same favicon might be related.
    • Output: List of Apex Domains
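Favicon hashing from the list above can be sketched in a few lines. A caveat: Shodan's `http.favicon.hash` filter uses MurmurHash3 of the base64-encoded favicon body (the third-party `mmh3` package); MD5 is used below only to stay stdlib-only, and the favicon bytes are fabricated:

```python
import hashlib

def favicon_fingerprint(favicon_bytes):
    # MD5 stand-in for cross-referencing hosts by favicon.
    # (Shodan's http.favicon.hash uses MurmurHash3 of the base64 body.)
    return hashlib.md5(favicon_bytes).hexdigest()

known = favicon_fingerprint(b"\x00\x01fake-company-favicon")
candidate = favicon_fingerprint(b"\x00\x01fake-company-favicon")
print(known == candidate)  # same favicon bytes, same fingerprint
```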
  • Finding Live Web Applications

    • Summary: Once you have apex domains, the next step is to find all associated subdomains and then determine which of these (and their corresponding IPs/ports) are hosting live web applications.
    • Input: Apex Domain
    • Steps (Apex Domain β†’ List of Subdomains):
      • Amass: (As mentioned before) A primary tool for comprehensive subdomain discovery using various techniques. It typically surfaces ~80% of subdomains; the remaining ~20% require more creative, manual effort.
      • Web Scraping (for subdomains):
        • Tools: Sublist3r, Assetfinder, GetAllUrls (GAU - fetches known URLs from AlienVault's Open Threat Exchange, Wayback Machine, and Common Crawl), Certificate Transparency Logs (via tools like ctfr, subfinder, or manually on crt.sh), Subfinder (passive discovery tool).
        • Goal: Use public resources and APIs to find subdomains.
      • Brute Force:
        • Method: Trying a list of common or generated subdomain names against the apex domain.
        • Tools:
          • ShuffleDNS: A wrapper around MassDNS, used for resolving subdomains with wildcard filtering and subdomain bruteforcing.
          • CeWL + ShuffleDNS: CeWL crawls a website and generates a custom wordlist based on words found on the site. This list can then be used with ShuffleDNS for more targeted subdomain brute-forcing, potentially finding subdomains that follow a naming convention visible in the site's content.
      • Link Discovery (Crawling existing findings for more links):
        • Tools:
          • GoSpider: A fast web spider that can find URLs, subdomains, and JavaScript files.
          • SubDomainizer: Scans JavaScript files and web pages for (sub)domains and other interesting information.
        • Note: This step is iterative: you find some subdomains, crawl them, find more, and repeat. It's worth running link discovery twice, once before this stage and once at the end.
      • Cloud IP Ranges:
        • Tool: Clear-Sky (author's tool) automates scanning cloud IP ranges and extracting certificate data to find associated domains/subdomains. This can take a long time.
    • Steps (List of Subdomains β†’ List of Live URLs):
      • Resolve Subdomains to IPs:
        • Method: Convert FQDNs (Fully Qualified Domain Names like sub.example.com) to their IP addresses.
        • Caution: Prone to false positives (e.g., shared hosting, CDNs). Manual verification is crucial.
        • Verification: Check if IPs fall within known company ASN ranges (for on-prem) or access the IP directly in a browser. Accessing by IP can sometimes bypass security controls or reveal different applications due to Host Header variations.
      • Port Scanning:
        • Method: On the verified IPs, scan for open ports beyond standard web ports (80, 443), such as 8000, 8080, 8443, etc.
        • Tools: DNMasscan (combines DNS resolution with Masscan, a very fast port scanner). Again, verify results.
      • Consolidate:
        • Method: Create a unique list of subdomains. Filter out out-of-scope domains that crawlers might have picked up.
      • Test for Live Web App:
        • Method: Probe the unique list of subdomains/IPs/ports with HTTP/S requests to see if a web server responds.
        • Tools: httprobe, httpx (these tools take a list of domains and probe for live HTTP/S servers).
    • Output: List of URLs Pointing to Live Web Applications
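The consolidation step above is mostly normalization plus scope filtering across tool outputs. A sketch with fabricated Amass/Subfinder results:

```python
def consolidate(*tool_outputs, scope_suffix=".example.com"):
    # Merge subdomain lists from several tools into one deduplicated,
    # in-scope list; normalize case and trailing dots.
    merged = set()
    for out in tool_outputs:
        for host in out:
            h = host.strip().lower().rstrip(".")
            if h.endswith(scope_suffix) or h == scope_suffix.lstrip("."):
                merged.add(h)
    return sorted(merged)

amass = ["API.example.com.", "dev.example.com"]
subfinder = ["dev.example.com", "cdn.other.com"]  # out-of-scope pickup
print(consolidate(amass, subfinder))
```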
  • Choosing Target URLs

    • Summary: From the list of live web applications, select which ones are most promising for manual testing based on indicators of potential vulnerability or lack of maintenance.
    • Input: List of URLs Pointing to Live Web Applications
    • Techniques:
      • Wide-Band Scanning (Automated Initial Scan):
        • Purpose: Quickly scan all live URLs for known vulnerabilities, misconfigurations, or outdated software. Quick wins are rare (everyone runs these scans), but the results help highlight neglected applications.
        • Tools:
          • Nuclei: Fast, template-based vulnerability scanner. It has a large community-provided template base and allows custom YAML templates.
          • Semgrep: Open-source static analysis tool. Can be used on client-side JavaScript to find DOM XSS patterns, insecure coding practices. If lucky (e.g., unobfuscated webpack source maps), you can download raw client-side code (React, Vue, Angular) and scan it.
      • Choosing an App Worth Your Time (Manual Indicators): This is where experience ("Pointers") comes in.
        • Screenshots:
          • Tools: Nuclei (has screenshot templates), EyeWitness (gathers info and screenshots).
          • Look For: Major visual differences between apps, error messages, debug information, default pages, development environments.
        • Tech Stack:
          • Comfort Zone: Prioritize apps built with technologies you are familiar with and enjoy testing (e.g., MERN stack vs. .NET).
          • Tools: Wappalyzer, BuiltWith.
        • NPM Packages (Client-Side):
          • Method: Enumerate client-side JavaScript libraries and their versions. Check if these versions have known CVEs.
          • Tool: Retire.js (browser extension or command-line tool).
          • Caution: A known CVE in a library doesn't automatically mean the application is vulnerable. The vulnerable function in the library must actually be used by the application. However, many outdated packages suggest poor maintenance.
        • Certificates:
          • Expired Certificate: Strong indicator of a neglected application.
          • Mismatched Certificate: (e.g., cert for old.example.com served on new.example.com). Could indicate recent changes, rushed migration, or misconfiguration.
          • Self-Signed Certificate: Should not be on public-facing production systems. Often indicates a development/test environment accidentally exposed.
          • Caution: Appending port 443 to a URL (e.g., https://example.com:443) might cause browsers or tools to show a mismatch if the cert is only for example.com, but it's the same app.
    • Output: List of URLs Hosting Web Applications Worth Your Time
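The expired-certificate signal is easy to check programmatically. A sketch over a dict shaped like `ssl.SSLSocket.getpeercert()` output; the sample cert below is fabricated:

```python
import ssl, time

def cert_expired(cert):
    # cert["notAfter"] uses the "%b %d %H:%M:%S %Y GMT" format that
    # ssl.cert_time_to_seconds() expects.
    return ssl.cert_time_to_seconds(cert["notAfter"]) < time.time()

sample = {"notAfter": "Jan  1 00:00:00 2020 GMT"}
print(cert_expired(sample))  # long past its expiry date
```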
  • Enumeration (Deep Dive into a Chosen Target URL)

    • Summary: Once a promising URL is chosen, the goal is to find specific Attack Vectors by thoroughly examining its components.
    • Input: URL Pointing to Live Web Application Worth Your Time (Target URL)
    • Injection Attack Vectors (User-controlled input not sanitized):
      • Endpoints (Routes/Paths):
        • Manual Clicking: Explore the application normally to map out intended functionality.
        • Automated Crawl: Use crawlers to find endpoints missed manually (e.g., in JavaScript files, sitemaps).
          • Tools: PortSwigger's Burp Suite (Site map > Discover content), Project Discovery's Katana, Caido (another web security auditing toolkit).
        • Fuzzing For Endpoints (Brute-forcing directories/files):
          • Tools: FFUF, Burp Intruder, and Burp Suite's Discover Content feature.
      • Parameters (User-controlled input in URL query string or request body):
        • Tools for finding hidden/unlinked parameters: Arjun, Burp Param Miner (extension for Burp Suite).
      • HTTP Verbs (Methods: GET, POST, PUT, DELETE, OPTIONS, etc.):
        • Method: Test each endpoint with different HTTP verbs. Some endpoints might behave differently or expose unintended functionality with verbs other than the one typically used.
        • Tool: appscan by gh0st (mentioned for testing verbs).
      • Headers/Cookies:
        • Method: Fuzz for non-standard or hidden HTTP headers and cookies that might be processed by the application, leading to vulnerabilities or revealing debug functionality.
        • Tools: Burp Param Miner, FFUF, Burp Intruder.
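To make the fuzzing workflow above concrete, here is a minimal Python sketch of how candidate requests are built (FFUF, Arjun, and Param Miner do the same thing at far greater scale, with response diffing on top). The wordlist entries and the canary value are illustrative, not from any real target:

```python
from urllib.parse import urljoin, urlencode

def endpoint_candidates(base_url: str, wordlist: list[str]) -> list[str]:
    """Build candidate URLs for directory/file fuzzing (what FFUF does at scale)."""
    return [urljoin(base_url, word) for word in wordlist]

def parameter_candidates(endpoint: str, params: list[str],
                         canary: str = "fuzz123") -> list[str]:
    """Build candidate URLs for hidden-parameter discovery.
    Each candidate sets one suspected parameter to a canary value; a change
    in the response suggests the parameter is processed server-side."""
    return [f"{endpoint}?{urlencode({p: canary})}" for p in params]

# Example usage (hypothetical host and wordlist):
# endpoint_candidates("https://app.example.com/", ["admin", "api/v1", "debug"])
# parameter_candidates("https://app.example.com/profile", ["debug", "admin"])
```

The interesting part is never the request generation but the diffing: compare status codes, response lengths, and timing against a known-miss baseline to decide which candidates are real.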
    • Logic Attack Vectors (Flaws in application design/workflow, developer oversight):
      • Dev Tools (Browser Developer Tools - F12): A primary source for initial logic assessment.
        • Client-Side Data Storage:
          • localStorage / sessionStorage: Check for sensitive data stored here (e.g., tokens, user info). While convenient, it's accessible to any JavaScript running on the page (e.g., XSS).
        • Cookies & Cookie Flags:
          • Data Stored in Cookie: Look for plaintext or easily decodable (e.g., Base64 encoded JSON like JWTs) sensitive data.
          • Cookie Signed for Integrity: If data is present, is it signed to prevent tampering? If signed, is the signature validated consistently across all endpoints?
          • Secure Flag: Ensures cookie is only sent over HTTPS. Absence can lead to leakage over HTTP.
          • HttpOnly Flag: Prevents client-side JavaScript from accessing the cookie. Absence makes session cookies vulnerable to XSS.
          • SameSite Flag (Strict, Lax, None): Mitigates CSRF. SameSite=None (especially without Secure) can make CSRF attacks easier.
        • Client-Side JavaScript:
          • Webpack Bundle Obfuscated?: If not (e.g., source maps are exposed), the raw framework source (React, Vue, etc.) can often be recovered (e.g., using Burp's JS Miner extension or similar tools) for deeper analysis with tools like Semgrep.
          • Readable JS?: Unminified or well-commented JS is easier to analyze for flaws or hidden endpoints.
          • Custom JS Files?: Custom logic is less vetted than standard libraries and more prone to bugs.
          • Secrets/API Keys?: Hardcoded secrets in client-side JS are a common vulnerability.
          • API Endpoints in JS?: JS code often reveals API endpoints, including potentially hidden or undocumented ones.
        • State/Props (for frameworks like React, Vue, Angular):
          • Tools: React Developer Tools (browser extension).
          • Method: Inspect the component state and props in the Virtual DOM for sensitive data that developers might assume is not easily accessible.
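The cookie-flag checks above can be sketched as a small audit function. This is a rough illustration only; a real parser (e.g., Burp's) handles quoting and edge cases this does not:

```python
def audit_set_cookie(header: str) -> list[str]:
    """Flag missing security attributes in a Set-Cookie header value."""
    attrs = [part.strip().lower() for part in header.split(";")]
    findings = []
    if "secure" not in attrs:
        findings.append("missing Secure (cookie can leak over plain HTTP)")
    if "httponly" not in attrs:
        findings.append("missing HttpOnly (readable by JavaScript, XSS risk)")
    if not any(a.startswith("samesite=") for a in attrs):
        findings.append("missing SameSite (CSRF mitigation not set)")
    elif "samesite=none" in attrs:
        findings.append("SameSite=None (cookie sent on cross-site requests)")
    return findings

# Example (hypothetical session cookie):
# audit_set_cookie("session=abc; Secure; HttpOnly; SameSite=Lax")  -> []
```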
      • Mechanisms (Series of HTTP requests for a specific task, e.g., CRUD):
        • Look For:
          • Complex Mechanisms: More steps/parameters = more chances for errors.
          • Sensitive Mechanisms: Those handling valuable data or critical functions. Impact is key for bug bounties.
        • Examples: Password Reset, SSO/OAuth Authentication, File Upload, Shopping Cart/Checkout.
      • Access Controls (Rules dictating what a client can access/do):
        • Role-Based Access Control (RBAC): Users assigned roles (Admin, User). Test for privilege escalation (e.g., User performing Admin actions).
        • Discretionary Access Control (DAC): Owners of resources grant permissions to others (e.g., sharing a document). Test if uninvited users can access.
        • Granular Policy-Based Access Controls (PBAC): Very specific permissions for individual users/operations (e.g., user A can CREATE and READ item X, but not UPDATE or DELETE). These are complex and prone to bypasses.
      • Database Queries (Focus on IDORs):
        • Method: Look for endpoints where the application queries a database using an identifier from the HTTP request (e.g., ObjectID, UserID, Email).
        • Goal: Test if by manipulating this identifier, you can access data belonging to other users or entities that you shouldn't have access to. The focus here is on authorization, not necessarily SQL injection (though that's also a risk).
    • Output: List of Attack Vectors Worth Your Time
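As one small example of turning an attack vector into concrete tests, here is a hypothetical helper that generates neighboring identifiers for IDOR probing. Actual testing means replaying the original request with each candidate from a second account's session and diffing the responses; non-numeric IDs (GUIDs, hashes) usually require harvesting valid values from a second account instead:

```python
def idor_candidates(object_id: str) -> list[str]:
    """Given an identifier observed in a request, generate neighboring
    values to test for IDOR. Only numeric IDs are varied here; padding
    width is preserved (e.g., '0042' -> '0041')."""
    candidates = []
    if object_id.isdigit():
        width = len(object_id)
        value = int(object_id)
        for delta in (-2, -1, 1, 2):
            candidates.append(str(value + delta).zfill(width))
    return candidates
```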

Finding Bugs w/ Recon

Using the gathered reconnaissance data with automation to find bugs, sometimes without extensive manual testing.

  • Leaked Secrets (In-App)

    • Input: URL Pointing to Live Web Application Worth Your Time
    • Concept: Finding sensitive data unintentionally exposed within the application's client-side resources or responses. Requires speed (report first) or creativity (find what others miss).
    • Examples:
      • API Key in client-side JavaScript.
      • Server responses returning excessive user data (e.g., password hashes, salts for all users).
      • JWTs leaking the "seed" for randomness (allowing prediction).
      • Sensitive chat messages stored in React State/Props visible via dev tools.
      • Unobfuscated webpack revealing debug API endpoints.
      • Plaintext credentials stored in localStorage (e.g., a "fix" for session timeouts).
    • Output: Data Valuable to an Attacker
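A minimal sketch of in-app secret hunting: scan downloaded JavaScript against a few leak patterns. The two patterns below are illustrative; dedicated scanners (TruffleHog, gitleaks) ship hundreds, plus entropy checks to cut false positives:

```python
import re

# Illustrative patterns only: an AWS access key ID prefix and a
# generic "api_key/secret = '...'" assignment.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"""(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*['"]([^'"]{8,})['"]"""
    ),
}

def scan_js_for_secrets(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in client-side JS."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group(0)))
    return hits
```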
  • Leaked Secrets (Web Scraping)

    • Input: Company Name
    • Concept: Developers sometimes post code snippets or ask for help on public forums, accidentally leaking sensitive information or internal code.
    • Sources: StackOverflow, Pastebin, public forums.
    • Output: Data Valuable to an Attacker
  • Leaked Secrets (GitHub/GitLab)

    • Input: Company Name, Employee Names, Company GH Org
    • Concept: Finding sensitive data in public code repositories.
    • Methods:
      • Public Repo on Company's Official Org Account: A repository that should be private but is accidentally public. Look for API keys, credentials, or internal logic that could be exploited.
      • Repo on Software Engineer's Personal Account: Developers might use personal accounts for company-related code (testing, side projects, unauthorized collaboration). Use LinkedIn to find engineers, then search for their GitHub/GitLab accounts.
      • String Search + Code Types: Search all public repos for company names or apex domains, filtered by specific languages (e.g., starbucks.com in bash or python scripts, hoping for hardcoded credentials).
      • String Search + Wordlist: Combine company/domain search with keywords like "password," "api_key," "secret."
        • Tool: Author's tool R-s0n/Github_Brute-Dork.
    • Output: Data Valuable to an Attacker
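The string-search methods above can be scripted as a simple dork generator. The keywords and language filters are just the examples from this section; paste the resulting queries into GitHub's code search or feed them to a search tool:

```python
def github_dorks(domain: str,
                 keywords=("password", "api_key", "secret"),
                 languages=("Python", "Shell")) -> list[str]:
    """Build GitHub code-search queries combining an apex domain
    with leak-prone keywords and optional language filters."""
    dorks = []
    for kw in keywords:
        dorks.append(f'"{domain}" {kw}')
        for lang in languages:
            dorks.append(f'"{domain}" {kw} language:{lang}')
    return dorks

# Example (hypothetical target domain):
# github_dorks("example.com", keywords=("password",), languages=("Python",))
```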
  • CVE Spraying

    • Input: List of URLs Pointing to Live Web Applications
    • Concept: A CVE (an entry in the Common Vulnerabilities and Exposures catalog) is a publicly disclosed vulnerability in specific software. CVE spraying involves testing many targets for a specific CVE or a set of CVEs.
    • Hunting Styles:
      • Recon Heavy: Find attack vectors (domains, apps, infrastructure) that other researchers are missing, then scan these unique assets for existing CVEs using tools like Nuclei.
      • Future Bugs:
        1. A new CVE is announced (e.g., for an NPM package, cloud service, CMS like WordPress).
        2. Quickly build a way to test for this CVE (custom script or Nuclei template). The goal is to test for it on bug bounty programs before automated tools or other researchers do.
      • Both At Once (GOLD STANDARD): The ideal is to find unique attack surfaces and be among the first to test for newly disclosed CVEs on those surfaces.
    • Output: Valid CVE Found on Target's Attack Surface
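For the "Future Bugs" style, the usual move is to wrap the new check in a Nuclei template. Below is a minimal, hypothetical template as a sketch of the format; the id, path, and matcher string are illustrative, not a real CVE check. It would be run across a URL list with something like `nuclei -l urls.txt -t example-exposed-env.yaml`:

```yaml
id: example-exposed-env

info:
  name: Exposed .env File (illustrative example)
  author: you
  severity: high

http:
  - method: GET
    path:
      - "{{BaseURL}}/.env"
    matchers:
      - type: word
        part: body
        words:
          - "APP_KEY="
```

The speed advantage comes from having the template ready within hours of a disclosure, before the check lands in public template repositories.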

πŸ” Reconnaissance Techniques (Additional/Overlooked - often from sources like Intigriti blog)

These are more specific or less common techniques to augment your recon.

  • 1. Custom Wordlists

    • Purpose: Improve brute-force attacks (directories, files, parameters) by using wordlists tailored to the target, rather than generic lists.
    • Tools: CeWL (crawls a site and generates a wordlist from its content).
    • Benefits: More relevant findings, fewer unnecessary requests (less noise, less risk of WAF blocking).
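A rough sketch of what CeWL does once it has page content (CeWL itself adds spidering, depth limits, and many more options); the sample text is hypothetical:

```python
import re
from collections import Counter

def build_wordlist(page_text: str, min_length: int = 5, top: int = 50) -> list[str]:
    """Extract candidate wordlist entries from page content,
    most frequent first, lowercased and deduplicated."""
    # First char must be a letter; total length >= min_length.
    words = re.findall(r"[A-Za-z][A-Za-z0-9_-]{%d,}" % (min_length - 1), page_text)
    counts = Counter(w.lower() for w in words)
    return [w for w, _ in counts.most_common(top)]
```

Feeding the output straight into FFUF or Burp Intruder gives target-specific brute forcing with far fewer wasted requests than a generic list.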
  • 2. Virtual Host (VHost) Enumeration

    • Purpose: Discover web applications hosted on the same IP address but configured for different Host headers. These might not be discoverable via DNS.
    • Method: Brute-force the Host header with a list of potential hostnames (e.g., common subdomains, variations of the target name) while sending requests to a known IP of the target.
    • Tools: Ffuf (e.g., ffuf -w vhost_wordlist.txt -H "Host: FUZZ.target.com" -u http://target_ip; add -fs <default_response_size> to filter out the server's catch-all response).
  • 3. Forced Browse with Different HTTP Methods

    • Purpose: Some endpoints might only be accessible, or behave differently, with specific HTTP methods (POST, PUT, DELETE, etc.) that are not typically used during normal browsing.
    • Approach: Systematically test discovered endpoints with various HTTP methods.
  • 4. JavaScript File Monitoring

    • Purpose: Detect new API endpoints, parameters, or functionalities as they are added to JavaScript files over time.
    • Tool: Jsmon (monitors JS files for changes and alerts you).
  • 5. Crawling with Different User-Agent Headers

    • Purpose: Some websites serve different content or have different interfaces for mobile devices, specific browsers, or search engine crawlers.
    • Method: Emulate various User-Agent strings (e.g., iPhone, Android browser, Googlebot) when crawling or interacting with the site.
  • 6. Favicon Hashing

    • Purpose: Identify related websites or assets that share the same favicon (the small icon displayed in browser tabs).
    • Method:
      1. Fetch the target's favicon file.
      2. Calculate its hash (e.g., MMH3 hash for Shodan).
      3. Search for this hash on platforms like Shodan (http.favicon.hash:<hash>) or Censys.
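In practice most hunters compute the hash with the third-party mmh3 package (`mmh3.hash(base64.encodebytes(favicon_bytes))`). The stdlib-only sketch below shows the same computation, including the detail that Shodan hashes the newline-wrapped base64 of the icon bytes, not the raw bytes:

```python
import base64

def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Pure-Python 32-bit MurmurHash3 (x86), signed output like mmh3.hash()."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed & 0xFFFFFFFF
    rounded = len(data) & ~3
    for i in range(0, rounded, 4):          # 4-byte body blocks, little-endian
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    k = 0                                   # remaining 1-3 tail bytes
    tail = data[rounded:]
    if len(tail) >= 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
    h ^= len(data)                          # finalization (fmix32)
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h - 0x100000000 if h >= 0x80000000 else h

def shodan_favicon_hash(favicon_bytes: bytes) -> int:
    """Shodan hashes the *newline-wrapped* base64 of the favicon bytes."""
    return murmur3_32(base64.encodebytes(favicon_bytes))
```

Search the resulting integer on Shodan as `http.favicon.hash:<value>`.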
  • 7. Analyzing Legacy JavaScript Files

    • Purpose: Old or archived versions of JavaScript files might contain deprecated API endpoints, comments, or sensitive information that has since been removed from live versions but might still be active on the backend.
    • Method: Use the Wayback Machine (archive.org) to find historical versions of a site's JS files.
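The Wayback Machine's CDX API makes this scriptable. The sketch below only builds the query URL listing archived .js snapshots for a domain (fetch the result with curl or urllib); the field names are from the public CDX server API:

```python
from urllib.parse import urlencode

def wayback_js_query(domain: str, limit: int = 200) -> str:
    """Build a Wayback Machine CDX API query for archived JavaScript
    files under a domain; each line of the response is one snapshot."""
    params = {
        "url": f"{domain}/*",
        "output": "text",
        "filter": r"original:.*\.js",   # regex filter on the original URL field
        "collapse": "urlkey",           # one row per unique URL
        "limit": str(limit),
    }
    return "http://web.archive.org/cdx/search/cdx?" + urlencode(params)
```

Diff the archived JS against the live files: endpoints that were removed from the front end are sometimes still live on the back end.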

Dark Web in Bug Bounty (Leveraging Cyber Threat Intelligence - CTI)

This section, attributed to "Mater," discusses using dark web intelligence.

  • Embracing CTI: CTI involves collecting and analyzing information about cyber threats (actors, TTPs - Tactics, Techniques, and Procedures) to help organizations mitigate risks.
  • Leveraging the Dark Web: The dark web is a source for stolen data, hacking tools, and discussions among cybercriminals. Info-stealer malware often compromises accounts (including those of security researchers) and leaks credentials.
  • Practical Application in Bug Bounty Hunting (Mater's Methodology):
    1. Email Enumeration: Gather employee emails of the target company (e.g., using Hunter.io).
    2. Data Leak Investigation: Search dark web forums, marketplaces, and Telegram channels for leaked credentials associated with these emails or the target company.
    3. Credential Validation:
      • Identify login portals for the target company (e.g., VPN, admin panels, internal tools).
      • Tools like Logsensor (GitHub tool for searching logs for specific patterns, potentially to find where credentials might be used) can assist.
      • Test the compromised credentials on these portals.
    4. Access Exploitation: If credentials work, attempt to access various portals to find vulnerabilities or sensitive data.
  • Ethical Considerations:
    • Program Policies: Not all bug bounty programs accept findings based on leaked credentials. Some might mark them as informational, duplicates, or out of scope (especially if the leak isn't directly the company's fault). Always check the program's policy.
    • Legal Boundaries: Ensure all actions comply with legal and ethical standards. Accessing systems with credentials, even if found publicly, can be a gray area or illegal depending on jurisdiction and context if not explicitly authorized.

Skills Checklist & Offensive Skills

These are extensive lists of technologies, tools, and offensive techniques/vulnerability types.

  • Skills Checklist (Technologies):

    • Purpose: A self-assessment checklist for a bug bounty hunter to gauge their familiarity with building and securing a wide array of web technologies, frameworks (front-end and back-end), APIs, cloud platforms (AWS, Azure, GCP), CI/CD tools, infrastructure components, and security concepts.
    • Content: Covers HTML/CSS/JS basics, various programming languages (PHP, Ruby, Python, Java, Node.js), frameworks (React, Angular, Vue, Django, Spring, Express), API types (REST, GraphQL, SOAP), data formats (JSON, XML), authentication/authorization protocols (OAuth, JWT, SAML), security mechanisms (CSP, CORS), cloud services (S3, Lambda, IAM, VPC), containerization (Docker, Kubernetes), and much more.
    • Implication: The broader and deeper your knowledge of these technologies (from a developer's and a security perspective), the better equipped you are to find vulnerabilities.
  • Offensive Skills (Tools/Techniques & Vulnerability Types):

    • Purpose: A self-assessment checklist for a bug bounty hunter to gauge their experience in "weaponizing" (i.e., actively exploiting or testing for) various vulnerabilities and using common offensive security tools.
    • Content:
      • Vulnerability Types: Covers the OWASP Top 10 and many more, including SQL Injection, XSS, CSRF, SSRF, LFI/RFI, Auth Bypass, IDOR, XXE, Business Logic Flaws, various injection types, insecure deserialization, misconfigurations, etc. It also includes more advanced concepts like race conditions, blind attacks, second-order injections, and cloud-specific vulnerabilities.
      • Tools: Lists a vast array of popular security tools, including proxies (Burp Suite, ZAP), scanners (Nmap, Nikto, Nuclei), fuzzers (FFUF, Gobuster), recon tools (Amass, Sublist3r, Recon-ng), exploitation frameworks (Metasploit), password crackers (John the Ripper, Hashcat), web crawlers, API testing tools (Postman, Insomnia), cloud security tools (ScoutSuite, Prowler), and many more.
    • Implication: Proficiency in these areas is essential for effectively finding and demonstrating the impact of vulnerabilities.
