I built VibeScan — an LLM-powered security audit for AI-generated SaaS apps. It's $49, it produces a PDF, it's for founders shipping on Lovable / Bolt / v0 / Cursor.
Today I pointed it at its own codebase. The tool's author is me. The repo is the same Python + Claude Agent orchestration layer I use to run the audits. If the tool works, it should find real bugs in any codebase — including mine. If it doesn't find anything in mine, either my code is unusually clean (it isn't) or the tool is a placebo.
It found 2 HIGH findings. 0 critical, 0 medium. One was mitigated. One was a real leak I'd shipped to main. Here's the receipt.
The finding
[HIGH] Apify API token passed as URL query parameter (leaks into logs)
→ scripts/crawl_apify_reviews.py:165
The Apify token is appended to every request URL as ?token=....
URLs routinely get captured in HTTP server logs, proxy logs, and error
stack traces — anyone who sees those logs can use the token to run paid
Apify actors on your account and rack up charges.
Fix: In _api_call(), pass the token via the Authorization header
instead of appending it to the URL, per Apify's documented auth
options.
I opened the file. The function looked like this:
def _api_call(method, path, body=None, token=None):
    url = f"{APIFY_API_BASE}{path}"
    if token:
        sep = "&" if "?" in url else "?"
        url = f"{url}{sep}token={urllib.parse.quote(token)}"
    headers = {"Accept": "application/json"}
    # ... send the request ...
The URL ends up looking like https://api.apify.com/v2/acts/<actor-id>/runs?token=apify_api_<60char_secret>.
That URL hits the wire. My HTTP client doesn't log it, but any corporate proxy, any reverse proxy, any Python stack trace that includes the request URL — they all capture the token. If any of those logs leak (or if I share a stack trace in a bug report), the token leaks. The token in question runs paid Apify actors on my account. Somebody with it can burn my budget faster than I can rotate.
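To make the leak concrete, here's a minimal reproduction of the vulnerable pattern. The function and the token value are illustrative stand-ins, not the real script:

```python
import urllib.parse

APIFY_API_BASE = "https://api.apify.com/v2"

def build_url(path, token=None):
    # Same shape as the vulnerable _api_call: token goes into the query string.
    url = f"{APIFY_API_BASE}{path}"
    if token:
        sep = "&" if "?" in url else "?"
        url = f"{url}{sep}token={urllib.parse.quote(token)}"
    return url

url = build_url("/acts/my-actor/runs", token="apify_api_SECRET")
# Anything that records this URL (proxy log, traceback, pasted bug report)
# records the secret along with it.
print(url)
```

The point is that the secret is no longer a separate piece of data; it is part of the URL string that every intermediary handles.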
How did I ship this? Probably the same way anyone ships it: the Apify quickstart docs show ?token=... as the first example. I copied the pattern during a session where the goal was "get the API working", not "secure the token flow." The code worked, the tests passed, I moved on. Classic velocity-over-security.
The fix
Apify documents both ?token=... and Authorization: Bearer <token> as supported auth forms. The header is strictly safer — no URL logging, no referrer leak, no stack-trace capture.
Before:
url = f"{APIFY_API_BASE}{path}"
if token:
    sep = "&" if "?" in url else "?"
    url = f"{url}{sep}token={urllib.parse.quote(token)}"
headers = {"Accept": "application/json"}
After:
url = f"{APIFY_API_BASE}{path}"
headers = {"Accept": "application/json"}
if token:
    headers["Authorization"] = f"Bearer {token}"
One fewer line. One fewer import (urllib.parse.quote is no longer needed). One fewer leak surface.
I committed the fix, re-ran the smoke tests, and pushed to main — 14 minutes from finding to fix.
The other finding (already mitigated)
VibeScan also flagged this:
[HIGH] Gmail OAuth refresh token stored in repo credentials/ directory
→ hub/gmail_api.py:37
The Gmail refresh token for my automation lives in credentials/gmail_oauth_token.json. If someone got that file, they could read and send email from my account until I revoke the token manually — refresh tokens don't expire.
This one I'd already mitigated, but only by convention:
- credentials/ is in .gitignore, so the token has never been committed.
- The hub repo is private.
Those two together mean an attacker would need to compromise my Windows box to get the token. That's a real threat model (see: every malware-infected dev box in existence), but it's not the same class as "anyone with proxy logs can read the token." VibeScan was right to flag it; the severity in my specific setup is lower than in the default threat model the finding assumes.
The cleaner long-term fix is to move the token to the OS keychain (Windows Credential Manager) and load it from there. I've filed that as a backlog item rather than chasing it this week — the mitigation bar is already higher than most security findings I've seen in customers' codebases.
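A sketch of that backlog item, assuming the third-party keyring package (which backs onto Windows Credential Manager on Windows). The service and entry names here are placeholders, not the ones in hub/gmail_api.py:

```python
def load_gmail_refresh_token():
    """Fetch the Gmail refresh token from the OS keychain, if available."""
    try:
        import keyring  # pip install keyring
        # Service/entry names are illustrative assumptions.
        return keyring.get_password("gmail-automation", "refresh_token")
    except Exception:
        # keyring not installed or no OS backend; caller can fall
        # back to the existing credentials/ file path.
        return None
```

Storing the token once with `keyring.set_password(...)` and deleting the JSON file would take the secret off disk entirely.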
Why this matters more than a clean audit
If VibeScan had found nothing, I would have been suspicious — not proud. Every non-trivial codebase has some security debt. Mine does. The question isn't whether the tool finds issues; it's whether it finds real issues (not noise), explains them clearly (not CVE-jargon), and gives specific fixes (not "review this").
The findings above: one real bug, one over-reported but accurate concern. Both came with concrete file paths and line numbers. Both had copy-paste fixes. Both took under 15 minutes to validate and either fix or triage.
That's the experience I promise buyers. Running it against my own code is the most honest unit test of that promise I can run.
The wider pattern
If you ship AI-scaffolded code and you're worried there's a leaky URL token, a committed secret, an unauthenticated function, or a misconfigured RLS policy in there — there probably is. Every codebase has some. The question is whether you know where, and whether you've prioritized the ones that actually matter.
If you want the same treatment for your app, VibeScan is $49 one-time at systag.gumroad.com/l/vibescan. PDF in ~10 minutes, 7-day refund if the report isn't useful. Or there's a public sample if you want to see the format before committing.
Either way — check your own token-passing code today. ?token= in a URL is the kind of bug that sits quietly for years until the logs leak.
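If you want a quick first pass at that check, here's a hypothetical fifteen-line grep over a repo's Python files. It's a blunt instrument (it will miss tokens built across lines and flag some false positives), but it catches the exact pattern I shipped:

```python
import pathlib
import re

# Matches a credential-looking name in a URL query string, e.g. "?token=" or "&api_key=".
TOKEN_IN_URL = re.compile(r"[?&](token|key|api[_-]?key|secret)=", re.IGNORECASE)

def scan_for_url_tokens(root="."):
    """Return (path, line_number, line) for every suspicious query-string match."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if TOKEN_IN_URL.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Anything it flags deserves a look at whether the value belongs in an Authorization header instead.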
— Michael