Perufitlife

I built the same security auditor twice this week — once for Supabase, once for PocketBase

I built the same security auditor twice this week — once for Supabase, once for PocketBase. Here's the pattern that ports across BaaS, and where it breaks.

Two Mondays ago I scanned my own Supabase project and found 17 publicly readable tables I had no idea about. b2b_leads, internal growth metrics, engagement emails — all of it readable by the anon role, because that was the default behavior PostgREST shipped with for years.

I shipped an open-source auditor for it. Then I ported the same approach to PocketBase. This post is about what survived the port and what didn't.

The pattern that ports

Every BaaS that exposes the database directly to the browser ships with the same shape of vulnerability:

  1. A default that exposes everything. Supabase's default for years was that any table in public is readable by the anon role unless you explicitly add RLS and policies. PocketBase's default for new collections is empty rules, which means fully public for that operation. Different mechanism, identical foot-gun.
  2. A "looks safe" rule that isn't. In Supabase: an RLS policy with no WITH CHECK clause, so it only constrains reads. In PocketBase: @request.auth.id != "", which any logged-in user (including a self-signed-up anonymous one) passes.
  3. A leftover dev rule. Supabase: using (true). PocketBase: a literal true in the rule field. Both bypass everything.
  4. A separate surface that bypasses your careful rules entirely. Supabase: tables in the supabase_realtime publication broadcast row changes via WebSocket; if RLS is off on a table in the publication, every INSERT/UPDATE goes to anyone subscribed. PocketBase: the OAuth2 redirect URL whitelist — leave it empty and you've got an open redirect.
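A minimal sketch of the rule classifier these patterns imply, using PocketBase-style rule strings. The function name and severity labels are my own, not the shipped audit.js; note that in PocketBase's rule semantics a null rule is locked to superusers, while an empty string is fully public:

```javascript
// Hypothetical classifier for the known-bad patterns above.
// PocketBase semantics: null rule = superusers only; "" = public.
function classifyRule(rule) {
  if (rule === null) return { level: "ok", why: "locked: superusers only" };
  if (rule.trim() === "") return { level: "critical", why: "empty rule: fully public" };
  const normalized = rule.replace(/\s+/g, "");
  if (normalized === "true")
    return { level: "critical", why: "leftover dev rule bypasses everything" };
  if (normalized === '@request.auth.id!=""')
    return { level: "warning", why: "any logged-in user passes, including self-signed-up ones" };
  return { level: "ok", why: "no known-bad pattern matched" };
}
```

The Supabase version classifies policy definitions instead of rule strings, but the shape is the same: a small list of normalized patterns, each mapped to a severity and an explanation.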

So the auditor for both looks structurally the same:

  • Authenticate against a privileged endpoint to read the schema/rules
  • Walk every collection/table, classify each rule against a small set of known-bad patterns
  • Surface findings ranked by severity, with a fix snippet on each
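The three steps above can be sketched as one loop. Shapes here are illustrative, not the shipped audit.js; fetchSchema and classifyRule stand in for the platform-specific pieces:

```javascript
// Illustrative audit loop: privileged schema read, classify every rule,
// rank findings by severity. Helper signatures are assumptions.
async function audit(fetchSchema, classifyRule) {
  const collections = await fetchSchema(); // 1. privileged endpoint
  const findings = [];
  for (const c of collections) {           // 2. walk + classify
    for (const [op, rule] of Object.entries(c.rules)) {
      const f = classifyRule(rule);
      if (f.level !== "ok") {
        findings.push({ collection: c.name, op, ...f, fix: f.fix ?? null });
      }
    }
  }
  const rank = { critical: 0, warning: 1 }; // 3. severity order
  return findings.sort((a, b) => rank[a.level] - rank[b.level]);
}
```

Everything platform-specific lives behind the two injected functions, which is what made the port a matter of days rather than a rewrite.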

Where the port broke

Active probe semantics. This is the part I'm proud of and the part that has to be redesigned per platform.

The differentiator versus every other passive scanner is that after we infer a leak from the metadata, we send an actual anonymous request and try to fetch data. If we get rows back, the finding is marked confirmed: true with the row count, columns visible, and bytes returned. The report shows "we just fetched 42 rows from this collection without any auth" instead of "this might be exposed."

For Supabase the probe is straightforward: the project has a stable PostgREST endpoint at https://{ref}.supabase.co/rest/v1/{table}?limit=1, and we can pull the anon key automatically from the Management API at /v1/projects/{ref}/api-keys?reveal=true. One HTTP GET, one true/false answer.

For PocketBase the probe targets {base}/api/collections/{name}/records?perPage=1 with no auth header at all. PocketBase has no concept of a public anon API key — anonymous = no header. That actually makes the probe simpler, but it broke my abstraction: the Supabase code passes anonKey everywhere; the PocketBase code passes null and just doesn't add an Authorization header. Same probe semantics, different proof material.
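That difference collapses into a small branch if you build the probe request separately from sending it. A hedged sketch under those assumptions (names are mine; the real code differs):

```javascript
// Build the anonymous read probe per platform. Supabase proves anonymity
// with the anon key; PocketBase proves it by sending no auth header at all.
function buildProbe(target) {
  if (target.kind === "supabase") {
    return {
      url: `https://${target.ref}.supabase.co/rest/v1/${target.table}?limit=1`,
      headers: { apikey: target.anonKey, Authorization: `Bearer ${target.anonKey}` },
    };
  }
  // pocketbase: anonymous = no Authorization header
  return {
    url: `${target.base}/api/collections/${target.name}/records?perPage=1`,
    headers: {},
  };
}

// Sending and interpreting the response is identical for both platforms.
async function probeRead(target) {
  const { url, headers } = buildProbe(target);
  const res = await fetch(url, { headers });
  if (!res.ok) return { confirmed: false };
  const body = await res.json();
  // Supabase returns a bare array; PocketBase wraps rows in { items: [...] }
  const rows = Array.isArray(body) ? body : (body.items ?? []);
  return { confirmed: rows.length > 0, rows: rows.length,
           columns: rows[0] ? Object.keys(rows[0]) : [] };
}
```

Keeping the proof material (key vs. absence of a header) inside the builder is what lets the confirmation logic stay shared.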

Write probes are the other big difference. For Supabase, I never probe writes (a POST /rest/v1/users with anon would actually create a row if RLS is open — too destructive even for a dry-run audit). For PocketBase, same constraint — createRule/updateRule/deleteRule findings are inferred from rule metadata only. The probe runs only against listRule and viewRule. That's a deliberate safety boundary, not a missing feature.
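In code that boundary is just an allowlist gate in front of the prober. A sketch — the rule names match PocketBase's, everything else is illustrative:

```javascript
// Only read-side rules are ever probed live; write rules stay
// metadata-inferred so the audit can never create, mutate, or delete rows.
const PROBEABLE_RULES = new Set(["listRule", "viewRule"]);

function shouldProbe(finding) {
  return PROBEABLE_RULES.has(finding.op);
}
```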

What I'm not going to port (yet)

I researched four other BaaS for this:

  • Firebase: huge TAM, but the security-scanner space is saturated. FireScan, Firepwn, FireBaseScanner, OpenFirebase — four mature tools already exist. Diff would have to come from something none of them offer (MCP integration, Apify distribution, agent-loop remediation). Maybe later, not first.
  • Appwrite: 55.7k stars, Discord 17k active devs, zero security scanners exist for it. Same playbook applies, ports cleanly. Next on the list.
  • Convex: security lives inside TypeScript functions, not declarative rules. No external surface to scan without static analysis. Skip.
  • Nhost / Xata: smaller communities, marginal ROI compared to PocketBase + Appwrite combo.

If you wanted to do this for one of those, the pattern is portable: pick the privileged endpoint to read the rule layer, define your "known bad" classifications, write a probe that fetches against the public API, render an HTML report with copy-paste fixes.

Three surfaces, same code

I ship every audit tool in three shapes from the same audit.js:

  1. CLI / Node skill — npx <tool> [creds] for devs who want it in their shell or in their AI agent's skill folder.
  2. MCP server — wraps the same audit() function in stdio MCP tools so Claude Code / Cursor / Cline can call it conversationally and apply fixes (with a confirm: true gate). The MCP version is the only one in the BaaS-scanner space that closes the loop instead of just reporting.
  3. Apify actor — for users who don't want to install Node or use AI tooling. BYOK pattern, password used only for the run.

Same audit.js and report.js shared across all three. The Apify actor calls audit() directly. The MCP server calls audit() directly and adds tools around it. Distribution-wise this means one product change ships to three audiences in one push.
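Wiring-wise that looks roughly like this. All shapes are illustrative: audit() is stubbed, and the real shells handle argument parsing, MCP framing, and Apify I/O:

```javascript
// One core, three shells. The stub stands in for the shared audit().
async function audit(creds) {
  return { findings: [], scanned: 0, authed: Boolean(creds) };
}

// 1. CLI shell: creds from argv, report to stdout
async function cliMain(argv) {
  const result = await audit({ url: argv[0], key: argv[1] });
  return JSON.stringify(result, null, 2);
}

// 2. MCP shell: same core exposed as tools, with a confirm gate on fixes
const mcpTools = {
  run_audit: async ({ creds }) => audit(creds),
  apply_fix: async ({ confirm }) => {
    if (confirm !== true) throw new Error("refusing to write without confirm: true");
    return { applied: true };
  },
};

// 3. Apify shell: BYOK input, password lives only for this run
async function apifyMain(input) {
  return audit({ url: input.url, password: input.password });
}
```

The point of the layout is that none of the shells contain audit logic — they only translate their surface's input and output conventions.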

Links

Supabase:

PocketBase (shipped today):

All MIT. Both pull the rule/schema metadata privately, run an anonymous probe to confirm leaks live, and emit a self-contained HTML report with a fix snippet on every finding.

If you run either against your own production instance and find something interesting, drop a comment — I'd love to know what patterns are actually in the wild beyond the four I've coded for.
