invoice.naija-vpn.com was serving the Carter Efe $50K/month Twitch story as its meta description. That page is an invoice generator tool for Nigerian freelancers. Nothing about it has anything to do with Carter Efe.
The agent caught it on the first run. A scraper wouldn't have, because a scraper reads raw HTML before JavaScript executes. The agent uses Playwright: it reads the rendered DOM and sees what a browser sees after all the scripts run. The homepage content was leaking in dynamically, so a scraper sees an empty description while the agent sees the actual problem.
I own both domains — naija-vpn.com and naijavpn.com — for NaijaVPN, a Virtual Payment Navigator for Nigerian freelancers and creators. I ran the agent on my own sites to see what it would find.
What the agent found
Four URLs. Four FAILs.
| URL | Title | Description | Overall |
|---|---|---|---|
| naija-vpn.com/ | FAIL — 65 chars | FAIL — 185 chars | FAIL |
| naijavpn.com/ | FAIL — 66 chars | FAIL — 192 chars | FAIL |
| /cleva-vs-geegpay | FAIL — 63 chars | PASS | FAIL |
| invoice.naija-vpn.com | FAIL — no page-specific meta | FAIL — no page-specific meta | FAIL |
The homepage title was "NaijaVPN - Virtual Payment Navigator for Nigerian Freelancers" — 65 characters against the 60-character display limit. The description ran 185 characters against the 160-character limit.
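The length checks themselves need no model. A minimal sketch of what such a Tier 1 check could look like, assuming the common 60/160 display limits (the function, field names, and output format are mine, not the agent's actual code):

```python
# Hypothetical Tier 1 check: pure string arithmetic, zero API cost.
# The 60/160 limits are common SEO display thresholds; the function
# and field names are illustrative, not the agent's actual API.
TITLE_LIMIT = 60
DESC_LIMIT = 160

def check_page(title: str, description: str) -> dict:
    """Return a PASS/FAIL verdict per field plus an overall result."""
    results = {
        "title": "PASS" if len(title) <= TITLE_LIMIT
                 else f"FAIL — {len(title)} chars",
        "description": "PASS" if len(description) <= DESC_LIMIT
                       else f"FAIL — {len(description)} chars",
    }
    results["overall"] = (
        "PASS" if all(v == "PASS" for v in results.values()) else "FAIL"
    )
    return results
```

Fed a 65-character title and a 185-character description, this reproduces the homepage's first-run row: `FAIL — 65 chars`, `FAIL — 185 chars`, overall `FAIL`.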
The invoice.naija-vpn.com subdomain was the worst finding. It's a separate subdomain — Google treats it as its own site — but it had no page-specific metadata at all. The agent was reading "Carter Efe makes $50K/month from Twitch — here's how he gets paid in Nigeria..." as the meta description for an invoice generator tool. The subdomain is a React SPA, and the static HTML `<head>` had no `<meta name="description">` tag; homepage content was leaking in dynamically after JavaScript executed.
A requests-based scraper would have missed this entirely. It reads the raw HTML before JavaScript executes and would have seen an empty description, not the leaking homepage copy. The agent uses Playwright — it reads the rendered DOM, the same page a browser sees after all the scripts run. That's the difference.
The cost curve routing on real pages
The audit ran in tiered mode. Here's what the routing looked like:
- naija-vpn.com/ → Tier 1 — deterministic, zero API cost
- naijavpn.com/ → Tier 1 — canonical pointing correctly to naija-vpn.com, clean
- /cleva-vs-geegpay → Tier 1 — clear structure, no escalation needed
- invoice.naija-vpn.com → Tier 2 (Haiku) — title on the boundary, needed a closer look
Three tiers, four pages, one run. Total API cost for the audit: under $0.05.
What I actually changed
The rewrite agent ran next — same cost curve applied to the suggestions. Tier 1 truncated the titles deterministically. Haiku generated description suggestions. Sonnet wrote the voice-matched copy for the invoice subdomain opening paragraph.
Three changes I deployed:
Homepage title: "NaijaVPN - Get Paid Internationally in Nigeria" — 46 characters. Cut the bloat, kept the value proposition.
Homepage description: Trimmed from 185 to 156 characters. The agent's suggestion preserved the Carter Efe reference — that's the hook that makes people click — but cut everything that was padding.
invoice.naija-vpn.com: Added a proper `<meta name="description">` to the static HTML `<head>` of the React SPA — the fix lives in `index.html`, not in the JavaScript. One line of HTML. The agent's suggested copy: "Generate professional invoices instantly. Receive international payments in Nigeria with Geegpay, Payoneer and Cleva. Free invoice tool for Nigerian freelancers." — 158 characters, PASS.
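For reference, the shape of that one-line fix in the SPA's static `index.html` (surrounding markup abbreviated; only the `<meta>` line is the actual change):

```html
<!-- index.html: the tag sits in static HTML, so crawlers and
     scrapers see it before any JavaScript executes -->
<head>
  <!-- ...existing tags... -->
  <meta name="description" content="Generate professional invoices instantly. Receive international payments in Nigeria with Geegpay, Payoneer and Cleva. Free invoice tool for Nigerian freelancers.">
</head>
```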
The verification run
Reset state. Re-ran the audit.
| URL | Title | Description | Tier | Overall |
|---|---|---|---|---|
| naija-vpn.com/ | PASS — 46 chars | PASS — 156 chars | Tier 1 | PASS |
| naijavpn.com/ | PASS — 46 chars | PASS — 156 chars | Tier 1 | PASS |
| /cleva-vs-geegpay | PASS — 52 chars | PASS — 156 chars | Tier 1 | PASS |
| invoice.naija-vpn.com | PASS — 50 chars | PASS — 149 chars | Tier 2 | PASS |
4/4. Zero flags.
The verification run hit Tier 1 on every page — pure deterministic Python, zero API calls. That's the cost curve completing its own argument: the first run used Haiku and Sonnet because the issues were ambiguous enough to need judgment. The verification run used nothing because the fixes were mechanical and the checks are mechanical. The model cost dropped to zero not because the run was cheaper, but because the problems were gone.
The finding that mattered most
The invoice.naija-vpn.com subdomain inheriting homepage metadata is a serious SEO problem. Google treats subdomains as separate sites — so this wasn't "duplicate metadata on the same site." It was a standalone subdomain with no metadata of its own, rendering the homepage's Carter Efe story as the description for an invoice tool.
The agent caught it because it reads the rendered DOM. A scraper reads raw HTML — it would have seen an empty description and flagged the absence, not the leak. The agent saw what a real browser sees: the homepage content dynamically injected after JavaScript executed.
The agent found it automatically, on the first run, and proposed a fix.
That's the actual value. Not the character counts. The automated surface of a problem that was invisible to a quick look at the page.
Two full audit passes plus the rewrite run: under $0.15 total. Four pages, three runs, one afternoon.
The free core at dannwaneri/seo-agent handles the audit and the basic report. The PDF report — with per-page screenshots, severity-sorted issues, and the rewrite suggestions — is in the Pro layer. The cost curve runs in both.