Daniel Nwaneri

I Ran My Own SEO Agent on My Two Domains — It Went from 0/4 to 4/4 PASS in One Afternoon

invoice.naija-vpn.com was serving the Carter Efe $50K/month Twitch story as its meta description. That page is an invoice generator tool for Nigerian freelancers. Nothing about it has anything to do with Carter Efe.

The agent caught it on the first run. A scraper wouldn't have — it reads raw HTML before JavaScript executes. The agent uses Playwright, reads the rendered DOM, sees what a browser sees after all the scripts run. The homepage content was leaking in dynamically. The scraper sees an empty description. The agent sees the actual problem.

I own both domains — naija-vpn.com and naijavpn.com, a Virtual Payment Navigator for Nigerian freelancers and creators. I ran the agent on my own sites to see what it would find.


What the agent found

Four URLs. Four FAILs.

| URL | Title | Description | Overall |
| --- | --- | --- | --- |
| naija-vpn.com/ | FAIL — 65 chars | FAIL — 185 chars | FAIL |
| naijavpn.com/ | FAIL — 66 chars | FAIL — 192 chars | FAIL |
| /cleva-vs-geegpay | FAIL — 63 chars | PASS | FAIL |
| invoice.naija-vpn.com | FAIL — no page-specific meta | FAIL — no page-specific meta | FAIL |

The homepage title was "NaijaVPN - Virtual Payment Navigator for Nigerian Freelancers" — 65 characters against a 60-character display limit. The description ran 185 characters against the 160-character limit.
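Checks like that are pure arithmetic on the rendered strings. A minimal sketch of what a deterministic length check might look like — the function name, report shape, and limits are my illustration of the idea, not the agent's actual API:

```python
# Illustrative Tier 1 check: deterministic PASS/FAIL against the display
# limits the post uses (60 chars for titles, 160 for descriptions).
# No model involved — this is why Tier 1 has zero API cost.

TITLE_LIMIT = 60
DESC_LIMIT = 160

def check_lengths(title: str, description: str) -> dict:
    """Return a PASS/FAIL verdict per field, with the offending length on FAIL."""
    def verdict(text: str, limit: int) -> str:
        return "PASS" if len(text) <= limit else f"FAIL — {len(text)} chars"
    return {
        "title": verdict(title, TITLE_LIMIT),
        "description": verdict(description, DESC_LIMIT),
    }

# A 65-char title and a 185-char description, like the homepage's first audit:
print(check_lengths("x" * 65, "y" * 185))
# {'title': 'FAIL — 65 chars', 'description': 'FAIL — 185 chars'}
```

The same function run on the fixed 46/156-character pair returns PASS on both fields, which is what makes the verification run free.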

The invoice.naija-vpn.com subdomain was the worst finding. It's a separate subdomain — Google treats it as its own site — but it had no page-specific metadata at all. The agent was reading "Carter Efe makes $50K/month from Twitch — here's how he gets paid in Nigeria..." as the meta description for an invoice generator tool. The subdomain is a React SPA and the static HTML <head> had no <meta name="description"> tag. Homepage content was leaking in dynamically after JavaScript executed.

A requests-based scraper would have missed this entirely. It reads the raw HTML before JavaScript executes and would have seen an empty description, not the leaking homepage copy. The agent uses Playwright — it reads the rendered DOM, the same page a browser sees after all the scripts run. That's the difference.


The cost curve routing on real pages

The audit ran in tiered mode. Here's what the routing looked like:

  • naija-vpn.com/ → Tier 1 — deterministic, zero API cost
  • naijavpn.com/ → Tier 1 — canonical pointing correctly to naija-vpn.com, clean
  • /cleva-vs-geegpay → Tier 1 — clear structure, no escalation needed
  • invoice.naija-vpn.com → Tier 2 (Haiku) — title on the boundary, needed a closer look

Two of the system's three tiers needed, four pages, one run. Total API cost for the audit: under $0.05.
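The routing rule itself can stay simple. Here's one plausible escalation policy as a sketch — the tier names and the 5-character "boundary zone" are my assumptions for illustration, not the project's actual thresholds:

```python
# Illustrative tiered router: clear-cut cases resolve deterministically
# (zero API cost); cases near the display limit escalate to a cheap model
# for judgment. The 5-char boundary zone is an assumed threshold.

TITLE_LIMIT = 60
BOUNDARY = 5

def route_title(title_len: int) -> str:
    """Pick the cheapest tier that can confidently decide this check."""
    if abs(title_len - TITLE_LIMIT) > BOUNDARY:
        return "tier1-deterministic"   # clearly PASS or clearly FAIL
    return "tier2-haiku"               # on the boundary: worth a model call

print(route_title(46))  # tier1-deterministic — well under the limit
print(route_title(62))  # tier2-haiku — on the boundary
print(route_title(90))  # tier1-deterministic — clearly over
```

The economics fall out of the shape of real pages: most titles are nowhere near the boundary, so most checks never touch a model.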


What I actually changed

The rewrite agent ran next — same cost curve applied to the suggestions. Tier 1 truncated the titles deterministically. Haiku generated description suggestions. Sonnet wrote the voice-matched copy for the invoice subdomain opening paragraph.

Three changes I deployed:

Homepage title: "NaijaVPN - Get Paid Internationally in Nigeria" — 46 characters. Cut the bloat, kept the value proposition.

Homepage description: Trimmed from 185 to 156 characters. The agent's suggestion preserved the Carter Efe reference — that's the hook that makes people click — but cut everything that was padding.

invoice.naija-vpn.com: Added a proper <meta name="description"> to the static HTML <head> of the React SPA — the fix lives in index.html, not in the JavaScript. One line of HTML. The agent's suggested copy: "Generate professional invoices instantly. Receive international payments in Nigeria with Geegpay, Payoneer and Cleva. Free invoice tool for Nigerian freelancers." — 158 characters, PASS.


The verification run

Reset state. Re-ran the audit.

| URL | Title | Description | Tier | Overall |
| --- | --- | --- | --- | --- |
| naija-vpn.com/ | PASS — 46 chars | PASS — 156 chars | Tier 1 | PASS |
| naijavpn.com/ | PASS — 46 chars | PASS — 156 chars | Tier 1 | PASS |
| /cleva-vs-geegpay | PASS — 52 chars | PASS — 156 chars | Tier 1 | PASS |
| invoice.naija-vpn.com | PASS — 50 chars | PASS — 149 chars | Tier 1 | PASS |

4/4. Zero flags.

The verification run hit Tier 1 on every page — pure deterministic Python, zero API calls. That's the cost curve completing its own argument: the first run used Haiku and Sonnet because the issues were ambiguous enough to need judgment. The verification run used nothing because the fixes were mechanical and the checks are mechanical. The model cost dropped to zero not because the run was cheaper, but because the problems were gone.


The finding that mattered most

The invoice.naija-vpn.com subdomain inheriting homepage metadata is a serious SEO problem. Google treats subdomains as separate sites — so this wasn't "duplicate metadata on the same site." It was a standalone subdomain with no metadata of its own, rendering the homepage's Carter Efe story as the description for an invoice tool.

The agent caught it because it reads the rendered DOM. A scraper reads raw HTML — it would have seen an empty description and flagged the absence, not the leak. The agent saw what a real browser sees: the homepage content dynamically injected after JavaScript executed.

The agent found it automatically, on the first run, and proposed a fix.

That's the actual value. Not the character counts. The automated surfacing of a problem that was invisible to a quick look at the page.


Two full audit passes plus the rewrite run: under $0.15 total. Four pages, three runs, one afternoon.

The free core at dannwaneri/seo-agent handles the audit and the basic report. The PDF report, with per-page screenshots, severity-sorted issues, and the rewrite suggestions, is in the Pro layer. The cost curve runs in both.

Top comments (7)

jon_at_backboardio
Jonathan Murray

the playwright vs scraper distinction is the key insight here. a scraper reads raw HTML before js executes so it misses everything injected client-side. had the same issue where a dynamically loaded meta description was invisible to every audit tool we tried. the agent approach is slower but you're actually checking what google sees

dannwaneri
Daniel Nwaneri • Edited

The "what Google sees" framing is mostly right but worth stress-testing. Googlebot renders JavaScript, but it does so on a crawl delay — sometimes days after the initial crawl. So there's a window where Google has indexed the raw HTML version and the rendered version hasn't been processed yet. The Playwright agent reads the fully rendered DOM, which is closer to what Google eventually sees than what a scraper sees, but it's not identical to what Google indexed on a specific day. For the invoice subdomain case that gap didn't matter because the fix was in the static HTML head. For metadata that lives entirely in client-side JS, the timing question becomes real. What was the fix in your case — static head or did you have to restructure the rendering?

scott_morrison_39a1124d85
Knowband

Really sharp real-world example of how agent-based tooling surfaces issues that traditional scrapers completely miss. The rendered DOM insight and cost-efficient tiered approach make this feel practical, not just experimental

dannwaneri
Daniel Nwaneri

The practical vs experimental gap is the one worth closing. Running it on your own domains first is the only way to find out which side it falls on.

valentin_monteiro
Valentin Monteiro

The tiered routing is the right pattern. I run a similar setup with GSC data feeding into agents, deterministic checks handle 90% of pages for near-zero cost. Biggest win for us was also the rendered DOM angle: caught a Next.js site serving stale OG tags from a cached SSR layer that raw HTML checks never flagged.

dannwaneri
Daniel Nwaneri

The stale SSR cache case is one I hadn't hit yet and it's nastier than the client-side injection problem because it looks correct in raw HTML — the tag is there, it's just serving a cached version that no longer reflects the actual page. A scraper passes it, Playwright passes it, and the bug is invisible until you compare the rendered OG tag against the current page content. That's a different class of check entirely — not "is the tag present" but "does the tag content match the page." Is the GSC feed what surfaces the discrepancy for you, or do you catch it another way?

buildwithkoray
koray askin · keywordkick

This is exactly where most SEO tools break.

They show the problem, but they don’t tell you what actually matters or what to fix first.

I’ve been working on something similar from the “decision layer” side — instead of just audits, it tries to answer “what should I do next?” based on all data combined.

Curious — did your agent prioritize fixes or just surface them?