Watson Foglift

We Shipped 127 Programmatic Landing Pages. We Deleted 122 of Them Three Weeks Later. Here's What the Data Told Us.

In early March I shipped 127 vertical landing pages on our SaaS site. /for/plumbing, /for/funeral-home, /for/ice-cream-shops — one for every industry an agent could invent. The argument for it was the argument every programmatic SEO post makes: cast a wide net, own the long tail, ship faster than competitors.

On March 25 I deleted 122 of those pages in a single commit. Kept five: agencies, SaaS, ecommerce, startups, enterprise. The others collapsed to a 301 redirect pointing at /for/.
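
The mechanics of the collapse are a few lines in most frameworks. A minimal sketch, assuming a Next.js-style config (the keep-list is the only real data in it; note that Next.js issues a 308 for permanent redirects, which crawlers treat like the 301 above):

```ts
// next.config.ts -- sketch only, assuming a Next.js-style setup.
// Any /for/<slug> outside the keep-list is permanently redirected to /for.
const nextConfig = {
  async redirects() {
    return [
      {
        // negative lookahead: match every vertical except the five survivors
        source:
          '/for/:slug((?!agencies$|saas$|ecommerce$|startups$|enterprise$)[^/]+)',
        destination: '/for',
        permanent: true,
      },
    ];
  },
};

export default nextConfig;
```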

This is a short write-up of what the data said, because I think the programmatic SEO playbook is being sold harder than ever at exactly the moment it's getting worse, not better, at its actual job.

What 127 pages actually looked like

Each page was ~150 lines of JSX. Same hero component. Same three feature cards. Same CTA. The vertical-specific content was roughly four paragraphs injected from a template dictionary. The word "plumbing" or "veterinary" appeared maybe nine times. The structural depth was identical across every page.
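
A stripped-down sketch of the shape, with every component and prop name hypothetical:

```tsx
// VerticalLanding.tsx -- illustrative only; not our actual components.
type Vertical = {
  slug: string;         // "plumbing", "funeral-home", ...
  noun: string;         // "plumbing businesses", ...
  paragraphs: string[]; // the ~4 paragraphs injected from the dictionary
};

// The shared pieces, identical on all 127 pages:
const Hero = ({ title }: { title: string }) => <h1>{title}</h1>;
const FeatureCards = () => <section>{/* same three cards everywhere */}</section>;
const Cta = () => <a href="/signup">Start free</a>;

export function VerticalLanding({ v }: { v: Vertical }) {
  return (
    <main>
      <Hero title={`AI search visibility for ${v.noun}`} />
      {v.paragraphs.map((p, i) => (
        <p key={i}>{p}</p>
      ))}
      <FeatureCards />
      <Cta />
    </main>
  );
}
```

Swap the dictionary entry and you have a "new" page. That was the entire trick, and it turned out to be the entire problem.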

From a human-reader perspective this was thin. From a Google-crawler perspective it was thin with extra steps. From an AI-search perspective — which is the channel I was actually trying to optimize for — it was worse than thin, because AI engines penalize duplicate patterns far more aggressively than Google's ranker does. More on that below.

The signal that it was wrong

Three data points, in order of severity:

1. None of the 122 templated pages appeared in AI citations. Our in-house CLI queries ChatGPT, Perplexity, Claude, and Gemini against prompts like "GEO tools for SaaS" and "AI search for agencies" (the shape of that loop is sketched below this list). The post-prune baseline was ugly: cited zero times across 28 prompt-engine combinations. But the pre-prune runs were the more damning data point: none of the then-still-live templated vertical pages appeared in any cited set either. Zero demand for them from the engines, even when the vertical keyword was in the prompt.

2. Google indexed them but didn't rank them. Coverage hit near-full within two weeks. Clicks stayed at zero. This is the classic thin-content failure mode Google's Helpful Content Update (HCU) was built to penalize. HCU has shipped in multiple waves (September 2023, March 2024, and the core update of March 2025) and each wave demotes sites where a material fraction of URLs score low on "created primarily to attract search engine traffic." We were a poster child.

3. Our depth signals were being diluted. Aggarwal et al. (KDD 2024) measured LLM citation probability against content-depth signals across thousands of URLs and found a 33.36% citation lift from adding statistics and structural depth to target pages. Read in reverse, the same result says that stripping depth, or drowning good pages in templated ones, is a real, measurable loss. With 122 template-structured URLs in a sitemap of ~240, half the domain's footprint was dragging its average structural depth down. Entity-level authority scoring, which is how AI engines actually reason about sources, penalizes exactly that.
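
For point 1, the loop itself is nothing exotic. A minimal sketch, with queryEngine standing in for whatever per-engine client you use; the engines and example prompts match the list above, everything else is illustrative:

```ts
// citation-baseline.ts -- sketch of the citation check in point 1.
type Citation = { url: string };

const engines = ['chatgpt', 'perplexity', 'claude', 'gemini'];
const prompts = [
  'GEO tools for SaaS',
  'AI search for agencies',
  // ...the rest of the prompt set
];

async function queryEngine(engine: string, prompt: string): Promise<Citation[]> {
  // Stand-in: wire this to each engine's API, or a CLI that wraps them.
  // Returning [] keeps the sketch runnable end to end.
  return [];
}

async function baseline(domain: string): Promise<void> {
  let cited = 0;
  let total = 0;
  for (const engine of engines) {
    for (const prompt of prompts) {
      total += 1;
      const citations = await queryEngine(engine, prompt);
      if (citations.some((c) => new URL(c.url).hostname.endsWith(domain))) {
        cited += 1;
      }
    }
  }
  console.log(`cited in ${cited}/${total} prompt-engine combinations`);
}

void baseline('example.com'); // our post-prune run: 0/28
```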

Why AI engines punish programmatic content harder than Google does

This is the piece I didn't understand when I shipped the 127 pages and wish I had.

Google's ranker scores pages mostly independently. A thin URL can coexist with strong URLs on the same domain because PageRank and relevance are computed per-URL. Penalties cascade through algorithms like HCU but the unit of evaluation is still largely the page.

AI engines don't work that way. When an AI engine decides whether to cite a domain, it's doing something closer to entity-level reasoning over the training corpus:

  • Source-diversity check: how many different-looking pages does this domain contribute? A domain with 120 duplicate-structured pages looks like one page repeated 120 times, not 120 pages. (A toy version of this metric is sketched after this list.)
  • Authority-signal pooling: the citations, external mentions, and reputation signals for a domain get aggregated at the brand level, then diluted across the volume of URLs. Adding 122 low-authority URLs dilutes the per-URL authority of the 5 good ones.
  • Pattern suppression: models learn to distrust patterns that look like SEO spam during RLHF and alignment training. OtterlyAI's 2025 analysis of 100M+ AI-search citations found 94% of cited content was long-form and structurally distinct. Templated verticals fail both tests at once.
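
The source-diversity point is easy to make concrete. A toy fingerprint, not any engine's actual metric: hash each page's markup skeleton and count distinct hashes across the domain.

```ts
// diversity.ts -- toy metric; illustrates the idea, nothing more.
import { createHash } from 'node:crypto';

function structuralFingerprint(html: string): string {
  // keep only opening tag names, dropping all text and attributes,
  // so two pages rendered from the same template collapse to one key
  const skeleton = (html.match(/<[a-z][a-z0-9]*/gi) ?? []).join('');
  return createHash('sha256').update(skeleton).digest('hex');
}

function distinctStructures(pages: string[]): number {
  return new Set(pages.map(structuralFingerprint)).size;
}

// 122 pages rendered from one template plus 5 hand-written ones:
// distinctStructures(pages) comes out near 6, not 127.
```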

The net effect: programmatic SEO at scale actively suppresses your AI-search visibility. It's not neutral. It's negative.

What we kept and why

The five verticals that survived (agencies, SaaS, ecommerce, startups, enterprise) did so for one reason: they had enough real material to be genuinely different from each other. An agency's use case (multi-client reporting, white-label) has no structural overlap with an enterprise use case (procurement, SAML, SOC 2). The plumbing vs funeral-home distinction was cosmetic; the product value prop is identical across them.

After the prune:

  • sitemap dropped by 122 entries
  • crawl budget re-concentrated on the pages we actually care about
  • blog-post AEO scores moved up 2–5 points across the twelve pillar posts over the subsequent weeks as we layered in TOCs, FAQPage schema, and data tables (correlation with the prune, not proof, but directionally encouraging; a minimal FAQPage sketch follows)
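
Of those three layers, FAQPage schema is the most mechanical to add. A minimal sketch; the question is illustrative, the JSON-LD shape is standard schema.org:

```tsx
// FaqSchema.tsx -- minimal FAQPage JSON-LD, rendered into the page.
const faq = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'What is generative engine optimization?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'Optimizing content so AI search engines retrieve and cite it.',
      },
    },
  ],
};

export const FaqSchema = () => (
  <script
    type="application/ld+json"
    dangerouslySetInnerHTML={{ __html: JSON.stringify(faq) }}
  />
);
```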

The broader lesson

Programmatic SEO isn't dead. It works when each page has a real, structurally distinct answer to a real question. It's a content strategy, not a URL-generation strategy.

What's dead is programmatic SEO for AI search, if by "programmatic" you mean "127 pages from a template." AI engines are trained specifically to demote that pattern. The playbook that was borderline in 2022 is actively counterproductive in 2026.

The honest version of the workflow I'd recommend now:

  1. Identify the 5–10 verticals or use cases where your product genuinely has a different answer.
  2. Write those pages by hand. Cite real sources. Include real numbers.
  3. If you want breadth, put it in a glossary or a single pillar page that covers many terms at depth, not 100 URLs that each cover one shallowly.
  4. Measure per-URL AI citation, not just Google indexation. If a page isn't getting cited by AI engines 30 days after launch, it probably isn't earning its slot in your sitemap either (a trivial flagging sketch follows this list).
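
Step 4 reduces to a filter once you log citations per URL. A trivial sketch; the stats shape is hypothetical, fill it from wherever your citation data lives:

```ts
// prune-candidates.ts -- flags URLs that haven't earned a citation
// within 30 days of launch. Data shape is illustrative.
type UrlStats = { url: string; launched: Date; aiCitations30d: number };

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function pruneCandidates(stats: UrlStats[], now = new Date()): string[] {
  return stats
    .filter((s) => now.getTime() - s.launched.getTime() >= THIRTY_DAYS_MS)
    .filter((s) => s.aiCitations30d === 0)
    .map((s) => s.url);
}
```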

Sources

  • Aggarwal et al., "GEO: Generative Engine Optimization." KDD 2024.
  • OtterlyAI, "State of AI Search Citations 2025." Analysis of 100M+ AI citation instances.
  • Google Search Central: Helpful Content Update rollouts, 2023–2025.
  • Google March 2025 Core Update documentation.

If you're running a programmatic SEO experiment right now and your AI citation rate is flat, I'd be curious what your data looks like. We measured ours with foglift scan ai-check — open-source CLI, queries five AI engines, tells you where you're cited and where you're not. The honest version of the answer is usually "not where you thought."
