<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Apex Stack</title>
    <description>The latest articles on DEV Community by Apex Stack (@apex_stack).</description>
    <link>https://dev.to/apex_stack</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3815673%2F739caa7f-74b9-4f5a-9b0e-02cc41224c65.png</url>
      <title>DEV Community: Apex Stack</title>
      <link>https://dev.to/apex_stack</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/apex_stack"/>
    <language>en</language>
    <item>
      <title>Most Ad Audits Miss 5 of 7 Critical Dimensions. Here's the Framework I Built to Fix That.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Tue, 07 Apr 2026 13:15:35 +0000</pubDate>
      <link>https://dev.to/apex_stack/most-ad-audits-miss-5-of-7-critical-dimensions-heres-the-framework-i-built-to-fix-that-2nkm</link>
      <guid>https://dev.to/apex_stack/most-ad-audits-miss-5-of-7-critical-dimensions-heres-the-framework-i-built-to-fix-that-2nkm</guid>
      <description>&lt;p&gt;Most ad campaign audits are shallow. They check ROAS, maybe creative performance, and call it a day.&lt;/p&gt;

&lt;p&gt;But after auditing dozens of campaigns across Meta, Google, TikTok, and LinkedIn, I noticed the same pattern: the problems that bleed the most budget are hiding in dimensions nobody checks. Audience overlap silently inflates CPMs. Bid strategies contradict campaign objectives. Conversion tracking gaps make every other metric unreliable.&lt;/p&gt;

&lt;p&gt;So I built a structured 7-dimension audit framework, packaged it as a Claude Skill, and started using it on every campaign I touch. Here's how it works — and why most audits only scratch the surface.&lt;/p&gt;

&lt;h2&gt;The Problem: Partial Audits Lead to Partial Results&lt;/h2&gt;

&lt;p&gt;A typical ad audit looks like this: export your Meta Ads Manager report, sort by ROAS, kill the losers, scale the winners. Maybe check frequency if you're thorough.&lt;/p&gt;

&lt;p&gt;That approach misses &lt;strong&gt;five critical dimensions&lt;/strong&gt; that often have more impact on profitability than ROAS alone:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Audience overlap&lt;/strong&gt; — Are your ad sets bidding against each other? I've seen accounts where 40%+ of audiences overlap, artificially inflating CPMs by 15-30%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversion tracking integrity&lt;/strong&gt; — If your Meta Pixel isn't paired with Conversions API (CAPI), you're likely underreporting conversions by 20-30% post-iOS 14.5. Every decision built on that data is compromised.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bid strategy alignment&lt;/strong&gt; — A cost-cap bid on a campaign getting 15 conversions/week will spend erratically. The algorithm needs 50+ weekly conversions to optimize properly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Landing page experience&lt;/strong&gt; — A 5-second load time on mobile kills conversion rates, but the ad platform doesn't flag this. Your "low-performing" ad might actually have a landing page problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Budget allocation efficiency&lt;/strong&gt; — Equal budget across unequal performers is the most common waste pattern. Marginal ROAS analysis often reveals 20-30% of spend is better allocated elsewhere.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The 7-Dimension Campaign Health Framework&lt;/h2&gt;

&lt;p&gt;After systematizing audits across different platforms and verticals, I settled on 7 dimensions that together give a complete picture of campaign health. Each is scored 1-10, giving a total health score out of 70.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimension 1: Creative Performance &amp;amp; Fatigue&lt;/strong&gt;&lt;br&gt;
CTR trends over time, frequency thresholds, creative variant count, hook rates for video. The key insight: frequency above 3.0 almost always correlates with declining CTR. Most advertisers wait until frequency hits 5.0+ before refreshing — by then, you've wasted 2-3 weeks of budget on exhausted creatives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimension 2: Audience Quality &amp;amp; Overlap&lt;/strong&gt;&lt;br&gt;
Audience definition mapping, overlap estimation between ad sets, conversion rates by segment. The "overlap tax" — wasted spend from self-competition — is invisible in standard reporting. I've calculated it at 10-25% of total spend in poorly structured accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimension 3: Budget Allocation &amp;amp; Pacing&lt;/strong&gt;&lt;br&gt;
Marginal ROAS analysis, CBO vs. ABO appropriateness, dayparting opportunities, budget headroom testing. The question isn't "which campaign has the best ROAS?" — it's "where does the &lt;em&gt;next&lt;/em&gt; dollar generate the most return?"&lt;/p&gt;
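&lt;p&gt;The marginal-ROAS idea is easy to sketch. Here is a minimal Python example with invented campaign figures (the real skill works from your exported data): estimate marginal ROAS as the change in revenue over the change in spend between two periods, then rank campaigns by where the next dollar goes furthest.&lt;/p&gt;

```python
# Hypothetical sketch: marginal ROAS estimated from two recent spend levels.
# Campaign names and all figures are invented for illustration.
campaigns = {
    "A": {"spend": (500, 600), "revenue": (2000, 2380)},
    "B": {"spend": (500, 600), "revenue": (2400, 2460)},
}

# Marginal ROAS = change in revenue / change in spend between the two periods.
marginal = {
    name: (c["revenue"][1] - c["revenue"][0]) / (c["spend"][1] - c["spend"][0])
    for name, c in campaigns.items()
}
best = max(marginal, key=marginal.get)

print(marginal)  # {'A': 3.8, 'B': 0.6}
print(best)      # A
```

&lt;p&gt;Note that Campaign B has the better blended ROAS in the first period (4.8x vs. 4.0x), but A earns far more on the next dollar (3.8 vs. 0.6), so incremental budget belongs to A. That is exactly the distinction the paragraph above is making.&lt;/p&gt;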

&lt;p&gt;&lt;strong&gt;Dimension 4: Conversion Tracking &amp;amp; Attribution&lt;/strong&gt;&lt;br&gt;
Pixel + CAPI implementation, Event Match Quality scores, UTM consistency, cross-platform attribution conflicts. This dimension is the foundation — if your measurement is broken, every other optimization is built on sand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimension 5: Bid Strategy &amp;amp; Campaign Structure&lt;/strong&gt;&lt;br&gt;
Bid strategy matching to objectives, campaign consolidation score, Learning Limited status, ad set feeding levels. Meta's algorithm needs 50+ conversions per ad set per week to optimize properly. Most accounts have fragmented structures with ad sets getting 5-10 conversions weekly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimension 6: Landing Page &amp;amp; Post-Click Experience&lt;/strong&gt;&lt;br&gt;
Load time, message match between ad and page, mobile optimization, CTA clarity, form friction. This is where ad audits traditionally stop — but the post-click experience determines whether clicks become conversions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dimension 7: ROAS &amp;amp; Profitability Analysis&lt;/strong&gt;&lt;br&gt;
Blended vs. marginal ROAS, break-even calculation, new vs. returning customer splits, contribution margin per campaign. The final dimension ties everything together — are you actually making money, or is retargeting revenue masking a prospecting problem?&lt;/p&gt;
&lt;h2&gt;What a Full Audit Output Looks Like&lt;/h2&gt;

&lt;p&gt;Here's the structure the skill generates from your campaign data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Campaign Audit: [Brand Name]&lt;/span&gt;
&lt;span class="gu"&gt;### Overall Score: 48/70 (69%) — Needs Attention&lt;/span&gt;

| Dimension                        | Score | Priority | Top Action                           |
|----------------------------------|-------|----------|--------------------------------------|
| Creative Performance &amp;amp; Fatigue   | 6/10  | 🟡       | Refresh 3 creatives above freq 4.0  |
| Audience Quality &amp;amp; Overlap       | 5/10  | 🔴       | Merge 2 overlapping ad sets (38%)   |
| Budget Allocation &amp;amp; Pacing       | 7/10  | 🟡       | Shift $200/day from Campaign C → A  |
| Conversion Tracking              | 8/10  | 🟢       | Add CAPI for Purchase event         |
| Bid Strategy &amp;amp; Structure         | 6/10  | 🟡       | Consolidate ad sets in Learning Ltd |
| Landing Page Experience          | 4/10  | 🔴       | Fix mobile load time (4.2s → &amp;lt;3s)   |
| ROAS &amp;amp; Profitability             | 7/10  | 🟡       | Separate new vs returning reporting |

&lt;span class="gu"&gt;### Top 5 Priority Actions&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; Fix landing page mobile speed → Est. +18% conversion rate
&lt;span class="p"&gt;2.&lt;/span&gt; Merge overlapping audiences → Est. -22% CPM savings
&lt;span class="p"&gt;3.&lt;/span&gt; Refresh exhausted creatives → Est. +12% CTR recovery
&lt;span class="p"&gt;4.&lt;/span&gt; Consolidate underfed ad sets → Exit Learning Limited
&lt;span class="p"&gt;5.&lt;/span&gt; Implement CAPI for Purchase → +15% conversion attribution
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The skill generates this from raw campaign exports — CSV data, pasted tables, or even a manual description of your setup. No API keys needed. No subscriptions.&lt;/p&gt;

&lt;h2&gt;The Benchmark Problem (and How I Solved It)&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges in ad auditing is knowing what "good" looks like. A 1.2% CTR on Meta Feed might be excellent for B2B finance ($3.89 avg CPC) but mediocre for eCommerce ($1.12 avg CPC).&lt;/p&gt;

&lt;p&gt;The paid version includes industry benchmarks compiled from WordStream, Databox, Revealbot, and Varos data for 2025-2026. Here's a sample:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Average&lt;/th&gt;
&lt;th&gt;Good&lt;/th&gt;
&lt;th&gt;Excellent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Meta (Feed)&lt;/td&gt;
&lt;td&gt;CTR&lt;/td&gt;
&lt;td&gt;0.90%&lt;/td&gt;
&lt;td&gt;1.5%+&lt;/td&gt;
&lt;td&gt;2.5%+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;CPC&lt;/td&gt;
&lt;td&gt;$1.72&lt;/td&gt;
&lt;td&gt;&amp;lt;$1.20&lt;/td&gt;
&lt;td&gt;&amp;lt;$0.70&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Search&lt;/td&gt;
&lt;td&gt;CTR&lt;/td&gt;
&lt;td&gt;3.17%&lt;/td&gt;
&lt;td&gt;5%+&lt;/td&gt;
&lt;td&gt;8%+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TikTok&lt;/td&gt;
&lt;td&gt;Hook Rate (3s)&lt;/td&gt;
&lt;td&gt;30%&lt;/td&gt;
&lt;td&gt;45%+&lt;/td&gt;
&lt;td&gt;60%+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LinkedIn&lt;/td&gt;
&lt;td&gt;CPC&lt;/td&gt;
&lt;td&gt;$5.26&lt;/td&gt;
&lt;td&gt;&amp;lt;$4.00&lt;/td&gt;
&lt;td&gt;&amp;lt;$2.50&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Without benchmarks, you're scoring in a vacuum. A 2.0x ROAS sounds decent until you realize your break-even ROAS at 50% gross margin is... 2.0x. You're not making money — you're treading water.&lt;/p&gt;

&lt;p&gt;The break-even formula: &lt;strong&gt;Break-Even ROAS = 1 / Gross Margin&lt;/strong&gt; (margin as a decimal). At 70% margin, you break even at 1.43x. At 30% margin, you need 3.33x before a single ad dollar turns a profit. Most advertisers I've worked with don't have this number memorized — and they should.&lt;/p&gt;
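&lt;p&gt;In code, the formula is one line. A quick sketch using the margins from the paragraph above:&lt;/p&gt;

```python
def break_even_roas(gross_margin: float) -> float:
    """Break-even ROAS = 1 / gross margin, with margin as a decimal (0.5 = 50%)."""
    assert gross_margin > 0.0, "gross margin must be a positive decimal"
    return 1.0 / gross_margin

# The margins discussed in the article: 50%, 70%, 30%.
print(round(break_even_roas(0.50), 2))  # 2.0
print(round(break_even_roas(0.70), 2))  # 1.43
print(round(break_even_roas(0.30), 2))  # 3.33
```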

&lt;h2&gt;Why a Claude Skill Instead of a SaaS Tool?&lt;/h2&gt;

&lt;p&gt;The existing landscape for ad auditing tools is dominated by monthly SaaS subscriptions. Madgicx runs $44-166/month. Revealbot is $99+/month. Adzooma has a free tier but upsells aggressively.&lt;/p&gt;

&lt;p&gt;These tools are powerful, but they're overkill for many advertisers who need periodic audits, not always-on monitoring. A Claude Skill gives you the audit framework, the benchmarks, and the structured analysis — for a one-time $19 purchase. Run it whenever you need it. No recurring fees.&lt;/p&gt;

&lt;p&gt;The skill also covers what SaaS tools often skip: qualitative assessment of campaign structure, bid strategy alignment, and creative pipeline health. These require judgment, not just data aggregation.&lt;/p&gt;

&lt;h2&gt;Specialized Modules Beyond the Core Audit&lt;/h2&gt;

&lt;p&gt;The full skill includes five focused modules that go deeper on specific problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creative Fatigue Detector&lt;/strong&gt; — Feed it time-series creative data and it classifies each ad into lifecycle stages (Fresh → Mature → Fatiguing → Exhausted) with specific refresh recommendations.&lt;/p&gt;
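&lt;p&gt;A rough sketch of what such a lifecycle classification might look like. The thresholds below are my own assumptions, loosely based on the frequency guidance earlier in the article; the real detector works from time-series CTR data, not two point-in-time numbers:&lt;/p&gt;

```python
# Illustrative lifecycle classifier. Thresholds are assumptions for this sketch.
def lifecycle_stage(frequency: float, ctr_drop: float) -> str:
    """ctr_drop: fraction of CTR lost from the creative's peak (0.25 = down 25%)."""
    if frequency >= 4.0 and ctr_drop >= 0.30:
        return "Exhausted"   # refresh immediately
    if frequency >= 3.0 and ctr_drop >= 0.15:
        return "Fatiguing"   # queue a replacement creative
    if frequency >= 2.0:
        return "Mature"      # monitor weekly
    return "Fresh"

print(lifecycle_stage(1.4, 0.00))  # Fresh
print(lifecycle_stage(3.2, 0.20))  # Fatiguing
print(lifecycle_stage(4.5, 0.40))  # Exhausted
```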

&lt;p&gt;&lt;strong&gt;Audience Overlap Analyzer&lt;/strong&gt; — Maps audience definitions across ad sets, estimates overlap percentages, and calculates the "overlap tax" in wasted spend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget Allocation Optimizer&lt;/strong&gt; — Calculates marginal ROAS per campaign to find the optimal spend distribution with expected impact projections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling Readiness Assessment&lt;/strong&gt; — Evaluates six criteria (ROAS stability, CPA variance, Learning Limited status, frequency, conversion volume, creative freshness) to determine which campaigns can absorb more budget safely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weekly Performance Brief&lt;/strong&gt; — Generates a stakeholder-ready report with week-over-week comparisons. Saves 30-60 minutes of manual reporting every week.&lt;/p&gt;

&lt;h2&gt;Try It: Free Lite Version Available&lt;/h2&gt;

&lt;p&gt;The lite version covers 3 of 7 dimensions (Creative Fatigue, Budget Allocation, ROAS &amp;amp; Profitability) — enough to find the biggest problems in most accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Free lite version:&lt;/strong&gt; &lt;a href="https://github.com/apex-stack-ai/ad-performance-auditor-lite" rel="noopener noreferrer"&gt;Ad Performance Auditor Lite on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full 7-dimension version ($19):&lt;/strong&gt; &lt;a href="https://apexstack.gumroad.com/l/ad-performance-auditor" rel="noopener noreferrer"&gt;Ad Performance Auditor on Gumroad&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're building a financial data product and want to see programmatic SEO at scale, check out how I'm &lt;a href="https://stockvs.com/en/stock/aapl" rel="noopener noreferrer"&gt;analyzing stock performance data across 8,000+ tickers&lt;/a&gt; — the same data-driven methodology applies to ad campaign analysis.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;The Ad Performance Auditor pairs well with two other tools in the Apex Stack lineup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://apexstack.gumroad.com/l/landing-page-copywriter" rel="noopener noreferrer"&gt;Landing Page Copywriter &amp;amp; CRO Auditor&lt;/a&gt;&lt;/strong&gt; — Ads drive the traffic, but landing pages convert it. This skill handles Dimension 6 (post-click experience) in depth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://apexstack.gumroad.com/l/content-pipeline" rel="noopener noreferrer"&gt;Content Pipeline Builder&lt;/a&gt;&lt;/strong&gt; — If you're running content-driven acquisition campaigns alongside paid, this keeps your organic funnel organized.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Performance marketing isn't just about spending more on what works. It's about systematically finding and fixing the leaks across all seven dimensions — creative, audience, budget, tracking, structure, post-click, and profitability.&lt;/p&gt;

&lt;p&gt;The framework is public. The benchmarks are in the skill. The question is whether you'll keep running partial audits or start catching the problems hiding in the dimensions nobody checks.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://apexstack.gumroad.com" rel="noopener noreferrer"&gt;Apex Stack&lt;/a&gt; — tools and frameworks for builders who ship.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>marketing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>My Site Has 8,000 Pages and Zero Real Backlinks. Here's the Claude Skill I Built to Fix That.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Tue, 24 Mar 2026 13:12:18 +0000</pubDate>
      <link>https://dev.to/apex_stack/my-site-has-8000-pages-and-zero-real-backlinks-heres-the-claude-skill-i-built-to-fix-that-2koi</link>
      <guid>https://dev.to/apex_stack/my-site-has-8000-pages-and-zero-real-backlinks-heres-the-claude-skill-i-built-to-fix-that-2koi</guid>
      <description>&lt;h1&gt;
  
  
  My Site Has 8,000 Pages and Zero Real Backlinks. Here's the Claude Skill I Built to Fix That.
&lt;/h1&gt;

&lt;p&gt;Eight thousand pages. Twelve languages. Two and a half years of data. And Google has indexed exactly 1,335 of them.&lt;/p&gt;

&lt;p&gt;I spent weeks debugging the problem — thin content, crawl budget, hreflang issues. Fixed them all. Then Bing Webmaster Tools told me the real answer in three words: &lt;strong&gt;not enough inbound links&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;No backlinks. Zero. Not one domain pointing at StockVS with any authority. Google doesn't trust a site nobody else trusts. It's that simple, and that brutal.&lt;/p&gt;

&lt;p&gt;The fix is obvious: build backlinks. The execution is where everyone stalls.&lt;/p&gt;




&lt;h2&gt;Why Link Building Dies on the To-Do List&lt;/h2&gt;

&lt;p&gt;I know the playbook. Find guest post opportunities. Research resource pages. Spot broken links. Draft personalized outreach. Follow up. Repeat.&lt;/p&gt;

&lt;p&gt;I also know why it never gets done.&lt;/p&gt;

&lt;p&gt;To find 10 good link-building targets, you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Google "write for us" + your niche across 20 variations&lt;/li&gt;
&lt;li&gt;Check each site's DA, traffic, and editorial quality&lt;/li&gt;
&lt;li&gt;Confirm they're still accepting submissions (half these pages are dead)&lt;/li&gt;
&lt;li&gt;Find the right contact — editor email, submission form, Twitter DM&lt;/li&gt;
&lt;li&gt;Write a personalized pitch that doesn't sound like every other cold email they ignore&lt;/li&gt;
&lt;li&gt;Do this for 10 targets minimum before you see any response&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's 4–6 hours of manual work for a pipeline that may produce one accepted pitch. Most people do it once, get discouraged, and go back to writing more content that still won't rank.&lt;/p&gt;

&lt;p&gt;I did it too. I set up a scheduled task that ran twice a week to research targets and draft pitches. It worked — I've now got 27 targets identified and several pitches ready to send. But it took weeks to tune, and the logic was buried in a custom agent prompt that nobody else could reuse.&lt;/p&gt;

&lt;p&gt;So I packaged it.&lt;/p&gt;




&lt;h2&gt;What the Backlink Prospector &amp;amp; Outreach Drafter Does&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Backlink Prospector &amp;amp; Outreach Drafter&lt;/strong&gt; is a Claude Skill that turns the entire link-building research-and-outreach workflow into a 10-minute session instead of a 4-hour grind.&lt;/p&gt;

&lt;p&gt;Here's what it actually does when you run it:&lt;/p&gt;

&lt;h3&gt;1. Niche-Targeted Opportunity Research&lt;/h3&gt;

&lt;p&gt;You give it your domain, your niche, and your target audience. It runs structured searches across the high-value opportunity types that actually move the needle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Guest post targets&lt;/strong&gt; — blogs and publications accepting contributor content in your space&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource page links&lt;/strong&gt; — curated lists that link to tools, guides, and data sites like yours&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Broken link reclamation&lt;/strong&gt; — pages linking to dead content you could replace&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Directory submissions&lt;/strong&gt; — niche and general-purpose directories with real DA&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each target it finds, it evaluates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Domain authority / trust signals&lt;/li&gt;
&lt;li&gt;Whether submissions are currently open (not just a zombie "write for us" page)&lt;/li&gt;
&lt;li&gt;The right contact method (form, email, Twitter — whatever's live)&lt;/li&gt;
&lt;li&gt;Whether your content genuinely fits their audience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You get a prioritized list, not a dump of 50 targets you'd need to manually vet.&lt;/p&gt;

&lt;h3&gt;2. Personalized Outreach Drafts — Not Templates&lt;/h3&gt;

&lt;p&gt;This is the part that usually takes the most time. Generic cold pitches go straight to the trash. The skill drafts outreach that's specific to each target:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;References a recent article or topic they published (shows you actually read the site)&lt;/li&gt;
&lt;li&gt;Proposes a specific angle that fits their audience — not just "I'd love to write for you"&lt;/li&gt;
&lt;li&gt;Matches the tone of the publication (technical blogs get technical pitches; indie hacker communities get builder-to-builder framing)&lt;/li&gt;
&lt;li&gt;Includes your credentials in context — what makes you qualified to write this piece for their audience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a real example from my StockVS outreach campaign. Instead of sending:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Hi, I'd love to contribute a guest post about SEO to your blog..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The skill drafted something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Hi — I noticed your recent piece on programmatic content strategies. I'm building StockVS, an 8,000-page stock data platform in 12 languages, and I've been documenting the indexing and crawl budget challenges at scale. Happy to write a detailed piece on multilingual hreflang implementation for large sites — it's a gap in your existing content and something your readers building content sites would find immediately useful."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The second pitch gets responses. The first one doesn't.&lt;/p&gt;

&lt;h3&gt;3. A Tracker You'll Actually Use&lt;/h3&gt;

&lt;p&gt;Every session outputs a clean prospect tracker — status (researched / pitched / accepted / published), DA, contact info, angle proposed, and a follow-up date. It appends to a running outreach file so you never lose track of where each pitch stands.&lt;/p&gt;




&lt;h2&gt;Real Numbers From My StockVS Campaign&lt;/h2&gt;

&lt;p&gt;After a few weeks of running the underlying workflow, here's what the numbers actually look like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;27 targets identified&lt;/strong&gt; across guest posts, resource pages, and directories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5 pitches submitted&lt;/strong&gt; (freeCodeCamp, SitePoint, GrowthHackers, Envato Tuts+, Smashing Magazine)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;6 outreach drafts ready to send&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1 HackerNoon article&lt;/strong&gt; drafted (DA 82 — with direct deep-links to StockVS pages)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time per research session:&lt;/strong&gt; ~15 minutes with the skill vs. 3–4 hours manually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Zero accepted links yet — link building is a long game and most pipelines take 4–6 weeks to produce results. But the pipeline is built. Without the skill, I'd still be on target #3.&lt;/p&gt;




&lt;h2&gt;Who This Is For&lt;/h2&gt;

&lt;p&gt;If any of these describe you, this skill will save you real hours:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Indie site builders&lt;/strong&gt; — you have content, you need authority, you hate cold outreach&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SEO professionals&lt;/strong&gt; — you're doing this for clients and need to 10x your prospecting volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solopreneurs&lt;/strong&gt; — you know link building matters but it always falls to the bottom of the list&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small agencies&lt;/strong&gt; — you can't justify $99/mo for Ahrefs just for prospecting; this does the research layer for $19 one-time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The skill works best for content sites, SaaS tools, developer tools, and any niche where guest posts and resource pages are the primary link-building vector.&lt;/p&gt;




&lt;h2&gt;What You Get&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Paid version ($19):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full opportunity research across all 4 target types (guest posts, resource pages, broken links, directories)&lt;/li&gt;
&lt;li&gt;Personalized pitch drafts with site-specific research baked in&lt;/li&gt;
&lt;li&gt;Prioritized prospect scoring (DA, fit, effort-to-reward)&lt;/li&gt;
&lt;li&gt;Running outreach tracker with follow-up dates&lt;/li&gt;
&lt;li&gt;Pitch templates for cold email, Twitter DM, and web forms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Free lite version (GitHub):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guest post prospecting only (1 of 4 target types)&lt;/li&gt;
&lt;li&gt;Basic outreach template (not personalized)&lt;/li&gt;
&lt;li&gt;Good for testing the workflow before committing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;The Math&lt;/h2&gt;

&lt;p&gt;A single accepted guest post on a DA 70+ site can move your domain rating by 3–5 points. A few of those and Google starts trusting your site enough to crawl deeper. With 8,000 pages sitting in "Discovered - not indexed," that crawl trust is the only thing standing between me and actual search traffic.&lt;/p&gt;

&lt;p&gt;The skill doesn't guarantee placements — nothing does. But it removes the friction that makes link building the task that never gets started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get the full version:&lt;/strong&gt; &lt;a href="https://apexstack.gumroad.com/l/backlink-prospector" rel="noopener noreferrer"&gt;apexstack.gumroad.com/l/backlink-prospector&lt;/a&gt; — $19, one-time&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the free lite first:&lt;/strong&gt; &lt;a href="https://github.com/apex-stack-ai/backlink-prospector-lite" rel="noopener noreferrer"&gt;github.com/apex-stack-ai/backlink-prospector-lite&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building StockVS in public — follow along on &lt;a href="https://dev.to/apex_stack"&gt;Dev.to&lt;/a&gt; for weekly updates on what's working and what isn't.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Google Isn't the Only Search Engine That Matters Anymore. Here's How I Optimize for All of Them.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Fri, 20 Mar 2026 13:12:46 +0000</pubDate>
      <link>https://dev.to/apex_stack/google-isnt-the-only-search-engine-that-matters-anymore-heres-how-i-optimize-for-all-of-them-2d10</link>
      <guid>https://dev.to/apex_stack/google-isnt-the-only-search-engine-that-matters-anymore-heres-how-i-optimize-for-all-of-them-2d10</guid>
      <description>&lt;p&gt;Last month, I checked where my site's traffic was actually coming from. Google Search? Sure — a few clicks. But here's what caught my attention: people were finding my pages through Perplexity, ChatGPT Browse, and Bing Copilot. Not by ranking in blue links — by being &lt;strong&gt;cited in AI-generated answers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The shift hit me hard. I'd spent weeks optimizing title tags, meta descriptions, and keyword density. Classic SEO. But the game has changed. AI search engines don't show you a list of 10 links. They synthesize an answer and cite their sources. If your content isn't structured for citation, you're invisible in the fastest-growing search channel.&lt;/p&gt;

&lt;h2&gt;The Problem: Traditional SEO Doesn't Get You Cited&lt;/h2&gt;

&lt;p&gt;Here's what I mean by "the new invisible." I run &lt;a href="https://stockvs.com" rel="noopener noreferrer"&gt;StockVS&lt;/a&gt;, a financial data platform with 8,000+ stock analysis pages across 12 languages. Google has indexed about 1,580 of them. Impressions are slowly climbing. Classic SEO is working — slowly.&lt;/p&gt;

&lt;p&gt;But AI search engines play by different rules. When someone asks Perplexity "What is AAPL's dividend yield?" or Bing Copilot "Compare tech sector stocks," these engines don't just rank pages. They extract specific facts, quote sentences, and cite sources. If your content has the answer buried in paragraph 14 of a wall of text, the AI skips you entirely.&lt;/p&gt;

&lt;p&gt;I tested this with my own pages. I asked Perplexity about stocks I knew I had detailed analysis for. Result? Not cited. The data was there, but it was wrapped in narrative paragraphs that no AI engine would bother to extract.&lt;/p&gt;

&lt;p&gt;Traditional SEO tools couldn't help here. Ahrefs tells me about backlinks. Semrush shows keyword rankings. Neither of them can tell me whether my content is &lt;strong&gt;citable by an AI engine&lt;/strong&gt;. That's a completely different optimization problem.&lt;/p&gt;

&lt;h2&gt;What Actually Makes Content AI-Citable&lt;/h2&gt;

&lt;p&gt;After weeks of research and hands-on testing across my 8,000+ pages, I identified 8 specific dimensions that determine whether AI search engines will cite your content:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Direct Answer Density&lt;/strong&gt; — How many standalone, quotable facts exist per section? AI engines extract concise statements, not paragraphs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Data Signals&lt;/strong&gt; — Schema markup (JSON-LD), heading hierarchy, semantic HTML. AI crawlers rely on these to understand what your page is about.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Authority Signals&lt;/strong&gt; — Author credentials, outbound links to authoritative sources, editorial policies. The E-E-A-T dimension, but for machines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query-Intent Alignment&lt;/strong&gt; — Does your first paragraph answer the query? AI engines check the first 100 words before deciding to cite you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Freshness &amp;amp; Temporal Signals&lt;/strong&gt; — &lt;code&gt;dateModified&lt;/code&gt; in schema, "Last updated" on the page, current-year references. AI engines weight recency heavily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal Richness&lt;/strong&gt; — Tables, charts, comparison data with proper alt text. AI engines increasingly process structured visuals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic Depth&lt;/strong&gt; — Topical coverage completeness. Does your page cover all subtopics an expert would expect?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Competitive Citation Position&lt;/strong&gt; — Are competitors already being cited for your target queries? What are they doing that you're not?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I turned this framework into a scoring system: each dimension gets a 1-10 score, and the total tells you how AI-citable your content is. Below 40/80? Your content is essentially invisible to AI search. Above 65? You're in strong citation territory.&lt;/p&gt;
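&lt;p&gt;A tiny sketch of that banding, using the thresholds from this paragraph (how exact boundary values like 40 and 65 are handled is my assumption):&lt;/p&gt;

```python
# Bands from the article: under 40/80 is essentially invisible, 65+ is strong.
# The middle label mirrors the wording of the sample audit in the next section.
def citability_band(total: int) -> str:
    if total >= 65:
        return "strong citation territory"
    if total >= 40:
        return "moderate citability"
    return "essentially invisible to AI search"

print(citability_band(48))  # moderate citability
```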

&lt;h2&gt;How the GEO Optimizer Works in Practice&lt;/h2&gt;

&lt;p&gt;Let me show you what this looks like on a real page. I ran the audit on my stock analysis page for AAPL (&lt;code&gt;stockvs.com/en/stock/aapl&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GEO Audit Results: stockvs.com/en/stock/aapl
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Direct Answer Density:    7/10  ✅ Strong — financial data is naturally factual
Structured Data Signals:  8/10  ✅ JSON-LD schema present, good heading hierarchy
Source Authority:         4/10  ⚠️ No author byline, few outbound links
Query-Intent Alignment:   6/10  ⚠️ Answer not in first paragraph
Freshness Signals:        8/10  ✅ dateModified present, data refreshed weekly
Multimodal Richness:      7/10  ✅ Tables for financials, comparison data
Semantic Depth:           5/10  ⚠️ Missing analyst ratings, earnings timeline
Competitive Position:     3/10  ❌ Not cited yet — competitors dominating
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOTAL: 48/80 — Moderate citability. Key gaps: authority + competitive.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The audit immediately flagged what I'd been missing. My data was accurate and fresh, but I had no authority signals (no author bio, no editorial policy) and my content didn't lead with the answer. A human scanning the page could find what they needed. An AI engine extracting quotes? It was bouncing.&lt;/p&gt;

&lt;p&gt;The skill then generates specific rewrites. For example, it took my opening paragraph:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt; "Apple Inc. (AAPL) is a technology company headquartered in Cupertino, California. The company designs, manufactures, and markets consumer electronics..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And produced:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt; "Apple (AAPL) trades at $198.42 with a market cap of $3.04T as of March 2026. The stock's dividend yield is 0.49% ($0.96/share annually), and its trailing P/E ratio is 32.7. AAPL is the largest holding in the Technology sector, representing 7.2% of the S&amp;amp;P 500."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The second version is packed with extractable facts. Every sentence can stand alone as an AI answer. That's what Perplexity and ChatGPT Browse want to cite.&lt;/p&gt;

&lt;p&gt;Beyond content rewrites, the skill generates schema templates tailored to your page type — FAQPage markup for question-based content, HowTo schemas for guides, and proper Article schemas with &lt;code&gt;dateModified&lt;/code&gt; that AI crawlers prioritize when deciding freshness.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Results on My Own Site
&lt;/h2&gt;

&lt;p&gt;I applied GEO optimization to 50 of my highest-impression stock pages. Here's what changed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Direct Answer Density&lt;/strong&gt; went from an average of 4.2/10 to 7.8/10 after restructuring content with facts-first paragraphs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema scores&lt;/strong&gt; improved across the board after adding FAQPage markup and &lt;code&gt;dateModified&lt;/code&gt; to every page&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query-Intent Alignment&lt;/strong&gt; jumped when I added "Key Facts" boxes and TL;DR sections at the top of each page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The checklist approach is what makes it repeatable. Before publishing any page now, I run through the GEO quick reference: first paragraph answers the primary query, every statistic has a date, FAQ section at the bottom, schema markup present, outbound links to authoritative sources. Ten minutes per page beats publishing and hoping AI engines notice you.&lt;/p&gt;

&lt;p&gt;I'm also using it to monitor which AI crawlers are hitting my pages — the skill includes a reference table of all major AI engine user agents (PerplexityBot, ChatGPT-User, ClaudeBot, Bingbot) so you know who's actually crawling your content.&lt;/p&gt;
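&lt;p&gt;Spotting those crawlers in your own access logs takes only a few lines. A hypothetical log-scan sketch using the four user agents named above (the log format and the parsing are my assumptions):&lt;/p&gt;

```python
# Hypothetical log scan: count hits from known AI crawler user agents.
# The four agent strings are the ones named in the reference table above;
# the log format and parsing are my assumptions.
AI_AGENTS = ["PerplexityBot", "ChatGPT-User", "ClaudeBot", "Bingbot"]

def ai_crawler_hits(log_lines):
    counts = dict.fromkeys(AI_AGENTS, 0)
    for line in log_lines:
        for agent in AI_AGENTS:
            if agent in line:
                counts[agent] += 1
    return counts

sample = [
    '1.2.3.4 - - "GET /en/stock/aapl" 200 "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '5.6.7.8 - - "GET /en/stock/nvda" 200 "Mozilla/5.0 (compatible; PerplexityBot/1.0)"',
]
print(ai_crawler_hits(sample))
```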

&lt;h2&gt;
  
  
  Who This Is For
&lt;/h2&gt;

&lt;p&gt;If you're running a content site, a SaaS marketing blog, an affiliate site, or any web property that depends on search traffic — GEO is no longer optional. AI search is already pulling traffic from traditional results, and it's accelerating.&lt;/p&gt;

&lt;p&gt;The GEO Optimizer is especially useful if you're doing programmatic SEO (hundreds or thousands of similar pages) because you can audit one page template and apply the fixes across your entire site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the free version first:&lt;/strong&gt; The lite edition runs the 8-dimension audit and gives you a citability score. It shows you exactly where your content falls short.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/apex-stack-ai/geo-optimizer-lite" rel="noopener noreferrer"&gt;&lt;strong&gt;Get the free lite version on GitHub&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want the full toolkit?&lt;/strong&gt; The paid version ($19) includes the content rewriter, schema template generator, competitive GEO analysis, and the monthly monitoring checklist — everything you need to systematically optimize your entire site for AI search.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://apexstack.gumroad.com/l/geo-optimizer" rel="noopener noreferrer"&gt;&lt;strong&gt;Get GEO Optimizer on Gumroad — $19&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building a portfolio of online businesses and sharing everything I learn along the way. Follow me for more on programmatic SEO, AI automation, and building in public.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>seo</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Stop Checking 5 Dashboards Every Morning. I Built an AI Agent That Does It for Me.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Tue, 17 Mar 2026 13:12:30 +0000</pubDate>
      <link>https://dev.to/apex_stack/stop-checking-5-dashboards-every-morning-i-built-an-ai-agent-that-does-it-for-me-1hdc</link>
      <guid>https://dev.to/apex_stack/stop-checking-5-dashboards-every-morning-i-built-an-ai-agent-that-does-it-for-me-1hdc</guid>
      <description>&lt;p&gt;Every morning, my routine looked like this: open Google Search Console, check indexing numbers, switch to Google Analytics, pull session data, hop to Gumroad for sales metrics, check Dev.to stats for article performance, then Yandex Webmaster for international crawl data. Five tabs, five logins, forty-five minutes — before I’d written a single line of code or published anything.&lt;/p&gt;

&lt;p&gt;I’m building &lt;a href="https://stockvs.com" rel="noopener noreferrer"&gt;stockvs.com&lt;/a&gt;, a programmatic SEO site serving stock analysis across 8,000+ tickers in 12 languages. I also sell Claude Skills on &lt;a href="https://apexstack.gumroad.com" rel="noopener noreferrer"&gt;Gumroad&lt;/a&gt;. Between the two properties, the number of metrics I need to track every day was eating my entire morning.&lt;/p&gt;

&lt;p&gt;So I built an AI agent to do it for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Cost of Dashboard Hopping
&lt;/h2&gt;

&lt;p&gt;If you’re running even one side project, you probably know the drill. Google Search Console takes 2-3 minutes to load and navigate. Google Analytics has a new UI every quarter that moves everything around. Gumroad’s analytics are behind a few clicks. And if you’re cross-posting content to Dev.to, Medium, and Hashnode, that’s three more dashboards to check.&lt;/p&gt;

&lt;p&gt;Here’s what it actually costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;45 minutes per day&lt;/strong&gt; on metric checking alone (not acting on the data — just collecting it)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context switching tax&lt;/strong&gt; — by the time you’ve checked everything, you’ve lost your focus for deep work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decision paralysis&lt;/strong&gt; — you see 15 metrics but don’t know which one matters today&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missed patterns&lt;/strong&gt; — you check each dashboard in isolation, so you never spot cross-platform trends&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For solopreneurs and small teams, this is death by a thousand tabs. You built a product to escape the 9-to-5, and now you spend your mornings doing data entry.&lt;/p&gt;

&lt;h2&gt;
  
  
  What If Your AI Assistant Already Knew the Numbers?
&lt;/h2&gt;

&lt;p&gt;I wanted something dead simple: tell Claude “morning briefing” and get back a consolidated view of every property, every channel, every metric that matters — with the top 3 things I should actually do today.&lt;/p&gt;

&lt;p&gt;That’s what Revenue Agent Pro does. It’s a Claude Skill that turns your Claude session into a daily business operations assistant. Instead of you going to the data, the data comes to you.&lt;/p&gt;

&lt;p&gt;Here’s what a daily briefing looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Daily Growth Briefing — March 17, 2026&lt;/span&gt;

&lt;span class="gu"&gt;## Priority Actions Today&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; Fix sectors 403 error on DO Spaces — blocking Google from
   indexing sector hub pages (est. impact: +200 indexed pages)
&lt;span class="p"&gt;2.&lt;/span&gt; Publish cross-post of Article 7 to remaining platforms —
   content is written, just needs distribution
&lt;span class="p"&gt;3.&lt;/span&gt; Check Gumroad conversion funnel — 7 articles live with CTAs
   but 0 store visits, diagnose the CTA placement

&lt;span class="gu"&gt;## Metrics Snapshot&lt;/span&gt;
| Property      | Yesterday | 7-Day Avg | Trend |
|---------------|-----------|-----------|-------|
| StockVS (GSC) | 36 imp    | 5.1/day   | up    |
| Gumroad       | 0 views   | 0/day     | flat  |
| Dev.to        | ~70 views | ~10/day   | up    |

&lt;span class="gu"&gt;## Revenue Progress&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; MTD: $0 / $200 target (0%)
&lt;span class="p"&gt;-&lt;/span&gt; Pace: Behind — need first sale this week
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No tab switching. No login juggling. Just: here’s what happened, here’s what matters, here’s what to do about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Seven Capabilities That Replace Your Dashboard Routine
&lt;/h2&gt;

&lt;p&gt;Revenue Agent Pro isn’t just a briefing generator. It’s a full operations layer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Daily Growth Briefing&lt;/strong&gt; — The morning summary shown above. Pulls from your revenue data, content calendar, and SEO metrics to generate a prioritized action plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Revenue Dashboard&lt;/strong&gt; — Multi-property income tracking with a Python-based RevenueTracker class. Log revenue by property, channel, and date. Get monthly summaries, channel breakdowns, and trend analysis that tells you if you’re growing, flat, or declining.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;tracker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RevenueTracker&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;tracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_property&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;StockVS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;affiliate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;tracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_property&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Gumroad&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;products&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;tracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log_revenue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Gumroad&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;products&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;29.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2026-03-15&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monthly_summary&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
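&lt;p&gt;If you want to see the shape of that class, here is a minimal stand-in that makes the snippet above runnable. This is my sketch of the described behavior, not the Pro implementation:&lt;/p&gt;

```python
from collections import defaultdict

# Minimal stand-in for the skill's RevenueTracker so the usage above runs.
# My sketch of the described behavior, not the Pro implementation.
class RevenueTracker:
    def __init__(self):
        self.channels = {}   # property name mapped to its channel list
        self.entries = []    # (property, channel, amount, date) tuples

    def add_property(self, name, channels):
        self.channels[name] = list(channels)

    def log_revenue(self, prop, channel, amount, date):
        assert channel in self.channels[prop], "unknown channel"
        self.entries.append((prop, channel, amount, date))

    def monthly_summary(self, month="2026-03"):
        totals = defaultdict(float)
        for prop, channel, amount, date in self.entries:
            if date.startswith(month):
                totals[prop] += amount
        return dict(totals)

tracker = RevenueTracker()
tracker.add_property("Gumroad", ["products"])
tracker.log_revenue("Gumroad", "products", 29.00, "2026-03-15")
print(tracker.monthly_summary())  # {'Gumroad': 29.0}
```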



&lt;p&gt;&lt;strong&gt;3. Content Calendar&lt;/strong&gt; — Plan publication dates, track cross-posting schedules, and never forget which platform still needs this week’s article. I run 3 platforms (Dev.to, Medium, Hashnode) and publish 2-3 articles per week — without a calendar, things slip through the cracks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. SEO Monitor&lt;/strong&gt; — Track pages indexed, crawl health, traffic trends, and keyword positions over time. It calculates your index health score automatically and flags when something goes wrong (like my indexing regression from 2,246 to 1,917 pages — a decline I caught early because the agent flagged it).&lt;/p&gt;
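&lt;p&gt;The regression flag is simple arithmetic. A minimal sketch of that check, with an alert threshold of my own choosing:&lt;/p&gt;

```python
# Hypothetical index-health check mirroring the regression described above
# (2,246 indexed pages falling to 1,917). The 5% alert threshold is my choice.
def index_health(history, alert_pct=-5.0):
    """history: chronological indexed-page counts from GSC exports."""
    prev, curr = history[-2], history[-1]
    pct = round(100.0 * (curr - prev) / prev, 1)
    regression = min(pct, alert_pct) == pct  # true when the drop reaches the threshold
    return pct, regression

pct, regression = index_health([2301, 2246, 1917])
print(pct, regression)  # -14.6 True
```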

&lt;p&gt;&lt;strong&gt;5. Promotion Scheduler&lt;/strong&gt; — Set up recurring promotions on a rotation. Product A gets promoted on Monday, Product B on Wednesday, Product C on Friday. The scheduler tells you what’s due today so no product goes stale.&lt;/p&gt;
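&lt;p&gt;A rotation like that is a few lines of Python. The product names and weekday mapping below are illustrative, not the skill’s actual configuration:&lt;/p&gt;

```python
import datetime

# Hypothetical sketch of the Mon/Wed/Fri rotation described above;
# product names and the weekday mapping are illustrative.
ROTATION = {0: "Product A", 2: "Product B", 4: "Product C"}  # Mon, Wed, Fri

def due_today(day=None):
    day = day or datetime.date.today()
    return ROTATION.get(day.weekday(), "nothing scheduled")

print(due_today(datetime.date(2026, 3, 16)))  # 2026-03-16 is a Monday
```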

&lt;p&gt;&lt;strong&gt;6. Competitor Watch&lt;/strong&gt; — Log competitor moves and get alerted when positioning changes. Useful when you’re in a crowded niche and need to react fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Weekly Review Generator&lt;/strong&gt; — Every Monday, get an automated performance summary: wins, revenue trends, content published, SEO progress, and next week’s priorities. This is the one I use most — it forces accountability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before vs. After: What Changed for Me
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before Revenue Agent Pro:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;45 minutes every morning checking dashboards&lt;/li&gt;
&lt;li&gt;Decisions based on whichever metric I looked at last&lt;/li&gt;
&lt;li&gt;Missed cross-platform patterns (like Dutch pages driving all my GSC impressions, but English content getting all my Dev.to traffic)&lt;/li&gt;
&lt;li&gt;Weekly reviews took 2 hours to compile manually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 minutes: read the briefing, confirm priorities, start working&lt;/li&gt;
&lt;li&gt;Decisions based on consolidated data with trend arrows&lt;/li&gt;
&lt;li&gt;Cross-property insights surfaced automatically&lt;/li&gt;
&lt;li&gt;Weekly reviews generated in seconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The math is simple. At 40 minutes saved per day, that’s over 4.5 hours per week back for building, writing, and shipping. Over a month, that’s nearly 20 hours: half a working week of pure data collection eliminated, for a one-time $19 purchase.&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s Built From Real Operations, Not Theory
&lt;/h2&gt;

&lt;p&gt;Most productivity tools are built by people who’ve never actually run a multi-property online business. Revenue Agent Pro came directly from the operational workflows I built for managing StockVS (100K+ pages across 12 languages) and an Apex Stack digital product store with 6 live products.&lt;/p&gt;

&lt;p&gt;Every template, every tracking class, every briefing format was tested against real data — real GSC numbers, real Gumroad analytics, real content calendars with real deadlines.&lt;/p&gt;

&lt;p&gt;The daily routine, weekly routine, and monthly routine automations aren’t aspirational. They’re what I actually run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Free, Then Go Pro
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;free lite version&lt;/strong&gt; gives you daily briefings, basic revenue tracking, and the priority action framework — enough to replace your morning dashboard routine immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/apex-stack-ai/revenue-agent-lite" rel="noopener noreferrer"&gt;Try Revenue Agent Lite free on GitHub&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you’re ready for multi-property dashboards, SEO monitoring, content calendars, promotion scheduling, and automated weekly reviews:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://apexstack.gumroad.com/l/revenue-agent" rel="noopener noreferrer"&gt;Get Revenue Agent Pro — $19 on Gumroad&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install it as a Claude Skill, say “morning briefing,” and take your mornings back.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://apexstack.gumroad.com" rel="noopener noreferrer"&gt;Apex Stack&lt;/a&gt; — from real experience growing online properties from $0 to revenue.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>automation</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>I'm Optimizing 100,000 Pages for AI Search Engines, Not Just Google. Here's My Playbook.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Sat, 14 Mar 2026 14:32:51 +0000</pubDate>
      <link>https://dev.to/apex_stack/im-optimizing-100000-pages-for-ai-search-engines-not-just-google-heres-my-playbook-48gp</link>
      <guid>https://dev.to/apex_stack/im-optimizing-100000-pages-for-ai-search-engines-not-just-google-heres-my-playbook-48gp</guid>
      <description>&lt;p&gt;Google AI Overviews now appear in more than 1 out of every 4 search results. For long-tail queries — the exact type that programmatic SEO sites target — that number jumps above 50%.&lt;/p&gt;

&lt;p&gt;I run &lt;a href="https://stockvs.com" rel="noopener noreferrer"&gt;StockVS.com&lt;/a&gt;, a programmatic SEO site with 100,000+ pages covering stock analysis, sector breakdowns, and ETF data across 12 languages. When I started building it, I optimized exclusively for traditional Google rankings. Page titles, meta descriptions, schema markup, internal linking — the standard playbook.&lt;/p&gt;

&lt;p&gt;That playbook is no longer enough. AI search engines like Google's AI Overviews, ChatGPT with browsing, Perplexity, and others are reshaping how people find information. They don't just rank pages — they synthesize answers. And if your content isn't structured to be cited by these systems, you're invisible in the new search landscape.&lt;/p&gt;

&lt;p&gt;Here's how I'm adapting 100,000 pages for a world where AI does the reading first.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: AI Overviews Eat Your Click
&lt;/h2&gt;

&lt;p&gt;Here's what happens now when someone searches "NVDA stock analysis 2026":&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Google shows an AI Overview at the top — a synthesized answer pulling from multiple sources&lt;/li&gt;
&lt;li&gt;Below that, maybe some People Also Ask boxes&lt;/li&gt;
&lt;li&gt;Then the traditional blue links&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI Overview answers the question well enough that many users never scroll down. For programmatic SEO sites that depend on long-tail traffic, this is an existential shift. You can rank on page 1 and still get zero clicks because the AI summary already gave the user what they needed.&lt;/p&gt;

&lt;p&gt;I noticed this pattern in my own Search Console data. Impressions were climbing in certain query buckets, but click-through rates were declining. People were seeing my pages in search results but not clicking through — because the AI Overview had already answered their question.&lt;/p&gt;

&lt;h2&gt;
  
  
  GEO: The New Optimization Layer
&lt;/h2&gt;

&lt;p&gt;The SEO community has started calling this "Generative Engine Optimization" or GEO. It's the practice of structuring your content so that AI systems are more likely to cite it when generating answers.&lt;/p&gt;

&lt;p&gt;This isn't about tricking AI. It's about making your data so clear, so structured, and so authoritative that when an AI needs to answer a financial question, your page becomes the obvious source to cite.&lt;/p&gt;

&lt;p&gt;Here's what I've changed across my 100,000+ pages.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Structured Data Becomes Non-Negotiable
&lt;/h3&gt;

&lt;p&gt;I was already using schema markup — &lt;code&gt;FinancialProduct&lt;/code&gt;, &lt;code&gt;FAQPage&lt;/code&gt;, &lt;code&gt;BreadcrumbList&lt;/code&gt;. But for AI search, I've gone deeper.&lt;/p&gt;

&lt;p&gt;Every stock page on StockVS now includes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"@type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FinancialProduct"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AAPL Stock Analysis"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Apple Inc. stock analysis with key financials, valuation metrics, and sector comparison"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"@type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Organization"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"StockVS"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"offers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"@type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Offer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"price"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"priceCurrency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"USD"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But I've also added explicit &lt;code&gt;dateModified&lt;/code&gt; timestamps to every page, &lt;code&gt;author&lt;/code&gt; markup linking to the site's editorial policy, and &lt;code&gt;about&lt;/code&gt; schema connecting each stock page to its sector and industry.&lt;/p&gt;

&lt;p&gt;Why? AI systems use structured data as a trust signal. When Perplexity or Google's AI Overview needs to decide which source to cite for "NVDA P/E ratio," the page with clean, machine-readable schema wins.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Direct-Answer Formatting
&lt;/h3&gt;

&lt;p&gt;AI Overviews pull from content that directly answers questions. I restructured every stock page to lead with key metrics in a scannable format before diving into analysis.&lt;/p&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Apple Inc. (AAPL) is a technology company that designs, manufactures, and markets smartphones..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I now start with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AAPL Key Metrics (March 2026)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Market Cap: $3.2T&lt;/li&gt;
&lt;li&gt;P/E Ratio: 28.4&lt;/li&gt;
&lt;li&gt;Dividend Yield: 0.54%&lt;/li&gt;
&lt;li&gt;52-Week Range: $169.21 – $260.10&lt;/li&gt;
&lt;li&gt;Sector: Technology&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then the analysis follows. This formatting makes it trivial for AI systems to extract and cite specific data points. Every page becomes a structured data card that AI can pull from.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. FAQ Sections That AI Actually Cites
&lt;/h3&gt;

&lt;p&gt;I've always had FAQ schema on my pages, but I reworked the questions to match how people actually query AI assistants.&lt;/p&gt;

&lt;p&gt;Old approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What is the P/E ratio of AAPL?"&lt;/li&gt;
&lt;li&gt;"Is AAPL a good investment?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;New approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"How does AAPL's valuation compare to the Technology sector average?"&lt;/li&gt;
&lt;li&gt;"What are the key risks for Apple stock in 2026?"&lt;/li&gt;
&lt;li&gt;"Should I buy AAPL at its current price?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference: the new questions match how people phrase queries to ChatGPT, Perplexity, and Google's conversational search. When an AI system encounters these questions in your FAQ schema, it's more likely to cite your answer in its generated response.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Unique Data Points as Citation Magnets
&lt;/h3&gt;

&lt;p&gt;Here's the biggest insight: AI systems preferentially cite pages that contain unique, quantitative data that other sources don't have.&lt;/p&gt;

&lt;p&gt;Every stock page on StockVS generates unique analysis using financial data from yfinance combined with a local Llama 3 model. This means my AAPL page doesn't just repeat the same data as Yahoo Finance — it includes proprietary analysis, custom comparisons within the sector, and valuation assessments that exist nowhere else on the web.&lt;/p&gt;

&lt;p&gt;For AI citation, unique data is the moat. If your page says the same thing as 50 other pages, the AI has no reason to cite you specifically. If your page contains a unique analysis or data point, the AI must cite you to reference that information.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Cross-Language as an AI Advantage
&lt;/h3&gt;

&lt;p&gt;One of the most underappreciated aspects of AI search: multilingual content creates citation opportunities across language barriers.&lt;/p&gt;

&lt;p&gt;When someone asks ChatGPT about a stock in German, the AI pulls from German-language sources. Most financial analysis sites are English-only. StockVS covers &lt;a href="https://stockvs.com/en/stocks/" rel="noopener noreferrer"&gt;stocks&lt;/a&gt;, &lt;a href="https://stockvs.com/en/stocks/sectors/" rel="noopener noreferrer"&gt;sectors&lt;/a&gt;, and &lt;a href="https://stockvs.com/en/etfs/" rel="noopener noreferrer"&gt;ETFs&lt;/a&gt; across 12 languages — and in several of those languages, we're one of very few sources with comprehensive financial analysis.&lt;/p&gt;

&lt;p&gt;My Search Console data confirms this: Dutch and German pages consistently generate more impressions per page than English ones. In AI search, this advantage compounds — there are simply fewer high-quality German-language financial analysis pages for AI to cite.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Implementation
&lt;/h2&gt;

&lt;p&gt;Here's what the pipeline looks like for optimizing 100,000 pages for AI search:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Layer (Supabase PostgreSQL)&lt;/strong&gt;&lt;br&gt;
→ 8,000+ ticker records with live financial data from yfinance&lt;br&gt;
→ Sector/industry/ETF relationship mappings&lt;br&gt;
→ Historical price data and calculated metrics&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content Generation (Local Llama 3)&lt;/strong&gt;&lt;br&gt;
→ Template-based analysis with unique data-driven insights per ticker&lt;br&gt;
→ FAQ generation matching conversational search patterns&lt;br&gt;
→ Cross-language content with localized financial terminology&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schema Layer (Astro Static Build)&lt;/strong&gt;&lt;br&gt;
→ JSON-LD structured data generated at build time&lt;br&gt;
→ &lt;code&gt;FinancialProduct&lt;/code&gt;, &lt;code&gt;FAQPage&lt;/code&gt;, &lt;code&gt;BreadcrumbList&lt;/code&gt;, &lt;code&gt;Organization&lt;/code&gt;&lt;br&gt;
→ &lt;code&gt;dateModified&lt;/code&gt; auto-updated per data refresh cycle&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delivery (Cloudflare CDN)&lt;/strong&gt;&lt;br&gt;
→ Sub-second page loads globally&lt;br&gt;
→ Edge-cached static HTML — no JavaScript rendering required&lt;br&gt;
→ This matters because AI crawlers heavily penalize slow or JS-dependent pages&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm Measuring
&lt;/h2&gt;

&lt;p&gt;Traditional SEO metrics don't capture the full picture anymore. Here's what I'm tracking:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;CTR at position&lt;/strong&gt; — If I'm ranking position 5-10 and CTR drops, it likely means an AI Overview is absorbing clicks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impression-to-click ratio by query type&lt;/strong&gt; — Long-tail financial queries vs. branded queries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema validation rate&lt;/strong&gt; — Percentage of pages passing Google's Rich Results test&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI citation monitoring&lt;/strong&gt; — Searching key queries in Perplexity and ChatGPT to check if StockVS pages are cited&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language-specific AI visibility&lt;/strong&gt; — Checking AI responses in German, Dutch, Polish for stock queries&lt;/li&gt;
&lt;/ol&gt;
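&lt;p&gt;For the first metric, the bucketing is straightforward. A sketch of computing CTR per position bucket from a GSC export (the row format and bucket edges are my assumptions):&lt;/p&gt;

```python
# Hypothetical GSC-export analysis for metric 1 above: CTR by position bucket.
# A falling CTR at stable positions suggests an AI Overview is absorbing clicks.
# Row format and bucket edges are my assumptions.
def ctr_by_bucket(rows):
    """rows: (query, impressions, clicks, avg_position) tuples."""
    buckets = {"1-4": [0, 0], "5-10": [0, 0], "11+": [0, 0]}
    for query, imp, clicks, pos in rows:
        p = int(pos)
        key = "1-4" if p in range(1, 5) else "5-10" if p in range(5, 11) else "11+"
        buckets[key][0] += imp
        buckets[key][1] += clicks
    # CTR as a percentage per bucket, one decimal place
    return {k: round(100.0 * c / i, 1) if i else 0.0 for k, (i, c) in buckets.items()}

rows = [("nvda stock analysis 2026", 400, 6, 7),
        ("aapl dividend yield", 250, 20, 3),
        ("msft p/e ratio", 120, 1, 14)]
print(ctr_by_bucket(rows))  # {'1-4': 8.0, '5-10': 1.5, '11+': 0.8}
```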

&lt;p&gt;I don't have a perfect measurement system for AI citations yet — nobody does. But directionally, I can see which optimizations move the needle.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Not Working
&lt;/h2&gt;

&lt;p&gt;Transparency time: some things I've tried haven't panned out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aggressive FAQ expansion&lt;/strong&gt; didn't help as much as I expected. Adding 20 FAQs per page didn't increase AI citations — having 5 really well-structured, data-rich FAQs performed better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trying to "game" AI Overviews&lt;/strong&gt; by stuffing exact-match questions into headings backfired. The content felt unnatural and the quality signals degraded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Over-optimizing meta descriptions for AI&lt;/strong&gt; was a waste. AI systems read the full page content, not just the meta description. The meta description matters for traditional CTR, not for AI citation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Playbook Summary
&lt;/h2&gt;

&lt;p&gt;If you're running a programmatic SEO site, here's the minimum viable GEO stack for 2026:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Schema everything&lt;/strong&gt; — &lt;code&gt;dateModified&lt;/code&gt;, &lt;code&gt;author&lt;/code&gt;, &lt;code&gt;about&lt;/code&gt;, domain-specific types. Make your data machine-readable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lead with data&lt;/strong&gt; — Put key facts and metrics above the fold in scannable formats. AI extracts from the top of your content first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Match conversational queries&lt;/strong&gt; — Rewrite FAQs to match how people ask AI assistants, not how they type into Google.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate unique analysis&lt;/strong&gt; — Original data points are your citation moat. If your page says what everyone else's says, AI won't cite you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go multilingual&lt;/strong&gt; — Non-English AI search is wide open. If your data works in other languages, translate and localize it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure AI visibility&lt;/strong&gt; — Check Perplexity, ChatGPT, and Google AI Overviews manually for your key queries. The tools for automated tracking are coming but aren't reliable yet.&lt;/li&gt;
&lt;/ol&gt;
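&lt;p&gt;Point 1 is the most mechanical to implement. A rough sketch in Python (the field names follow schema.org, but the &lt;code&gt;page&lt;/code&gt; dict is a hypothetical shape, not the StockVS data model):&lt;/p&gt;

```python
import json

def build_article_schema(page: dict) -> str:
    """Build a JSON-LD Article block with the fields AI systems read.

    The property names follow schema.org; the `page` dict is an
    illustrative shape, not a real data model.
    """
    schema = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": page["title"],
        "dateModified": page["updated"],  # freshness signal
        "author": {"@type": "Organization", "name": page["author"]},
        "about": {"@type": "Thing", "name": page["topic"]},  # topical entity
    }
    return json.dumps(schema, indent=2)

print(build_article_schema({
    "title": "AAPL Stock Analysis",
    "updated": "2026-01-15",
    "author": "StockVS",
    "topic": "Apple Inc.",
}))
```

&lt;p&gt;The output drops into a &lt;code&gt;&amp;lt;script type="application/ld+json"&amp;gt;&lt;/code&gt; tag in the page template.&lt;/p&gt;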

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;I'm building automated monitoring that checks whether StockVS pages appear in AI-generated answers for a rotating set of financial queries across all 12 languages. It's basically an "AI SERP tracker" — and I'll share the results when I have enough data.&lt;/p&gt;

&lt;p&gt;The shift from "rank on page 1" to "get cited by AI" is the biggest change to SEO since mobile-first indexing. For programmatic SEO at scale, it's both a threat and an opportunity. The sites that adapt their content structure for AI consumption first will have a massive head start.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building StockVS and documenting the entire journey. If you're into programmatic SEO, AI-powered content generation, or building data-driven web properties, I write about the real numbers — what works, what fails, and what the data actually says.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Resources I've built:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📘 &lt;a href="https://apexstack.gumroad.com/l/pseo-blueprint" rel="noopener noreferrer"&gt;Programmatic SEO Blueprint&lt;/a&gt; — The complete framework for building data-driven SEO sites at scale&lt;/li&gt;
&lt;li&gt;🔍 &lt;a href="https://apexstack.gumroad.com/l/pseo-auditor" rel="noopener noreferrer"&gt;Programmatic SEO Auditor&lt;/a&gt; — Claude skill that audits your pSEO site for indexing, content quality, and technical issues&lt;/li&gt;
&lt;li&gt;📊 &lt;a href="https://apexstack.gumroad.com/l/financial-analyzer" rel="noopener noreferrer"&gt;Financial Data Analyzer&lt;/a&gt; — Claude skill for pulling live stock data and running analysis&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Google Only Indexed 2% of My 100,000-Page Site. Here's What I'm Doing About It.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Fri, 13 Mar 2026 17:36:03 +0000</pubDate>
      <link>https://dev.to/apex_stack/google-only-indexed-2-of-my-100000-page-site-heres-what-im-doing-about-it-3jof</link>
      <guid>https://dev.to/apex_stack/google-only-indexed-2-of-my-100000-page-site-heres-what-im-doing-about-it-3jof</guid>
      <description>&lt;p&gt;I've been building &lt;a href="https://stockvs.com" rel="noopener noreferrer"&gt;StockVS&lt;/a&gt;, a multilingual stock analysis platform with over 100,000 pages covering 8,000+ US tickers across 12 languages. It's a programmatic SEO play — templatized pages generated from financial data and local LLM analysis.&lt;/p&gt;

&lt;p&gt;Here's the problem: Google has only indexed about 1,920 of those pages. That's roughly 2%.&lt;/p&gt;

&lt;p&gt;And it's getting worse.&lt;/p&gt;

&lt;h2&gt;The Numbers Don't Lie&lt;/h2&gt;

&lt;p&gt;Every few days I check Google Search Console and pull the indexing report. Here's what I'm looking at right now:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Pages&lt;/th&gt;
&lt;th&gt;What It Means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Crawled — not indexed&lt;/td&gt;
&lt;td&gt;51,061&lt;/td&gt;
&lt;td&gt;Google visited, read the page, and said "no thanks"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Discovered — not indexed&lt;/td&gt;
&lt;td&gt;28,016&lt;/td&gt;
&lt;td&gt;Google knows the URL exists but won't even bother crawling it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Indexed&lt;/td&gt;
&lt;td&gt;1,920&lt;/td&gt;
&lt;td&gt;The 2% that made the cut&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redirects&lt;/td&gt;
&lt;td&gt;2,648&lt;/td&gt;
&lt;td&gt;Pages I intentionally removed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The most painful line is "Crawled — not indexed." That means Googlebot actually spent its crawl budget visiting 51,000 pages, processed them, and decided they weren't worth indexing. That's not a discovery problem. That's a quality problem.&lt;/p&gt;

&lt;p&gt;Even worse: my indexed count dropped from 2,246 to 1,920 in one week. Google is actively de-indexing pages it previously accepted.&lt;/p&gt;
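&lt;p&gt;As a sanity check, the coverage split can be recomputed directly from the table (these are the GSC figures quoted above):&lt;/p&gt;

```python
# GSC coverage figures from the table above
report = {
    "crawled_not_indexed": 51_061,
    "discovered_not_indexed": 28_016,
    "indexed": 1_920,
    "redirects": 2_648,
}

known_urls = sum(report.values())           # every URL Google reports on
index_rate = report["indexed"] / known_urls

print(f"URLs in the report: {known_urls:,}")
print(f"Index rate: {index_rate:.1%}")      # just over 2% of known URLs
```

&lt;p&gt;Measured against the full 100,000+ page site rather than just the URLs in the report, the rate is the roughly 2% quoted above.&lt;/p&gt;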

&lt;h2&gt;Why Google Is Rejecting 98% of My Pages&lt;/h2&gt;

&lt;p&gt;After months of analyzing GSC data and reading every Google documentation page about indexing, I've identified three root causes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Thin Content Signals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My stock pages were originally 200-300 words of templated analysis. For a site with zero authority, that's not enough to convince Google that each page adds unique value. When Google's AI-first indexing systems see thousands of pages with similar structure and shallow content, it treats the whole domain as low-quality.&lt;/p&gt;

&lt;p&gt;I've since expanded pages to 600-800 words with more specific financial analysis per ticker, but the reputation damage takes time to reverse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Zero Domain Authority&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's a stat that surprised me: one audit I read about described a site with 8 million discovered pages but only 650,000 indexed. The reason? Google scales its indexing generosity with your domain's trust signals. No backlinks, no authority, no index.&lt;/p&gt;

&lt;p&gt;StockVS has zero backlinks registered in any tool I've checked. I've published five articles across Medium, Dev.to, and Hashnode to start building links, but that's a drop in the ocean compared to what's needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Crawl Budget Economics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google allocates crawl budget based on a site's perceived importance. When you have 100,000+ URLs competing for attention on a domain Google doesn't trust yet, you're burning crawl budget on pages Google will never index. It's a vicious cycle: low authority means less crawling, which means fewer indexed pages, which means less traffic, which means less authority.&lt;/p&gt;

&lt;h2&gt;What I'm Actually Doing About It&lt;/h2&gt;

&lt;p&gt;I'm not giving up on programmatic SEO. The math still works — if even 10% of my pages start ranking, that's 10,000 pages of organic traffic. But the approach has to change.&lt;/p&gt;

&lt;h3&gt;Strategy 1: Kill the Weakest Pages&lt;/h3&gt;

&lt;p&gt;I already removed all comparison pages — they were the thinnest content on the site and were actively diluting crawl budget. Those 2,648 redirects in my GSC report are the evidence. Sometimes the best SEO move is subtraction.&lt;/p&gt;

&lt;p&gt;The principle: fewer, better pages &amp;gt; more pages.&lt;/p&gt;

&lt;h3&gt;Strategy 2: Thicken Content Where It Matters&lt;/h3&gt;

&lt;p&gt;I'm adding unique sections to every stock page that aren't just reformatted data points. Things like related news, analyst ratings, earnings timelines, and market context sections that actually require specific data per ticker.&lt;/p&gt;

&lt;p&gt;The goal is to make Google's quality classifiers see each page as a legitimate analysis rather than a data table with a paragraph bolted on.&lt;/p&gt;

&lt;h3&gt;Strategy 3: Internal Linking Architecture&lt;/h3&gt;

&lt;p&gt;Programmatic SEO sites often have flat architecture where every page is equally connected (or disconnected). I'm building internal linking widgets — "Related Stocks," "Popular in This Sector," cross-links between stock pages, sector pages, and ETF pages.&lt;/p&gt;

&lt;p&gt;This helps Google understand the topical relationships between pages and distributes whatever authority the domain has more effectively.&lt;/p&gt;
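&lt;p&gt;A minimal sketch of the "Related Stocks" widget logic, assuming a flat list of &lt;code&gt;{ticker, sector}&lt;/code&gt; records (a simplification of the real data model):&lt;/p&gt;

```python
from collections import defaultdict

def related_stocks(pages: list, ticker: str, limit: int = 5) -> list:
    """Pick cross-link candidates for a stock page: sector peers first.

    `pages` is a hypothetical flat list of {ticker, sector} records.
    """
    by_sector = defaultdict(list)
    for p in pages:
        by_sector[p["sector"]].append(p["ticker"])
    target = next(p for p in pages if p["ticker"] == ticker)
    # Link to peers in the same sector, excluding the page itself
    peers = [t for t in by_sector[target["sector"]] if t != ticker]
    return peers[:limit]

pages = [
    {"ticker": "AAPL", "sector": "Technology"},
    {"ticker": "MSFT", "sector": "Technology"},
    {"ticker": "NVDA", "sector": "Technology"},
    {"ticker": "XOM", "sector": "Energy"},
]
print(related_stocks(pages, "AAPL"))  # → ['MSFT', 'NVDA']
```

&lt;p&gt;A real version would also rank peers by market cap or traffic so authority flows toward the pages most likely to rank.&lt;/p&gt;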

&lt;h3&gt;Strategy 4: Build Backlinks Through Content Marketing&lt;/h3&gt;

&lt;p&gt;Five articles are live across three platforms. Each one tells part of the StockVS story and naturally links back to the site. I'm planning more — the key is writing about the journey of building a large-scale site, which resonates with the developer and SEO communities.&lt;/p&gt;

&lt;p&gt;This article you're reading is part of that strategy.&lt;/p&gt;

&lt;h3&gt;Strategy 5: Focus on What's Working&lt;/h3&gt;

&lt;p&gt;Here's an interesting signal from my GSC data: non-English pages are getting more impressions than English ones. Dutch pages lead impressions, followed by German and Polish. The competition for "[ticker] analyse" in Dutch is dramatically lower than "[ticker] analysis" in English.&lt;/p&gt;

&lt;p&gt;This validates the multilingual approach. Instead of fighting for English keywords against sites with DR 70+, I can win in languages where the SERP is wide open.&lt;/p&gt;

&lt;h2&gt;The Uncomfortable Truth About Programmatic SEO in 2026&lt;/h2&gt;

&lt;p&gt;Google's indexing bar has never been higher. Their AI-first systems are more aggressive about filtering out content that doesn't demonstrate unique value. If you're generating thousands of pages, each one needs to earn its place in the index individually.&lt;/p&gt;

&lt;p&gt;The old playbook of "spin up 100k pages, submit sitemap, wait for traffic" doesn't work anymore. You need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Content depth&lt;/strong&gt; that goes beyond what any template can auto-generate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authority signals&lt;/strong&gt; (backlinks, brand searches, engagement metrics) that tell Google your domain is trustworthy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical hygiene&lt;/strong&gt; — clean sitemaps, proper canonicals, fast load times, no crawl traps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Patience&lt;/strong&gt; — Google's re-evaluation cycle for domains isn't fast, especially when you're recovering from thin content signals&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;I'm tracking this weekly. Every GSC report tells me whether the content thickening and backlink building is moving the needle. The leading indicators I'm watching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Crawled-not-indexed count&lt;/strong&gt; — if content quality is improving, this should decrease&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Indexed page count&lt;/strong&gt; — the north star metric&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impressions on non-English pages&lt;/strong&gt; — the multilingual arbitrage play&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain Rating in Ahrefs&lt;/strong&gt; — currently zero, need to see movement&lt;/li&gt;
&lt;/ul&gt;
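&lt;p&gt;The weekly check itself is simple enough to script. A sketch, using the two indexed-count snapshots quoted above (the metric names are mine, not GSC API field names):&lt;/p&gt;

```python
def weekly_deltas(prev: dict, curr: dict) -> dict:
    """Week-over-week change for each leading indicator."""
    return {metric: curr[metric] - prev[metric] for metric in curr}

# Snapshots from this post: indexed count fell from 2,246 to 1,920
last_week = {"indexed": 2_246, "crawled_not_indexed": 51_061, "ahrefs_dr": 0}
this_week = {"indexed": 1_920, "crawled_not_indexed": 51_061, "ahrefs_dr": 0}

for metric, change in weekly_deltas(last_week, this_week).items():
    print(f"{metric}: {change:+,}")
```

&lt;p&gt;Feeding these snapshots from the Search Console API instead of by hand turns the weekly review into an automated report.&lt;/p&gt;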

&lt;p&gt;If you're building a programmatic SEO site and hitting the same indexing wall, I'd love to hear what's working for you. The old "just build more pages" approach is dead. In 2026, it's about building pages that deserve to be indexed.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I write about building large-scale SEO sites, AI-powered content generation, and the tools I use to manage it all. If you're into programmatic SEO, check out my &lt;a href="https://apexstack.gumroad.com/l/pseo-blueprint" rel="noopener noreferrer"&gt;Programmatic SEO Blueprint&lt;/a&gt; — it covers the architecture, data pipelines, and multilingual strategy I use for StockVS.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For AI-powered SEO workflows, I've also built a set of &lt;a href="https://apexstack.gumroad.com" rel="noopener noreferrer"&gt;Claude Skills&lt;/a&gt; that handle everything from content auditing to cross-platform publishing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>webdev</category>
      <category>programming</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Agentic SEO Is Here: How I Use AI Agents to Manage a 100,000-Page Website</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Thu, 12 Mar 2026 16:22:27 +0000</pubDate>
      <link>https://dev.to/apex_stack/agentic-seo-is-here-how-i-use-ai-agents-to-manage-a-100000-page-website-4b2p</link>
      <guid>https://dev.to/apex_stack/agentic-seo-is-here-how-i-use-ai-agents-to-manage-a-100000-page-website-4b2p</guid>
      <description>&lt;p&gt;Everyone's talking about AI agents replacing DevOps engineers and writing code. But nobody's talking about what happens when you point AI agents at SEO.&lt;/p&gt;

&lt;p&gt;I run &lt;a href="https://stockvs.com" rel="noopener noreferrer"&gt;StockVS.com&lt;/a&gt;, a programmatic SEO site with over 100,000 pages covering 8,000+ stock tickers across 12 languages. Managing a site this size manually is impossible. So I built an agentic workflow that handles everything from content generation to cross-platform publishing to technical auditing.&lt;/p&gt;

&lt;p&gt;Here's exactly how the system works, what surprised me, and where AI agents still fall short.&lt;/p&gt;

&lt;h2&gt;What "Agentic SEO" Actually Means&lt;/h2&gt;

&lt;p&gt;There's a difference between using AI to write a blog post and using AI agents to &lt;em&gt;operate&lt;/em&gt; an SEO pipeline.&lt;/p&gt;

&lt;p&gt;Traditional AI-assisted SEO looks like this: you open ChatGPT, ask it to write an article, copy-paste it into WordPress, and manually optimize it. That's AI as a tool.&lt;/p&gt;

&lt;p&gt;Agentic SEO is different. You define the workflow once — data collection, content generation, quality checks, publishing, monitoring — and agents execute the entire pipeline autonomously. You shift from being the operator to being the architect.&lt;/p&gt;

&lt;p&gt;For StockVS, that pipeline looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Data Layer:     yfinance API → Supabase PostgreSQL (8,000+ tickers)
Content Layer:  Local Llama 3 → 600-800 word analysis per ticker
Build Layer:    Astro static site → 12 language variants
Distribution:   Cloudflare CDN → Digital Ocean Spaces
Monitoring:     Google Search Console → Custom dashboards
Publishing:     AI skill → Dev.to + Medium + Hashnode simultaneously
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each layer runs with minimal manual intervention. That's the shift.&lt;/p&gt;

&lt;h2&gt;Agent 1: Content Generation at Scale With a Local LLM&lt;/h2&gt;

&lt;p&gt;The backbone of StockVS is a local Llama 3 instance generating unique stock analysis for every ticker. Not generic filler — each page includes valuation metrics, financial health indicators, sector positioning, and a forward-looking assessment.&lt;/p&gt;

&lt;p&gt;I covered this in detail in &lt;a href="https://dev.to/apex_stack/how-i-use-a-local-llm-to-generate-seo-content-for-10000-pages-242j"&gt;a previous article&lt;/a&gt;, but the key insight is cost. Running Llama 3 locally means generating 100,000+ pages of content costs exactly $0 in API fees. The tradeoff is compute time and the work of building the pipeline, but once it's running, the marginal cost of adding a new ticker is near zero.&lt;/p&gt;

&lt;p&gt;What I'd do differently: I initially generated 300-400 word pages. Google flagged most of them as thin content (50,000+ pages "crawled but not indexed" in Search Console). After expanding to 600-800 words with more unique data points per page, the quality signal improved significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson: AI agents are only as good as the quality bar you set for them.&lt;/strong&gt; An agent that generates fast but shallow content will scale your problems, not your results.&lt;/p&gt;

&lt;h2&gt;Agent 2: Cross-Platform Publishing in Under a Minute&lt;/h2&gt;

&lt;p&gt;Writing articles is half the battle. The other half is getting them in front of people across multiple platforms without spending 30 minutes copy-pasting and reformatting.&lt;/p&gt;

&lt;p&gt;I built a publishing agent that takes a markdown article and pushes it to Dev.to, Hashnode, and Medium simultaneously. It handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Platform-specific formatting differences&lt;/li&gt;
&lt;li&gt;Canonical URL management (so Google doesn't see duplicate content)&lt;/li&gt;
&lt;li&gt;Tag optimization per platform&lt;/li&gt;
&lt;li&gt;Frontmatter translation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single command takes an article from draft to live on three platforms. The same article, properly formatted for each platform's quirks, with canonical URLs pointing back to the primary source.&lt;/p&gt;

&lt;p&gt;Before this, publishing a single article across three platforms took 25-30 minutes of manual work. Now it takes under 60 seconds.&lt;/p&gt;
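&lt;p&gt;The Dev.to leg is the simplest of the three. A sketch of the payload shaping (the Forem API accepts a &lt;code&gt;canonical_url&lt;/code&gt; field and up to four tags; the article values here are placeholders, not a real post):&lt;/p&gt;

```python
def build_devto_payload(title: str, body_md: str, canonical: str, tags: list) -> dict:
    """Shape an article for Dev.to's Forem API (POST /api/articles)."""
    return {
        "article": {
            "title": title,
            "body_markdown": body_md,
            "canonical_url": canonical,  # keeps cross-posts from competing with the original
            "tags": tags[:4],            # Dev.to allows at most 4 tags
            "published": True,
        }
    }

payload = build_devto_payload(
    "Agentic SEO Is Here",
    "Full markdown body...",
    "https://stockvs.com/blog/agentic-seo",  # hypothetical primary URL
    ["seo", "ai", "webdev", "programming", "automation"],
)
# Actual publish (requires an API key):
# requests.post("https://dev.to/api/articles", json=payload,
#               headers={"api-key": API_KEY})
print(payload["article"]["tags"])
```

&lt;p&gt;Hashnode's GraphQL mutation and Medium's browser automation wrap around the same normalized article object.&lt;/p&gt;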

&lt;p&gt;The &lt;a href="https://apexstack.gumroad.com/l/blog-cross-publisher" rel="noopener noreferrer"&gt;Blog Cross-Publisher&lt;/a&gt; is available as a Claude skill if you want to try this workflow yourself.&lt;/p&gt;

&lt;h2&gt;Agent 3: Programmatic SEO Auditing&lt;/h2&gt;

&lt;p&gt;When you have 100,000+ pages, you can't manually check each one for SEO issues. You need an agent that crawls your own site, identifies problems, and prioritizes fixes.&lt;/p&gt;

&lt;p&gt;My auditing agent checks for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missing or duplicate meta descriptions&lt;/li&gt;
&lt;li&gt;Thin content pages (under word count threshold)&lt;/li&gt;
&lt;li&gt;Broken internal links&lt;/li&gt;
&lt;li&gt;Missing schema markup&lt;/li&gt;
&lt;li&gt;Hreflang consistency across 12 languages&lt;/li&gt;
&lt;li&gt;Canonical tag mismatches (Google flagged 297 pages for this)&lt;/li&gt;
&lt;li&gt;Orphaned pages with no internal links&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output is a prioritized list of issues sorted by impact. Fix the top 10 items and you move the needle more than fixing 1,000 random pages.&lt;/p&gt;
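&lt;p&gt;The prioritization logic is the interesting part. A simplified sketch (the checks and impact weights here are illustrative, not the skill's actual scoring model):&lt;/p&gt;

```python
# Illustrative impact weights, not the skill's real scoring model
IMPACT = {"missing_meta": 2, "thin_content": 5, "orphaned": 4}

def audit_page(page: dict) -> list:
    """Run the cheap per-page checks and return (issue, impact) pairs."""
    issues = []
    if not page.get("meta_description"):
        issues.append(("missing_meta", IMPACT["missing_meta"]))
    if page.get("word_count", 0) < 600:  # the thin-content threshold used on StockVS
        issues.append(("thin_content", IMPACT["thin_content"]))
    if page.get("inbound_links", 0) == 0:
        issues.append(("orphaned", IMPACT["orphaned"]))
    return issues

def prioritized(pages: list) -> list:
    found = [(p["url"], issue, impact)
             for p in pages for issue, impact in audit_page(p)]
    return sorted(found, key=lambda item: -item[2])  # biggest impact first

pages = [
    {"url": "/stocks/aapl", "word_count": 750,
     "meta_description": "Apple stock analysis", "inbound_links": 12},
    {"url": "/stocks/xyz", "word_count": 240,
     "meta_description": "", "inbound_links": 0},
]
for url, issue, impact in prioritized(pages):
    print(url, issue, impact)
```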

&lt;p&gt;This is where agents genuinely outperform humans. No person can audit 100,000 pages for hreflang consistency. An agent does it in minutes.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://apexstack.gumroad.com/l/pseo-auditor" rel="noopener noreferrer"&gt;Programmatic SEO Auditor&lt;/a&gt; is the skill I packaged from this workflow.&lt;/p&gt;

&lt;h2&gt;Agent 4: Financial Data Analysis&lt;/h2&gt;

&lt;p&gt;StockVS pages need actual financial data, not just AI-generated text. Each stock page pulls live data through yfinance: price history, P/E ratios, dividend yields, revenue growth, debt levels, and more.&lt;/p&gt;

&lt;p&gt;An analysis agent processes this raw data and generates the narrative sections — "AAPL is trading at 28x earnings, above its 5-year average of 24x, suggesting the market is pricing in continued growth from its services segment."&lt;/p&gt;

&lt;p&gt;This is where combining structured data with LLM reasoning gets interesting. The agent isn't hallucinating numbers — it's reading real financial data from a database and generating contextual analysis around verified facts.&lt;/p&gt;
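&lt;p&gt;In miniature, that combination looks like this: verified numbers go in, a templated judgment comes out (the phrasing logic is a simplified stand-in for the actual agent):&lt;/p&gt;

```python
def valuation_sentence(ticker: str, pe: float, avg_pe_5y: float) -> str:
    """Turn two verified metrics into a narrative valuation sentence.

    The wording is a simplified stand-in for the real analysis agent.
    """
    if pe > avg_pe_5y:
        stance = "above its 5-year average, suggesting the market is pricing in continued growth"
    elif pe < avg_pe_5y:
        stance = "below its 5-year average, suggesting muted growth expectations"
    else:
        stance = "in line with its 5-year average"
    return f"{ticker} is trading at {pe:.0f}x earnings, {stance} (5y avg: {avg_pe_5y:.0f}x)."

print(valuation_sentence("AAPL", 28.0, 24.0))
```

&lt;p&gt;Because the inputs come from the database, the generated claim can always be traced back to a real number.&lt;/p&gt;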

&lt;p&gt;I open-sourced a lite version of this as the &lt;a href="https://apexstack.gumroad.com/l/financial-analyzer" rel="noopener noreferrer"&gt;Financial Data Analyzer&lt;/a&gt; skill.&lt;/p&gt;

&lt;h2&gt;The Hard Truth: What Agents Can't Do (Yet)&lt;/h2&gt;

&lt;p&gt;After 6 months of building agentic SEO workflows, here's where agents still struggle:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Editorial judgment.&lt;/strong&gt; Agents can generate content that's technically accurate and well-structured, but they can't tell you if a page is &lt;em&gt;interesting&lt;/em&gt;. The difference between a page that ranks #3 and one that ranks #30 is often the quality of insight, not the quality of grammar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Link building.&lt;/strong&gt; This is still fundamentally a human relationship activity. Agents can identify link opportunities and draft outreach emails, but the actual relationship — the "hey, I noticed your resource page" conversation — requires human nuance. My site has zero backlinks despite 100K+ pages, and no agent has solved that yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Strategic pivots.&lt;/strong&gt; When Google's indexing data showed 50,000+ pages as "crawled but not indexed," the decision to &lt;em&gt;remove&lt;/em&gt; comparison pages entirely and double down on stock analysis pages was a strategic call. Agents can surface the data, but the judgment to cut 200,000 pages requires understanding Google's signals in context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Understanding Google's mood.&lt;/strong&gt; Google's indexing behavior is partially opaque. Why did my Dutch pages get more impressions than English ones? Why did a copper industry page get a click before any individual stock page? Agents can track these patterns, but interpreting them still requires human pattern recognition.&lt;/p&gt;

&lt;h2&gt;The Real Numbers&lt;/h2&gt;

&lt;p&gt;Transparency matters more than hype, so here's where things actually stand:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Current&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total pages&lt;/td&gt;
&lt;td&gt;100,000+ across 12 languages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pages indexed by Google&lt;/td&gt;
&lt;td&gt;2,246 (2.1%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Search clicks (3 months)&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Impressions (3 months)&lt;/td&gt;
&lt;td&gt;2,180&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average position&lt;/td&gt;
&lt;td&gt;52.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backlinks&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Revenue&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Those numbers are humbling. And they're real.&lt;/p&gt;

&lt;p&gt;The indexing rate is the critical blocker. Google has crawled 50,000+ pages and rejected them, and there are another 51,000+ pages it has discovered but won't even crawl yet. Until the site builds authority through backlinks and proves content quality, the agentic pipeline is generating pages that Google isn't serving.&lt;/p&gt;

&lt;p&gt;But here's why I'm still optimistic: the pipeline is built. When indexing improves — through backlink building, content quality signals, and time — the system can scale instantly. I don't need to manually write 100,000 pages. The agent infrastructure is ready.&lt;/p&gt;

&lt;h2&gt;How to Start With Agentic SEO&lt;/h2&gt;

&lt;p&gt;If you want to experiment with agentic SEO workflows, start small:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Automate your publishing pipeline first. Cross-posting is the lowest risk, highest time-savings entry point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Build a site auditing agent. Even for a 50-page site, automated auditing catches things you'd miss manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Add data-driven content generation for one content type. Don't try to automate everything at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Monitor Google Search Console data programmatically. Build alerts for indexing drops, crawl errors, and ranking changes.&lt;/p&gt;
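&lt;p&gt;For step 4, even a few lines go a long way. A sketch of an indexing-drop alert (the 5% threshold is an arbitrary starting point, not a recommendation):&lt;/p&gt;

```python
def indexing_alert(history, drop_threshold=0.05):
    """Return an alert string if the indexed-page count fell more than
    drop_threshold week over week, else None. Threshold is illustrative."""
    if len(history) < 2:
        return None
    prev, curr = history[-2], history[-1]
    if prev and (prev - curr) / prev > drop_threshold:
        return f"Indexing drop: {prev:,} -> {curr:,} ({(curr - prev) / prev:+.1%})"
    return None

# The drop described in this post would have fired an alert:
print(indexing_alert([2_246, 1_920]))
```

&lt;p&gt;Wire the history list to the Search Console API and the alert string to Slack or email, and the monitoring loop runs itself.&lt;/p&gt;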

&lt;p&gt;The goal isn't to remove humans from SEO. It's to let humans focus on strategy and relationships while agents handle the repetitive execution at scale.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;I'm currently focused on three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Content thickening&lt;/strong&gt; — adding related news, analyst ratings, and earnings data to each stock page to increase the quality signal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal linking&lt;/strong&gt; — building cross-references between stock, sector, and ETF pages so Google can better understand site structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backlink building&lt;/strong&gt; — the one thing agents can't automate, and the biggest blocker to indexing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're building something similar, or if you've cracked the indexing challenge at scale, I'd love to hear about it in the comments. What's your experience with using AI agents for SEO tasks?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I build tools for programmatic SEO and AI-powered content workflows. Check out the &lt;a href="https://apexstack.gumroad.com" rel="noopener noreferrer"&gt;Apex Stack&lt;/a&gt; collection of Claude skills for SEO, publishing, and financial analysis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>seo</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>5 Claude Skills Every SEO Professional Needs in 2026</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Wed, 11 Mar 2026 20:40:46 +0000</pubDate>
      <link>https://dev.to/apex_stack/5-claude-skills-every-seo-professional-needs-in-2026-57j6</link>
      <guid>https://dev.to/apex_stack/5-claude-skills-every-seo-professional-needs-in-2026-57j6</guid>
      <description>&lt;p&gt;The SEO landscape shifted dramatically when AI coding agents went mainstream. Tools like Claude, Codex, and ChatGPT aren't just chatbots anymore — they're autonomous agents that can run scripts, analyze data, and execute complex workflows.&lt;/p&gt;

&lt;p&gt;But here's what most SEO professionals are missing: &lt;strong&gt;skills&lt;/strong&gt; (also called custom instructions or agent configurations) turn these general-purpose AIs into specialized SEO machines. Instead of explaining your workflow from scratch every session, a skill gives the agent pre-loaded context, proven frameworks, and ready-to-run scripts.&lt;/p&gt;

&lt;p&gt;I've been running a programmatic SEO site with 100,000+ pages across 12 languages. Over the past few months, I've built and battle-tested a set of Claude skills that handle everything from content generation at scale to backlink strategy. Here are the five that changed my workflow the most.&lt;/p&gt;

&lt;h2&gt;1. Programmatic SEO Auditor&lt;/h2&gt;

&lt;p&gt;If you're running a programmatic site — or thinking about building one — this is the skill you install first.&lt;/p&gt;

&lt;p&gt;The Programmatic SEO Auditor analyzes your site architecture for the specific patterns that make or break programmatic SEO: template quality scoring, internal linking coverage, schema markup completeness, crawl budget optimization, and content uniqueness across generated pages.&lt;/p&gt;

&lt;p&gt;What makes it different from generic SEO audit tools? It understands programmatic sites. Most audit tools flag every templated page as "duplicate content." This skill knows the difference between a well-structured template with unique data and actual thin content. It scores your templates on data richness, contextual uniqueness, and information gain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scores page templates for content quality and uniqueness&lt;/li&gt;
&lt;li&gt;Maps internal linking gaps across your page taxonomy&lt;/li&gt;
&lt;li&gt;Validates schema markup (JSON-LD) for every page type&lt;/li&gt;
&lt;li&gt;Identifies crawl budget waste from low-value URLs&lt;/li&gt;
&lt;li&gt;Generates a prioritized fix list with estimated impact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I originally built this to diagnose why Google was only indexing 2% of my pages. The audit identified that my comparison pages were dragging down the entire domain's quality signals. After removing them based on the audit findings, my index rate started climbing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it:&lt;/strong&gt; A &lt;a href="https://github.com/apex-stack-ai/pseo-auditor-lite" rel="noopener noreferrer"&gt;free lite version is on GitHub&lt;/a&gt; with basic auditing capabilities. The &lt;a href="https://apexstack.gumroad.com/l/pseo-auditor" rel="noopener noreferrer"&gt;full version ($19)&lt;/a&gt; adds multi-language support, crawl budget analysis, and generates a complete action plan.&lt;/p&gt;

&lt;h2&gt;2. Content Pipeline Builder&lt;/h2&gt;

&lt;p&gt;Generating one article with AI is easy. Generating 10,000 pages with consistent quality, proper data integration, and multi-language support? That's an engineering problem.&lt;/p&gt;

&lt;p&gt;The Content Pipeline Builder skill encodes the entire system I use to generate programmatic content at scale. It's not a collection of prompts — it's an architecture pattern that includes template design, data source integration, LLM prompting strategies, quality scoring, and output validation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it covers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Template system design with variable injection patterns&lt;/li&gt;
&lt;li&gt;Data pipeline architecture for pulling from APIs and databases&lt;/li&gt;
&lt;li&gt;LLM prompt engineering for consistent, factual output&lt;/li&gt;
&lt;li&gt;Quality scoring to catch hallucinations and thin content before publishing&lt;/li&gt;
&lt;li&gt;Multi-language content generation with cultural adaptation&lt;/li&gt;
&lt;li&gt;Scaling patterns that work from 100 to 100,000 pages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight baked into this skill is that content quality at scale isn't about better prompts — it's about better systems. The skill includes a &lt;code&gt;pipeline_runner.py&lt;/code&gt; script that orchestrates the full generation → validation → output cycle, plus a &lt;code&gt;template-patterns.md&lt;/code&gt; reference covering the most effective template architectures I've tested.&lt;/p&gt;
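&lt;p&gt;To make the generation → validation → output cycle concrete, here's a toy version. The template, record shape, and checks are illustrative, not the skill's actual &lt;code&gt;pipeline_runner.py&lt;/code&gt;:&lt;/p&gt;

```python
# Illustrative template with variable injection; not the skill's real one
TEMPLATE = "{name} ({ticker}) trades at {pe}x earnings with a {yield_pct}% dividend yield."

def generate(record: dict) -> str:
    return TEMPLATE.format(**record)

def validate(text: str, record: dict) -> bool:
    # Catch the failure modes that matter at scale: unfilled template
    # variables and numbers that don't match the source data.
    return "{" not in text and str(record["pe"]) in text

def run_pipeline(records: list) -> list:
    out = []
    for record in records:
        text = generate(record)
        if validate(text, record):  # only validated pages reach output
            out.append(text)
    return out

pages = run_pipeline([
    {"name": "Apple", "ticker": "AAPL", "pe": 28, "yield_pct": 0.5},
])
print(pages[0])
```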

&lt;p&gt;&lt;strong&gt;Get it:&lt;/strong&gt; This one is &lt;a href="https://apexstack.gumroad.com/l/content-pipeline" rel="noopener noreferrer"&gt;$25 on Gumroad&lt;/a&gt; with no free version — it's a complete system with production-tested code and architecture docs.&lt;/p&gt;

&lt;h2&gt;3. Blog Cross-Publisher&lt;/h2&gt;

&lt;p&gt;Content marketing drives backlinks. Backlinks drive rankings. But cross-posting the same article to Medium, Dev.to, and Hashnode manually — while managing canonical URLs correctly — eats an hour every time.&lt;/p&gt;

&lt;p&gt;The Blog Cross-Publisher skill automates the entire process. Feed it a markdown article, and it publishes to all three platforms with proper canonical URL attribution, platform-specific tag mapping, and formatting that respects each editor's quirks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tricky part it handles for you:&lt;/strong&gt; Medium deprecated their API for new users in 2023. Most automation tools simply don't work with Medium anymore. This skill includes a browser automation fallback that uses &lt;code&gt;insertText&lt;/code&gt; commands to work with Medium's editor directly — and it handles the gotchas like editor state sync that break most automation attempts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publishes to Dev.to via REST API (instant)&lt;/li&gt;
&lt;li&gt;Publishes to Hashnode via GraphQL API (instant)&lt;/li&gt;
&lt;li&gt;Publishes to Medium via browser automation (2-5 min)&lt;/li&gt;
&lt;li&gt;Manages canonical URLs across all platforms&lt;/li&gt;
&lt;li&gt;Maps tags to each platform's format and limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used this skill to publish the three articles that currently drive traffic to my projects. What used to take an hour per article now takes minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it:&lt;/strong&gt; &lt;a href="https://github.com/apex-stack-ai/blog-publisher-lite" rel="noopener noreferrer"&gt;Free lite version on GitHub&lt;/a&gt; handles Dev.to publishing. The &lt;a href="https://apexstack.gumroad.com/l/blog-cross-publisher" rel="noopener noreferrer"&gt;full version ($12)&lt;/a&gt; adds Medium, Hashnode, and canonical URL management.&lt;/p&gt;

&lt;h2&gt;4. Financial Data Analyzer&lt;/h2&gt;

&lt;p&gt;This one is for SEO professionals working in the finance niche — and there are a lot of you, because finance keywords have some of the highest RPMs in display advertising.&lt;/p&gt;

&lt;p&gt;The Financial Data Analyzer pulls stock data via yfinance (no expensive API subscriptions), generates analysis reports, compares tickers, and visualizes performance. If you're building content around stock analysis, dividend tracking, or sector comparisons, this skill turns Claude into a financial research assistant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pulls current and historical stock data via yfinance&lt;/li&gt;

&lt;li&gt;Generates comprehensive ticker analysis reports&lt;/li&gt;
&lt;li&gt;Compares multiple stocks with side-by-side metrics&lt;/li&gt;
&lt;li&gt;Tracks dividend history and calculates yield metrics&lt;/li&gt;
&lt;li&gt;Visualizes price trends, sector performance, and earnings data&lt;/li&gt;
&lt;/ul&gt;
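&lt;p&gt;Once yfinance hands back a price series (for example &lt;code&gt;yf.Ticker("AAPL").history(period="1y")["Close"]&lt;/code&gt;), the headline metrics are plain arithmetic. This is a simplified stand-in for the skill's report generation, not its actual code:&lt;/p&gt;

```python
def summarize_ticker(prices, dividends_ttm):
    """Headline metrics for one ticker.
    `prices` is a list of closing prices, oldest first;
    `dividends_ttm` is the trailing-twelve-month dividend per share."""
    latest = prices[-1]
    period_return = (latest - prices[0]) / prices[0]
    dividend_yield = dividends_ttm / latest
    return {
        "last_price": round(latest, 2),
        "period_return_pct": round(period_return * 100, 2),
        "dividend_yield_pct": round(dividend_yield * 100, 2),
    }
```

Run it for two tickers and you have the raw material for a side-by-side comparison table.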

&lt;p&gt;I built this to power the data pipeline behind my stock analysis site. Every stock page pulls fresh data, runs it through analysis frameworks, and generates unique content that Google actually values — because it's based on real numbers, not generic summaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it:&lt;/strong&gt; &lt;a href="https://github.com/apex-stack-ai/financial-analyzer-lite" rel="noopener noreferrer"&gt;Free lite version on GitHub&lt;/a&gt; with basic stock lookup. The &lt;a href="https://apexstack.gumroad.com/l/financial-analyzer" rel="noopener noreferrer"&gt;full version ($15)&lt;/a&gt; adds sector analysis, dividend tracking, portfolio comparison, and charting.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Revenue Agent
&lt;/h2&gt;

&lt;p&gt;Here's a skill I didn't plan to build — it emerged from my own daily routine.&lt;/p&gt;

&lt;p&gt;Running multiple web properties means tracking revenue across platforms, monitoring search rankings, managing content calendars, scheduling promotions, and doing weekly reviews. The Revenue Agent skill consolidates all of this into a single daily briefing system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generates a daily growth briefing with key metrics&lt;/li&gt;
&lt;li&gt;Maintains a revenue dashboard across multiple platforms&lt;/li&gt;
&lt;li&gt;Manages your content calendar with publishing deadlines&lt;/li&gt;
&lt;li&gt;Monitors SEO performance and flags ranking changes&lt;/li&gt;
&lt;li&gt;Schedules and tracks promotions across properties&lt;/li&gt;
&lt;li&gt;Produces a weekly review with trends and action items&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The skill includes Python classes (&lt;code&gt;RevenueTracker&lt;/code&gt;, &lt;code&gt;ContentCalendar&lt;/code&gt;, &lt;code&gt;SEOMonitor&lt;/code&gt;, &lt;code&gt;PromotionScheduler&lt;/code&gt;) that you can extend for your specific setup. It's designed for solo operators managing a portfolio of sites — the people who don't have a team but still need the systems.&lt;/p&gt;
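&lt;p&gt;The article doesn't show those class interfaces, but the shape is easy to imagine. Here is a minimal sketch of what a &lt;code&gt;RevenueTracker&lt;/code&gt;-style class could look like; the method and field names are my guesses, not the skill's actual API:&lt;/p&gt;

```python
from collections import defaultdict
from datetime import date

class RevenueTracker:
    """Log revenue events per platform and roll them up for a daily
    briefing. (Illustrative sketch; extend with persistence as needed.)"""

    def __init__(self):
        self._entries = []

    def record(self, platform, amount, day=None):
        # One entry per sale/payout; defaults to today's date.
        self._entries.append({"platform": platform,
                              "amount": amount,
                              "day": day or date.today()})

    def totals_by_platform(self):
        totals = defaultdict(float)
        for e in self._entries:
            totals[e["platform"]] += e["amount"]
        return dict(totals)

    def daily_briefing(self):
        totals = self.totals_by_platform()
        lines = [f"{p}: ${amt:.2f}" for p, amt in sorted(totals.items())]
        return "Revenue to date: " + ", ".join(lines)
```

A `ContentCalendar` or `SEOMonitor` would follow the same record-then-summarize pattern over deadlines and rankings.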

&lt;p&gt;&lt;strong&gt;Try it:&lt;/strong&gt; &lt;a href="https://github.com/apex-stack-ai/revenue-agent-lite" rel="noopener noreferrer"&gt;Free lite version on GitHub&lt;/a&gt; with basic daily briefing. The &lt;a href="https://apexstack.gumroad.com/l/revenue-agent" rel="noopener noreferrer"&gt;full version ($19)&lt;/a&gt; adds multi-platform tracking, Python classes, and weekly review generation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Skills Beat Prompts
&lt;/h2&gt;

&lt;p&gt;You might be wondering: can't I just write good prompts and get the same result?&lt;/p&gt;

&lt;p&gt;You can, for one-off tasks. But skills are fundamentally different because they include &lt;strong&gt;persistent context&lt;/strong&gt;. A skill carries reference files, scripts, and domain knowledge that would take thousands of tokens to re-explain every session. My Programmatic SEO Auditor skill, for example, includes reference docs on crawl budget optimization patterns, schema markup templates for 15+ page types, and internal linking formulas — none of which would fit in a prompt.&lt;/p&gt;

&lt;p&gt;Skills also include &lt;strong&gt;executable code&lt;/strong&gt;. The Content Pipeline Builder doesn't just tell Claude how to generate content — it includes a Python script that actually runs the pipeline. The Blog Cross-Publisher includes API integration code for three different platforms. That's the difference between instructions and infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack Effect
&lt;/h2&gt;

&lt;p&gt;These five skills work best together. The Content Pipeline Builder generates your pages. The Programmatic SEO Auditor validates quality. The Blog Cross-Publisher distributes your content marketing. The Financial Data Analyzer powers your data pipeline. And the Revenue Agent tracks whether any of it is actually making money.&lt;/p&gt;

&lt;p&gt;That's the workflow I run daily, and it's all available as installable skills you can drop into Claude (or any compatible AI agent) and start using immediately.&lt;/p&gt;

&lt;p&gt;All the free lite versions are available on &lt;a href="https://github.com/apex-stack-ai" rel="noopener noreferrer"&gt;GitHub under the Apex Stack account&lt;/a&gt;. Full versions with advanced features are on &lt;a href="https://apexstack.gumroad.com" rel="noopener noreferrer"&gt;Gumroad&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building in public — follow along as I scale a portfolio of online businesses using AI, programmatic SEO, and too much coffee. More articles on the systems behind it coming soon.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>My Site Has 287,000 Pages and Zero Backlinks. Here's My Plan to Fix That.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Tue, 10 Mar 2026 14:43:38 +0000</pubDate>
      <link>https://dev.to/apex_stack/my-site-has-287000-pages-and-zero-backlinks-heres-my-plan-to-fix-that-5c9d</link>
      <guid>https://dev.to/apex_stack/my-site-has-287000-pages-and-zero-backlinks-heres-my-plan-to-fix-that-5c9d</guid>
      <description>&lt;p&gt;There's a brutal truth about programmatic SEO that nobody talks about when they're selling you on the dream: Google doesn't care how many pages you have if nobody links to you.&lt;/p&gt;

&lt;p&gt;I built a stock comparison engine with 287,000 pages across 12 languages. The tech stack works. The content pipeline runs. Pages render fast on a static Astro site behind Cloudflare. Every page has structured data, clean URLs, and unique AI-assisted analysis.&lt;/p&gt;

&lt;p&gt;Google has indexed about 2,500 of those 287,000 pages. That's 0.9%.&lt;/p&gt;

&lt;p&gt;The reason is simple. Domain authority is zero. Backlink count is zero. In Google's eyes, my site is a ghost.&lt;/p&gt;

&lt;p&gt;So I'm fixing it. Here's exactly what I'm doing, what's working, and what I'd skip if I started over.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Backlinks Still Matter in 2026
&lt;/h2&gt;

&lt;p&gt;Every year someone declares backlinks are dead. Every year the data says otherwise. Ahrefs, Semrush, and Moz have all published studies showing strong correlations between referring domains and organic rankings. In Google's own leaked documentation, link signals remain part of the ranking system.&lt;/p&gt;

&lt;p&gt;For programmatic SEO sites this matters even more than usual. When you're generating thousands of pages from templates, Google needs external trust signals to justify crawling and indexing all that content. Without backlinks, your crawl budget stays low and most of your pages sit in the "discovered but not indexed" bucket indefinitely.&lt;/p&gt;

&lt;p&gt;That's exactly where I was.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Directory Submissions (Weeks 1-2)
&lt;/h2&gt;

&lt;p&gt;I started with the lowest effort, highest certainty plays. Directory submissions aren't glamorous, but they establish a baseline of referring domains that tells Google your site exists and is real.&lt;/p&gt;

&lt;p&gt;Here's what I targeted:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;General web directories&lt;/strong&gt; — places like BOTW, Jasmine Directory, and a handful of curated niche directories that still pass value. I looked for directories that have editorial review, not the ones that accept anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech and startup directories&lt;/strong&gt; — Product Hunt (as a discussion post, not a formal launch), BetaList, SaaSHub, AlternativeTo. These have real traffic and decent domain authority.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finance-specific directories&lt;/strong&gt; — niche directories for financial tools, stock screeners, and investment resources. Smaller audience but highly relevant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer tool directories&lt;/strong&gt; — since the site is built with Astro and uses programmatic SEO, it fits in directories that list developer tools and open-source projects.&lt;/p&gt;

&lt;p&gt;Total time investment: about 4 hours across two days. Expected result: 15-25 referring domains within a month as submissions get reviewed and approved.&lt;/p&gt;

&lt;p&gt;The key insight: don't submit to 200 directories in one day. Spread it out. A sudden spike of low-quality directory links looks unnatural.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 2: Content Marketing (Weeks 2-4)
&lt;/h2&gt;

&lt;p&gt;This is where the real leverage is. I started writing about the process of building this site — the technical decisions, the failures, the data — and publishing on platforms where my target audience already hangs out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medium&lt;/strong&gt; — long-form technical articles. Medium has strong domain authority and articles can rank in Google independently. Each article links back to the site naturally within the context of the story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dev.to&lt;/strong&gt; — developer-focused platform. Great for technical breakdowns of the stack, code examples, and architecture decisions. The community is engaged and articles get syndicated through their newsletter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hashnode&lt;/strong&gt; — another developer blogging platform with good domain authority. Cross-posting here with canonical URLs pointing back to Medium ensures the content gets maximum distribution without duplicate content issues.&lt;/p&gt;

&lt;p&gt;The strategy isn't to write thinly veiled ads. Every article needs to stand on its own as genuinely useful content. The backlink comes from naturally referencing the project within an article that teaches something real.&lt;/p&gt;

&lt;p&gt;My first two articles covered the architecture behind building a 287k-page site and how I use a local LLM for content generation. Both got solid engagement from the developer community because they shared specific, actionable details — not vague "10 tips" fluff.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Community Engagement (Ongoing)
&lt;/h2&gt;

&lt;p&gt;Reddit, Indie Hackers, and Hacker News are goldmines for early traffic and backlinks — but only if you do it right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The wrong way:&lt;/strong&gt; Drop a link to your site in every relevant thread. Get downvoted. Get banned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The right way:&lt;/strong&gt; Be genuinely helpful. Answer questions in detail. Share your experience when it's relevant. Build a reputation in subreddits like r/SEO, r/SideProject, r/juststart, and r/webdev. When someone asks about programmatic SEO or scaling content, share what you've learned. The link to your site becomes a natural citation, not spam.&lt;/p&gt;

&lt;p&gt;I spend about 20 minutes per day browsing these communities and contributing where I actually have something useful to say. Some days I don't link to anything. Some days someone asks a question where my project is the perfect case study.&lt;/p&gt;

&lt;p&gt;The compounding effect is real. After a few weeks of genuine engagement, people start recognizing your username and upvoting your content by default.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 4: Guest Posts (Months 2-3)
&lt;/h2&gt;

&lt;p&gt;Once you have some published articles and community presence, guest posting becomes much easier. You're not a random person pitching from nowhere — you're someone with a track record of writing useful technical content.&lt;/p&gt;

&lt;p&gt;I'm targeting SEO blogs that cover programmatic approaches, developer blogs that feature case studies, indie hacking newsletters that showcase side projects, and finance/fintech blogs that cover stock analysis tools.&lt;/p&gt;

&lt;p&gt;The pitch is simple: I have a unique story (287k-page site, one person, local LLM pipeline) and I can write a detailed technical breakdown that their audience would find valuable. In exchange, I get a contextual backlink in the author bio or within the article.&lt;/p&gt;

&lt;p&gt;Acceptance rate for cold guest post pitches is typically 5-10%. With a portfolio of published articles to show, that jumps to 20-30%.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Skip
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Paid links.&lt;/strong&gt; Too risky for a new domain. Google's spam detection has gotten aggressive, and if your first batch of backlinks is obviously paid, you're starting your site's life with a penalty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PBN networks.&lt;/strong&gt; Same problem, worse execution. Private blog networks are increasingly easy for Google to detect and the penalty is site-wide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Link exchanges.&lt;/strong&gt; "I'll link to you if you link to me" is the most common pitch in SEO, and it's also the least effective. Google explicitly targets reciprocal linking patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HARO/Connectively.&lt;/strong&gt; These journalist-response platforms used to be great for earning high-DA links. But response volumes have exploded, acceptance rates have cratered, and many publishers have moved behind paywalls. Still worth trying, but don't build your strategy around it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Math
&lt;/h2&gt;

&lt;p&gt;Here's my realistic timeline:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 1:&lt;/strong&gt; 15-25 referring domains from directories + content platforms. Domain Rating moves from 0 to maybe 5-8.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 3:&lt;/strong&gt; 50+ referring domains. Guest posts start landing. DR reaches 15+. Index rate climbs from 0.9% to 10-15%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 6:&lt;/strong&gt; 100-200 referring domains. Community reputation established. DR hits 25-30. Index rate above 30%.&lt;/p&gt;

&lt;p&gt;The relationship between backlinks and indexing isn't linear — it's more like a step function. There's a threshold where Google suddenly decides your site is worth crawling properly. Based on what I've seen from other programmatic SEO sites, that threshold is somewhere around DR 15-20 with 50+ unique referring domains.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;Building backlinks for a new domain isn't complicated. It's just slow. The temptation is to look for shortcuts, but every shortcut in link building carries risk that could sink your entire project.&lt;/p&gt;

&lt;p&gt;The sustainable playbook is: create content worth linking to, distribute it where your audience already is, and engage authentically in communities. It takes months, not days. But once domain authority starts building, every one of your programmatic pages benefits from the rising tide.&lt;/p&gt;

&lt;p&gt;I'll report back on the results. If you want the full technical breakdown of how I built the programmatic SEO pipeline behind this site — the data architecture, templates, AI integration, and deployment — I put everything into a guide: &lt;a href="https://apexstack.gumroad.com/l/pseo-blueprint" rel="noopener noreferrer"&gt;Programmatic SEO Blueprint&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building in public at Apex Stack. Follow for more on programmatic SEO, side project economics, and building online businesses as a solo developer.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>webdev</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How I Use a Local LLM to Generate SEO Content for 10,000+ Pages</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:37:03 +0000</pubDate>
      <link>https://dev.to/apex_stack/how-i-use-a-local-llm-to-generate-seo-content-for-10000-pages-242j</link>
      <guid>https://dev.to/apex_stack/how-i-use-a-local-llm-to-generate-seo-content-for-10000-pages-242j</guid>
      <description>&lt;p&gt;There's a lot of hype about using AI for content. Most of it is about writing blog posts faster. That's fine, but it misses the bigger opportunity: using AI as a component in a programmatic content pipeline that generates thousands of unique pages.&lt;/p&gt;

&lt;p&gt;I run a stock comparison site with 287,000 pages across 12 languages. Every page has AI-generated narrative sections — not generic filler, but analysis that's specific to each stock pair. The AI doesn't write the page. It writes the parts of the page that need to feel human, while structured data handles everything else.&lt;/p&gt;

&lt;p&gt;Here's how the system works and what I've learned about making AI content that Google actually accepts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Local LLM (and Why Llama 3)
&lt;/h2&gt;

&lt;p&gt;First question everyone asks: why not just use the OpenAI API?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost.&lt;/strong&gt; At 287,000 pages with roughly 500-800 tokens of AI-generated content per page, even at GPT-3.5 prices you're looking at $200-400 just for the initial generation. Then factor in regeneration when you update templates, multilingual variants, and iterative improvements. It adds up fast.&lt;/p&gt;
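&lt;p&gt;For anyone who wants to check that estimate, the arithmetic is one line. This assumes roughly $0.002 per 1K output tokens, which is in the range GPT-3.5-turbo charged; the exact rate varied by model and date:&lt;/p&gt;

```python
def api_cost_usd(pages, tokens_per_page, usd_per_1k_tokens=0.002):
    """Back-of-envelope API cost for one full generation pass."""
    return pages * tokens_per_page / 1000 * usd_per_1k_tokens

low = api_cost_usd(287_000, 500)   # ~ $287
high = api_cost_usd(287_000, 800)  # ~ $459
```

One pass lands in the high $200s to mid $400s, and every template iteration, regeneration, or multilingual variant multiplies it.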

&lt;p&gt;&lt;strong&gt;Speed.&lt;/strong&gt; API rate limits mean generating content for 287k pages would take days of queued requests. With a local Llama 3 instance running on a decent GPU, I can generate content for thousands of pages per hour without worrying about rate limits or API outages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control.&lt;/strong&gt; I can fine-tune prompts, adjust generation parameters, and rerun entire batches without worrying about cost. This matters more than you'd think — I went through about 15 iterations of my prompt templates before landing on output that consistently passed quality checks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privacy.&lt;/strong&gt; Financial data flowing through a third-party API isn't ideal. Running locally means all data stays on my machine.&lt;/p&gt;

&lt;p&gt;That said, cloud APIs absolutely have their place. For one-off content, complex reasoning tasks, or when you need the best possible quality on a small number of pages, GPT-4 or Claude is better. But for batch generation at scale, local is the way to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Template + AI Hybrid
&lt;/h2&gt;

&lt;p&gt;The key concept: AI doesn't write the entire page. It writes specific sections within a structured template.&lt;/p&gt;

&lt;p&gt;A stock comparison page has roughly this structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Header — ticker names, logos, prices] ← Structured data
[Key Metrics Table — P/E, market cap, etc.] ← Structured data
[AI Comparison Summary — 2-3 paragraphs] ← AI generated
[Dividend Analysis] ← Mix of data + AI narrative
[Growth Metrics Chart] ← Structured data
[AI Investment Considerations] ← AI generated
[FAQ Section] ← AI generated from data
[Schema Markup] ← Auto-generated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Maybe 60-70% of each page is structured data rendered by the template. The AI fills in the 30-40% that needs natural language — summaries, analysis, and FAQs.&lt;/p&gt;

&lt;p&gt;This hybrid approach solves two problems at once:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Uniqueness.&lt;/strong&gt; Every page has different data AND different narrative, so Google doesn't flag it as duplicate content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy.&lt;/strong&gt; The AI generates text based on actual financial data passed in the prompt, not from its training data. This means the content is factually grounded, not hallucinated.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Prompt Architecture
&lt;/h2&gt;

&lt;p&gt;I can't share my exact production prompts (those are in the &lt;a href="https://apexstack.gumroad.com/l/pseo-blueprint" rel="noopener noreferrer"&gt;full blueprint&lt;/a&gt;), but here's the general approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context injection.&lt;/strong&gt; Every prompt starts with the actual data for that specific stock pair. The AI isn't generating from scratch — it's analyzing and narrating data it's been given.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are analyzing {STOCK_A} vs {STOCK_B}.

Here is the current financial data:
- {STOCK_A} P/E: {pe_a}, Market Cap: {mcap_a}, Dividend Yield: {div_a}
- {STOCK_B} P/E: {pe_b}, Market Cap: {mcap_b}, Dividend Yield: {div_b}

Write a 2-paragraph comparison focusing on...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Variation instructions.&lt;/strong&gt; To prevent all pages from reading the same way, I include randomized style directives: vary sentence length, alternate between starting with stock A or stock B, use different comparison frameworks (value vs growth, income vs appreciation, etc.).&lt;/p&gt;
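&lt;p&gt;Context injection and randomized variation combine in a few lines of Python. The template mirrors the skeleton above; the field names and angle list are illustrative, not my production prompts:&lt;/p&gt;

```python
import random

# Mirrors the prompt skeleton shown earlier; field names are illustrative.
PROMPT_TEMPLATE = """You are analyzing {stock_a} vs {stock_b}.

Here is the current financial data:
- {stock_a} P/E: {pe_a}, Market Cap: {mcap_a}, Dividend Yield: {div_a}
- {stock_b} P/E: {pe_b}, Market Cap: {mcap_b}, Dividend Yield: {div_b}

Write a 2-paragraph comparison focusing on {angle}."""

# Randomized style directives keep thousands of pages from reading identically.
ANGLES = ["value vs growth", "income vs appreciation", "risk vs stability"]

def build_prompt(pair_data, rng=random):
    """Inject real data for one stock pair and pick a comparison angle."""
    return PROMPT_TEMPLATE.format(angle=rng.choice(ANGLES), **pair_data)
```

Because the data is injected rather than recalled, the model narrates numbers it was given instead of hallucinating them.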

&lt;p&gt;&lt;strong&gt;Output constraints.&lt;/strong&gt; Word count limits, formatting requirements, and explicit instructions about what NOT to include (no financial advice disclaimers in the body text, no "as an AI" self-references, no generic filler phrases).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quality gates.&lt;/strong&gt; After generation, every piece of content runs through automated checks: minimum uniqueness score against other generated pages, readability score, factual consistency against the source data, and keyword density checks.&lt;/p&gt;
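&lt;p&gt;My actual scoring system is more involved, but a cheap stand-in for those gates fits in one function: a length floor, a filler-phrase ban, and near-duplicate detection against pages already accepted in the batch. The thresholds here are illustrative:&lt;/p&gt;

```python
from difflib import SequenceMatcher

# Phrases that should never survive into published copy.
BANNED = ["as an ai", "in conclusion", "it is important to note"]

def passes_quality_gate(text, accepted_texts,
                        min_words=80, max_similarity=0.85):
    """Reject output that is too short, contains filler phrases, or is
    too similar to content already accepted in this batch."""
    if len(text.split()) < min_words:
        return False
    lowered = text.lower()
    if any(phrase in lowered for phrase in BANNED):
        return False
    for other in accepted_texts:
        if SequenceMatcher(None, lowered, other.lower()).ratio() > max_similarity:
            return False
    return True
```

`SequenceMatcher` is slow at scale; a production gate would swap in shingling or MinHash, but the decision logic is the same.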

&lt;h2&gt;
  
  
  What Google's Helpful Content Update Means for This
&lt;/h2&gt;

&lt;p&gt;Google's official stance: AI-generated content isn't automatically bad. Low-quality content is bad, regardless of how it's made.&lt;/p&gt;

&lt;p&gt;In practice, here's what I've observed across my 287k pages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pages that get indexed tend to have:&lt;/strong&gt; Unique data points, specific analysis tied to that data, proper schema markup, and they answer a real search query that a human would type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pages that DON'T get indexed tend to have:&lt;/strong&gt; Generic narrative that could apply to any stock pair, thin analysis that just restates the numbers in sentence form, and patterns that are too similar across pages.&lt;/p&gt;

&lt;p&gt;The lesson: the AI content needs to actually say something specific. "Stock A has a higher P/E than Stock B" is thin. "Stock A's P/E of 35 suggests the market expects significant growth, which makes sense given their 40% revenue increase last quarter, while Stock B's P/E of 12 reflects a more mature business with stable but slower growth" — that's useful analysis.&lt;/p&gt;

&lt;p&gt;Getting the AI to consistently produce the latter instead of the former is the core challenge. It's solvable with good prompts, but it took me 15+ iterations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multilingual Generation
&lt;/h2&gt;

&lt;p&gt;One of the biggest advantages of programmatic SEO is going multilingual cheaply. Here's how I handle it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Template strings&lt;/strong&gt; (headers, labels, button text) are translated once by a human translator and stored in locale files. This is maybe 200-300 strings per language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI narrative sections&lt;/strong&gt; are generated in the target language directly, not translated from English. This produces more natural-sounding content than translation. The prompt is in English, but I instruct the model to output in the target language with the data injected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data stays the same.&lt;/strong&gt; Numbers, ticker symbols, percentages — these are universal. The template handles formatting (date formats, number separators) based on locale.&lt;/p&gt;

&lt;p&gt;Result: 12 languages with minimal per-language effort. The heavy lifting is in building the system. Each additional language is maybe 2-3 days of work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Breakdown
&lt;/h2&gt;

&lt;p&gt;For anyone wondering about the economics:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Local GPU (one-time, used RTX 3090)&lt;/td&gt;
&lt;td&gt;~$700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Electricity for generation runs&lt;/td&gt;
&lt;td&gt;~$5/batch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supabase (free tier + small paid)&lt;/td&gt;
&lt;td&gt;$25/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hosting (DigitalOcean Spaces)&lt;/td&gt;
&lt;td&gt;$5/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloudflare CDN&lt;/td&gt;
&lt;td&gt;Free tier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain&lt;/td&gt;
&lt;td&gt;$12/year&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monthly recurring&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$30/mo&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Compare that to paying for API calls at scale or hiring writers. The local LLM pays for itself after generating content for maybe 5,000-10,000 pages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistakes I Made
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Starting with too many pages.&lt;/strong&gt; I generated content for 287k pages before validating that Google would index them. Should have started with 5k, gotten indexed, then scaled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not enough variation in prompts.&lt;/strong&gt; My first generation pass used identical prompt structures for every page. The output was technically unique but structurally identical. Google noticed. Second pass included randomized style directives and it made a big difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ignoring readability.&lt;/strong&gt; Early AI output was dense and clinical. Real financial analysis writing varies between technical detail and accessible explanations. I had to explicitly prompt for this variation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No quality gate initially.&lt;/strong&gt; I generated everything in batch and published it all. Should have implemented automated quality checks before publishing. Catching the bottom 10% of AI output before it goes live saves you from thin content flags.&lt;/p&gt;

&lt;h2&gt;
  
  
  The System Today
&lt;/h2&gt;

&lt;p&gt;After iterating, my current pipeline looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data refresh&lt;/strong&gt; — Pull latest financial data via yfinance (daily cron job)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content generation&lt;/strong&gt; — Run Llama 3 on pages with stale or missing narrative (weekly)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality check&lt;/strong&gt; — Automated scoring: uniqueness, readability, factual accuracy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt; — Astro generates static HTML for all pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; — Push to CDN&lt;/li&gt;
&lt;/ol&gt;
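&lt;p&gt;Those five stages reduce to a small orchestrator. Passing each stage in as a callable is one way to get the swappability described earlier; this is a sketch of the control flow, not my actual pipeline code:&lt;/p&gt;

```python
def run_weekly_pipeline(pages, is_stale, generate, check, build, deploy):
    """Regenerate narratives only for stale pages, gate each draft,
    then rebuild and ship the static site. Every stage is injected as
    a callable so layers stay independently swappable."""
    regenerated = []
    for page in pages:
        if is_stale(page):
            draft = generate(page)
            if check(draft):  # quality gate: reject thin/duplicate output
                page["narrative"] = draft
                regenerated.append(page["id"])
    build(pages)   # e.g. trigger the Astro build
    deploy()       # e.g. push the static output to the CDN
    return regenerated
```

Swapping Llama 3 for another model, or Astro for another generator, only changes the callable you pass in.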

&lt;p&gt;The whole thing runs on a single machine. Total human time per week: about 2 hours of monitoring and occasional prompt tweaking.&lt;/p&gt;




&lt;p&gt;If you want the full technical details — the exact prompt templates, the quality scoring system, the Astro project structure, and the complete deployment pipeline — I wrote it all up in the &lt;a href="https://apexstack.gumroad.com/l/pseo-blueprint" rel="noopener noreferrer"&gt;Programmatic SEO Blueprint&lt;/a&gt;. It includes the MIT-licensed code examples you can adapt for your own projects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow for more on building AI-powered content systems at scale.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>seo</category>
      <category>llm</category>
      <category>programming</category>
    </item>
    <item>
      <title>I Built a 287,000-Page Website. Here's What I Learned About Programmatic SEO.</title>
      <dc:creator>Apex Stack</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:37:02 +0000</pubDate>
      <link>https://dev.to/apex_stack/i-built-a-287000-page-website-heres-what-i-learned-about-programmatic-seo-4d5h</link>
      <guid>https://dev.to/apex_stack/i-built-a-287000-page-website-heres-what-i-learned-about-programmatic-seo-4d5h</guid>
      <description>&lt;p&gt;Most SEO advice boils down to the same thing: pick a keyword, write an article, wait three months, repeat. If you want 10x the traffic, you write 10x the content. That math doesn't work if you're one person.&lt;/p&gt;

&lt;p&gt;About a year ago I started experimenting with a different approach. Instead of writing articles one by one, I built a system that generates pages programmatically — one template, one data pipeline, thousands of output pages. Each page targets a specific long-tail keyword. The effort goes into building the machine, not feeding it.&lt;/p&gt;

&lt;p&gt;The site I built is a stock comparison engine. You type in two tickers and get a side-by-side breakdown: financials, dividends, growth metrics, the works. Simple concept, but at scale it covers every meaningful stock pair — across 12 languages.&lt;/p&gt;

&lt;p&gt;287,000 pages. One person. No content team.&lt;/p&gt;

&lt;p&gt;Here's what actually happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;Nothing exotic. Astro for static site generation — fast, SEO-friendly, handles thousands of routes without breaking. Supabase PostgreSQL for the data layer. The yfinance library for pulling financial data. A local Llama 3 instance for generating the narrative sections on each page.&lt;/p&gt;

&lt;p&gt;Total monthly cost: under $50.&lt;/p&gt;

&lt;p&gt;The key insight was separating data from presentation. The database holds structured financial data for 8,000+ tickers. The templates define how that data gets rendered into comparison pages. The AI fills in the gaps — generating human-readable analysis that's unique to each pair.&lt;/p&gt;

&lt;p&gt;I won't go deep on the code here, but the architecture looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Data Layer (Supabase PostgreSQL)
    ↓
ETL Pipeline (Python + yfinance)
    ↓
Content Generation (Llama 3 local)
    ↓
Static Site Generator (Astro)
    ↓
CDN (Cloudflare)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each layer is independent. If I want to swap Astro for Next.js, the data layer doesn't care. If I want to switch from Llama 3 to Claude, the templates don't change. This modularity ended up being really important when I needed to iterate fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Worked
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Long-tail coverage is insane.&lt;/strong&gt; When your site has a page for literally every stock pair, you're catching searches nobody else bothers to target. "[Random small-cap] vs [other random small-cap]" has almost zero competition. Volume per page is tiny. Volume across 287,000 pages adds up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multilingual was easier than expected.&lt;/strong&gt; Once you have the template and data pipeline, translating to 12 languages is mostly a matter of translating the template strings and regenerating. The data — numbers, tickers, percentages — stays the same. I used AI translation for the initial pass, then cleaned up the most important pages manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schema markup at scale pays off.&lt;/strong&gt; Every page gets FinancialProduct schema, FAQ schema, and BreadcrumbList markup. Programmatic means you write the schema once, it applies everywhere. This is one of those things that would be insane to do manually across 287k pages but is trivial when it's built into the template.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The build pipeline is surprisingly stable.&lt;/strong&gt; I expected constant breakdowns with this many pages. But because everything is static and generated from structured data, there's very little that can go wrong at runtime. The site just serves HTML files. No server-side processing, no database queries on page load, nothing to crash.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Didn't Work
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody warns you about with programmatic SEO: Google doesn't want to index 287,000 pages from a brand new domain with zero backlinks.&lt;/p&gt;

&lt;p&gt;Out of 287k pages, Google indexed about 2,500. That's a &lt;strong&gt;0.9% index rate&lt;/strong&gt;. Brutal.&lt;/p&gt;

&lt;p&gt;The root causes were obvious in hindsight:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No domain authority.&lt;/strong&gt; New domain, no backlinks, no brand recognition. Google had no reason to allocate crawl budget to my site when established financial sites exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content similarity.&lt;/strong&gt; While each page had unique financial data, the narrative sections followed similar patterns. Google's helpful content system flagged some pages as thin — not because they lacked information, but because the structure was too uniform across pages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crawl budget is real.&lt;/strong&gt; Google simply will not crawl 287k pages on a new domain. I was watching Search Console and seeing Googlebot visit maybe 200-500 pages per day. At that rate, a single full crawl would take years — and Google's not going to crawl pages it doesn't think are worth indexing anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix (What I'm Doing Now)
&lt;/h2&gt;

&lt;p&gt;The solution is counterintuitive: fewer pages, not more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cut pages ruthlessly.&lt;/strong&gt; I'm going from 287k down to 5,000-30,000 pages per language. Only keeping comparison pairs that have real search demand — validated with Search Console data and keyword research tools. If nobody searches for "[obscure penny stock] vs [other obscure penny stock]," that page doesn't need to exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Thicken remaining pages.&lt;/strong&gt; Each surviving page needs genuine unique value beyond plugging different numbers into the same template. I'm adding sector context, historical trend analysis, dividend deep-dives, and custom AI-generated insights that are actually specific to each stock pair. The goal is that every page could stand on its own as a useful resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Build backlinks.&lt;/strong&gt; There's no shortcut here. I started with directory submissions (boring but necessary), moved to industry-specific directories, and I'm now doing targeted outreach. The goal is DR 15+ within 6 months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Optimize crawl budget.&lt;/strong&gt; Better sitemap strategy — instead of one massive sitemap, I broke it into smaller topic-based sitemaps. Improved internal linking so Googlebot can discover important pages through the site structure, not just the sitemap. Removed low-value pages from the index with noindex tags.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;Let me be transparent about where things stand:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Current&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total pages&lt;/td&gt;
&lt;td&gt;287,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pages indexed&lt;/td&gt;
&lt;td&gt;~2,500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Index rate&lt;/td&gt;
&lt;td&gt;0.9%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain Rating&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backlinks&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly revenue&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly cost&lt;/td&gt;
&lt;td&gt;~$50&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Not pretty. But here's why I'm still bullish on this approach:&lt;/p&gt;

&lt;p&gt;The infrastructure works. The data pipeline works. The content generation works. The site loads fast, passes Core Web Vitals, and has proper schema markup on every page. The only thing broken is Google's trust — and that's a solvable problem with time and backlinks.&lt;/p&gt;

&lt;p&gt;Once indexing improves, the growth should compound quickly. Each indexed page targets keywords that virtually nobody else is competing for. And with 12 languages, the addressable market is massive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons If You're Considering This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Start smaller than you think.&lt;/strong&gt; If I could do it over, I'd launch with 5,000 pages instead of 287,000. Get those indexed, prove the model works, then scale up. Launching with hundreds of thousands of pages on a new domain is just asking Google to ignore you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your data source is your moat.&lt;/strong&gt; Anyone can build a template. The hard part is finding a data source that's both comprehensive and accessible. Financial data via yfinance was a good choice because it's free, structured, and covers thousands of entities. Think about what data you can get that's hard for others to replicate at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The template + AI hybrid is the sweet spot.&lt;/strong&gt; Pure template-based pages (just plugging in data) get flagged as thin. Pure AI-generated pages are expensive and inconsistent. The hybrid — structured data rendered by templates with AI-generated narrative sections — hits the balance of unique content at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Budget for backlink building from day one.&lt;/strong&gt; I made the mistake of assuming great content would earn links naturally. It doesn't when you have zero domain authority. Build backlink acquisition into your launch plan, not as an afterthought.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Programmatic SEO is a patience game.&lt;/strong&gt; This isn't a "launch and rank tomorrow" strategy. It's an infrastructure play. You're building a machine that compounds over time. The first 6 months will feel slow. That's normal.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm documenting everything as I go — the full technical architecture, every prompt I use for content generation, the monetization roadmap, and all the mistakes so you don't repeat them.&lt;/p&gt;

&lt;p&gt;I've packaged it all into a guide called the &lt;a href="https://apexstack.gumroad.com/l/pseo-blueprint" rel="noopener noreferrer"&gt;Programmatic SEO Blueprint&lt;/a&gt;. It covers niche selection, data architecture, AI content generation, the Astro/Next.js implementation, SEO infrastructure, the indexing problem and how to solve it, and monetization strategy. All code examples are MIT licensed.&lt;/p&gt;

&lt;p&gt;If you're thinking about building something like this, I'd say go for it. Just start smaller than I did.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this useful, follow me for more on building programmatic SEO sites. I'll be posting updates as the indexing situation improves.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>seo</category>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
