
Biricik Biricik

Posted on • Originally published at zsky.ai

How I Shipped 18 Landing Pages in 9 Hours (As a Solo Founder)

A few weeks ago I shipped 18 SEO landing pages in nine hours. Solo. No agency, no contractors, no copywriter. They went live, they pass Lighthouse, they're indexed, they're already pulling search impressions, and I didn't burn out doing it.

This is not a "10x your output with AI" post. It's a tactical writeup of what actually worked, what broke, and what I would do differently. If you're a solo founder trying to claw out organic traffic, this is the post I wish I'd had when I started.

The setup

Background: I run an indie creative platform. I'm a solo founder with one engineer (me) and one ops person (me). For months I'd been hand-writing landing pages one at a time and it was killing me — each page took 4-6 hours, and I'd ship maybe two a week. At that pace, the SEO flywheel never starts spinning.

The math problem: my niche has roughly 600 long-tail queries that matter. At two pages a week, that's six years to cover them all. By that time I'm running a coffee shop instead.

So I sat down one Saturday morning and decided to do an experiment: how many pages can I ship in one day if I parallelize aggressively, use good tooling, and accept "good enough" instead of "perfect"?

The answer turned out to be 18. Here's how.

What you can and can't parallelize

The first lesson, and the one that took me the longest to learn: not everything parallelizes. People talk about "parallel agents" like it's magic, but the moment two agents touch the same file, you have a merge nightmare.

Here's the actual decomposition that worked:

Parallelizable — page-specific content drafting, schema generation, image prompting, FAQ generation, internal link suggestions.

NOT parallelizable — sitemap.xml updates, header navigation updates, the global CSS, anything that touches a shared file.

The discipline I had to enforce: each subagent gets its own scoped output directory and its own page slug. They never write to a shared file. The merge step happens at the end, by hand, in 15 minutes. Without that boundary, the parallel work eats itself.

The actual workflow

Here's the order of operations. I'm writing this in the order I did it, including the dead ends.

Step 1 — pull real SERP data. Don't speculate about what people search for. I pulled the top 100 long-tail queries from my niche using Google Search Console, sorted by impressions, then filtered to ones where my current rank was page 2-4 (the "almost ranking" zone where one good page can move you to page 1).

This gave me a CSV of 100 rows: query, current rank, current impressions, current CTR. That CSV became the prompt for everything else.
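The filter itself is a few lines. A minimal sketch, assuming a GSC performance export with `query`, `position`, and `impressions` columns (the column names and thresholds are my assumptions — adjust to your export):

```python
import csv

def load_gsc_csv(path):
    """Read a GSC performance export into a list of dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def almost_ranking(rows, lo=11.0, hi=40.0):
    """Keep queries whose average position is on pages 2-4,
    sorted by impressions so the biggest opportunities come first."""
    keep = [r for r in rows if lo <= float(r["position"]) <= hi]
    return sorted(keep, key=lambda r: int(r["impressions"]), reverse=True)
```

Sorting by impressions first means the top of the list is where one good page moves the most traffic.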

Step 2 — group queries into page topics. Some queries are duplicates ("free ai image generator" and "ai image generator free"). Group them so one page can target a cluster of 5-10 related queries. I ended up with 22 clusters from the 100 queries. Eighteen of them were good enough to ship that day; four needed more research.
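The crudest version of this grouping is to treat queries with the same bag of words as one cluster. This only catches reordered duplicates like the example above — real clustering would also fold in stems and synonyms — but it's a useful first pass:

```python
from collections import defaultdict

def cluster_queries(queries):
    """Group queries whose words are identical ignoring order and case."""
    clusters = defaultdict(list)
    for q in queries:
        key = frozenset(q.lower().split())
        clusters[key].append(q)
    return list(clusters.values())
```

Anything this pass doesn't merge, you merge by eye — which is fast once the obvious duplicates are gone.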

Step 3 — write a "done template." This is the most important step and the one most people skip. Before you write any content, write the template that defines what "done" looks like. Mine was:

  • 1,200-1,800 words
  • H1 with the primary keyword
  • 4-6 H2 sections with variant keywords
  • 1 hero image with descriptive alt text
  • 1 FAQ section with 5-7 questions in JSON-LD schema
  • 1 internal link block to 3-5 related pages
  • 1 CTA above the fold
  • Lighthouse score 90+ on mobile

Then I wrote one page by hand against the template. It took 90 minutes. That page became the gold standard. Every subsequent page had to match the gold standard or I'd reject it.

Step 4 — dispatch parallel subagents. I batched the 18 remaining pages into groups of 3 and dispatched each batch in parallel, with a strict prompt: "Match the gold standard at /pages/golden.html. Use these queries. Output to /pages/draft/{slug}.html. Do not touch any other file."

The first batch came back in 20 minutes. The second in 25. By batch four I was getting them in 12 minutes. The model was learning the template from my reference page and got faster as it went.

Step 5 — schema audit. Every page needs JSON-LD schema or it doesn't get rich snippets. I wrote a tiny audit script that grepped each draft for application/ld+json blocks, checked that the JSON actually parsed, and flagged anything missing or malformed (anything that passed still got a spot-check in Google's Rich Results Test). Took 4 minutes per batch.

#!/bin/bash
# audit_schema.sh — flag drafts with missing or unparseable JSON-LD
for f in pages/draft/*.html; do
  if ! grep -q 'application/ld+json' "$f"; then
    echo "MISSING SCHEMA: $f"
    continue
  fi
  # Pull out just the JSON between the script tags, then try to parse it
  awk '/<script type="application\/ld\+json">/{flag=1; next} /<\/script>/{flag=0} flag' "$f" \
    | python3 -c 'import json,sys; json.loads(sys.stdin.read())' 2>/dev/null \
    || echo "INVALID SCHEMA: $f"
done

Eight pages failed the audit on the first pass. I fixed them in batch with one regex.

Step 6 — internal link strategy. This is the part most people get wrong. They publish a bunch of pages and then forget to link them to each other. Orphan pages don't rank.

My approach: I built a tiny graph. Each page declares 3-5 "related slugs" in its frontmatter. After all 18 pages were drafted, I ran a script that injected a "Related" block into each page based on the graph. Bidirectional. Every page links to at least 3 others, and is linked from at least 3 others. Google sees a tight topic cluster.
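The orphan check on that graph is simple. A sketch, assuming each page's declared related slugs live in a dict (the minimum of 3 is the rule from above):

```python
from collections import defaultdict

def orphan_check(related, min_links=3):
    """related: {slug: [related slugs]}. Flag any page with fewer
    than min_links outbound or inbound links."""
    inbound = defaultdict(set)
    for slug, targets in related.items():
        for t in targets:
            inbound[t].add(slug)
    problems = []
    for slug, targets in related.items():
        if len(targets) < min_links:
            problems.append(f"{slug}: only {len(targets)} outbound links")
        if len(inbound[slug]) < min_links:
            problems.append(f"{slug}: only {len(inbound[slug])} inbound links")
    return problems
```

Run it after the injection pass; an empty list means no orphans.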

Step 7 — sitemap and header update. This is the serial step. One file at a time. Sitemap.xml gets 18 new entries. The site header navigation gets a "More Tools" mega-menu. Took 20 minutes.
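Generating the 18 new sitemap entries is mechanical. A minimal sketch — the base URL is a placeholder, and a real version would merge these into the existing sitemap.xml with an XML parser rather than string-building:

```python
from datetime import date

def sitemap_entries(slugs, base="https://example.com"):
    """Build one <url> entry per new slug, stamped with today's date."""
    today = date.today().isoformat()
    return "\n".join(
        f"  <url><loc>{base}/{s}</loc><lastmod>{today}</lastmod></url>"
        for s in slugs
    )
```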

Step 8 — Lighthouse check on 3 random pages. Don't check all 18. Pick three at random. If they pass, the rest probably pass. If one fails, look at the failure mode and fix all 18 (it's almost always the same issue).
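To keep the spot-check honest, make the random pick reproducible. A small sketch (you'd then run the Lighthouse CLI against each picked page by hand or via subprocess):

```python
import random

def pick_sample(pages, n=3, seed=None):
    """Pick n pages at random; pass a seed to make the pick repeatable."""
    rng = random.Random(seed)
    return rng.sample(sorted(pages), min(n, len(pages)))
```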

Step 9 — submit to Search Console. Indexing API for the impatient, sitemap ping for the patient. I did both because I was already in the zone.

The 30-minute "done template" test

This is the rule that saved me. After writing the gold standard page, I told myself: every subsequent page must take less than 30 minutes from "agent finishes" to "page is live." If a page takes longer, the template is broken — go fix the template, don't fix the page.

This forced me to make the template so good that the agent's output basically just worked. By page 6, I was spending 8 minutes per page on review. By page 12, I was spending 4 minutes.

If you find yourself spending 60 minutes per page reviewing AI output, your template is wrong. Stop, fix the template, restart.

What broke

Not everything went well. A few horror stories:

Subagent A wrote a page in the wrong voice. It was technical and dry; the rest of the site is conversational and warm. I had to rewrite it. The fix was adding "voice example: [paste 200 words from your existing site]" to the dispatch prompt.

Two pages had nearly identical opening paragraphs. The agents converged on similar phrasing because they had similar query inputs. I rewrote the openings of both. The fix going forward: explicitly tell each agent "do not start with the word X, do not use phrase Y." Adversarial constraints work.

One page had a phantom image reference. The agent wrote <img src="hero.jpg"> with no actual image file. I caught it because Lighthouse complained. Fix: add an image-existence check to the audit script.

Schema validation broke twice on FAQ blocks. Google's FAQ schema is picky about question text matching exactly between the visible page and the JSON-LD version. I wrote a regex to normalize away the punctuation differences before comparing. It took 15 minutes to debug, then it was fine for the rest.
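The normalization idea, sketched: strip punctuation and collapse case and whitespace on both sides, then compare, so "What is it?" and "what is it" count as a match:

```python
import re

def norm(q):
    """Lowercase, drop punctuation, split on whitespace."""
    return re.sub(r"[^\w\s]", "", q).lower().split()

def faq_matches(visible, jsonld):
    """True if the visible question and JSON-LD question agree
    up to punctuation, case, and spacing."""
    return norm(visible) == norm(jsonld)
```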

The mega-menu got cluttered. With 18 new pages added to the header nav, it looked like 1998 Yahoo. I refactored to a two-column dropdown. Took 30 minutes after the main batch was done.

What worked

A few things I'd absolutely repeat:

Real SERP data over keyword research tools. GSC told me what was already almost ranking. That signal is gold compared to volume estimates from third-party tools.

One gold-standard reference page. This is the single highest-leverage decision. It turned the agents from "generic AI content factories" into "extensions of my voice."

The done template as a hard contract. I rejected pages that didn't match the template instead of fixing them. Rejection is faster than fixing because the agent can re-output in 90 seconds.

Parallel batches with isolated outputs. No shared files = no merge conflicts. The whole pipeline ran without a single git rebase.

Internal link graph as a separate pass. Linking is its own concern; doing it after drafting is faster than trying to do it during.

Strict 30-minute per-page time budget. The constraint forced quality into the template instead of into individual page review.

The result, two weeks later

  • 18 pages live, all indexed
  • 11 of them already showing impressions in GSC
  • 3 of them are on page 1 of Google for their primary query
  • 0 of them have been edited since launch

That last number matters. The pages aren't perfect. They're good enough, which means I haven't been pulled back into copy-editing them. My time goes to building features instead of polishing landing pages I already shipped.

The thing I'd do differently next time

Schema. I should have written a schema generator subagent as its own concern from the start. Instead I bolted it on as an audit step. Half my debugging time was schema-related. If I were doing this again, I'd have a dedicated subagent that takes the page draft as input and outputs only the JSON-LD block, validated. Then merge.

The other thing: I'd queue up day 2 the night before. The hardest part of shipping 18 pages in one day is not the writing — it's the cognitive load of switching between 18 topics. If I'd staged the SERP groupings the night before with notes, day-of would have been 30% faster.

Why I bothered

I'm a solo founder who didn't have $20K to hire an SEO agency. I do have a brain injury and ADHD and a mission I care about. When I figured out that 18 pages in 9 hours was possible, it was the first time SEO felt like something I could actually do as a non-marketer.

That's the thing about AI tooling that gets lost in the hype. It's not about replacing humans. It's about making things possible for the people who couldn't do them before — the indie founder, the artist, the person with two jobs. The tools collapse a $20K agency project into one focused Saturday. That's the part that matters.

I'm at zsky.ai if you want to see what we built with all that time we got back. Drop a comment if you want me to share the audit script or the dispatch template — happy to write a follow-up.
