This morning I scaffolded @royal_lore, a second YouTube channel that runs on the exact same scripts/yt-publish/ pipeline that already powers my first channel. The content bank I wrote covers 32 historical queens — Egyptian, European, Chinese, Indian, Russian, Aztec, African — across five narrative angles each: "who she was," "why she's underrated," "what historians argue about," "what she built," "how she died." That's 160 videos at one per day — roughly five months of content before I need to refill the bank.
Zero new CI code.
## What I actually built
The scaffolding for the new channel was four files:
- `.github/workflows/yt-publish-royal.yml` — a dedicated workflow pointing at `content/yt-queue-royal/` as its queue directory and `ROYAL_YT_*` secrets for OAuth
- `content/yt-queue-royal/` — an empty queue directory with a `.gitkeep`
- `content/yt-queue-royal/uploaded/` — where the workflow archives processed items after upload
- `docs/royal-queens-bank-en.md` — 179 lines of structured content prompts
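A minimal sketch of what that dedicated workflow might look like. The cron time, step names, script entry point, and exact secret names are assumptions; only the queue directory and the `ROYAL_YT_*` secret prefix come from the file list above:

```yaml
name: yt-publish-royal

on:
  schedule:
    - cron: "0 6 * * *"   # one upload per day; the exact hour is a guess
  workflow_dispatch:

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish next queued video
        env:
          # channel-specific OAuth credentials; names are illustrative
          ROYAL_YT_CLIENT_ID: ${{ secrets.ROYAL_YT_CLIENT_ID }}
          ROYAL_YT_CLIENT_SECRET: ${{ secrets.ROYAL_YT_CLIENT_SECRET }}
          ROYAL_YT_REFRESH_TOKEN: ${{ secrets.ROYAL_YT_REFRESH_TOKEN }}
        run: python scripts/yt-publish/publish.py --queue content/yt-queue-royal/
```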
The entire upload path — ffmpeg thumbnail generation, Pexels background selection, the OAuth flow, the queue-dequeue state machine, the retry logic, the [skip yt-publish] guard on commits — is identical to the first channel and untouched. The second channel is a namespace, not a new system.
Four hours of work to add a pipeline lane that could run for five months without me touching it again. That cost ratio is the thing I want to test.
## The pipeline-as-product thesis
The real asset in a content-heavy side project isn't the content itself. It's the pipeline that produces and distributes content consistently, without my hands on it each day.
For the three SEO directory sites (Top AI Tools, Find Games Like, Open Alternative To), I have a Turso libSQL database holding model and game records, ETL scripts that hit the HuggingFace API and Claude Haiku for content generation, Astro 5 SSG to compile static HTML, and GitHub Actions cron jobs to trigger the whole chain daily. Total hands-on time per day: effectively zero after setup. The sites refresh, the videos publish, the Bluesky queue fills — none of it waits for me to wake up.
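One load-bearing detail in that chain is that the daily ETL has to be idempotent: cron reruns and already-seen records are routine. A sketch of the upsert stage, using `sqlite3` as a stand-in for Turso's libSQL (which is SQLite-compatible); the table name and columns are invented for illustration:

```python
import sqlite3

def upsert_records(conn: sqlite3.Connection, records: list) -> int:
    """Insert-or-update so a rerun of the daily ETL never duplicates rows."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tools "
        "(id TEXT PRIMARY KEY, name TEXT, summary TEXT)"
    )
    conn.executemany(
        "INSERT INTO tools (id, name, summary) VALUES (:id, :name, :summary) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, summary = excluded.summary",
        records,
    )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM tools").fetchone()[0]
```

Re-running with the same `id` updates the row in place, so the nightly job can blindly replay its whole fetch.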
The marginal cost of the second YouTube channel was about four hours. Contrast that with starting a channel manually — scripting, recording, editing, uploading, writing descriptions, picking tags — which for a single polished video can be a half-day of work.
If the pipeline can produce 160 videos at essentially zero time cost after setup, the question isn't whether to run the experiment. The question is whether YouTube has any technical or policy problem with it.
## Content bank design
The structure I used for the royal queens bank isn't arbitrary. Each queen gets five angles because I want topical diversity within a coherent channel. My guess is that YouTube rewards a consistent upload schedule more than rigid format repetition: a channel that posts 30 identical "who was queen X" videos is likely less engaging than one that mixes biographical, historiographical, and analytical framings.
The geographic rotation (Egyptian one week, European the next, Chinese, Indian, etc.) serves a similar purpose. Viewers who find the channel through an interest in Egyptian history might stay for European royalty if the framing is consistent — but probably won't if the channel reads as arbitrary.
I don't know if this matters at zero subscribers. But designing the content bank with eventual coherence in mind costs nothing at the writing stage.
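Mechanically, the rotation described above is just a round-robin over regions, with each queen expanded into her five angles. A sketch of one way to generate the queue order (the queen names used below are placeholders, not the actual bank; the angles are the five framings from the top of the post):

```python
from itertools import zip_longest

ANGLES = [
    "who she was",
    "why she's underrated",
    "what historians argue about",
    "what she built",
    "how she died",
]

def schedule(queens_by_region: dict) -> list:
    """Round-robin regions, then expand each queen into five consecutive videos.

    One queen's five angles fill roughly a week of daily uploads, after which
    the rotation moves on to the next region.
    """
    # First queen of every region, then the second of every region, and so on.
    rounds = zip_longest(*queens_by_region.values())
    order = [queen for rnd in rounds for queen in rnd if queen is not None]
    return [(queen, angle) for queen in order for angle in ANGLES]
```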
## The falsifiable claim
By June 30, 2026 — eight weeks from now — I expect three things to be true:
- `@royal_lore` publishing at least one video per day since channel activation
- CI time overhead below two additional hours per week across both channels combined
- No YouTube API quota violations from running two channels under the same developer project
I won't claim traffic or subscriber targets. The channel doesn't publicly exist yet — it's waiting on OAuth credentials I haven't generated. Revenue and discovery are separate bets I haven't made.
## The counterargument I can't dismiss
Two channels means two algorithms to satisfy. YouTube's recommendation system rewards consistent uploads to a coherent topic cluster. A channel about AI developer tools and a channel about historical queens share no audience, no keywords, and no community. They're maximally incoherent from the algorithm's perspective.
This objection isn't wrong. But it's aimed at the wrong stage of the experiment.
At zero subscribers, the algorithm isn't rewarding anything. The question isn't "will two channels compound better than one" — they're completely separate properties, so they don't compound at all in either direction. The question is whether the pipeline sustains two independent upload schedules without my time becoming the bottleneck.
The stronger version of the objection: by spreading across two channels, I reduce quality and topical depth in each. A channel that posts 30 deeply researched historical narratives might outperform one that posts 160 generated ones. Content quality and volume exist in tension, and automation optimizes for volume.
I accept that trade-off. The bet I'm making is narrower than "royal_lore becomes a successful channel." I'm betting that 160 videos at near-zero marginal cost is worth the experiment even if the channel never finds an audience — because the infrastructure learning transfers to every future channel I might want to launch.
## What would change my mind
Three specific things would make me shut down the second channel and consolidate onto one:
YouTube API quota exhaustion. The Data API v3 gives 10,000 units per day. A single video upload costs 1,600 units. Two daily uploads = 3,200 units — comfortably within limits today. If I add a third channel or if Google changes the quota allocation, that headroom disappears fast. I'll monitor the quota dashboard from day one.
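Using only the numbers above (and ignoring the smaller costs of list and update calls, which is an assumption), the arithmetic sets a hard ceiling on channels per developer project:

```python
DAILY_QUOTA = 10_000   # YouTube Data API v3 default per-project quota, units/day
UPLOAD_COST = 1_600    # cost of a single video upload (videos.insert)

def quota_headroom(channels: int, uploads_per_channel_per_day: int = 1) -> int:
    """Units left after the day's uploads; negative means requests start failing."""
    return DAILY_QUOTA - channels * uploads_per_channel_per_day * UPLOAD_COST
```

Two channels at one upload each leave 6,800 units of headroom, and the ceiling under these assumptions is six uploads per project per day: a third daily channel fits, but the margin shrinks fast.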
CI overhead exceeds two hours per week. If debugging failed uploads, rotating expired OAuth tokens, and refilling queues across two channels starts taking real time, the economics collapse. The whole architecture is premised on my time not being the variable cost. The moment it is, I've built the wrong thing.
YouTube's automated review terminates a channel. I don't know yet how YouTube's systems respond to AI-generated historical content uploaded daily at volume. One terminated channel is a failed experiment. Two terminated channels in quick succession is a signal that this format isn't viable at scale, and I should rethink the distribution strategy entirely.
Month-one metrics will tell me more than any projection. I'll post numbers when I have them.
Part of an ongoing 6-month experiment running three AI-curated directory sites. The technical claims here are real; this article was AI-assisted.