I Built an Automated YouTube News Network That Generates 5 Daily Video Recaps with Zero Human Input
What if you could launch a news network that runs itself? No editors, no cameras, no studio. Just code, RSS feeds, and AI.
That's exactly what I built this weekend: The Briefly Pulse — a fully automated system that generates professional video news recaps for 5 different topics every day and uploads them straight to YouTube.
What It Does
Every day at 8 PM, a cron job kicks off and:
- Scrapes 50+ RSS feeds and news sources across 5 categories (World, Tech, Crypto, Sports, Pokémon)
- Has AI curate the top 8 stories for each category — no duplicates, ranked by impact
- Generates branded video slides with story images, headlines, and summaries
- Creates voiceover narration using Edge TTS with a natural female voice
- Assembles everything into a polished video with Ken Burns effects and smooth transitions
- Uploads to YouTube with SEO-optimized titles, descriptions, and tags
- Posts to Telegram channels for each niche audience
Total runtime: ~12 minutes end to end for all 5 videos (generation itself runs in parallel; the rest is uploads and distribution). Total cost: $0 (all free/open-source tools).
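The scheduling itself is just cron. A sketch of what the crontab could look like — the script names, paths, and log locations here are placeholders, not the actual project layout:

```shell
# Illustrative crontab entries (paths and script names are assumptions)

# Daily recap generation at 8 PM
0 20 * * * /usr/bin/python3 /opt/briefly/generate_recaps.py >> /var/log/briefly/recap.log 2>&1

# News cache refresh every 5 minutes
*/5 * * * * /usr/bin/python3 /opt/briefly/refresh_news.py >> /var/log/briefly/news.log 2>&1
```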
The Tech Stack
Here's what powers the entire pipeline:
| Component | Tool | Cost |
|---|---|---|
| News Aggregation | RSS + SearXNG (self-hosted) | $0 |
| AI Curation | GLM / Ollama (self-hosted) | $0 |
| Slide Generation | Python + Pillow | $0 |
| Voice Narration | Edge TTS (Microsoft) | $0 |
| Video Assembly | FFmpeg | $0 |
| YouTube Upload | Google YouTube Data API v3 | $0 |
| Telegram Posts | Telegram Bot API | $0 |
| Orchestration | OpenClaw + cron | $0 |
Total monthly cost: $0 (assuming you already have a server).
How the Video Generation Works
The core script is about 1,000 lines of Python. Here's the high-level flow:
```python
# Simplified flow
async def generate_daily_recap(channel):
    # 1. Get fresh stories (cached every 5 min)
    stories = get_todays_stories(channel, config, target_count=8)

    # 2. Fetch story images
    images = await fetch_story_images(stories)

    # 3. Generate branded slides
    slides = generate_slides(channel, stories, images)

    # 4. Generate voiceover for each slide
    voiceovers = await generate_all_voices(
        build_voiceover_scripts(channel, stories)
    )

    # 5. Assemble video with Ken Burns + transitions
    assemble_video(slides, voiceovers, output_dir)
```
Slide Design
Each video has:
- Intro slide — Channel branding + list of today's top 8 stories
- 8 story slides — Each with a relevant image, headline, summary, and category tag
- Outro slide — Call to action + social links
The slides use consistent branding per channel with custom color schemes:
- 🌍 World Pulse — Royal blue
- 💻 Tech Pulse — Cyan
- ₿ Crypto Pulse — Orange/gold
- ⚽ Sports Pulse — Forest green
- 🎮 Pokémon Pulse — Red
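A slide renderer along these lines can be built with Pillow in a few dozen lines. This is a minimal sketch, not the project's actual code: the color values, layout coordinates, and `make_story_slide` helper are assumptions (the real version also composites story images and uses proper fonts):

```python
from PIL import Image, ImageDraw

# Per-channel branding (RGB values are illustrative, matching the names above)
CHANNEL_COLORS = {
    "world":   (65, 105, 225),   # royal blue
    "tech":    (0, 188, 212),    # cyan
    "crypto":  (247, 147, 26),   # orange/gold
    "sports":  (34, 139, 34),    # forest green
    "pokemon": (220, 20, 60),    # red
}

def make_story_slide(channel, headline, summary, size=(1920, 1080)):
    """Render a minimal branded story slide as a Pillow image."""
    slide = Image.new("RGB", size, (16, 16, 24))        # dark background
    draw = ImageDraw.Draw(slide)
    accent = CHANNEL_COLORS[channel]
    draw.rectangle([0, 0, size[0], 12], fill=accent)    # channel accent bar
    draw.text((80, 120), headline[:80], fill=(255, 255, 255))
    draw.text((80, 200), summary[:200], fill=(200, 200, 200))
    return slide

slide = make_story_slide("tech", "Example headline", "Example summary text")
slide.save("slide_01.png")
```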
Ken Burns Effect
Static slides are boring. Each slide gets a subtle zoom/pan animation:
```python
# Alternating zoom patterns for visual variety
if slide_index % 3 == 0:
    # Slow zoom in from center
    scale = f"{start_scale}+({end_scale - start_scale})*t/{duration}"
elif slide_index % 3 == 1:
    # Pan left to right with slight zoom
    ...
```
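In FFmpeg terms, this kind of animation is typically built with the `zoompan` filter. Here is a sketch of a helper that assembles such a filter string per slide — the expressions and parameter values are illustrative assumptions modeled on the pattern above, not the project's exact filter graph:

```python
def ken_burns_filter(slide_index, duration=6.0, fps=30,
                     start_scale=1.0, end_scale=1.12, size="1920x1080"):
    """Build an FFmpeg zoompan filter string for one slide (illustrative)."""
    frames = int(duration * fps)
    # 'on' is zoompan's output frame counter
    zoom = f"{start_scale}+({end_scale}-{start_scale})*on/{frames}"
    if slide_index % 3 == 0:
        # Slow zoom in from center
        x, y = "iw/2-(iw/zoom/2)", "ih/2-(ih/zoom/2)"
    elif slide_index % 3 == 1:
        # Pan left to right with slight zoom
        x, y = f"(iw-iw/zoom)*on/{frames}", "ih/2-(ih/zoom/2)"
    else:
        # Pan top to bottom with slight zoom
        x, y = "iw/2-(iw/zoom/2)", f"(ih-ih/zoom)*on/{frames}"
    return (f"zoompan=z='{zoom}':x='{x}':y='{y}'"
            f":d={frames}:s={size}:fps={fps}")

filt = ken_burns_filter(0)
# Used roughly as: ffmpeg -loop 1 -i slide.png -vf "<filter>" -t 6 out.mp4
```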
Voice Narration
Edge TTS (Microsoft's free text-to-speech) generates surprisingly natural narration. The script creates conversational voiceover text:
```python
# Intro: conversational hook
scripts.append(f"Welcome to The Briefly {channel_name}. "
               f"Here are your top {len(stories)} stories for today.")

# Each story gets a natural summary
scripts.append(f"Story number {i}. {story['title']}. {story['summary']}")

# Outro
scripts.append("That's your recap for today. Follow for daily updates.")
Parallel Video Generation with Subagents
Generating 5 videos sequentially would take ~20 minutes. Instead, I spawn 4 parallel subagents (the orchestrator handles the 5th):
```
Main Agent → Spawn subagent: "Generate tech video"
           → Spawn subagent: "Generate crypto video"
           → Spawn subagent: "Generate sports video"
           → Spawn subagent: "Generate pokemon video"
           → Meanwhile: Upload world video to YouTube
```
Result: All 5 videos done in ~4 minutes instead of 20. Each subagent is independent — they share no state and just run the same Python script with different channel arguments.
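Because the workers share no state, the same fan-out can be sketched with plain `asyncio` subprocesses — the `generate_recap.py` script name and `--channel` flag here are hypothetical stand-ins for the real entry point:

```python
import asyncio
import sys

async def run_channel(cmd):
    """Run one video-generation command and return its exit code."""
    proc = await asyncio.create_subprocess_exec(*cmd)
    return await proc.wait()

async def run_all(commands):
    """Run all channel commands concurrently; workers share no state."""
    return await asyncio.gather(*(run_channel(c) for c in commands))

if __name__ == "__main__":
    # Hypothetical invocation: same script, different channel argument
    channels = ["tech", "crypto", "sports", "pokemon"]
    cmds = [[sys.executable, "generate_recap.py", "--channel", ch]
            for ch in channels]
    # exit_codes = asyncio.run(run_all(cmds))
```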
The News Refresh Pipeline
A cron job runs every 5 minutes and:
- Hits 50+ RSS feeds across all categories
- Searches SearXNG (a self-hosted metasearch engine) for trending stories
- Deduplicates by title similarity (60% word overlap = duplicate)
- Merges with existing cache (keeps last 50 stories per channel)
- Saves to JSON for the video generator to pick up
This means the video generator always has fresh stories within the last 5 minutes.
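One plausible reading of the "60% word overlap" rule, plus the cache merge, looks like this — the exact similarity formula and story schema are assumptions:

```python
def is_duplicate(title_a, title_b, threshold=0.6):
    """Treat two headlines as duplicates when word overlap >= threshold."""
    a = set(title_a.lower().split())
    b = set(title_b.lower().split())
    if not a or not b:
        return False
    overlap = len(a & b) / min(len(a), len(b))
    return overlap >= threshold

def merge_stories(cache, fresh, limit=50):
    """Append non-duplicate fresh stories, keeping the newest `limit`."""
    merged = list(cache)
    for story in fresh:
        if not any(is_duplicate(story["title"], s["title"]) for s in merged):
            merged.append(story)
    return merged[-limit:]
```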
Results So Far
After one day:
- 5 channels generating daily content
- 8 YouTube videos uploaded
- 5 Telegram channels distributing content
- Zero human intervention required for daily operation
- ~$0/month operating cost
The quality isn't perfect yet — I've identified issues with headline truncation on slides and some AI-generated images that don't match the story context. But the pipeline works end-to-end, and improvements are incremental from here.
What I Learned
1. RSS Is Still King
For news aggregation, RSS feeds are more reliable than scraping. BBC, Guardian, Ars Technica, CoinTelegraph — they all have excellent RSS feeds that update frequently and include good summaries.
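The post lists `feedparser` or `xml.etree` as options; with the standard library alone, a minimal RSS 2.0 parse is only a few lines. The sample feed below is made up for illustration:

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example News</title>
  <item>
    <title>Example headline</title>
    <link>https://example.com/story</link>
    <description>Short summary of the story.</description>
  </item>
</channel></rss>"""

def parse_rss(xml_text):
    """Extract title/link/summary dicts from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [{
        "title": item.findtext("title", ""),
        "link": item.findtext("link", ""),
        "summary": item.findtext("description", ""),
    } for item in root.iter("item")]

stories = parse_rss(SAMPLE_FEED)
```

In production you would fetch the feed over HTTP first; real-world feeds also vary in which elements they populate, which is where `feedparser`'s normalization earns its keep.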
2. Edge TTS Is Underrated
Microsoft's free text-to-speech is shockingly good. The "Aria" voice sounds natural and energetic — not robotic at all. And it's completely free with no API key required.
3. FFmpeg Can Do Anything
Ken Burns effects, cross-fade transitions, audio mixing — FFmpeg handles it all. The learning curve is steep, but once you understand the filter graph syntax, you can do broadcast-quality video processing.
4. Parallel Execution Matters
The difference between 4 minutes and 20 minutes for daily video generation is the difference between "this is viable" and "this is painful." Subagent delegation makes parallel work trivial.
5. Distribution > Creation
Building the generator was the easy part. Getting people to actually watch the videos? That's the real challenge. A YouTube channel with 0 subscribers needs aggressive distribution to get initial traction.
What's Next
- YouTube Shorts — Vertical 60-second versions for mobile discovery
- Better AI curation — Scoring stories by impact, recency, and diversity
- Custom thumbnails — Generated automatically for each video
- Analytics dashboard — Track views, engagement, and growth across all channels
- More channels — Science, business, gaming verticals
Try It Yourself
The core dependencies are all free and open-source:
- Python 3.10+
- Pillow (image generation)
- edge-tts (voice synthesis)
- FFmpeg (video assembly)
- feedparser or xml.etree (RSS parsing)
- google-api-python-client (YouTube upload)
The entire pipeline runs on a single VPS. No GPU required — everything is CPU-based.
This is part of the MFS Corp project — building autonomous AI-driven business operations. Follow for more posts about AI automation, infrastructure, and the journey to $1,400/month revenue.
What automated content pipelines have you built? I'd love to hear about your approach in the comments.