How I use Gemini to score news virality — and why it actually works
When I built AsiafeedTech, I had a problem: 200+ Asian tech articles per day, but only 5 slots in the newsletter and 3 YouTube Shorts to produce.
Something had to decide which stories mattered. I built a viral scoring system powered by Gemini — and it turned out to be one of the most interesting engineering challenges of the project.
The naive approach (and why it fails)
My first instinct was keyword matching. If an article mentions "OpenAI" or "BYD" or "IPO" — high score. Simple, fast, wrong.
The problem: keyword matching finds popular topics, not viral stories. Articles titled "BYD reports quarterly results" and "BYD just quietly shipped a car that charges in 5 minutes" both match "BYD" — but only one of them spreads.
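To make that concrete, here's a minimal sketch of what that first version looked like. It's reconstructed for illustration, not the actual AsiafeedTech code; the keyword list and function name are hypothetical.

```python
# Reconstructed sketch of the naive keyword scorer (hypothetical names).
HOT_KEYWORDS = {"openai", "byd", "ipo", "nvidia", "tsmc"}

def keyword_score(title: str) -> int:
    """Score an article by counting 'hot' keywords in its title."""
    words = title.lower().split()
    return sum(10 for kw in HOT_KEYWORDS if kw in words)

# Both headlines score identically, even though only one is viral:
print(keyword_score("BYD reports quarterly results"))                              # 10
print(keyword_score("BYD just quietly shipped a car that charges in 5 minutes"))   # 10
```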
What makes something actually go viral?
I broke it down into signals that a human editor would intuitively feel:
- Novelty — is this genuinely new information, or a rehash?
- Surprise factor — does it contradict expectations?
- Stakes — does it affect a lot of people or a lot of money?
- Specificity — concrete numbers beat vague claims every time
- Western relevance — will a founder in SF actually care?
The last one is critical for AsiafeedTech. A story about a Chinese domestic policy change might be huge locally but irrelevant to my audience. Gemini needs to apply a cultural filter, not just a relevance filter.
The prompt
I pass the translated article title + summary to Gemini with a structured scoring prompt:
```
You are an editor for a Western tech audience (founders, investors, developers).
Score this Asian tech news article on a scale of 1-100 for viral potential.

Consider:
- Novelty and surprise factor
- Relevance to Western tech/startup ecosystem
- Specificity (concrete numbers, named companies, tangible outcomes)
- Stakes (market size, geopolitical impact, technological leap)
- Emotional resonance (inspiring, alarming, surprising)

Return ONLY a JSON object:
{"score": 85, "reason": "one sentence"}
```
The reason field is gold — it's what I display in the admin panel, and it acts as a forcing function: Gemini has to justify the score rather than guess.
Temperature matters more than you think
I run scoring at temperature: 0.2. Low temperature = consistent, repeatable scores across similar articles. Higher temperature introduced too much variance — the same article would score 45 one run and 72 the next.
For creative tasks like script generation I use temperature: 0.9. For scoring: determinism wins.
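In practice that just means a per-task generation config instead of one global setting. A tiny sketch (the dict and task names are made up; only the temperatures mirror what I described above):

```python
# Hypothetical per-task settings; `model` and `prompt` as in the previous sketch.
GENERATION_CONFIGS = {
    "viral_scoring": {"temperature": 0.2},  # deterministic, comparable scores
    "short_scripts": {"temperature": 0.9},  # creative variation for video scripts
}

response = model.generate_content(
    prompt,
    generation_config=GENERATION_CONFIGS["viral_scoring"],
)
```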
Does it actually work?
Honestly — better than I expected. The top-scored articles are consistently the ones I'd have picked manually. Stories about Chinese EV range breakthroughs, surprise AI model releases, or unexpected regulatory moves consistently outscore routine earnings reports.
The system isn't perfect. It occasionally over-scores geopolitical news that's more alarming than actionable, and under-scores slow-burn trends. But for a fully automated pipeline with zero human editors — it holds up.
What's next
I'm considering adding a feedback loop: if a YouTube Short generated from a high-scored article gets above-average views, that reinforces the scoring pattern. Essentially fine-tuning the prompt based on actual engagement data.
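If I build it, the first version would probably be something this simple — sketched here purely as a hypothetical; none of these names exist in the pipeline yet:

```python
# Hypothetical feedback-loop sketch: find high-scored articles whose Shorts
# beat the channel's average views, to reuse as few-shot examples in the prompt.
from statistics import mean

def find_reinforcing_examples(published: list[dict]) -> list[dict]:
    """Each item is expected to look like {"title": ..., "score": 87, "views": 12400}."""
    avg_views = mean(item["views"] for item in published)
    return [
        item for item in published
        if item["score"] >= 70 and item["views"] > avg_views
    ]

# The winners could then be appended to the scoring prompt as concrete
# examples of what "viral" actually looked like in practice.
```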
Building in public at 👉 asiafeedtech.com
What scoring signals would you add? Drop them in the comments.