DEV Community

AI Businessman

I Built a Tool That Predicts Which Blog Topics Will Go Viral (Before Writing Them)

The Problem

You spend 4 hours writing an article. You edit, polish, add code snippets, find the perfect cover image. You hit publish.

It gets 12 views.

You have no idea why it flopped. You just wasted a Sunday afternoon.

I've been there. As someone building an autonomous content system (846 pieces generated so far), I needed a way to validate topics before writing them.

So I built AI Topic Scorer - a CLI tool that uses 5 AI personas to predict engagement before you write a word.


How It Works

Instead of guessing, the tool asks 5 specialized AI personas to rate your topic:

┌─────────────────────────────────────┐
│  Your Topic + Context               │
└──────────────┬──────────────────────┘
               │
               ▼
    ┌──────────────────────────────┐
    │  5 AI Personas Score 1-10:   │
    │  • Interest (will I click?)  │
    │  • Share (will I share?)     │
    │  • Value (is it useful?)     │
    │  • Uniqueness (is it new?)   │
    └──────────────┬───────────────┘
                   │
                   ▼
         ┌─────────────────────┐
         │  Predicted Score    │
         │  + Reactions        │
         └─────────────────────┘

The personas represent real audience segments:

  • Alex - Senior Developer (reads Dev.to, values code-heavy content)
  • Sam - Startup Founder (non-technical, wants actionable insights)
  • Jordan - CTO (makes tech decisions, shares with team)
  • Morgan - Indie Hacker (builds for revenue, active on Reddit/Twitter)
  • Riley - Data Scientist (ML engineer, values benchmarks)

Show Me The Code

Here's a real example:

$ python scorer.py "Building a RAG system with local LLMs" \
  "For developers wanting to avoid API costs"

Output:

======================================================================
📊 TOPIC: Building a RAG system with local LLMs
======================================================================

🎯 PREDICTED SCORE: 7.8/10

📈 Breakdown:
   Interest:     8.2/10  (weight: 30%)
   Shareability: 7.4/10  (weight: 25%)
   Value:        8.4/10  (weight: 30%)
   Uniqueness:   6.8/10  (weight: 15%)

💬 Persona Reactions (5 personas):

   Alex - Senior Developer:
     Scores: I:9 S:8 V:9 U:7
     → This is exactly what I've been researching - practical guide 
       would save me weeks.

   Sam - Startup Founder:
     Scores: I:7 S:6 V:8 U:6
     → Interesting cost-saving angle, but I'd need to understand 
       the technical tradeoffs.

   Morgan - Indie Hacker:
     Scores: I:9 S:8 V:9 U:8
     → Perfect for my side project - paying $200/mo for embeddings 
       is killing margins.

✅ HIGH ENGAGEMENT - Write this!

7.8/10 = Strong candidate. That's a topic worth writing.


The Architecture

The tool uses a multi-persona prediction engine I extracted from my larger autonomous content system. Here's the core logic:

PERSONAS = [
    {
        "name": "Alex - Senior Developer",
        "profile": "Senior full-stack developer at a mid-size company. "
                   "Reads Dev.to daily. Values practical, code-heavy content."
    },
    # ... 4 more personas
]

def predict_topic_engagement(topic, context=""):
    results = []

    for persona in PERSONAS:
        prompt = f"""You are {persona['name']}.
Profile: {persona['profile']}

Rate this topic (1-10):
Topic: {topic}
Context: {context}

Dimensions:
- INTEREST: How likely are you to click and read?
- SHARE: How likely are you to share with peers?
- VALUE: How useful is this for your work?
- UNIQUENESS: How different is it from the usual content?
"""

        # Each result is a dict of dimension -> score, e.g.
        # {"interest": 8, "share": 7, "value": 9, "uniqueness": 6}
        results.append(ollama_predict(prompt))

    # Average each dimension across personas, then aggregate with weights
    def avg(dim):
        return sum(r[dim] for r in results) / len(results)

    predicted_score = (
        avg("interest") * 0.30 +     # Click likelihood matters most
        avg("value") * 0.30 +        # Usefulness weighted equally
        avg("share") * 0.25 +        # Viral potential
        avg("uniqueness") * 0.15     # Novelty bonus
    )

    return predicted_score, results
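The repo's actual `ollama_predict` isn't shown above. Since the tool is stdlib-only, here's a minimal sketch of how it could work against Ollama's local REST API; the function names, the naive regex parse, and the neutral fallback score are my assumptions, not the repo's implementation:

```python
import json
import re
import urllib.request

def parse_scores(text):
    """Naive parse: grab the first number after each dimension keyword."""
    scores = {}
    for dim in ("interest", "share", "value", "uniqueness"):
        m = re.search(rf"{dim}\D*?(\d+)", text, re.IGNORECASE)
        scores[dim] = int(m.group(1)) if m else 5  # neutral fallback
    return scores

def ollama_predict(prompt, model="llama3.2:3b"):
    """POST to Ollama's local endpoint (default port 11434), no extra deps."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        text = json.loads(resp.read())["response"]
    return parse_scores(text)
```

Parsing free-form LLM output with a regex is fragile; asking the model for strict JSON and falling back to the regex is a reasonable hardening step.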

Why these weights?

  • Interest + Value (60%) - If people won't click, or click and find nothing useful, nothing else matters
  • Share (25%) - Viral potential multiplies reach
  • Uniqueness (15%) - Novel angles get a bonus, but rehashing basics can still work
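As a quick sanity check (my arithmetic, not code from the repo), applying these weights to the example breakdown above reproduces the 7.8 headline score:

```python
# Apply the published weights to the example breakdown (8.2 / 7.4 / 8.4 / 6.8)
weights = {"interest": 0.30, "value": 0.30, "share": 0.25, "uniqueness": 0.15}
scores = {"interest": 8.2, "value": 8.4, "share": 7.4, "uniqueness": 6.8}

predicted = sum(scores[k] * weights[k] for k in weights)
print(f"{predicted:.2f}")  # 7.85, displayed as 7.8 in the CLI output
```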

Real-World Usage

Content Creators

Test 5 ideas in 2 minutes. Double down on winners.

python scorer.py "10 AI tools to automate your workflow"
python scorer.py "Building vs buying: when to code it yourself"
python scorer.py "I analyzed 100 failed startups - here's what I learned"

Pick the highest scorer. Ignore the rest.
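If you're scoring a batch programmatically instead of running the CLI three times, a tiny helper makes the "pick the winner" step explicit. This is a hypothetical wrapper, not part of the repo; `score_fn` stands in for whatever scoring call you wire up (e.g. `predict_topic_engagement`):

```python
def rank_topics(topics, score_fn):
    """Score each topic and return (score, topic) pairs, best first."""
    return sorted(((score_fn(t), t) for t in topics), reverse=True)

# Example with a dummy length-based scorer just to show the flow;
# swap in the real prediction function in practice
ideas = [
    "10 AI tools to automate your workflow",
    "Building vs buying: when to code it yourself",
    "I analyzed 100 failed startups - here's what I learned",
]
ranked = rank_topics(ideas, score_fn=len)
best_score, best_topic = ranked[0]
```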

Newsletter Writers

Validate your weekly topic:

python scorer.py "The hidden costs of microservices" "For CTOs"

If it scores below 6.0, try a different angle or save it for later.

Indie Hackers

Test "build in public" topics:

python scorer.py "How I got 1000 users in 30 days" "For solo founders"
python scorer.py "My SaaS makes $5k/month - here's the tech stack"

The persona reactions tell you why a topic scores high/low.


How I Use It

My workflow:

  1. Brainstorm 5 topics (2 minutes)
  2. Score all 5 (2 minutes)
  3. Read persona reactions - do they match my target audience?
  4. Pick the winner - write the highest scorer
  5. A/B test variations - try different angles on the same topic

Example from last week:

| Topic | Score | Action |
|---|---|---|
| "Replacing OpenAI with local LLMs" | 6.2 | Too generic |
| "Building a RAG system with local LLMs" | 7.8 | ✅ Write this |
| "I saved $200/mo replacing OpenAI embeddings" | 8.4 | 🔥 Even better |

I wrote #3. It got 3x more engagement than my usual posts.


The Tech Stack

Design decisions:

  • Zero dependencies - Python stdlib only, runs anywhere
  • Local-first - Uses Ollama (free, open-source, runs on your machine)
  • Fast - 5 personas in ~10 seconds
  • Transparent - See exactly what each persona thinks

Why Ollama?

  • Free and open source
  • No API costs or rate limits
  • Works offline
  • Multiple model options (I use llama3.2:3b)

Installation

# 1. Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# 2. Pull the model
ollama pull llama3.2:3b

# 3. Clone the repo
git clone https://github.com/HappyPilot/ai-topic-scorer.git
cd ai-topic-scorer

# 4. Test it
python src/scorer.py "Your topic here"

No pip packages. No API keys. Just works.


Understanding Scores

| Score | Interpretation | Action |
|---|---|---|
| 8.0 - 10.0 | 🔥 Viral potential | Write immediately |
| 7.0 - 7.9 | ✅ High engagement | Strong candidate |
| 6.0 - 6.9 | ⚠️ Moderate | Refine or add angle |
| 5.0 - 5.9 | 😐 Risky | Consider alternatives |
| < 5.0 | ❌ Low engagement | Rethink topic |
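The table translates directly into a threshold helper if you want to automate the decision (my sketch; the function name is hypothetical):

```python
def interpret(score):
    """Map a predicted score to the action buckets from the table above."""
    if score >= 8.0:
        return "Write immediately"
    if score >= 7.0:
        return "Strong candidate"
    if score >= 6.0:
        return "Refine or add angle"
    if score >= 5.0:
        return "Consider alternatives"
    return "Rethink topic"
```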

Pro tip: Don't just look at the aggregate score. Read the persona reactions. Sometimes one persona's strong enthusiasm signals a niche opportunity.


Limitations

It's not a crystal ball:

  • Scores are predictions, not guarantees
  • Execution matters (title, intro, code quality)
  • Distribution matters (where you publish, when)
  • Timing matters (is this topic trending?)

Best for:

  • Early-stage topic validation
  • Comparing multiple ideas
  • Understanding audience perspectives

Not good for:

  • Final headline optimization (use A/B tests)
  • SEO keyword validation (use keyword tools)
  • Trend detection (use social listening)

What's Next

I'm using this tool to validate topics for my autonomous content system. Future plans:

  • [ ] JSON output mode for automation
  • [ ] Save predictions to SQLite for learning
  • [ ] Historical accuracy tracking
  • [ ] Web UI for non-technical users
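The JSON output mode is the easiest of these to sketch. Something like the following (entirely my speculation about a planned feature, not shipped code) would make the tool pipeable into an automation step:

```python
import json

def to_json(topic, predicted_score, results):
    """Emit a prediction as machine-readable JSON for automation."""
    return json.dumps({
        "topic": topic,
        "predicted_score": round(predicted_score, 1),
        "personas": results,
    }, indent=2)
```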

Try It Yourself

The repo is live: github.com/HappyPilot/ai-topic-scorer

MIT licensed. No strings attached.

If you find a topic worth $200/mo of API costs, consider buying me a coffee.

Questions? Drop them in the comments below.


This tool was extracted from OpenClaw - my autonomous revenue swarm that's generated 846+ content pieces. Follow along as I build in public:

Subscribe to AI Businessman Weekly for more open-source AI tools and business insights.


Need help implementing AI workflows?

RankedToolkit offers hands-on implementation sprints. We audit your workflow, design the automation, and ship it — in under a week.

  • Agent Workflow Audit — $2,000-$4,000
  • GPU Inference Stack Setup — $1,500-$3,000
  • AI Evaluation Harness — $900-$1,800
  • LLM Workflow Setup — $1,200-$2,500

View all services | Book a free discovery call
