The AI Entrepreneur

How to Monitor Brand Mentions in Google News (Free API + Code)

Most brand monitoring tools charge $200-500/month. Here's how to build your own for ~$3/1,000 articles using Google News RSS feeds and a browser-based fallback.

The Problem

You need to know when your brand (or competitor) is mentioned in the news. Google Alerts is unreliable — it misses articles, has random delays, and you can't filter by time range or country.

Commercial tools like Mention, Brandwatch, and Meltwater solve this, but they start at $200/month and lock you into annual contracts.

The Solution: A Triple-Fallback Google News Scraper

I built a scraper that uses three approaches to ensure you always get results:

  1. RSS Feed (fastest) — Google News exposes RSS feeds at news.google.com/rss/search?q=YOUR_KEYWORD
  2. Google News Website (browser) — If RSS fails, scrape the news.google.com interface directly
  3. Google Search News Tab (deepest) — Last resort: use Google's main search with tbm=nws

Why Three Approaches?

Google aggressively blocks automated access. RSS feeds work about 90% of the time, but occasionally return empty results. The browser-based approaches handle the edge cases — CAPTCHAs, consent screens, and regional blocks.
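The fallback logic itself is simple to express. Here's a minimal Python sketch — the three fetcher names are hypothetical stand-ins for the real RSS and browser implementations:

```python
def fetch_with_fallbacks(keyword, fetchers):
    """Try each fetcher in order; return the first non-empty result.

    `fetchers` is an ordered list of callables, e.g.
    [fetch_via_rss, fetch_via_news_site, fetch_via_search_tab]
    (hypothetical names for the three approaches above).
    """
    for fetch in fetchers:
        try:
            articles = fetch(keyword)
        except Exception:
            continue  # blocked, CAPTCHA, network error: try the next approach
        if articles:
            return articles
    return []  # every approach failed or came back empty
```

The key design choice: an exception and an empty result are treated the same way, so a silent failure in one approach never hides results the next approach could have found.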

How the RSS Approach Works

Google News RSS feeds follow a simple URL pattern:

https://news.google.com/rss/search?q=KEYWORD&hl=LANGUAGE-COUNTRY&gl=COUNTRY&ceid=COUNTRY:LANGUAGE

For example, searching for "Tesla" in English/US:

https://news.google.com/rss/search?q=Tesla&hl=en-US&gl=US&ceid=US:en
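That pattern is easy to generate programmatically. A small helper (hypothetical, standard library only) that reproduces the URL above:

```python
from urllib.parse import quote_plus

def build_rss_url(keyword, lang="en-US", country="US"):
    """Build a Google News RSS search URL for a keyword."""
    return (
        "https://news.google.com/rss/search"
        f"?q={quote_plus(keyword)}"
        f"&hl={lang}&gl={country}"
        f"&ceid={country}:{lang.split('-')[0]}"
    )
```

`quote_plus` handles multi-word keywords ("brand monitoring" becomes `brand+monitoring`), which is easy to get wrong if you concatenate strings by hand.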

The response is standard XML with <item> elements:

<item>
  <title>Tesla Announces New Gigafactory</title>
  <link>https://www.reuters.com/...</link>
  <pubDate>Mon, 10 Mar 2026 14:30:00 GMT</pubDate>
  <description>Tesla Inc said on Monday...</description>
  <source>Reuters</source>
</item>

Parse this with regex (no XML library needed):

function parseRssXml(xmlText, keyword) {
    const articles = [];
    const itemRegex = /<item>([\s\S]*?)<\/item>/g;
    let match;

    while ((match = itemRegex.exec(xmlText)) !== null) {
        const itemXml = match[1];
        const title = extractTag(itemXml, 'title');
        const link = extractTag(itemXml, 'link');
        const pubDate = extractTag(itemXml, 'pubDate');
        const source = extractTag(itemXml, 'source');

        if (title && link) {
            articles.push({
                keyword,
                title,
                source: source || 'Unknown',
                publishedAt: pubDate ? new Date(pubDate).toISOString() : null,
                articleUrl: link,
            });
        }
    }
    return articles;
}

function extractTag(xml, tag) {
    // <source> carries a url attribute, so allow attributes after the tag name
    const match = xml.match(new RegExp(`<${tag}[^>]*>([\\s\\S]*?)</${tag}>`));
    if (!match) return null;
    // Strip CDATA wrappers and decode common entities (&amp; last, to
    // avoid double-decoding sequences like &amp;lt;)
    return match[1]
        .replace(/^<!\[CDATA\[([\s\S]*?)\]\]>$/, '$1')
        .replace(/&lt;/g, '<')
        .replace(/&gt;/g, '>')
        .replace(/&quot;/g, '"')
        .replace(/&#39;/g, "'")
        .replace(/&amp;/g, '&')
        .trim();
}

Quick Start: Use the Hosted Version

If you don't want to maintain infrastructure, you can use the hosted version on Apify:

Node.js

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });
const run = await client.actor('george.the.developer/google-news-monitor').call({
    keywords: ['your-brand', 'competitor-brand'],
    timeRange: 'past_day',
    maxArticlesPerKeyword: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} mentions today`);

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_TOKEN")
run = client.actor("george.the.developer/google-news-monitor").call(run_input={
    "keywords": ["your-brand", "competitor-brand"],
    "timeRange": "past_day",
    "maxArticlesPerKeyword": 50,
})

for article in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"[{article['source']}] {article['title']}")

Cost: ~$0.003 per article. Monitoring 5 keywords daily, at roughly 10 new articles per keyword, works out to ~$4.50/month.

Building a Daily Dashboard

Combine this with a scheduler (the `schedule` library here, or a plain cron job) and you have a low-cost brand monitoring system:

import time

import schedule
from apify_client import ApifyClient

def check_mentions():
    client = ApifyClient("YOUR_TOKEN")
    run = client.actor("george.the.developer/google-news-monitor").call(run_input={
        "keywords": ["YourBrand", "CompetitorA", "CompetitorB"],
        "timeRange": "past_day",
    })

    articles = list(client.dataset(run["defaultDatasetId"]).iterate_items())

    # Send summary email or Slack notification
    for keyword in ["YourBrand", "CompetitorA", "CompetitorB"]:
        count = len([a for a in articles if a["keyword"] == keyword])
        print(f"{keyword}: {count} mentions today")

schedule.every().day.at("09:00").do(check_mentions)

# schedule only registers the job; a loop has to keep it running
while True:
    schedule.run_pending()
    time.sleep(60)
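The "Slack notification" step in the comment can be filled in with Slack's incoming-webhook API. A minimal sketch using only the standard library — the webhook URL is a placeholder you'd generate in your Slack workspace:

```python
import json
from urllib.request import Request, urlopen

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def build_slack_payload(summary_lines):
    """Format the per-keyword counts as a single Slack message body."""
    return json.dumps({"text": "\n".join(summary_lines)})

def notify_slack(summary_lines):
    """POST the summary to a Slack incoming webhook."""
    req = Request(
        SLACK_WEBHOOK_URL,
        data=build_slack_payload(summary_lines).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return resp.status == 200
```

Inside `check_mentions`, you'd collect the `f"{keyword}: {count} mentions today"` strings into a list and pass it to `notify_slack` instead of printing.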

Lessons Learned

  1. RSS is king — It's faster, more reliable, and returns cleaner data than browser scraping. Always try it first.
  2. Rotate user agents — Google fingerprints requests. A pool of 5+ user agents reduces blocks by 90%.
  3. Add random delays — 2-5 seconds between requests. Predictable timing = bot detection.
  4. CAPTCHA is inevitable — Don't fight it. Fall back to the next approach.
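Points 2 and 3 take only a few lines in Python. A sketch — the user-agent strings are illustrative examples, not a maintained pool:

```python
import random
import time

# A small pool of common desktop user agents (illustrative values;
# refresh these periodically from real browser releases).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) Gecko/20100101 Firefox/125.0",
]

def polite_headers():
    """Pick a random user agent for each request."""
    return {"User-Agent": random.choice(USER_AGENTS)}

def polite_delay(min_s=2, max_s=5):
    """Sleep a random interval between requests to avoid predictable timing."""
    time.sleep(random.uniform(min_s, max_s))
```

Call `polite_delay()` between every fetch and pass `polite_headers()` with each request; randomizing both is what breaks the regular fingerprint that gets scrapers blocked.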

Source Code

The full implementation is open source: github.com/the-ai-entrepreneur-ai-hub/google-news-scraper

Or run it directly on Apify: apify.com/george.the.developer/google-news-monitor


What brand monitoring tools are you using? I'd love to hear about your setup in the comments.
