Olamide Olaniyan

Build a Real-Time Brand Mention Monitor in 30 Minutes (Python + Webhooks)

Last month, a TikTok creator with 500K followers roasted one of my products.

I found out 3 days later. By then, the video had 2M views and the narrative was set.

Never again.

I built a brand mention monitor that alerts me within minutes of any mention across TikTok, Instagram, Twitter, and Reddit. Today I'll show you exactly how.

What We're Building

A Python service that:

  1. Searches multiple platforms for your brand keywords
  2. Filters out noise (old mentions, irrelevant results)
  3. Sends instant alerts to Slack/Discord with context
  4. Runs continuously without manual intervention

Total setup time: ~30 minutes

The Architecture

┌─────────────────────────────────────────────────────────┐
│                    Cron Job (Every 5 min)               │
└─────────────────────────┬───────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────┐
│                  Brand Monitor Service                   │
│                                                          │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐   │
│  │  TikTok  │ │Instagram │ │ Twitter  │ │  Reddit  │   │
│  │ Search   │ │ Search   │ │ Search   │ │ Search   │   │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘   │
│       │            │            │            │          │
│       └────────────┴─────┬──────┴────────────┘          │
│                          │                               │
│                          ▼                               │
│              ┌───────────────────────┐                  │
│              │   Deduplication &     │                  │
│              │   Sentiment Analysis  │                  │
│              └───────────┬───────────┘                  │
│                          │                               │
└──────────────────────────┼──────────────────────────────┘
                           │
              ┌────────────┴────────────┐
              ▼                         ▼
       ┌─────────────┐          ┌─────────────┐
       │    Slack    │          │   Discord   │
       │   Webhook   │          │   Webhook   │
       └─────────────┘          └─────────────┘

Step 1: Project Setup

mkdir brand-monitor && cd brand-monitor
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

pip install requests python-dotenv redis

Create your .env file:

# API Keys
SOCIAVAULT_API_KEY=your_api_key_here

# Webhook URLs
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/xxx/xxx/xxx
DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/xxx/xxx

# Redis (optional, for deduplication)
REDIS_URL=redis://localhost:6379

# Monitoring Config
BRAND_KEYWORDS=sociavault,socia vault,sociavault api
COMPETITOR_KEYWORDS=apify,brightdata,scraperapi

Step 2: The Core Monitor Class

# monitor.py
import os
import hashlib
import requests
from datetime import datetime, timedelta, timezone
from typing import List
from dataclasses import dataclass
from dotenv import load_dotenv

load_dotenv()

@dataclass
class Mention:
    """Represents a brand mention across any platform."""
    platform: str
    content: str
    author: str
    author_followers: int
    url: str
    engagement: int  # combined interaction counts (varies by platform)
    timestamp: datetime
    sentiment: str  # positive, negative, neutral

    @property
    def priority(self) -> str:
        """Calculate mention priority based on reach and sentiment."""
        if self.sentiment == "negative" and self.author_followers > 10000:
            return "🔴 CRITICAL"
        elif self.author_followers > 100000:
            return "🟠 HIGH"
        elif self.author_followers > 10000:
            return "🟡 MEDIUM"
        else:
            return "🟢 LOW"

    @property
    def id(self) -> str:
        """Generate unique ID for deduplication."""
        content = f"{self.platform}:{self.author}:{self.content[:100]}"
        return hashlib.md5(content.encode()).hexdigest()


class BrandMonitor:
    def __init__(self):
        self.api_key = os.getenv("SOCIAVAULT_API_KEY")
        self.base_url = "https://api.sociavault.com/v1"
        self.headers = {"Authorization": f"Bearer {self.api_key}"}

        # Keywords to track
        self.brand_keywords = os.getenv("BRAND_KEYWORDS", "").split(",")
        self.competitor_keywords = os.getenv("COMPETITOR_KEYWORDS", "").split(",")

        # Seen mentions (in production, use Redis)
        self.seen_ids = set()

    def search_tiktok(self, keyword: str, hours_back: int = 1) -> List[Mention]:
        """Search TikTok for keyword mentions."""
        mentions = []

        try:
            response = requests.get(
                f"{self.base_url}/scrape/tiktok/search",
                params={"keyword": keyword, "limit": 50},
                headers=self.headers,
                timeout=30
            )
            response.raise_for_status()
            data = response.json()

            cutoff = datetime.now(timezone.utc) - timedelta(hours=hours_back)

            for video in data.get("data", []):
                # Parse timestamp
                created = datetime.fromisoformat(
                    video.get("createTime", "").replace("Z", "+00:00")
                )

                if created < cutoff:
                    continue

                mentions.append(Mention(
                    platform="tiktok",
                    content=video.get("desc", ""),
                    author=video.get("author", {}).get("uniqueId", "unknown"),
                    author_followers=video.get("author", {}).get("followerCount", 0),
                    url=f"https://tiktok.com/@{video.get('author', {}).get('uniqueId')}/video/{video.get('id')}",
                    engagement=video.get("playCount", 0) + video.get("commentCount", 0),
                    timestamp=created,
                    sentiment=self.analyze_sentiment(video.get("desc", ""))
                ))

        except Exception as e:
            print(f"TikTok search error: {e}")

        return mentions

    def search_instagram(self, keyword: str, hours_back: int = 1) -> List[Mention]:
        """Search Instagram hashtags for mentions."""
        mentions = []

        # Search via hashtag
        hashtag = keyword.replace(" ", "").lower()

        try:
            response = requests.get(
                f"{self.base_url}/scrape/instagram/hashtag",
                params={"hashtag": hashtag, "limit": 50},
                headers=self.headers,
                timeout=30
            )
            response.raise_for_status()
            data = response.json()

            cutoff = datetime.now(timezone.utc) - timedelta(hours=hours_back)

            for post in data.get("data", []):
                created = datetime.fromisoformat(
                    post.get("timestamp", "").replace("Z", "+00:00")
                )

                if created < cutoff:
                    continue

                mentions.append(Mention(
                    platform="instagram",
                    content=post.get("caption", ""),
                    author=post.get("ownerUsername", "unknown"),
                    author_followers=post.get("ownerFollowerCount", 0),
                    url=post.get("url", ""),
                    engagement=post.get("likesCount", 0) + post.get("commentsCount", 0),
                    timestamp=created,
                    sentiment=self.analyze_sentiment(post.get("caption", ""))
                ))

        except Exception as e:
            print(f"Instagram search error: {e}")

        return mentions

    def search_twitter(self, keyword: str, hours_back: int = 1) -> List[Mention]:
        """Search Twitter for keyword mentions."""
        mentions = []

        try:
            response = requests.get(
                f"{self.base_url}/scrape/twitter/search",
                params={"query": keyword, "limit": 50},
                headers=self.headers,
                timeout=30
            )
            response.raise_for_status()
            data = response.json()

            cutoff = datetime.now(timezone.utc) - timedelta(hours=hours_back)

            for tweet in data.get("data", []):
                created = datetime.fromisoformat(
                    tweet.get("createdAt", "").replace("Z", "+00:00")
                )

                if created < cutoff:
                    continue

                mentions.append(Mention(
                    platform="twitter",
                    content=tweet.get("text", ""),
                    author=tweet.get("author", {}).get("username", "unknown"),
                    author_followers=tweet.get("author", {}).get("followersCount", 0),
                    url=f"https://twitter.com/{tweet.get('author', {}).get('username')}/status/{tweet.get('id')}",
                    engagement=tweet.get("likeCount", 0) + tweet.get("retweetCount", 0),
                    timestamp=created,
                    sentiment=self.analyze_sentiment(tweet.get("text", ""))
                ))

        except Exception as e:
            print(f"Twitter search error: {e}")

        return mentions

    def search_reddit(self, keyword: str, hours_back: int = 1) -> List[Mention]:
        """Search Reddit for keyword mentions."""
        mentions = []

        try:
            response = requests.get(
                f"{self.base_url}/scrape/reddit/search",
                params={"query": keyword, "limit": 50, "sort": "new"},
                headers=self.headers,
                timeout=30
            )
            response.raise_for_status()
            data = response.json()

            cutoff = datetime.now(timezone.utc) - timedelta(hours=hours_back)

            for post in data.get("data", []):
                created = datetime.fromtimestamp(post.get("created_utc", 0), tz=timezone.utc)

                if created < cutoff:
                    continue

                content = post.get("title", "") + " " + post.get("selftext", "")

                mentions.append(Mention(
                    platform="reddit",
                    content=content[:500],
                    author=post.get("author", "unknown"),
                    author_followers=0,  # Reddit doesn't expose this easily
                    url=f"https://reddit.com{post.get('permalink', '')}",
                    engagement=post.get("score", 0) + post.get("num_comments", 0),
                    timestamp=created,
                    sentiment=self.analyze_sentiment(content)
                ))

        except Exception as e:
            print(f"Reddit search error: {e}")

        return mentions

    def analyze_sentiment(self, text: str) -> str:
        """Simple keyword-based sentiment analysis."""
        text_lower = text.lower()

        negative_words = [
            "scam", "terrible", "worst", "hate", "broken", "doesn't work",
            "waste", "avoid", "disappointed", "frustrating", "garbage",
            "ripoff", "awful", "horrible", "sucks", "useless", "overpriced"
        ]

        positive_words = [
            "love", "amazing", "best", "great", "awesome", "excellent",
            "fantastic", "recommend", "perfect", "impressed", "helpful",
            "easy", "works great", "saved", "thank", "wonderful"
        ]

        neg_count = sum(1 for word in negative_words if word in text_lower)
        pos_count = sum(1 for word in positive_words if word in text_lower)

        if neg_count > pos_count:
            return "negative"
        elif pos_count > neg_count:
            return "positive"
        else:
            return "neutral"

    def search_all_platforms(self, hours_back: int = 1) -> List[Mention]:
        """Search all platforms for brand mentions."""
        all_mentions = []

        for keyword in self.brand_keywords:
            keyword = keyword.strip()
            if not keyword:
                continue

            print(f"Searching for: {keyword}")

            all_mentions.extend(self.search_tiktok(keyword, hours_back))
            all_mentions.extend(self.search_instagram(keyword, hours_back))
            all_mentions.extend(self.search_twitter(keyword, hours_back))
            all_mentions.extend(self.search_reddit(keyword, hours_back))

        # Deduplicate
        unique_mentions = []
        for mention in all_mentions:
            if mention.id not in self.seen_ids:
                self.seen_ids.add(mention.id)
                unique_mentions.append(mention)

        # Sort by priority (critical first)
        priority_order = {"🔴 CRITICAL": 0, "🟠 HIGH": 1, "🟡 MEDIUM": 2, "🟢 LOW": 3}
        unique_mentions.sort(key=lambda m: priority_order.get(m.priority, 4))

        return unique_mentions
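
Before wiring up alerts, it's worth smoke-testing a single platform search. A minimal sketch (assumes your SOCIAVAULT_API_KEY is set in .env and the response shape follows the assumptions above):

# smoke_test.py
from monitor import BrandMonitor

monitor = BrandMonitor()

# Look back 24 hours so a quiet hour doesn't look like a broken integration
results = monitor.search_tiktok("yourbrand", hours_back=24)

for mention in results:
    print(mention.priority, mention.sentiment, mention.url)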

Step 3: Alert System

# alerts.py
import os
import requests
from typing import List
from monitor import Mention

class AlertManager:
    def __init__(self):
        self.slack_webhook = os.getenv("SLACK_WEBHOOK_URL")
        self.discord_webhook = os.getenv("DISCORD_WEBHOOK_URL")

    def send_slack_alert(self, mention: Mention):
        """Send mention alert to Slack."""
        if not self.slack_webhook:
            return

        # Color based on sentiment
        color_map = {
            "negative": "#FF0000",
            "positive": "#00FF00",
            "neutral": "#808080"
        }

        payload = {
            "attachments": [{
                "color": color_map.get(mention.sentiment, "#808080"),
                "blocks": [
                    {
                        "type": "header",
                        "text": {
                            "type": "plain_text",
                            "text": f"{mention.priority} Brand Mention on {mention.platform.title()}"
                        }
                    },
                    {
                        "type": "section",
                        "fields": [
                            {
                                "type": "mrkdwn",
                                "text": f"*Author:*\n@{mention.author}"
                            },
                            {
                                "type": "mrkdwn",
                                "text": f"*Followers:*\n{mention.author_followers:,}"
                            },
                            {
                                "type": "mrkdwn",
                                "text": f"*Engagement:*\n{mention.engagement:,}"
                            },
                            {
                                "type": "mrkdwn",
                                "text": f"*Sentiment:*\n{mention.sentiment.title()}"
                            }
                        ]
                    },
                    {
                        "type": "section",
                        "text": {
                            "type": "mrkdwn",
                            "text": f"*Content:*\n{mention.content[:500]}..."
                        }
                    },
                    {
                        "type": "actions",
                        "elements": [
                            {
                                "type": "button",
                                "text": {
                                    "type": "plain_text",
                                    "text": "View Post"
                                },
                                "url": mention.url,
                                "style": "primary"
                            }
                        ]
                    }
                ]
            }]
        }

        try:
            requests.post(self.slack_webhook, json=payload, timeout=10).raise_for_status()
        except Exception as e:
            print(f"Slack alert error: {e}")

    def send_discord_alert(self, mention: Mention):
        """Send mention alert to Discord."""
        if not self.discord_webhook:
            return

        # Color based on sentiment (Discord uses decimal)
        color_map = {
            "negative": 16711680,   # Red
            "positive": 65280,      # Green
            "neutral": 8421504      # Gray
        }

        payload = {
            "embeds": [{
                "title": f"{mention.priority} Brand Mention on {mention.platform.title()}",
                "color": color_map.get(mention.sentiment, 8421504),
                "fields": [
                    {"name": "Author", "value": f"@{mention.author}", "inline": True},
                    {"name": "Followers", "value": f"{mention.author_followers:,}", "inline": True},
                    {"name": "Engagement", "value": f"{mention.engagement:,}", "inline": True},
                    {"name": "Sentiment", "value": mention.sentiment.title(), "inline": True},
                    {"name": "Content", "value": mention.content[:500] + "...", "inline": False}
                ],
                "url": mention.url,
                "timestamp": mention.timestamp.isoformat()
            }]
        }

        try:
            requests.post(self.discord_webhook, json=payload, timeout=10).raise_for_status()
        except Exception as e:
            print(f"Discord alert error: {e}")

    def send_alerts(self, mentions: List[Mention]):
        """Send alerts for all mentions."""
        for mention in mentions:
            self.send_slack_alert(mention)
            self.send_discord_alert(mention)
            print(f"Alert sent: {mention.priority} - {mention.platform} - @{mention.author}")
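
Before pointing this at live data, you can verify both webhooks end-to-end with a fabricated mention. A quick sketch using dummy values:

# webhook_test.py
from datetime import datetime, timezone
from monitor import Mention
from alerts import AlertManager

fake = Mention(
    platform="tiktok",
    content="Test alert - ignore me",
    author="test_user",
    author_followers=12345,
    url="https://example.com",
    engagement=42,
    timestamp=datetime.now(timezone.utc),
    sentiment="neutral",
)

AlertManager().send_alerts([fake])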

Step 4: Production Deduplication with Redis

In production, you need persistent deduplication so restarting the service doesn't cause duplicate alerts:

# dedup.py
import os
import redis

class RedisDeduplicator:
    def __init__(self):
        redis_url = os.getenv("REDIS_URL", "redis://localhost:6379")
        self.redis = redis.from_url(redis_url)
        self.redis.ping()  # fail fast if Redis is unreachable
        self.key_prefix = "brand_monitor:seen:"
        self.ttl = 86400 * 7  # Keep IDs for 7 days

    def is_seen(self, mention_id: str) -> bool:
        """Check if we've already processed this mention."""
        return bool(self.redis.exists(f"{self.key_prefix}{mention_id}"))

    def mark_seen(self, mention_id: str):
        """Mark a mention as processed."""
        self.redis.setex(
            f"{self.key_prefix}{mention_id}",
            self.ttl,
            "1"
        )

    def filter_new(self, mentions: list) -> list:
        """Filter out already-seen mentions."""
        new_mentions = []

        for mention in mentions:
            if not self.is_seen(mention.id):
                new_mentions.append(mention)
                self.mark_seen(mention.id)

        return new_mentions
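
If you don't have Redis running locally, the quickest way to spin one up for testing is Docker:

docker run -d --name brand-monitor-redis -p 6379:6379 redis:7-alpine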

Update the monitor to use Redis:

# In monitor.py, update BrandMonitor.__init__
from dedup import RedisDeduplicator

def __init__(self):
    # ... existing code ...

    # Use Redis in production; fall back to in-memory if it's unreachable
    try:
        self.dedup = RedisDeduplicator()
        self.use_redis = True
    except Exception:
        self.seen_ids = set()
        self.use_redis = False

def search_all_platforms(self, hours_back: int = 1) -> List[Mention]:
    # ... existing search code ...

    # Deduplicate
    if self.use_redis:
        unique_mentions = self.dedup.filter_new(all_mentions)
    else:
        unique_mentions = []
        for mention in all_mentions:
            if mention.id not in self.seen_ids:
                self.seen_ids.add(mention.id)
                unique_mentions.append(mention)

    return unique_mentions

Step 5: Main Runner

#!/usr/bin/env python3
# main.py
"""
Brand Mention Monitor
Run every 5-15 minutes via cron or scheduler.
"""

from datetime import datetime
from monitor import BrandMonitor
from alerts import AlertManager

def run_monitor():
    print(f"\n{'='*50}")
    print(f"Brand Monitor Run: {datetime.now().isoformat()}")
    print(f"{'='*50}")

    monitor = BrandMonitor()
    alerts = AlertManager()

    # Search last hour of content
    mentions = monitor.search_all_platforms(hours_back=1)

    print(f"\nFound {len(mentions)} new mentions")

    if mentions:
        # Print summary
        print("\nMention Summary:")
        print("-" * 40)

        for mention in mentions:
            print(f"{mention.priority} | {mention.platform:10} | @{mention.author[:20]:20} | {mention.sentiment}")

        # Send alerts
        print("\nSending alerts...")
        alerts.send_alerts(mentions)

        print("Done!")
    else:
        print("No new mentions found.")

    return mentions

if __name__ == "__main__":
    run_monitor()

Step 6: Scheduling

Option A: Cron (Linux/Mac)

# Run every 5 minutes
*/5 * * * * cd /path/to/brand-monitor && /path/to/venv/bin/python main.py >> /var/log/brand-monitor.log 2>&1
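
If a run ever takes longer than five minutes (slow API responses, many keywords), overlapping cron invocations can double-process mentions. On Linux, flock is a simple guard:

# Skip this run if the previous one is still going
*/5 * * * * flock -n /tmp/brand-monitor.lock -c 'cd /path/to/brand-monitor && /path/to/venv/bin/python main.py' >> /var/log/brand-monitor.log 2>&1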

Option B: Windows Task Scheduler

Create run_monitor.bat:

@echo off
cd C:\path\to\brand-monitor
call venv\Scripts\activate
python main.py >> monitor.log 2>&1

Schedule with Task Scheduler to run every 5 minutes.
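
You can also create the task from the command line instead of the GUI (adjust the paths to match your setup):

schtasks /create /tn "BrandMonitor" /sc minute /mo 5 /tr "C:\path\to\brand-monitor\run_monitor.bat"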

Option C: Continuous Runner

# continuous.py
# Requires: pip install schedule
import time
import schedule
from main import run_monitor

def job():
    try:
        run_monitor()
    except Exception as e:
        print(f"Error in monitor run: {e}")

# Run every 5 minutes
schedule.every(5).minutes.do(job)

print("Brand Monitor started. Running every 5 minutes...")
print("Press Ctrl+C to stop.")

# Run immediately on start
job()

while True:
    schedule.run_pending()
    time.sleep(60)

Advanced: Priority Escalation

For critical mentions (negative + high follower count), you might want phone alerts:

# twilio_alerts.py
# Requires: pip install twilio
import os
from twilio.rest import Client

class TwilioAlerter:
    def __init__(self):
        self.client = Client(
            os.getenv("TWILIO_ACCOUNT_SID"),
            os.getenv("TWILIO_AUTH_TOKEN")
        )
        self.from_number = os.getenv("TWILIO_FROM_NUMBER")
        self.to_number = os.getenv("ALERT_PHONE_NUMBER")

    def send_sms(self, mention):
        """Send SMS for critical mentions."""
        if mention.priority != "🔴 CRITICAL":
            return

        message = (
            f"CRITICAL: Negative mention on {mention.platform} "
            f"by @{mention.author} ({mention.author_followers:,} followers). "
            f"Check immediately: {mention.url}"
        )

        self.client.messages.create(
            body=message,
            from_=self.from_number,
            to=self.to_number
        )
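
To wire this into the alert flow, add the SMS step to AlertManager.send_alerts. A sketch (assumes the Twilio env vars above are set; send_sms already no-ops on anything below CRITICAL):

# In alerts.py
from twilio_alerts import TwilioAlerter

class AlertManager:
    def __init__(self):
        # ... existing code ...
        self.sms = TwilioAlerter()

    def send_alerts(self, mentions):
        for mention in mentions:
            self.send_slack_alert(mention)
            self.send_discord_alert(mention)
            self.sms.send_sms(mention)  # only fires for CRITICAL mentions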

Competitor Monitoring Bonus

Want to track when competitors are mentioned? Add this method to the BrandMonitor class:

def search_competitor_mentions(self, hours_back: int = 1) -> List[Mention]:
    """Search for competitor mentions - potential customers."""
    all_mentions = []

    for keyword in self.competitor_keywords:
        keyword = keyword.strip()
        if not keyword:
            continue

        print(f"Searching competitor: {keyword}")

        # Focus on platforms where people ask for alternatives
        all_mentions.extend(self.search_reddit(keyword, hours_back))
        all_mentions.extend(self.search_twitter(keyword, hours_back))

    # Filter for "alternative" or "looking for" type posts
    opportunity_keywords = [
        "alternative", "looking for", "recommend", "better than",
        "switch from", "replace", "instead of", "vs", "compared to"
    ]

    opportunities = [
        m for m in all_mentions
        if any(kw in m.content.lower() for kw in opportunity_keywords)
    ]

    return opportunities
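
Then call it from the runner alongside the brand search. A sketch for run_monitor() in main.py:

# In main.py, inside run_monitor()
opportunities = monitor.search_competitor_mentions(hours_back=1)

if opportunities:
    print(f"\nFound {len(opportunities)} competitor opportunities")
    alerts.send_alerts(opportunities)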

Cost Breakdown

What does this actually cost to run?

API costs (SociaVault):

  • ~200 requests/day (4 platforms × 5 keywords × 10 runs per day; scale this up if you poll more often)
  • At $0.001/request, that's ~$0.20/day, or ~$6/month

Infrastructure:

  • Redis (free tier on most platforms)
  • Cron server (free on any VPS)
  • Slack/Discord webhooks (free)

Total: ~$6/month for real-time brand monitoring across 4 platforms.

Compare that to Mention.com ($29/mo), Brand24 ($79/mo), or Sprout Social ($249/mo).

What I Actually Learned

After running this for 6 months:

  1. 90% of mentions are neutral - Don't alert on everything (see the filter sketch below)
  2. Negative + high followers = respond within 1 hour - This is where reputation damage happens
  3. Reddit is gold - People there give honest feedback
  4. Competitor mentions = sales opportunities - People looking for alternatives are warm leads
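
A simple way to act on lesson 1 is to gate alerts on priority, so low-reach neutral mentions get logged but never ping anyone. A sketch:

# Only alert on mentions worth interrupting someone for
ALERT_PRIORITIES = {"🔴 CRITICAL", "🟠 HIGH", "🟡 MEDIUM"}

def filter_alertable(mentions):
    """Log everything, but only alert on medium priority and above."""
    for m in mentions:
        print(f"logged: {m.priority} | {m.platform} | @{m.author}")
    return [m for m in mentions if m.priority in ALERT_PRIORITIES]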

Get Started

  1. Get your API key at SociaVault
  2. Set up Slack/Discord webhooks
  3. Configure your keywords
  4. Run the monitor

Questions? Hit me up on Twitter @sociavault.

