This is a submission for the AI Agents Challenge powered by n8n and Bright Data
What I Built
I built an intelligent Brand and Keyword Mention Monitoring Agent that runs every 2 hours, searching the web for brand mentions and delivering curated reports directly to your inbox. It's essentially a tireless assistant that can process hundreds of posts while you sleep, filtering out the noise and highlighting what actually matters.
The core problem this solves: manual brand monitoring is both overwhelming and incomplete. You're either drowning in irrelevant mentions or missing important conversations entirely. This agent finds the sweet spot between comprehensive coverage and actionable insights.
Demo
The system processes hundreds of potential mentions autonomously, delivering only the most relevant insights through formatted email reports.
n8n Workflow
Complete workflow available here: N8N BrightMentions Workflow JSON
Setup Guide
1. Import & Clone
   - Import the workflow JSON into your n8n instance
   - Clone this Airtable database to your account
2. Connect Your Services
   - Airtable: Create a token for your BrightMentions database
   - Bright Data: Connect your account and create two zones: serp_api1 and web_unlocker1
   - OpenRouter: Add your API credentials (recommended for rate-limit flexibility)
   - SMTP: Configure email credentials for notifications
3. Configure Your Settings
   - Follow the yellow notes in the workflow
   - Set your keywords, exclusion URLs, and email recipients in Airtable

Activate the workflow and monitoring begins automatically.
Technical Implementation
AI Agent Configuration
System Instructions: The heart of this system is a prompt that transforms the AI into a specialized mention analysis expert. Rather than generic sentiment analysis, the agent understands the context, urgency levels, and response strategies. It evaluates mentions across multiple dimensions: sentiment intensity, potential impact, author influence, and recommended actions.
Model Choice: I specifically chose OpenAI's GPT-4.1 mini and nano models via OpenRouter for two key reasons:
- Massive context windows: These models can handle extremely long extracted content without hitting token limits, which is crucial when analyzing detailed social media posts or lengthy articles
- Flexible rate limits: routing requests through OpenRouter avoids the stricter per-key limits of direct API access, so the workflow doesn't stall during high-volume processing periods
Memory: This agent operates statelessly by design. Each mention analysis is independent, with all necessary context provided in the prompt. This approach ensures consistency and eliminates potential memory-related errors in automated workflows.
Tools: The agent uses n8n's Structured Output Parser to ensure the AI returns properly formatted JSON responses. This tool validates the output format and prevents parsing errors that could break the downstream workflow. All mention analysis happens within a single, comprehensive system instruction that guides the AI through structured evaluation processes.
The prompt instructs the agent to return structured JSON output covering sentiment, urgency classification, impact assessment, and specific response recommendations, essentially turning raw mentions into actionable data.
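A minimal sketch of what that structured output might look like, with a validation step in the spirit of the Structured Output Parser. The field names and allowed values here are illustrative assumptions, not the workflow's exact schema:

```python
# Hypothetical example of the JSON the agent is asked to return.
# Field names and values are illustrative, not the workflow's exact schema.
EXAMPLE_ANALYSIS = {
    "sentiment": "negative",        # positive | neutral | negative
    "urgency": "high",              # low | medium | high
    "impact": "Churn risk: influential reviewer criticizing onboarding",
    "author_influence": "high",
    "recommended_action": "Respond publicly within 24 hours and offer a fix",
}

REQUIRED_FIELDS = {"sentiment", "urgency", "impact", "recommended_action"}

def is_valid_analysis(payload: dict) -> bool:
    """Mimic the parser's role: reject output missing required fields,
    so malformed AI responses never reach the downstream workflow."""
    return REQUIRED_FIELDS.issubset(payload)
```

In n8n, the Structured Output Parser enforces this kind of contract automatically; the point is that every mention leaves the AI step as machine-readable data rather than free text.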
Bright Data Verified Node
Bright Data makes the "unstoppable" part of this workflow actually work. Instead of building and maintaining scrapers for dozens of platforms, I leverage their pre-built solutions that handle the complexity for me.
Pre-built Scrapers: The system uses 18 different Bright Data verified scrapers covering major platforms:
- Social networks: X, Instagram, Facebook, TikTok, BlueSky
- Content platforms: YouTube, Pinterest, Reddit
- Professional networks: LinkedIn
SERP API Integration: Clean, structured Google search results without dealing with CAPTCHAs or anti-bot measures. The serp_api1 zone handles search queries with time filters and exclusions seamlessly.
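A query with exclusions and a recency filter can be composed with standard Google search operators. This is a sketch under assumptions: `tbs=qdr:d` limits results to the past day, and how the SERP zone forwards these parameters is not taken from the workflow itself:

```python
from urllib.parse import urlencode

def build_serp_url(keyword: str, excluded_sites: list[str]) -> str:
    """Compose a Google search URL with -site: exclusions and a 24h
    recency filter (tbs=qdr:d). The exact parameters the Bright Data
    SERP zone accepts are an assumption - adapt to your setup."""
    query = f'"{keyword}"' + "".join(f" -site:{s}" for s in excluded_sites)
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": "qdr:d"})
```

Excluding your own domains this way keeps the agent from repeatedly "discovering" your own blog posts.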
Web Unlocker: For any URL that doesn't fit pre-built scrapers, the Web Unlocker extracts content from virtually any website, bypassing anti-bot protections automatically.
Batch Processing: The workflow uses snapshot IDs for efficient parallel processing. Submit multiple URLs, receive a snapshot ID, then poll for completion. This approach scales naturally as monitoring requirements grow.
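The submit-then-poll pattern can be sketched as follows. The status-checking callable and the response shape (`{"status": ..., "data": ...}`) are assumptions standing in for the real Bright Data snapshot endpoint, which n8n's Wait/HTTP nodes handle in the actual workflow:

```python
import time

def poll_snapshot(fetch_status, snapshot_id, interval=10, max_attempts=30):
    """Poll a snapshot until it is ready, then return its data.

    fetch_status is any callable returning a dict like {"status": "running"}
    or {"status": "ready", "data": [...]}. The real endpoint and payload
    shape are assumptions - adapt them to the API you actually call.
    """
    for attempt in range(max_attempts):
        result = fetch_status(snapshot_id)
        if result.get("status") == "ready":
            return result.get("data", [])
        if attempt < max_attempts - 1:
            time.sleep(interval)
    raise TimeoutError(f"snapshot {snapshot_id} not ready after {max_attempts} polls")
```

Because many URLs ride on one snapshot, adding platforms or keywords grows the batch, not the number of round trips.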
The reliability here is what sets this apart from DIY scraping solutions. When Instagram changes its layout, Bright Data's scrapers adapt automatically. When a site implements new bot detection, the Web Unlocker handles it transparently.
Journey
Key Challenges
The Keyword Ambiguity Problem
Initial versions captured every mention of a keyword regardless of context. Monitoring "n8n" would return results about random abbreviations, postal codes, and basically anything else containing those three characters in sequence.
Solution: Context-aware filtering. The system scrapes context URLs to understand what each keyword actually represents, then uses AI to filter search results before expensive content extraction begins.
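The cheap pre-filter step can be sketched like this. In the real workflow the context terms come from AI analysis of the scraped context URLs; the hand-written term list here is a stand-in for that output:

```python
def passes_context_filter(snippet: str, keyword: str, context_terms: list[str]) -> bool:
    """Keep a search result only if the keyword appears alongside at least
    one term learned from the brand's context URLs. The context_terms list
    (e.g. ["workflow", "automation"] for "n8n") would be produced by the
    AI step in the actual workflow; here it is a hand-written stand-in."""
    text = snippet.lower()
    return keyword.lower() in text and any(t.lower() in text for t in context_terms)
```

Running this before content extraction means Bright Data credits are only spent on results that plausibly concern the brand.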
URL Routing Complexity
Initially built routing with switch cases to direct URLs to appropriate scrapers. Problem: some URLs failed with Web Unlocker but worked perfectly with pre-built scrapers, creating unpredictable extraction failures.
Solution: Migrated to regex-based routing that intelligently categorizes URLs and automatically selects the optimal extraction method. Much cleaner and handles edge cases seamlessly.
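A minimal version of that regex routing might look like the following. The patterns and scraper names are illustrative assumptions, not the workflow's actual regexes:

```python
import re

# Illustrative platform patterns - the real workflow's regexes may differ.
SCRAPER_PATTERNS = {
    "instagram": re.compile(r"https?://(www\.)?instagram\.com/"),
    "reddit":    re.compile(r"https?://(www\.)?reddit\.com/"),
    "youtube":   re.compile(r"https?://(www\.)?(youtube\.com|youtu\.be)/"),
    "linkedin":  re.compile(r"https?://(www\.)?linkedin\.com/"),
}

def route_url(url: str) -> str:
    """Send known platforms to their pre-built scraper; everything else
    falls through to the Web Unlocker as a catch-all."""
    for scraper, pattern in SCRAPER_PATTERNS.items():
        if pattern.match(url):
            return scraper
    return "web_unlocker"
```

The key design choice is the ordering: pre-built scrapers are tried first because they return structured data, and the Web Unlocker is the fallback rather than the default.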
Reliability at Scale
Web scraping fails frequently, even with robust infrastructure. Early versions would crash when individual requests failed, losing entire batches of data.
Solution: Comprehensive error handling with email notifications. Failed extractions are logged and reported, while successful ones continue processing. This provides visibility into problematic domains while maintaining system resilience.
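The continue-on-fail pattern can be sketched as partitioning a batch into successes and failures instead of aborting on the first error. In n8n this corresponds to enabling "Continue On Fail" and branching on the error output; the `extract` callable here is a placeholder for whatever extraction node is invoked:

```python
def process_batch(urls, extract):
    """Attempt every URL; collect successes and failures separately so one
    blocked domain never sinks the whole batch. extract is any callable
    that returns content or raises - a stand-in for the extraction node."""
    succeeded, failed = [], []
    for url in urls:
        try:
            succeeded.append({"url": url, "content": extract(url)})
        except Exception as exc:
            failed.append({"url": url, "error": str(exc)})
    return succeeded, failed
```

The `failed` list is what feeds the notification email, giving visibility into problematic domains without interrupting the rest of the run.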
Technical Insights
Context is everything: The difference between useful monitoring and spam is understanding what you're actually looking for. Automated context extraction from URLs transformed accuracy dramatically.
Smart routing matters: Categorizing URLs and directing them to appropriate extraction methods (pre-built scrapers vs. Web Unlocker) optimizes both speed and reliability.
Simple architectures scale better: Using stateless AI analysis rather than complex memory systems keeps the workflow maintainable and predictable.
Results
The system delivers measurable improvements over manual monitoring:
- Comprehensive coverage: 24/7 monitoring across all major platforms
- High accuracy: Context-aware filtering eliminates irrelevant mentions
- Cost efficiency: Pay-per-use pricing scales with actual monitoring needs
- Zero maintenance: Bright Data handles platform changes automatically
- Actionable insights: AI analysis provides response recommendations, not just raw data