Olamide Olaniyan
Build a Multi-Platform Ad Intelligence Dashboard That Monitors Every Ad Library

Your competitor runs ads everywhere. Facebook. Google. LinkedIn. Reddit. They're spending six figures across four platforms, testing different messaging for different audiences.

You're checking one ad library at a time, maybe once a month.

Let's fix that. We're building a unified Ad Intelligence Dashboard that pulls from every major ad transparency library simultaneously — Facebook, Google, LinkedIn, and Reddit — so you can see your competitor's entire paid strategy in one place.

Why One Dashboard for All Ad Libraries

Every platform has its own ad transparency center:

  • Facebook Ad Library → social/display ads
  • Google Ads Transparency → search/display/YouTube ads
  • LinkedIn Ad Library → B2B targeted ads
  • Reddit Ads → community-targeted ads

Each one tells you something different:

  • Facebook ads reveal their creative strategy (images, video, copy testing)
  • Google ads reveal their keyword strategy (what intent they're bidding on)
  • LinkedIn ads reveal their target persona (job titles, company sizes)
  • Reddit ads reveal their community strategy (which subreddits matter to them)

Separately, they're interesting. Together, they're a complete picture of how a company spends money to acquire customers.

The Stack

  • Python: Language (with pandas for analysis)
  • SociaVault API: All four ad library endpoints
  • SQLite: Unified tracking database
  • OpenAI: Cross-platform strategic analysis

Step 1: Setup

mkdir ad-intelligence
cd ad-intelligence
pip install requests pandas openai python-dotenv tabulate

Create .env:

SOCIAVAULT_API_KEY=your_key_here
OPENAI_API_KEY=your_openai_key

Step 2: Unified Database

Create db.py:

import sqlite3

def get_db():
    conn = sqlite3.connect("ad_intelligence.db")
    conn.row_factory = sqlite3.Row
    conn.execute("PRAGMA journal_mode=WAL")

    conn.executescript("""
        CREATE TABLE IF NOT EXISTS competitors (
            id TEXT PRIMARY KEY,
            name TEXT,
            domain TEXT,
            added_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        CREATE TABLE IF NOT EXISTS ads (
            id TEXT,
            platform TEXT,
            competitor_id TEXT,
            format TEXT,
            headline TEXT,
            body TEXT,
            cta TEXT,
            landing_url TEXT,
            image_url TEXT,
            first_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            days_running INTEGER DEFAULT 0,
            is_active BOOLEAN DEFAULT 1,
            raw_data TEXT,
            PRIMARY KEY (id, platform),
            FOREIGN KEY (competitor_id) REFERENCES competitors(id)
        );

        CREATE TABLE IF NOT EXISTS reports (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            competitor_id TEXT,
            report_type TEXT,
            content TEXT,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
    """)

    return conn
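Why the composite primary key on `ads`? The same creative id can legitimately show up in more than one library, so `(id, platform)` keeps them distinct while still rejecting true duplicates. A minimal sketch (trimmed schema, in-memory database) to see the behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed-down version of the ads table — just enough to show the key
conn.execute("""
    CREATE TABLE ads (
        id TEXT,
        platform TEXT,
        headline TEXT,
        PRIMARY KEY (id, platform)
    )
""")

# The same ad id on two platforms does not collide...
conn.execute("INSERT INTO ads VALUES ('ad_1', 'facebook', 'Try it free')")
conn.execute("INSERT INTO ads VALUES ('ad_1', 'google', 'Try it free')")

# ...but a true duplicate on the same platform is rejected
try:
    conn.execute("INSERT INTO ads VALUES ('ad_1', 'facebook', 'Try it free')")
except sqlite3.IntegrityError:
    print("duplicate (id, platform) rejected")

print(conn.execute("SELECT COUNT(*) FROM ads").fetchone()[0])  # 2
```

This is also what makes the `ON CONFLICT(id, platform)` upsert in Step 4 work: re-seeing an ad refreshes it instead of inserting a duplicate row.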

Step 3: Platform-Specific Fetchers

Create fetchers.py:

import os
import json
import requests
from dotenv import load_dotenv

load_dotenv()

API_BASE = "https://api.sociavault.com"
HEADERS = {"Authorization": f"Bearer {os.getenv('SOCIAVAULT_API_KEY')}"}


def fetch_facebook_ads(company_name: str) -> list:
    """Pull ads from Facebook Ad Library."""
    print(f"  📘 Facebook Ad Library: '{company_name}'...")

    all_ads = []

    # Search for the company
    resp = requests.get(
        f"{API_BASE}/v1/scrape/facebook-ad-library/search-companies",
        params={"query": company_name},
        headers=HEADERS
    )
    companies = resp.json().get("data", [])

    if not companies:
        print("    No Facebook advertiser found")
        return []

    company = companies[0]
    company_id = company.get("id") or company.get("pageId")

    # Get their ads
    resp = requests.get(
        f"{API_BASE}/v1/scrape/facebook-ad-library/company-ads",
        params={"company_id": company_id, "limit": 50},
        headers=HEADERS
    )
    ads = resp.json().get("data", {})
    ad_list = ads.get("ads", ads) if isinstance(ads, dict) else ads
    if isinstance(ad_list, list):
        all_ads = ad_list

    print(f"    Found {len(all_ads)} ads")

    return [normalize_ad(ad, "facebook") for ad in all_ads]


def fetch_google_ads(company_name: str) -> list:
    """Pull ads from Google Ad Transparency Center."""
    print(f"  🔍 Google Ad Library: '{company_name}'...")

    # Search for advertiser
    resp = requests.get(
        f"{API_BASE}/v1/scrape/google-ad-library/search-advertisers",
        params={"query": company_name},
        headers=HEADERS
    )
    advertisers = resp.json().get("data", [])

    if not advertisers:
        print("    No Google advertiser found")
        return []

    advertiser = advertisers[0]
    adv_id = advertiser.get("id") or advertiser.get("advertiserId")

    # Get their ads
    resp = requests.get(
        f"{API_BASE}/v1/scrape/google-ad-library/company-ads",
        params={"advertiser_id": adv_id},
        headers=HEADERS
    )
    ads = resp.json().get("data", {})
    ad_list = ads.get("ads", ads) if isinstance(ads, dict) else ads
    if isinstance(ad_list, list):
        result = ad_list
    else:
        result = []

    print(f"    Found {len(result)} ads")

    return [normalize_ad(ad, "google") for ad in result]


def fetch_linkedin_ads(company_name: str) -> list:
    """Pull ads from LinkedIn Ad Library."""
    print(f"  💼 LinkedIn Ad Library: '{company_name}'...")

    resp = requests.get(
        f"{API_BASE}/v1/scrape/linkedin-ad-library/search",
        params={"query": company_name},
        headers=HEADERS
    )
    ads = resp.json().get("data", {})
    ad_list = ads.get("ads", ads) if isinstance(ads, dict) else ads
    if isinstance(ad_list, list):
        result = ad_list
    else:
        result = []

    print(f"    Found {len(result)} ads")

    return [normalize_ad(ad, "linkedin") for ad in result]


def fetch_reddit_ads(company_name: str) -> list:
    """Pull ads from Reddit Ad Library."""
    print(f"  🟠 Reddit Ad Library: '{company_name}'...")

    resp = requests.get(
        f"{API_BASE}/v1/scrape/reddit/ads",
        params={"query": company_name},
        headers=HEADERS
    )
    ads = resp.json().get("data", {})
    ad_list = ads.get("ads", ads) if isinstance(ads, dict) else ads
    if isinstance(ad_list, list):
        result = ad_list
    else:
        result = []

    print(f"    Found {len(result)} ads")

    return [normalize_ad(ad, "reddit") for ad in result]


import hashlib  # stable fallback ids — Python's built-in hash() is salted per process


def normalize_ad(ad: dict, platform: str) -> dict:
    """Normalize ad data across platforms into a common format."""
    # A stable digest keeps the fallback id identical across runs, so
    # first_seen/days_running tracking still works for ads with no real id.
    fallback_id = f"{platform}_{hashlib.sha1(json.dumps(ad, sort_keys=True, default=str).encode()).hexdigest()[:10]}"
    return {
        "id": ad.get("id") or ad.get("adId") or ad.get("creativeId") or fallback_id,
        "platform": platform,
        "format": detect_format(ad, platform),
        "headline": ad.get("headline") or ad.get("title") or "",
        "body": ad.get("body") or ad.get("description") or ad.get("text") or ad.get("introText") or "",
        "cta": ad.get("ctaText") or ad.get("callToAction") or ad.get("cta") or "",
        "landing_url": ad.get("landingPage") or ad.get("finalUrl") or ad.get("url") or ad.get("displayUrl") or "",
        "image_url": ad.get("imageUrl") or ad.get("thumbnailUrl") or "",
        "raw_data": json.dumps(ad, default=str),
    }


def detect_format(ad: dict, platform: str) -> str:
    if ad.get("videoUrl") or ad.get("video"):
        return "video"
    if ad.get("carousel") or ad.get("cards"):
        return "carousel"
    if ad.get("imageUrl") and (ad.get("headline") or ad.get("title")):
        return "display"
    if ad.get("headline") and not ad.get("imageUrl"):
        return "text"
    return "unknown"
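All four fetchers repeat the same response-unwrapping dance. If you add more platforms, it's worth pulling that into one helper — a refactoring sketch that handles exactly the three payload shapes the fetchers above already assume:

```python
def unwrap_ad_list(payload: dict) -> list:
    """Extract a list of ads from a response payload whose 'data'
    may be a list of ads, a dict wrapping one, or something else."""
    data = payload.get("data", {})
    ad_list = data.get("ads", data) if isinstance(data, dict) else data
    return ad_list if isinstance(ad_list, list) else []


# The three shapes the fetchers expect:
print(unwrap_ad_list({"data": [{"id": 1}]}))           # [{'id': 1}]
print(unwrap_ad_list({"data": {"ads": [{"id": 2}]}}))  # [{'id': 2}]
print(unwrap_ad_list({"data": {"total": 0}}))          # []
```

Each fetcher then collapses to a search call, `unwrap_ad_list(resp.json())`, and a `normalize_ad` pass.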

Step 4: Core Intelligence Engine

Create intelligence.py:

import os
import json
import time
from datetime import datetime
from collections import Counter

import pandas as pd
from openai import OpenAI
from dotenv import load_dotenv

from db import get_db
from fetchers import (
    fetch_facebook_ads, fetch_google_ads,
    fetch_linkedin_ads, fetch_reddit_ads
)

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def add_competitor(name: str, domain: str = ""):
    """Add a competitor to track."""
    db = get_db()
    comp_id = name.lower().replace(" ", "-")
    db.execute(
        "INSERT OR REPLACE INTO competitors (id, name, domain) VALUES (?, ?, ?)",
        (comp_id, name, domain)
    )
    db.commit()
    print(f"✅ Added competitor: {name}")
    return comp_id


def spy_all_platforms(competitor_name: str):
    """Pull ads from ALL platforms for a competitor."""
    db = get_db()
    comp_id = competitor_name.lower().replace(" ", "-")

    print(f"\n🕵️  Full ad sweep for '{competitor_name}'...\n")

    all_ads = []

    # Fetch from all platforms with delays
    platforms = [
        ("facebook", fetch_facebook_ads),
        ("google", fetch_google_ads),
        ("linkedin", fetch_linkedin_ads),
        ("reddit", fetch_reddit_ads),
    ]

    for platform_name, fetcher in platforms:
        try:
            ads = fetcher(competitor_name)
            all_ads.extend(ads)
        except Exception as e:
            print(f"    ⚠️  {platform_name} error: {e}")
        time.sleep(1.5)

    # Store all ads
    upsert = """
        INSERT INTO ads (id, platform, competitor_id, format, headline, body, cta, landing_url, image_url, raw_data)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        ON CONFLICT(id, platform) DO UPDATE SET
            last_seen = CURRENT_TIMESTAMP,
            is_active = 1,
            days_running = CAST(
                (julianday('now') - julianday(COALESCE(ads.first_seen, CURRENT_TIMESTAMP))) AS INTEGER
            )
    """

    for ad in all_ads:
        try:
            db.execute(upsert, (
                ad["id"], ad["platform"], comp_id, ad["format"],
                ad["headline"], ad["body"], ad["cta"],
                ad["landing_url"], ad["image_url"], ad["raw_data"]
            ))
        except Exception:
            pass

    db.commit()

    # Summary
    by_platform = Counter(ad["platform"] for ad in all_ads)
    by_format = Counter(ad["format"] for ad in all_ads)

    print(f"\n📊 COLLECTION SUMMARY")
    print("=" * 40)
    print(f"  Total ads: {len(all_ads)}")
    print(f"\n  By platform:")
    for p, c in by_platform.most_common():
        print(f"    {p}: {c}")
    print(f"\n  By format:")
    for f, c in by_format.most_common():
        print(f"    {f}: {c}")

    return all_ads
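The schema has an `is_active` flag, but nothing above ever clears it. Since the upsert refreshes `last_seen` and re-activates every ad a sweep still sees, anything with a stale `last_seen` has been pulled by the advertiser. A sketch (the 7-day cutoff is an arbitrary choice, not anything the platforms report):

```python
import sqlite3


def mark_stale_ads(db: sqlite3.Connection, comp_id: str, days: int = 7) -> int:
    """Flag ads not refreshed by a sweep in `days` days as inactive.

    Returns the number of ads flagged.
    """
    cur = db.execute(
        """UPDATE ads SET is_active = 0
           WHERE competitor_id = ?
             AND is_active = 1
             AND julianday('now') - julianday(last_seen) > ?""",
        (comp_id, days),
    )
    db.commit()
    return cur.rowcount
```

Call it at the end of each sweep, e.g. `mark_stale_ads(get_db(), comp_id)` — a pulled ad is a signal in itself (the creative stopped paying for itself).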

Step 5: Cross-Platform Analysis

def cross_platform_report(competitor_name: str):
    """Generate a unified cross-platform intelligence report."""
    db = get_db()
    comp_id = competitor_name.lower().replace(" ", "-")

    ads = db.execute(
        "SELECT * FROM ads WHERE competitor_id = ?", (comp_id,)
    ).fetchall()

    if not ads:
        print("No ads found. Run spy first.")
        return

    df = pd.DataFrame([dict(ad) for ad in ads])

    print("\n" + "=" * 60)
    print(f"📊 CROSS-PLATFORM AD INTELLIGENCE: {competitor_name}")
    print("=" * 60)

    # Platform breakdown
    print(f"\n  Total ads tracked: {len(df)}")
    print(f"\n  📱 Platform Distribution:")
    for platform, count in df["platform"].value_counts().items():
        pct = count / len(df) * 100
        bar = "█" * int(pct / 2)
        print(f"    {platform:<12} {count:>4} ({pct:.0f}%) {bar}")

    # Format breakdown
    print(f"\n  🎨 Creative Formats:")
    for fmt, count in df["format"].value_counts().items():
        pct = count / len(df) * 100
        print(f"    {fmt:<12} {count:>4} ({pct:.0f}%)")

    # Long-running winners
    winners = df[df["days_running"] >= 14].sort_values("days_running", ascending=False)
    if len(winners) > 0:
        print(f"\n  🏆 Long-Running Winners ({len(winners)} ads, 14+ days):")
        for _, row in winners.head(10).iterrows():
            platform_icon = {"facebook": "📘", "google": "🔍", "linkedin": "💼", "reddit": "🟠"}.get(row["platform"], "📌")
            print(f"    {platform_icon} [{row['days_running']}d] {row['headline'][:60]}")

    # Landing page analysis
    urls = df[df["landing_url"].notna() & (df["landing_url"] != "")]["landing_url"].tolist()
    if urls:
        paths = Counter()
        for url in urls:
            path = url.split("?")[0].split("#")[0]
            if "/" in path:
                paths[path.split("/")[-1] or path.split("/")[-2]] += 1

        print(f"\n  🔗 Top Landing Pages:")
        for page, count in paths.most_common(5):
            print(f"    /{page} → {count} ads point here")

    return df
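One signal the report doesn't surface directly: headlines that run on more than one platform, which usually means a message was validated on one channel and is now being scaled. A sketch over the same DataFrame that `cross_platform_report` returns:

```python
import pandas as pd


def cross_platform_headlines(df: pd.DataFrame, min_platforms: int = 2) -> pd.DataFrame:
    """Headlines appearing on several platforms at once."""
    ads = df[df["headline"].fillna("").str.strip() != ""]
    grouped = (
        ads.groupby("headline")["platform"]
        .apply(lambda s: sorted(set(s)))  # list of platforms per headline
        .reset_index()
    )
    grouped["n_platforms"] = grouped["platform"].apply(len)
    return (
        grouped[grouped["n_platforms"] >= min_platforms]
        .sort_values("n_platforms", ascending=False)
    )
```

Usage: `cross_platform_headlines(cross_platform_report("HubSpot"))` — anything that shows up on three or four platforms is almost certainly their core message.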

Step 6: AI Strategic Analysis

def ai_strategic_analysis(competitor_name: str):
    """Use AI to analyze the full ad strategy across platforms."""
    db = get_db()
    comp_id = competitor_name.lower().replace(" ", "-")

    ads = db.execute(
        "SELECT platform, format, headline, body, cta, landing_url, days_running FROM ads WHERE competitor_id = ? ORDER BY days_running DESC LIMIT 40",
        (comp_id,)
    ).fetchall()

    if len(ads) < 3:
        print("Need more ad data for analysis.")
        return

    ad_sample = [dict(ad) for ad in ads]

    print(f"\n🧠 Running AI strategic analysis on {len(ad_sample)} ads...\n")

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"""Analyze this company's advertising strategy across ALL platforms.

Company: {competitor_name}
Ads from Facebook, Google, LinkedIn, and Reddit:
{json.dumps(ad_sample, indent=2)}

Return JSON:
{{
  "overall_strategy": "3-4 sentence summary of their multi-platform approach",
  "platform_strategies": {{
    "facebook": "what they use Facebook ads for",
    "google": "what they use Google ads for",
    "linkedin": "what they use LinkedIn ads for",
    "reddit": "what they use Reddit ads for"
  }},
  "messaging_consistency": "how consistent is messaging across platforms (score 1-10 and explanation)",
  "target_audiences": [
    {{"platform": "platform", "likely_audience": "who", "evidence": "why"}}
  ],
  "funnel_mapping": {{
    "top_of_funnel": "platforms + tactics for awareness",
    "middle_of_funnel": "platforms + tactics for consideration",
    "bottom_of_funnel": "platforms + tactics for conversion"
  }},
  "budget_estimation": {{
    "heaviest_spend": "platform they seem to spend most on and why",
    "testing_ground": "platform they seem to test new messaging on",
    "scaling_platform": "platform where they scale winners"
  }},
  "winning_patterns": [
    "patterns in their longest-running ads across platforms"
  ],
  "vulnerabilities": [
    "strategic gaps or weaknesses in their multi-platform approach"
  ],
  "counter_strategy": {{
    "immediate_actions": ["3 things you can do this week"],
    "medium_term": ["3 things to implement this month"],
    "strategic_plays": ["2 longer-term competitive moves"]
  }}
}}"""
        }],
        response_format={"type": "json_object"}
    )

    analysis = json.loads(completion.choices[0].message.content)

    print("🎯 MULTI-PLATFORM STRATEGIC ANALYSIS")
    print("=" * 60)

    print(f"\n📋 Overview: {analysis['overall_strategy']}")

    print(f"\n📱 Platform Strategies:")
    icons = {"facebook": "📘", "google": "🔍", "linkedin": "💼", "reddit": "🟠"}
    for platform, strategy in analysis.get("platform_strategies", {}).items():
        if strategy:
            print(f"  {icons.get(platform, '📌')} {platform.title()}: {strategy}")

    print(f"\n🔄 Messaging Consistency: {analysis.get('messaging_consistency', 'N/A')}")

    funnel = analysis.get("funnel_mapping", {})
    print(f"\n📈 Funnel Mapping:")
    print(f"  Top: {funnel.get('top_of_funnel', 'N/A')}")
    print(f"  Mid: {funnel.get('middle_of_funnel', 'N/A')}")
    print(f"  Bottom: {funnel.get('bottom_of_funnel', 'N/A')}")

    budget = analysis.get("budget_estimation", {})
    print(f"\n💰 Budget Signals:")
    print(f"  Heaviest spend: {budget.get('heaviest_spend', 'N/A')}")
    print(f"  Testing on: {budget.get('testing_ground', 'N/A')}")
    print(f"  Scaling on: {budget.get('scaling_platform', 'N/A')}")

    print(f"\n⚠️ Vulnerabilities:")
    for v in analysis.get("vulnerabilities", []):
        print(f"    • {v}")

    counter = analysis.get("counter_strategy", {})
    print(f"\n⚔️  Counter-Strategy:")
    print(f"  This week:")
    for a in counter.get("immediate_actions", []):
        print(f"    • {a}")
    print(f"  This month:")
    for a in counter.get("medium_term", []):
        print(f"    • {a}")
    print(f"  Long-term:")
    for a in counter.get("strategic_plays", []):
        print(f"    • {a}")

    # Save report
    db = get_db()
    db.execute(
        "INSERT INTO reports (competitor_id, report_type, content) VALUES (?, ?, ?)",
        (comp_id, "full_analysis", json.dumps(analysis))
    )
    db.commit()

    return analysis
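Even with `response_format={"type": "json_object"}`, it's worth guarding the `json.loads` call so one malformed response doesn't kill a full analysis run. A small generic wrapper (a defensive sketch — the retry count is a choice, not part of any API):

```python
import json


def parse_json_with_retry(get_content, retries: int = 2) -> dict:
    """Call get_content() (anything returning a JSON string) and parse
    it, re-calling on invalid JSON instead of crashing mid-report."""
    last_err = None
    for _ in range(retries + 1):
        try:
            return json.loads(get_content())
        except json.JSONDecodeError as err:
            last_err = err
    raise last_err


# Simulated flaky source: bad output once, then valid JSON
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    return "oops, not JSON" if attempts["n"] == 1 else '{"ok": true}'

print(parse_json_with_retry(flaky))  # {'ok': True}
```

In `ai_strategic_analysis` you'd wrap the completion call in a lambda: `analysis = parse_json_with_retry(lambda: client.chat.completions.create(...).choices[0].message.content)`, at the cost of a repeated API call on the rare bad response.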

Step 7: CLI

Create dashboard.py:

import sys
from intelligence import (
    add_competitor, spy_all_platforms,
    cross_platform_report, ai_strategic_analysis
)


def main():
    if len(sys.argv) < 2:
        print("Multi-Platform Ad Intelligence Dashboard\n")
        print("Usage:")
        print('  python dashboard.py add "Monday.com" monday.com')
        print('  python dashboard.py spy "Monday.com"')
        print('  python dashboard.py report "Monday.com"')
        print('  python dashboard.py analyze "Monday.com"')
        print('  python dashboard.py compare "Monday.com" "Asana" "ClickUp"')
        return

    command = sys.argv[1]
    target = sys.argv[2] if len(sys.argv) > 2 else None

    if command == "add":
        domain = sys.argv[3] if len(sys.argv) > 3 else ""
        add_competitor(target, domain)

    elif command == "spy":
        spy_all_platforms(target)

    elif command == "report":
        cross_platform_report(target)

    elif command == "analyze":
        cross_platform_report(target)
        ai_strategic_analysis(target)

    elif command == "compare":
        companies = sys.argv[2:]
        print(f"\n📊 Comparing {len(companies)} competitors...\n")
        for company in companies:
            spy_all_platforms(company)
            print()
        for company in companies:
            cross_platform_report(company)


if __name__ == "__main__":
    main()

Running It

# Add a competitor
python dashboard.py add "HubSpot" hubspot.com

# Full sweep across all ad libraries
python dashboard.py spy "HubSpot"

# Cross-platform report
python dashboard.py report "HubSpot"

# Full analysis with AI
python dashboard.py analyze "HubSpot"

# Compare multiple competitors
python dashboard.py compare "HubSpot" "Salesforce" "Pipedrive"

What You'll Learn

A full multi-platform sweep reveals things no single-platform tool can:

| Insight | What It Means |
| --- | --- |
| Same headline on Facebook + Google | They validated it on social, now scaling on search |
| LinkedIn-only messaging | That's their enterprise pitch, different from self-serve |
| Reddit ads with community angles | They're investing in bottom-up adoption |
| 50+ ads on Google, 5 on LinkedIn | They prioritize search intent over B2B targeting |

Cost Comparison

| Tool | Price | Platforms Covered |
| --- | --- | --- |
| Semrush Advertising | $249/mo | Google only |
| Pathmatics | ~$10K/mo | Multi-platform, enterprise |
| AdClarity | $169/mo | Google + Facebook |
| SpyFu | $39/mo | Google only |
| Competitive Adanalysis | $250/mo | Google + Facebook |
| This dashboard | ~$0.10/sweep | Facebook + Google + LinkedIn + Reddit |

No other tool under $500/mo covers all four ad libraries. Most cover one or two.

Get Started

  1. Get your API key at sociavault.com
  2. Pick your top competitor
  3. Run a full sweep: python dashboard.py analyze "Competitor"
  4. Share the report with your marketing team

Their entire ad budget is public information, spread across four libraries. We just consolidated it into one dashboard.


Your competitors spend six figures testing ads across every platform. Their results are public. Read them all in one place.

#python #marketing #saas #webdev
