Creative fatigue is one of those problems teams usually notice too late.
The ad is still live. Spend is still flowing. Nobody wants to admit the creative has gone stale because the dashboard does not scream about it yet.
Then performance gets weird, the team starts talking about "market conditions," and suddenly everyone agrees the ad has probably been tired for weeks.
The frustrating part is that you can often see fatigue clues earlier, even in public data.
Not perfect proof. Not ROAS. Not internal frequency.
But enough signal to make better decisions.
That is what this post is about: how to use public ad library snapshots to detect creative fatigue, how I score it, what the signal can and cannot tell you, and how to build the workflow in both JavaScript and Python.
## What Creative Fatigue Looks Like in Public Data
If you do not have access to the ad account, you cannot see the internal performance metrics that paid teams obsess over.
But you can still look for a few strong clues:
- the same headlines show up again and again
- the same CTA keeps repeating
- landing pages do not meaningfully change
- the creative set stays large enough to look active, but not fresh enough to look exploratory
- new variants feel cosmetic instead of strategic
That is usually the beginning of fatigue.
Or at least the beginning of slowed testing.
Either way, it is worth paying attention to.
## Why Snapshots Matter More Than Screenshots
One screenshot tells you an ad exists.
A sequence of snapshots tells you whether the advertiser is still learning.
That is the core idea.
Creative fatigue is not a single-state problem. It is a time problem.
If you want to catch it, you need a timeline.
That means saving structured snapshots of:
- headlines
- body text
- CTA
- landing page URL
- platform
- date captured
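For concreteness, one captured record might look like this. This is a minimal sketch; the field names and values are illustrative, not a fixed schema:

```python
from datetime import datetime, timezone

# One snapshot: the capture date plus a normalized list of ads.
# Field names and values here are illustrative, not a required schema.
snapshot = {
    "capturedAt": datetime.now(timezone.utc).isoformat(),
    "ads": [
        {
            "headline": "Grow better with our CRM",
            "body": "Try the free plan today.",
            "cta": "Sign Up",
            "url": "https://example.com/crm",
            "platform": "facebook",
        },
    ],
}
```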
Once you have that, you can calculate things like:
- creative carryover rate
- novelty rate
- CTA repetition
- landing-page stagnation
Those are surprisingly good directional signals.
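The arithmetic behind the first two is simple. A toy sketch, assuming each ad has already been reduced to a string signature:

```python
# Toy example: compare the creative signatures of two snapshots.
previous = {"headline-a|cta-a", "headline-b|cta-a", "headline-c|cta-b"}
current = {"headline-a|cta-a", "headline-b|cta-a", "headline-d|cta-a"}

repeated = current & previous                   # creatives that survived
carryover_rate = len(repeated) / len(current)   # share of the current set that is old
novelty_rate = 1 - carryover_rate               # share that is new

print(carryover_rate, novelty_rate)  # 2 of 3 current creatives carried over
```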
## The Simple Model I Actually Use
I am not trying to build a perfect fatigue oracle.
I am trying to answer a useful question:
does this advertiser look like they are still testing, or just recycling?
That is enough.
My basic model uses four signals:
- Carryover rate: how much of the current creative set also existed in the last snapshot
- Novelty rate: the inverse of carryover
- CTA repetition: whether new creative comes with new asks, or the same one repeated everywhere
- Landing-page stagnation: whether the destination set changes at all
If carryover is high and both CTA and landing-page variety are low, I start flagging fatigue.
## JavaScript Version: Snapshot + Fatigue Score
This is a straightforward version using Facebook and LinkedIn public ad data.
```javascript
import fs from 'fs/promises';

const headers = { 'X-API-Key': process.env.SOCIAVAULT_API_KEY };

async function fetchJson(url) {
  const response = await fetch(url, { headers });
  if (!response.ok) {
    throw new Error(`Request failed with ${response.status}`);
  }
  return response.json();
}

// Map platform-specific payloads onto one shared shape.
function normalizeAds(items = []) {
  return (items || []).map(item => ({
    headline: item.headline || item.title || item.snapshot?.title || '',
    body: item.body || item.text || item.snapshot?.body?.markup || '',
    cta: item.cta || item.call_to_action || item.snapshot?.cta_text || '',
    url: item.url || item.landingPageUrl || item.snapshot?.link_url || '',
  }));
}

async function captureSnapshot(brand) {
  const [facebook, linkedin] = await Promise.all([
    fetchJson(
      `https://api.sociavault.com/v1/scrape/facebook-ad-library/company-ads?companyName=${encodeURIComponent(brand)}&status=ACTIVE&trim=true`
    ),
    fetchJson(
      `https://api.sociavault.com/v1/scrape/linkedin-ad-library/search?company=${encodeURIComponent(brand)}`
    ),
  ]);
  return {
    capturedAt: new Date().toISOString(),
    facebook: normalizeAds(facebook.data),
    linkedin: normalizeAds(linkedin.data),
  };
}

// A creative's identity: headline + body + CTA + landing page.
function signature(ad) {
  return [ad.headline, ad.body, ad.cta, ad.url]
    .map(value => (value || '').trim().toLowerCase())
    .join('|');
}

function scoreFatigue(currentAds, previousAds) {
  const currentSignatures = new Set(currentAds.map(signature));
  const previousSignatures = new Set(previousAds.map(signature));
  const repeated = [...currentSignatures].filter(sig => previousSignatures.has(sig)).length;
  const carryoverRate = currentSignatures.size ? repeated / currentSignatures.size : 0;
  const noveltyRate = 1 - carryoverRate;
  const currentCtas = new Set(currentAds.map(ad => ad.cta).filter(Boolean));
  const currentUrls = new Set(currentAds.map(ad => ad.url).filter(Boolean));

  let score = 0;
  if (carryoverRate > 0.85) score += 45;
  else if (carryoverRate > 0.70) score += 30;
  if (currentCtas.size <= 1) score += 20;
  if (currentUrls.size <= 1) score += 20;
  if (noveltyRate < 0.15) score += 15;

  let label = 'low';
  if (score >= 40) label = 'moderate';
  if (score >= 70) label = 'high';

  return {
    creativeCount: currentSignatures.size,
    repeatedCreatives: repeated,
    carryoverRate: Number(carryoverRate.toFixed(2)),
    noveltyRate: Number(noveltyRate.toFixed(2)),
    uniqueCtas: currentCtas.size,
    uniqueLandingPages: currentUrls.size,
    fatigueScore: score,
    fatigueLabel: label,
  };
}

async function loadSnapshot(path) {
  try {
    const raw = await fs.readFile(path, 'utf8');
    return JSON.parse(raw);
  } catch {
    return null; // first run: no previous snapshot yet
  }
}

async function saveSnapshot(path, snapshot) {
  await fs.writeFile(path, JSON.stringify(snapshot, null, 2));
}

const path = './hubspot-fatigue.json';
const previousSnapshot = await loadSnapshot(path);
const currentSnapshot = await captureSnapshot('HubSpot');

if (previousSnapshot) {
  const currentAds = [...currentSnapshot.facebook, ...currentSnapshot.linkedin];
  const previousAds = [...previousSnapshot.facebook, ...previousSnapshot.linkedin];
  console.log(scoreFatigue(currentAds, previousAds));
}

await saveSnapshot(path, currentSnapshot);
```
This is not trying to guess exact ad performance.
It is trying to measure how much of the public creative surface still looks fresh.
That is a more realistic goal.
If you want to skip the collection layer and focus on the snapshot and scoring logic, that is exactly where SociaVault fits well.
## Python Version: Same Signals, Different Stack
If your monitoring jobs run in Python, the same idea is easy to port.
```python
import json
import os
from datetime import datetime, timezone
from pathlib import Path
from urllib.parse import quote

import requests

HEADERS = {'X-API-Key': os.environ['SOCIAVAULT_API_KEY']}


def fetch_json(url):
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    return response.json()


def normalize_ads(items=None):
    """Map platform-specific payloads onto one shared shape."""
    normalized = []
    for item in items or []:
        snapshot = item.get('snapshot') or {}
        normalized.append({
            'headline': item.get('headline') or item.get('title') or snapshot.get('title', ''),
            'body': item.get('body') or item.get('text') or (snapshot.get('body') or {}).get('markup', ''),
            'cta': item.get('cta') or item.get('call_to_action') or snapshot.get('cta_text', ''),
            'url': item.get('url') or item.get('landingPageUrl') or snapshot.get('link_url', ''),
        })
    return normalized


def capture_snapshot(brand):
    # URL-encode the brand name, matching encodeURIComponent in the JS version.
    brand = quote(brand)
    facebook = fetch_json(
        f'https://api.sociavault.com/v1/scrape/facebook-ad-library/company-ads?companyName={brand}&status=ACTIVE&trim=true'
    )
    linkedin = fetch_json(
        f'https://api.sociavault.com/v1/scrape/linkedin-ad-library/search?company={brand}'
    )
    return {
        'capturedAt': datetime.now(timezone.utc).isoformat(),
        'facebook': normalize_ads(facebook.get('data')),
        'linkedin': normalize_ads(linkedin.get('data')),
    }


def signature(ad):
    """A creative's identity: headline + body + CTA + landing page."""
    return '|'.join([
        (ad.get('headline') or '').strip().lower(),
        (ad.get('body') or '').strip().lower(),
        (ad.get('cta') or '').strip().lower(),
        (ad.get('url') or '').strip().lower(),
    ])


def score_fatigue(current_ads, previous_ads):
    current_signatures = {signature(ad) for ad in current_ads}
    previous_signatures = {signature(ad) for ad in previous_ads}
    repeated = len(current_signatures & previous_signatures)
    carryover_rate = repeated / len(current_signatures) if current_signatures else 0
    novelty_rate = 1 - carryover_rate
    current_ctas = {ad.get('cta') for ad in current_ads if ad.get('cta')}
    current_urls = {ad.get('url') for ad in current_ads if ad.get('url')}

    score = 0
    if carryover_rate > 0.85:
        score += 45
    elif carryover_rate > 0.70:
        score += 30
    if len(current_ctas) <= 1:
        score += 20
    if len(current_urls) <= 1:
        score += 20
    if novelty_rate < 0.15:
        score += 15

    label = 'low'
    if score >= 40:
        label = 'moderate'
    if score >= 70:
        label = 'high'

    return {
        'creativeCount': len(current_signatures),
        'repeatedCreatives': repeated,
        'carryoverRate': round(carryover_rate, 2),
        'noveltyRate': round(novelty_rate, 2),
        'uniqueCtas': len(current_ctas),
        'uniqueLandingPages': len(current_urls),
        'fatigueScore': score,
        'fatigueLabel': label,
    }


path = Path('./hubspot-fatigue.json')
previous_snapshot = json.loads(path.read_text()) if path.exists() else None
current_snapshot = capture_snapshot('HubSpot')

if previous_snapshot:
    current_ads = current_snapshot['facebook'] + current_snapshot['linkedin']
    previous_ads = previous_snapshot['facebook'] + previous_snapshot['linkedin']
    print(score_fatigue(current_ads, previous_ads))

path.write_text(json.dumps(current_snapshot, indent=2))
```
## What This Signal Is Good For
This kind of fatigue analysis is especially useful when you want to answer questions like:
- Is this competitor still testing new ideas?
- Are they over-leaning on one winner?
- Has their category narrative stopped evolving?
- Is a pricing, offer, or positioning change overdue?
That is valuable even if you never know the exact ROAS.
## What This Signal Cannot Tell You
This is the important caveat.
High carryover does not automatically mean the campaign is failing.
Sometimes a "tired" public ad is still profitable enough internally that the team keeps it live on purpose.
So I do not treat fatigue as a verdict.
I treat it as a prompt.
Something like:
this advertiser looks less exploratory than they did three weeks ago. Why?
That is a useful question.
## Honest Alternatives
There are a few other ways to approach this.
### Performance data from your own ad account
Best if you are measuring your own fatigue.
Not useful for competitor monitoring.
### Manual creative review
Still valuable, especially for strategy teams.
But it does not scale well or give you historical baselines.
### Full creative clustering with embeddings
Powerful, but heavier than most teams need at first.
I would start with string-level signatures and only get more sophisticated if the workflow proves useful.
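A middle step between exact-match signatures and embeddings is fuzzy matching, which stops cosmetic edits from counting as "new" creative. This sketch uses Python's standard-library difflib; the 0.9 similarity threshold is an assumption you would tune:

```python
from difflib import SequenceMatcher


def is_near_duplicate(a, b, threshold=0.9):
    """Treat two strings as the same creative if they are ~90% similar."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


previous = ["Grow better with our CRM", "Close deals faster"]
current = ["Grow better with our CRM!", "A totally new message"]

# Count a current creative as carried over if it nearly matches any previous one.
carried = [c for c in current if any(is_near_duplicate(c, p) for p in previous)]
print(len(carried))  # 1: the punctuation-only edit still counts as carryover
```

The tradeoff: fuzzy matching catches trivial rewrites but still misses image swaps and genuine paraphrases, which is where embeddings eventually earn their complexity.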
## Final Take
Creative fatigue gets easier to spot once you stop treating the ad library like a gallery and start treating it like a time series.
Capture snapshots. Compare them. Measure carryover. Watch whether the CTA and landing-page set are evolving.
That alone will tell you more than most ad-library browsing ever will.
And if you want the public ad data layer without building four separate collection systems first, SociaVault is a good place to plug in.