The fastest way to waste money in influencer marketing is to approve creators too quickly.
I know because the first versions of my creator review process were basically vibes plus screenshots.
The account looked polished.
The follower count looked big enough.
The average views looked decent.
Then the campaign underdelivered and everyone acted surprised.
That is what finally pushed me into building a more structured audience quality audit.
Not a giant enterprise platform. Not some fake-AI fraud detector that claims absolute certainty.
Just a workflow that helps answer a practical question before money goes out:
does this audience look believable enough, engaged enough, and useful enough to justify the spend?
This post is how I would build that audit today, what signals I trust most, and how I would wire it together in JavaScript and Python using public social data.
The Goal Is Not Perfect Fraud Detection
This is the first thing worth getting straight.
You are not trying to prove every follower is real.
You are trying to reduce obvious bad bets.
That is a different job.
So the output I want from an audience quality audit is not "fraud" or "clean."
It is something more useful:
- pass
- caution
- fail
That framing makes the system much more honest.
The Five Signals I Care About First
There are a lot of possible social metrics. Most of them are less useful than people think.
These are the five I start with.
1. Follower-to-engagement sanity
If someone has a huge audience but almost no visible response, that matters.
2. Recent-post consistency
I care more about the last 10 to 12 posts than a flattering overall average.
3. Comment quality
This is often more revealing than top-line engagement rate.
4. Audience sample quality
If follower sampling is available, it helps catch obviously low-quality audiences fast.
5. Cross-platform coherence
If a creator claims broad influence, the public narrative should mostly hold together across platforms.
That alone catches a surprising amount of bad inventory.
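The full scripts later in this post cover signals 1 through 4; signal 5 is easy to sketch on its own. Here is a minimal, hedged version of a cross-platform coherence check. The input shape (a dict of platform name to publicly claimed follower count) and the `max_ratio` threshold are my assumptions, not anything an API returns:

```python
def coherence_flags(follower_counts, max_ratio=50):
    """Flag platform pairs whose audience sizes are wildly out of proportion.

    A creator with 2M TikTok followers and 300 everywhere else is not
    automatically fake, but the mismatch deserves a manual look.
    Expects a non-empty dict of platform -> follower count.
    """
    flags = []
    # Largest audience first; compare everything against it.
    platforms = sorted(follower_counts, key=follower_counts.get, reverse=True)
    biggest = platforms[0]
    for platform in platforms[1:]:
        small = follower_counts[platform]
        if small > 0 and follower_counts[biggest] / small > max_ratio:
            flags.append(f'{biggest} audience is >{max_ratio}x {platform}')
    return flags

print(coherence_flags({'tiktok': 2_000_000, 'instagram': 300, 'youtube': 150_000}))
```

Anything this flags goes into the caution pile for a human to look at, not straight to fail.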
The Audit Flow I Would Actually Build
My version is simple.
- Pull profile data
- Pull recent posts or videos
- Calculate engagement and volatility
- Pull a follower sample when possible
- Score comments qualitatively
- Return pass, caution, or fail with reasons
That is enough for a useful first version.
JavaScript Version: Quick Creator Audit Workflow
This version uses TikTok as the example because the public signal set is strong enough to make the audit practical.
```javascript
const headers = {
  'X-API-Key': process.env.SOCIAVAULT_API_KEY,
};

async function fetchJson(url) {
  const response = await fetch(url, { headers });
  if (!response.ok) {
    throw new Error(`Request failed with ${response.status}`);
  }
  return response.json();
}

function average(values) {
  if (!values.length) return 0;
  return values.reduce((sum, value) => sum + value, 0) / values.length;
}

// Turn the four raw metrics into a score plus a pass / caution / fail verdict.
function classifyAudit({ engagementRate, volatility, suspiciousFollowerShare, weakCommentShare }) {
  let score = 100;
  const reasons = [];
  if (engagementRate < 1) {
    score -= 25;
    reasons.push('Low engagement relative to audience size');
  }
  if (volatility > 1.5) {
    score -= 15;
    reasons.push('Recent performance is unusually inconsistent');
  }
  if (suspiciousFollowerShare > 0.3) {
    score -= 30;
    reasons.push('Follower sample includes too many low-quality accounts');
  }
  if (weakCommentShare > 0.4) {
    score -= 20;
    reasons.push('Comment quality looks weak or generic');
  }
  let verdict = 'pass';
  if (score < 70) verdict = 'caution';
  if (score < 45) verdict = 'fail';
  return { score, verdict, reasons };
}

// Crude but effective: empty, very short, stock-phrase, or letter-free comments count as weak.
function isWeakComment(text = '') {
  const value = text.toLowerCase().trim();
  if (!value) return true;
  if (value.length < 6) return true;
  if (/^(nice|great|love this|amazing|wow|cool)[!. ]*$/.test(value)) return true;
  if (!/[a-z]/i.test(value)) return true;
  return false;
}

// Zero posts plus a near-empty following, or a long digit run in the username,
// reads as a low-quality account.
function isSuspiciousFollower(follower = {}) {
  const username = follower.username || follower.unique_id || '';
  const posts = follower.aweme_count || follower.videoCount || 0;
  const followers = follower.follower_count || follower.followerCount || 0;
  if (posts === 0 && followers < 20) return true;
  if (/\d{5,}/.test(username)) return true;
  return false;
}

async function auditCreator(handle) {
  // Pull the profile, the last 12 videos, and a follower sample in parallel.
  const [profileJson, videosJson, followersJson] = await Promise.all([
    fetchJson(`https://api.sociavault.com/v1/scrape/tiktok/profile?handle=${encodeURIComponent(handle)}`),
    fetchJson(`https://api.sociavault.com/v1/scrape/tiktok/videos?handle=${encodeURIComponent(handle)}&amount=12&sort_by=latest&trim=true`),
    fetchJson(`https://api.sociavault.com/v1/scrape/tiktok/followers?handle=${encodeURIComponent(handle)}&trim=true`),
  ]);

  const profile = profileJson.data || {};
  const videos = videosJson.data || [];
  const followers = followersJson.data || [];

  const followerCount = profile.stats?.followerCount || profile.followerCount || 0;

  // Engagement rate: average likes + comments per recent video, as a percent of followers.
  const engagements = videos.map(video => (video.stats?.diggCount || 0) + (video.stats?.commentCount || 0));
  const engagementRate = followerCount > 0 ? (average(engagements) / followerCount) * 100 : 0;

  // Volatility: coefficient of variation (standard deviation / mean) of recent engagement.
  const avgEngagement = average(engagements);
  const variance = average(engagements.map(value => Math.pow(value - avgEngagement, 2)));
  const volatility = avgEngagement > 0 ? Math.sqrt(variance) / avgEngagement : 0;

  const comments = videos.flatMap(video => video.comments || []);
  const weakCommentShare = comments.length
    ? comments.filter(comment => isWeakComment(comment.text || comment.content)).length / comments.length
    : 0;

  const suspiciousFollowerShare = followers.length
    ? followers.filter(isSuspiciousFollower).length / followers.length
    : 0;

  return {
    handle,
    followerCount,
    engagementRate: Number(engagementRate.toFixed(2)),
    volatility: Number(volatility.toFixed(2)),
    suspiciousFollowerShare: Number(suspiciousFollowerShare.toFixed(2)),
    weakCommentShare: Number(weakCommentShare.toFixed(2)),
    ...classifyAudit({ engagementRate, volatility, suspiciousFollowerShare, weakCommentShare }),
  };
}

auditCreator('creator_handle')
  .then(report => console.log(report))
  .catch(error => console.error(error));
```
Is that perfect? No.
Is it useful enough to stop some obviously bad approvals? Absolutely.
Python Version: Same Audit, Easier to Batch
If you want to run this on lists of creators, Python is a nice fit.
```python
import os
import re

import requests

HEADERS = {'X-API-Key': os.environ['SOCIAVAULT_API_KEY']}
BASE = 'https://api.sociavault.com/v1/scrape/tiktok'


def fetch_json(url, params=None):
    # Passing params lets requests handle URL encoding of the handle.
    response = requests.get(url, headers=HEADERS, params=params, timeout=30)
    response.raise_for_status()
    return response.json()


def average(values):
    return sum(values) / len(values) if values else 0


def is_weak_comment(text=''):
    # Empty, very short, stock-phrase, or letter-free comments count as weak.
    value = text.lower().strip()
    if not value:
        return True
    if len(value) < 6:
        return True
    if re.match(r'^(nice|great|love this|amazing|wow|cool)[!. ]*$', value):
        return True
    if not re.search(r'[a-z]', value):
        return True
    return False


def is_suspicious_follower(follower):
    # Zero posts plus a near-empty following, or a long digit run in the
    # username, reads as a low-quality account.
    username = follower.get('username') or follower.get('unique_id') or ''
    posts = follower.get('aweme_count') or follower.get('videoCount') or 0
    followers = follower.get('follower_count') or follower.get('followerCount') or 0
    if posts == 0 and followers < 20:
        return True
    if re.search(r'\d{5,}', username):
        return True
    return False


def classify_audit(engagement_rate, volatility, suspicious_follower_share, weak_comment_share):
    score = 100
    reasons = []
    if engagement_rate < 1:
        score -= 25
        reasons.append('Low engagement relative to audience size')
    if volatility > 1.5:
        score -= 15
        reasons.append('Recent performance is unusually inconsistent')
    if suspicious_follower_share > 0.3:
        score -= 30
        reasons.append('Follower sample includes too many low-quality accounts')
    if weak_comment_share > 0.4:
        score -= 20
        reasons.append('Comment quality looks weak or generic')
    verdict = 'pass'
    if score < 70:
        verdict = 'caution'
    if score < 45:
        verdict = 'fail'
    return {'score': score, 'verdict': verdict, 'reasons': reasons}


def audit_creator(handle):
    profile_json = fetch_json(f'{BASE}/profile', params={'handle': handle})
    videos_json = fetch_json(f'{BASE}/videos', params={'handle': handle, 'amount': 12, 'sort_by': 'latest', 'trim': 'true'})
    followers_json = fetch_json(f'{BASE}/followers', params={'handle': handle, 'trim': 'true'})

    profile = profile_json.get('data') or {}
    videos = videos_json.get('data') or []
    followers = followers_json.get('data') or []

    follower_count = (profile.get('stats') or {}).get('followerCount') or profile.get('followerCount') or 0

    # Engagement rate: average likes + comments per recent video, as a percent of followers.
    engagements = [
        (video.get('stats') or {}).get('diggCount', 0) + (video.get('stats') or {}).get('commentCount', 0)
        for video in videos
    ]
    engagement_rate = (average(engagements) / follower_count * 100) if follower_count else 0

    # Volatility: coefficient of variation (standard deviation / mean) of recent engagement.
    avg_engagement = average(engagements)
    variance = average([(value - avg_engagement) ** 2 for value in engagements])
    volatility = (variance ** 0.5 / avg_engagement) if avg_engagement else 0

    comments = []
    for video in videos:
        comments.extend(video.get('comments') or [])
    weak_comment_share = (
        sum(1 for comment in comments if is_weak_comment(comment.get('text') or comment.get('content') or '')) / len(comments)
        if comments else 0
    )
    suspicious_follower_share = (
        sum(1 for follower in followers if is_suspicious_follower(follower)) / len(followers)
        if followers else 0
    )

    report = {
        'handle': handle,
        'followerCount': follower_count,
        'engagementRate': round(engagement_rate, 2),
        'volatility': round(volatility, 2),
        'suspiciousFollowerShare': round(suspicious_follower_share, 2),
        'weakCommentShare': round(weak_comment_share, 2),
    }
    report.update(classify_audit(engagement_rate, volatility, suspicious_follower_share, weak_comment_share))
    return report


print(audit_creator('creator_handle'))
```
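Batching is mostly a matter of not letting one bad handle kill the whole run, and sorting the results so review time goes where it matters. Here is a minimal wrapper sketch; it takes the audit function as a parameter, so it is shown below with a stub, but in real use you would pass `audit_creator`. The report shape matches the one above; the rest is an assumption about your own process:

```python
def audit_batch(handles, audit):
    """Run `audit` over many handles, collecting failures instead of crashing."""
    reports, failures = [], []
    for handle in handles:
        try:
            reports.append(audit(handle))
        except Exception as error:
            failures.append({'handle': handle, 'error': str(error)})
    # Worst score first, so the riskiest creators get reviewed first.
    reports.sort(key=lambda report: report['score'])
    return {'reports': reports, 'failures': failures}

# Wiring demo with a stubbed audit function ('c' has no data, so it fails):
fake_reports = {
    'a': {'handle': 'a', 'score': 80, 'verdict': 'pass'},
    'b': {'handle': 'b', 'score': 40, 'verdict': 'fail'},
}
result = audit_batch(['a', 'b', 'c'], lambda handle: fake_reports[handle])
print([report['handle'] for report in result['reports']])  # worst score first: ['b', 'a']
print(result['failures'])
```

With the real thing it is just `audit_batch(handles, audit_creator)`, plus whatever rate limiting your plan requires.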
The Output I Actually Want
The audit is only useful if the result is actionable.
That means more than a pile of numbers.
It should say something like:
- pass: audience looks strong enough to proceed
- caution: creator could still work, but pricing or structure should change
- fail: too many signals point to weak audience quality or unreliable performance
That is the decision layer.
Without it, you are just generating more homework.
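One way to bolt that decision layer onto the report the scripts above produce is a simple verdict-to-action mapping. The action strings here are assumptions about your own negotiation process, not anything the audit can know:

```python
# Hypothetical actions per verdict; adjust to your own buying process.
ACTIONS = {
    'pass': 'Proceed at quoted rate.',
    'caution': 'Negotiate: lower flat fee or shift to performance-based terms.',
    'fail': 'Decline, or request first-party analytics before reconsidering.',
}


def decision_line(report):
    """Collapse an audit report into one actionable line for the approval doc."""
    reasons = '; '.join(report['reasons']) or 'no flags'
    return (
        f"{report['handle']}: {report['verdict'].upper()} ({report['score']}) "
        f"- {reasons}. {ACTIONS[report['verdict']]}"
    )


print(decision_line({
    'handle': 'creator_handle',
    'score': 55,
    'verdict': 'caution',
    'reasons': ['Low engagement relative to audience size'],
}))
```

The point is that whoever approves the spend reads one line, not a dashboard.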
Honest Alternatives
There are a few valid ways to approach this.
Manual review only
Fine for one or two creators.
Terrible once the list gets bigger.
Enterprise influencer tools
Useful if you are already a larger team with budget and a fixed workflow.
Often too heavy or too expensive for teams that just need better screening.
Public-data workflow with SociaVault
This is what I like if I want a flexible audit layer without building a full data-collection stack from scratch.
That is the whole appeal: I can focus on the scoring and decision logic instead of the collection plumbing.
Final Take
An audience quality audit does not need to be perfect to be valuable.
It just needs to be disciplined enough to catch the mistakes your team keeps making under time pressure.
If you want to build that kind of pre-payment review system without wiring up the public social data layer from zero, SociaVault is a strong place to start.
Then keep the rest honest: recent posts, comment quality, follower samples, volatility, and a clear decision at the end.
That is already much better than vibes.