Most Ad Audits Miss 5 of 7 Critical Dimensions. Here's the Framework I Built to Fix That.

Most ad campaign audits are shallow. They check ROAS, maybe creative performance, and call it a day.

But after auditing dozens of campaigns across Meta, Google, TikTok, and LinkedIn, I noticed the same pattern: the problems that bleed the most budget are hiding in dimensions nobody checks. Audience overlap silently inflates CPMs. Bid strategies contradict campaign objectives. Conversion tracking gaps make every other metric unreliable.

So I built a structured 7-dimension audit framework, packaged it as a Claude Skill, and started using it on every campaign I touch. Here's how it works — and why most audits only scratch the surface.

The Problem: Partial Audits Lead to Partial Results

A typical ad audit looks like this: export your Meta Ads Manager report, sort by ROAS, kill the losers, scale the winners. Maybe check frequency if you're thorough.

That approach misses five critical dimensions that often have more impact on profitability than ROAS alone:

  1. Audience overlap — Are your ad sets bidding against each other? I've seen accounts where 40%+ of audiences overlap, artificially inflating CPMs by 15-30%.
  2. Conversion tracking integrity — If your Meta Pixel isn't paired with Conversions API (CAPI), you're likely underreporting conversions by 20-30% post-iOS 14.5. Every decision built on that data is compromised.
  3. Bid strategy alignment — A cost-cap bid on a campaign getting 15 conversions/week will spend erratically. The algorithm needs 50+ weekly conversions to optimize properly.
  4. Landing page experience — A 5-second load time on mobile kills conversion rates, but the ad platform doesn't flag this. Your "low-performing" ad might actually have a landing page problem.
  5. Budget allocation efficiency — Equal budget across unequal performers is the most common waste pattern. Marginal ROAS analysis often reveals 20-30% of spend is better allocated elsewhere.

The 7-Dimension Campaign Health Framework

After systematizing audits across different platforms and verticals, I settled on 7 dimensions that together give a complete picture of campaign health. Each is scored 1-10, giving a total health score out of 70.
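Before walking through the dimensions, here's a minimal sketch of how that scoring rolls up in code. The seven 1-10 scores and the 70-point total are from the framework; the `DimensionScore` structure and the banding thresholds are illustrative assumptions, not the skill's internals.

```python
from dataclasses import dataclass

@dataclass
class DimensionScore:
    name: str
    score: int  # each dimension is scored 1-10

def campaign_health(scores: list[DimensionScore]) -> str:
    """Sum seven 1-10 dimension scores into a health score out of 70."""
    assert len(scores) == 7, "the framework scores exactly seven dimensions"
    total = sum(s.score for s in scores)
    pct = total / 70 * 100
    # Illustrative banding: the article's example labels 48/70 (69%) "Needs Attention"
    label = "Healthy" if pct >= 80 else "Needs Attention" if pct >= 50 else "Critical"
    return f"{total}/70 ({pct:.0f}%) - {label}"
```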

Dimension 1: Creative Performance & Fatigue
CTR trends over time, frequency thresholds, creative variant count, hook rates for video. The key insight: frequency above 3.0 almost always correlates with declining CTR. Most advertisers wait until frequency hits 5.0+ before refreshing — by then, you've wasted 2-3 weeks of budget on exhausted creatives.

Dimension 2: Audience Quality & Overlap
Audience definition mapping, overlap estimation between ad sets, conversion rates by segment. The "overlap tax" — wasted spend from self-competition — is invisible in standard reporting. I've calculated it at 10-25% of total spend in poorly structured accounts.
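For intuition, here's a minimal sketch of how an overlap tax could be estimated. The term and the 15-30% CPM inflation range come from the article; the simple pairwise formula and the `overlap_tax` helper are my own illustration, not the skill's actual model.

```python
def overlap_tax(spend_a: float, spend_b: float, overlap_pct: float,
                cpm_inflation: float = 0.20) -> float:
    """Estimate wasted spend from two ad sets bidding on overlapping audiences.

    overlap_pct:   share of the smaller audience also present in the other (0-1)
    cpm_inflation: assumed CPM lift from self-competition (article cites 15-30%)
    """
    overlapping_spend = min(spend_a, spend_b) * overlap_pct
    return overlapping_spend * cpm_inflation

# Two ad sets spending $3,000 each with 38% overlap:
# overlap_tax(3000, 3000, 0.38) -> ~$228 of inflated delivery cost
```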

Dimension 3: Budget Allocation & Pacing
Marginal ROAS analysis, CBO vs. ABO appropriateness, dayparting opportunities, budget headroom testing. The question isn't "which campaign has the best ROAS?" — it's "where does the next dollar generate the most return?"
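To make "where does the next dollar go" concrete, here's a sketch of marginal ROAS as a two-period delta, a common approximation I'm assuming here rather than the skill's exact method.

```python
def marginal_roas(spend_prev: float, revenue_prev: float,
                  spend_curr: float, revenue_curr: float) -> float | None:
    """Incremental revenue per incremental dollar between two periods.

    Blended ROAS averages over all spend; marginal ROAS looks only at the
    last increment, which is what budget reallocation should follow.
    """
    delta_spend = spend_curr - spend_prev
    if delta_spend <= 0:
        return None  # no spend increase to measure against
    return (revenue_curr - revenue_prev) / delta_spend

# Campaign spend grows $1,000 -> $1,500/day, revenue $3,000 -> $4,200/day:
# marginal_roas(1000, 3000, 1500, 4200) -> 2.4x on the last $500/day
```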

Dimension 4: Conversion Tracking & Attribution
Pixel + CAPI implementation, Event Match Quality scores, UTM consistency, cross-platform attribution conflicts. This dimension is the foundation — if your measurement is broken, every other optimization is built on sand.

Dimension 5: Bid Strategy & Campaign Structure
Bid strategy matching to objectives, campaign consolidation score, Learning Limited status, ad set feeding levels. Meta's algorithm needs 50+ conversions per ad set per week to optimize properly. Most accounts have fragmented structures with ad sets getting 5-10 conversions weekly.
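A quick way to surface consolidation candidates against that 50-conversion threshold; the helper below is a hypothetical sketch, with the threshold taken from the guideline above.

```python
def flag_underfed(weekly_conversions: dict[str, int],
                  threshold: int = 50) -> list[str]:
    """Return ad sets below the weekly conversion volume needed to exit
    the learning phase (the ~50/week guideline cited above)."""
    return [name for name, conv in weekly_conversions.items() if conv < threshold]

# flag_underfed({"Prospecting-US": 62, "Retargeting-7d": 8, "Lookalike-1%": 12})
# -> ["Retargeting-7d", "Lookalike-1%"]  (consolidation candidates)
```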

Dimension 6: Landing Page & Post-Click Experience
Load time, message match between ad and page, mobile optimization, CTA clarity, form friction. This is where ad audits traditionally stop — but the post-click experience determines whether clicks become conversions.

Dimension 7: ROAS & Profitability Analysis
Blended vs. marginal ROAS, break-even calculation, new vs. returning customer splits, contribution margin per campaign. The final dimension ties everything together — are you actually making money, or is retargeting revenue masking a prospecting problem?

What a Full Audit Output Looks Like

Here's the structure the skill generates from your campaign data:

```markdown
## Campaign Audit: [Brand Name]
### Overall Score: 48/70 (69%) — Needs Attention

| Dimension                      | Score | Priority | Top Action                          |
|--------------------------------|-------|----------|-------------------------------------|
| Creative Performance & Fatigue | 6/10  | 🟡       | Refresh 3 creatives above freq 4.0  |
| Audience Quality & Overlap     | 5/10  | 🔴       | Merge 2 overlapping ad sets (38%)   |
| Budget Allocation & Pacing     | 7/10  | 🟡       | Shift $200/day from Campaign C → A  |
| Conversion Tracking            | 8/10  | 🟢       | Add CAPI for Purchase event         |
| Bid Strategy & Structure       | 6/10  | 🟡       | Consolidate ad sets in Learning Ltd |
| Landing Page Experience        | 4/10  | 🔴       | Fix mobile load time (4.2s → <3s)   |
| ROAS & Profitability           | 7/10  | 🟡       | Separate new vs returning reporting |

### Top 5 Priority Actions
1. Fix landing page mobile speed → Est. +18% conversion rate
2. Merge overlapping audiences → Est. -22% CPM savings
3. Refresh exhausted creatives → Est. +12% CTR recovery
4. Consolidate underfed ad sets → Exit Learning Limited
5. Implement CAPI for Purchase → +15% conversion attribution
```

The skill generates this from raw campaign exports — CSV data, pasted tables, or even a manual description of your setup. No API keys needed. No subscriptions.

The Benchmark Problem (and How I Solved It)

One of the biggest challenges in ad auditing is knowing what "good" looks like. A 1.2% CTR on Meta Feed might be excellent for B2B finance ($3.89 avg CPC) but mediocre for eCommerce ($1.12 avg CPC).

The paid version includes industry benchmarks compiled from WordStream, Databox, Revealbot, and Varos data for 2025-2026. Here's a sample:

| Platform | Metric | Average | Good | Excellent |
|----------|--------|---------|------|-----------|
| Meta (Feed) | CTR | 0.90% | 1.5%+ | 2.5%+ |
| Meta | CPC | $1.72 | <$1.20 | <$0.70 |
| Google Search | CTR | 3.17% | 5%+ | 8%+ |
| TikTok | Hook Rate (3s) | 30% | 45%+ | 60%+ |
| LinkedIn | CPC | $5.26 | <$4.00 | <$2.50 |
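A benchmark table only helps if the scoring actually consults it. Here's a minimal lookup sketch using the CTR rows from the table above; the three-tier rating logic is my own illustration.

```python
# (platform, metric) -> (average, good, excellent); CTR thresholds from the table above
BENCHMARKS = {
    ("Meta (Feed)", "CTR"): (0.90, 1.5, 2.5),
    ("Google Search", "CTR"): (3.17, 5.0, 8.0),
}

def rate_ctr(platform: str, ctr_pct: float) -> str:
    avg, good, excellent = BENCHMARKS[(platform, "CTR")]
    if ctr_pct >= excellent:
        return "excellent"
    if ctr_pct >= good:
        return "good"
    return "above average" if ctr_pct >= avg else "below average"

# The same 1.2% CTR lands differently by platform:
# rate_ctr("Meta (Feed)", 1.2)   -> "above average"
# rate_ctr("Google Search", 1.2) -> "below average"
```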

Without benchmarks, you're scoring in a vacuum. A 2.0x ROAS sounds decent until you realize your break-even ROAS at 50% gross margin is... 2.0x. You're not making money — you're treading water.

The break-even formula: Break-Even ROAS = 1 / Gross Margin %. At 70% margin, you break even at 1.43x. At 30% margin, you need 3.33x just to cover COGS. Most advertisers I've worked with don't have this number memorized — and they should.
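The formula is simple enough to sanity-check in a couple of lines, straight from the definition above:

```python
def break_even_roas(gross_margin: float) -> float:
    """Break-Even ROAS = 1 / Gross Margin %."""
    return 1 / gross_margin

# break_even_roas(0.70) -> 1.43x    break_even_roas(0.50) -> 2.00x
# break_even_roas(0.30) -> 3.33x -- a "decent" 2.0x ROAS at 30% margin loses money
```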

Why a Claude Skill Instead of a SaaS Tool?

The existing landscape for ad auditing tools is dominated by monthly SaaS subscriptions. Madgicx runs $44-166/month. Revealbot is $99+/month. Adzooma has a free tier but upsells aggressively.

These tools are powerful, but they're overkill for many advertisers who need periodic audits, not always-on monitoring. A Claude Skill gives you the audit framework, the benchmarks, and the structured analysis — for a one-time $19 purchase. Run it whenever you need it. No recurring fees.

The skill also covers what SaaS tools often skip: qualitative assessment of campaign structure, bid strategy alignment, and creative pipeline health. These require judgment, not just data aggregation.

Specialized Modules Beyond the Core Audit

The full skill includes five focused modules that go deeper on specific problems:

Creative Fatigue Detector — Feed it time-series creative data and it classifies each ad into lifecycle stages (Fresh → Mature → Fatiguing → Exhausted) with specific refresh recommendations.
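The lifecycle stages are from the module description; the exact thresholds in this sketch are illustrative assumptions, anchored to the frequency signals discussed earlier.

```python
def lifecycle_stage(frequency: float, ctr_change_pct: float) -> str:
    """Classify a creative by audience frequency and CTR trend.

    ctr_change_pct: CTR change vs. the creative's first full week (e.g. -35.0).
    Thresholds are illustrative; frequency above 3.0 correlating with CTR
    decline is the article's key signal.
    """
    if frequency >= 4.0 or ctr_change_pct <= -40:
        return "Exhausted"  # refresh immediately
    if frequency >= 3.0 or ctr_change_pct <= -20:
        return "Fatiguing"  # queue a replacement creative now
    if ctr_change_pct <= -5:
        return "Mature"     # monitor weekly
    return "Fresh"
```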

Audience Overlap Analyzer — Maps audience definitions across ad sets, estimates overlap percentages, and calculates the "overlap tax" in wasted spend.

Budget Allocation Optimizer — Calculates marginal ROAS per campaign to find the optimal spend distribution with expected impact projections.

Scaling Readiness Assessment — Evaluates six criteria (ROAS stability, CPA variance, Learning Limited status, frequency, conversion volume, creative freshness) to determine which campaigns can absorb more budget safely.
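Wiring those six criteria into a pass/fail gate might look like the sketch below; the criteria come from the module description, while the specific thresholds are my assumptions.

```python
from typing import NamedTuple

class CampaignSnapshot(NamedTuple):
    roas_7d_stddev: float        # day-to-day ROAS variability over 7 days
    cpa_variance_pct: float      # day-over-day CPA swing
    learning_limited: bool
    frequency: float
    weekly_conversions: int
    days_since_creative_refresh: int

def scaling_ready(c: CampaignSnapshot) -> bool:
    """All six readiness criteria must pass before adding budget."""
    checks = [
        c.roas_7d_stddev < 0.3,              # ROAS stability
        c.cpa_variance_pct < 20,             # CPA variance under control
        not c.learning_limited,              # exited the learning phase
        c.frequency < 3.0,                   # audience not yet saturated
        c.weekly_conversions >= 50,          # enough signal to optimize
        c.days_since_creative_refresh < 21,  # creatives still fresh
    ]
    return all(checks)
```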

Weekly Performance Brief — Generates a stakeholder-ready report with week-over-week (WoW) comparisons. Saves 30-60 minutes of manual reporting every week.

Try It: Free Lite Version Available

The lite version covers 3 of 7 dimensions (Creative Fatigue, Budget Allocation, ROAS & Profitability) — enough to find the biggest problems in most accounts.

Free lite version: Ad Performance Auditor Lite on GitHub

Full 7-dimension version ($19): Ad Performance Auditor on Gumroad

If you're building a financial data product and want to see programmatic SEO at scale, check out how I'm analyzing stock performance data across 8,000+ tickers — the same data-driven methodology applies to ad campaign analysis.

What's Next

The Ad Performance Auditor pairs well with two other tools in the Apex Stack lineup.

Performance marketing isn't just about spending more on what works. It's about systematically finding and fixing the leaks across all seven dimensions — creative, audience, budget, tracking, structure, post-click, and profitability.

The framework is public. The benchmarks are in the skill. The question is whether you'll keep running partial audits or start catching the problems hiding in the dimensions nobody checks.


Built by Apex Stack — tools and frameworks for builders who ship.

Top comments (2)

Daniel Nwaneri

The conversion tracking dimension being the foundation that makes every other metric unreliable — that's the finding that should be at the top, not dimension 4. If the pixel isn't paired with CAPI post-iOS 14.5, you're not auditing a campaign, you're auditing a measurement artifact. Everything built on that data is optimizing against a fiction.

The scoring framework solves a problem I've run into on the SEO side too: without a consistent unit of measurement across dimensions, audits produce a list of problems with no clear prioritization. A 48/70 with a red flag on landing page load time tells you something a pass/fail checklist doesn't — where the next dollar of attention actually goes.

Curious about the overlap tax calculation — is the 10-25% estimate derived from audience size overlap percentages, or are you measuring it directly from CPM lift when overlapping ad sets are running simultaneously? The difference matters for whether you can catch it pre-launch or only post-spend.

Just a heads-up that a few links in the article are hitting 404s. Might be worth updating them. Thanks.

Apex Stack

Great catch on the conversion tracking ordering — you're absolutely right that it should logically sit at the foundation of the framework, not at dimension 4. The way I structured it in the article was more about the typical audit workflow (you usually start with what's easiest to check and escalate), but from a dependency standpoint, if your pixel+CAPI setup is broken, every other metric is suspect. I'll consider restructuring the framework to make that dependency more explicit.

On the overlap tax question: the 10-25% estimate comes from a combination of both signals — audience overlap percentages from the platform's built-in tools (Meta's Audience Overlap tool, for instance) and then observed CPM inflation when overlapping ad sets run concurrently. You're right that it matters for timing — the audience overlap percentage is something you can catch pre-launch, while the CPM lift is only measurable post-spend. The skill actually flags both: pre-launch overlap detection as a preventive check, and post-spend CPM anomaly detection as a diagnostic one.

Thanks for the heads-up on the 404s — I'll get those fixed. And appreciate the thoughtful engagement with the scoring framework. The cross-domain application to SEO audits is exactly the kind of use case I was hoping people would find.