Drew Madore
Your Black Friday Attribution Model Is Probably Lying to You

Black Friday 2024 is done. Cyber Monday's in the books. And somewhere right now, a marketing team is staring at a dashboard trying to figure out which channel actually deserves credit for that $847,000 in holiday revenue.

Spoiler: It's complicated.

Here's what actually happened during your holiday campaign: Someone saw your Instagram ad on their phone while standing in line at Starbucks. Forgot about it. Got a retargeting ad three days later. Clicked through, browsed, left. Received an email. Ignored it. Saw another email. Opened it but didn't click. Searched your brand name on Google a week later. Clicked a Shopping ad. Added items to cart. Abandoned it. Got a cart abandonment email. Came back direct. Made a purchase.

Now tell me: which channel gets the credit?

If you said "all of them, proportionally weighted by their actual influence on the purchase decision," congratulations—you understand attribution modeling in theory. In practice, your analytics platform just gave 100% credit to that last direct visit and called it a day.
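The mechanic is trivially simple, which is exactly the problem. A minimal sketch of what last-click actually computes for the journey above (channel names are illustrative, not from any platform):

```python
# Last-click attribution: 100% of revenue goes to the final touchpoint.
def last_click_credit(touchpoints, revenue):
    """Assign all revenue to the last touch; every other channel gets zero."""
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[-1]] = revenue
    return credit

journey = [
    "instagram_ad", "retargeting_ad", "email", "email",
    "shopping_ad", "cart_abandon_email", "direct",
]
print(last_click_credit(journey, 120.0))
# six channels touched the sale; only "direct" gets any credit
```

Seven interactions, one winner. Every other model in this post is an attempt to spread that single line of credit assignment more honestly.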

The Holiday Attribution Problem Nobody Talks About

Multi-touch attribution gets exponentially harder during Q4. Not just "a bit more complex"—genuinely harder to model accurately.

Why? Because holiday shopping behavior breaks all your baseline assumptions. That customer journey that usually takes 3-5 days? During Black Friday week, it compresses into 18 hours. Or stretches across three weeks because someone's browsing for gifts in early November but not buying until December.

Your attribution window settings? Probably wrong for holiday traffic.

Google Analytics 4 defaults to a 90-day lookback window. Sounds generous. But when someone interacts with your brand across email, paid social, organic search, display ads, and direct visits within a 10-day span—and you're running 4x your normal ad spend across all those channels simultaneously—that 90-day window captures a whole lot of noise from other campaigns.

I pulled data from three different e-commerce brands I've worked with. During Black Friday week, the average customer touchpoint count before purchase increased by 73% compared to October. Not 7%. Seventy-three percent.

Your attribution model wasn't built for that.

What Last-Click Attribution Actually Tells You

Let's be honest about last-click for a second. It's not useless. It's just answering a different question than you think.

Last-click tells you: "What was the final thing that happened before someone bought?" That's valuable data. It's just not the same as "What made someone buy?"

During the holidays, last-click attribution systematically over-credits:

  • Direct traffic (because brand awareness campaigns pushed people to search for you)
  • Branded search (same reason)
  • Email (because it's often the final nudge after other channels did the heavy lifting)

And it systematically under-credits:

  • Display ads (which nobody clicks but everyone sees)
  • Upper-funnel social campaigns (which introduce your brand to cold audiences)
  • Organic content (which builds trust before someone's ready to buy)

One brand I analyzed spent $43,000 on prospecting Facebook campaigns during Black Friday week. Last-click attribution credited those campaigns with $8,200 in revenue. First-click attribution? $67,400. The truth was probably somewhere in between, but the 8x difference tells you something important: your model matters more than your data.

First-Click Isn't Better, It's Just Different

Some marketers swing hard in the other direction. "Last-click is broken, so let's use first-click!"

Sure. If you want to over-credit every cold prospecting campaign and under-value everything that actually convinced someone to buy.

First-click tells you what introduced someone to your brand. That's genuinely useful for understanding acquisition channels. But it completely ignores the nurture sequence, the retargeting, the email follow-ups, and the final conversion push that actually closed the deal.

During a normal month, first-click might give you decent directional data. During the holidays? It's chaos. Someone might first interact with your brand through an organic social post in early November, then not buy until a Cyber Monday email hits their inbox. First-click gives that social post 100% credit for a purchase that happened 25 days and 8 touchpoints later.

That's not insight. That's fiction.

Linear Attribution: Democracy Is Overrated

Linear attribution sounds fair. Every touchpoint gets equal credit. Very egalitarian. Also very wrong.

Because not all touchpoints are equally valuable. The Instagram ad that introduced your brand to someone new did more work than the third retargeting impression they scrolled past without clicking. The email that finally convinced them to buy deserves more credit than the display ad they saw but didn't consciously register.

Linear attribution treats your carefully orchestrated marketing funnel like a participation trophy ceremony. Everyone gets the same credit just for showing up.

Here's what linear attribution told one e-commerce client during last year's holiday season: their display campaign (which generated 2,847 impressions but only 23 clicks) deserved the same credit as their cart abandonment email sequence (which had a 34% conversion rate).

Does that pass the common sense test? Not even close.

Time-Decay Models: Getting Warmer

Time-decay attribution gives more credit to touchpoints closer to the conversion. This is... actually pretty reasonable for holiday campaigns.

The logic: interactions closer to the purchase probably had more influence on the decision. Someone who clicked your ad yesterday is more relevant to today's purchase than someone who saw an impression three weeks ago.

Time-decay implementations conventionally use a 7-day half-life, meaning a touchpoint from 7 days ago gets half the credit of one from today. If your platform lets you adjust the half-life, you probably should for holiday campaigns.

During Black Friday week, I'd recommend shortening that half-life to 3-4 days. Holiday shopping momentum is compressed. What happened three days ago is more relevant than what happened three weeks ago, even if both are within your attribution window.
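The half-life math is worth seeing concretely. A hedged sketch (the touchpoint ages are made up; only the weighting formula matters):

```python
# Normalized time-decay weights for one conversion path, parameterized
# by half-life in days: a touch gets 0.5^(age / half_life) raw weight.
def time_decay_weights(days_before_conversion, half_life_days=7.0):
    raw = [0.5 ** (d / half_life_days) for d in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]

ages = [14, 7, 3, 0]  # days before purchase, oldest touch first
standard = time_decay_weights(ages, half_life_days=7)  # conventional setting
holiday = time_decay_weights(ages, half_life_days=3)   # compressed momentum
print([round(w, 3) for w in standard])
print([round(w, 3) for w in holiday])
```

Shortening the half-life shifts credit toward the most recent touches, which is the point during a compressed holiday buying window.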

But time-decay still has a problem: it assumes recency equals importance. Sometimes the first touchpoint (the one that introduced your brand) was actually the most important, even if it happened two weeks ago. Time-decay systematically under-values top-of-funnel awareness work.

Which brings us to the models that might actually work.

Position-Based (U-Shaped) Attribution: The Compromise That Works

Position-based attribution gives 40% credit to the first touchpoint, 40% to the last touchpoint, and splits the remaining 20% among everything in the middle.

This is the model I default to for most holiday campaign analysis. Not because it's perfect—no model is—but because it acknowledges three important truths:

  1. The first interaction matters (it got someone into your funnel)
  2. The last interaction matters (it closed the deal)
  3. Everything in the middle contributed something

For a Black Friday campaign, this gives meaningful credit to your prospecting ads (which introduced new customers) and your conversion-focused tactics (emails, retargeting, branded search) while not completely ignoring the nurture sequence.
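The 40/40/20 split is easy to implement yourself if you want to re-score exported conversion paths. A hedged sketch (the two-touch case splitting 50/50 is a common convention, not a standard):

```python
# U-shaped (position-based) attribution: 40% first touch, 40% last touch,
# remaining 20% split evenly across the middle touches.
def position_based_credit(touchpoints, revenue, first=0.4, last=0.4):
    n = len(touchpoints)
    credit = {}
    if n == 1:
        return {touchpoints[0]: revenue}
    if n == 2:  # no middle: common convention is an even split
        for ch in touchpoints:
            credit[ch] = credit.get(ch, 0.0) + revenue / 2
        return credit
    first_share, last_share = first * revenue, last * revenue
    middle_each = (revenue - first_share - last_share) / (n - 2)
    for i, ch in enumerate(touchpoints):
        share = first_share if i == 0 else last_share if i == n - 1 else middle_each
        credit[ch] = credit.get(ch, 0.0) + share
    return credit

print(position_based_credit(["social", "email", "search", "direct"], 100.0))
# social and direct each get $40; email and search each get $10
```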

One retail client switched from last-click to position-based attribution for their 2024 holiday analysis. Suddenly, their prospecting campaigns looked 3x more valuable, and their retargeting campaigns looked about 30% less valuable. Neither last-click nor position-based was "true"—but position-based gave them a more complete picture of which channels were actually pulling their weight.

The key insight: their Facebook prospecting campaigns weren't underperforming. They were doing exactly what prospecting campaigns should do—introducing new customers who then converted through other channels. Last-click attribution just couldn't see it.

Data-Driven Attribution: Let The Algorithm Figure It Out

Google Analytics 4 and most major ad platforms now offer data-driven attribution. The algorithm analyzes your actual conversion paths and assigns credit based on statistical modeling of what actually influences purchases.

In theory, this is the holy grail. Let machine learning figure out the complex multi-touch reality.

In practice, it requires a lot of data to work properly. Google recommends at least 400 conversions per month with at least 10,000 ad interactions. During Black Friday week, most e-commerce brands clear that threshold easily. During a normal Tuesday in February? Maybe not.

The bigger issue: data-driven attribution is a black box. You can't see how it's weighting different touchpoints or why. You just have to trust the algorithm.

For some marketers, that's fine. For others (especially when you're defending budget decisions to a CFO), "the algorithm said so" isn't a satisfying explanation.

I've seen data-driven attribution work beautifully for large e-commerce operations with clean data and high volume. I've also seen it produce bizarre results for smaller brands with messy tracking and sparse data. Your mileage will vary.

What Actually Worked: Three Post-Black Friday Attribution Strategies

Enough theory. Here's what I'm seeing work for actual brands analyzing their 2024 holiday performance.

Strategy 1: Use Different Models for Different Questions

Stop trying to find one "true" attribution model. Use multiple models to answer different questions:

  • Last-click to understand final conversion triggers (which emails, retargeting ads, or search terms close deals)
  • First-click to evaluate acquisition channel performance (which campaigns bring in new customers)
  • Position-based for overall campaign ROI and budget allocation decisions
  • Time-decay for understanding momentum and recency effects

One beauty brand I worked with built a simple dashboard that showed revenue by channel across all four models side-by-side. The differences were illuminating. Their Instagram ads looked terrible in last-click ($4,200 attributed revenue) but strong in first-click ($38,900). Their email looked incredible in last-click ($127,000) but mediocre in first-click ($31,000).

The insight: Instagram was doing its job (prospecting), and email was doing its job (converting). Neither channel was underperforming. The last-click model was just telling an incomplete story.
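That read can be reduced to a quick sanity check in code. A sketch using the two models and the figures quoted above; the `likely_role` heuristic is my own simplification, not a standard metric:

```python
# Revenue by channel under two attribution models (figures from the
# beauty brand example above).
by_model = {
    "last_click":  {"instagram_ads": 4_200,  "email": 127_000},
    "first_click": {"instagram_ads": 38_900, "email": 31_000},
}

def likely_role(channel):
    """Crude heuristic: first-click-heavy channels look like prospecting,
    last-click-heavy channels look like converting."""
    fc = by_model["first_click"][channel]
    lc = by_model["last_click"][channel]
    return "prospecting" if fc > lc else "converting"

for ch in ("instagram_ads", "email"):
    print(ch, "->", likely_role(ch))
```

A channel that only looks good in one model usually isn't broken; it's specialized. The heuristic just makes that explicit.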

Strategy 2: Segment Holiday vs. Non-Holiday Attribution Windows

Your attribution window settings shouldn't be the same in November as they are in March. Holiday shopping behavior is different.

For post-Black Friday analysis, try this:

  • Shorten your click-through attribution window to 7-14 days (down from 30)
  • Keep your view-through window at 1 day (or disable it entirely if your display volume is high)
  • Create separate conversion segments for Black Friday week, Cyber Week, and December to see how behavior differs
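
Re-scoping a window is just filtering touchpoints by age before you run any model over them. A hedged sketch with made-up data:

```python
# Trim a conversion path to touchpoints inside a given click-through
# window. Touchpoints are (channel, days_before_conversion) pairs.
def within_window(path, window_days):
    return [(ch, d) for ch, d in path if d <= window_days]

path = [("paid_social", 21), ("display", 12), ("email", 5), ("direct", 0)]
print(within_window(path, 30))  # a 30-day window keeps all four touches
print(within_window(path, 14))  # a holiday window drops the 21-day-old touch
```

Run the same model over both versions of the path and you'll see directly how much of a channel's credit depends on stale touches.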

One electronics retailer found that Black Friday purchases had an average of 4.2 touchpoints over 6 days, while December purchases averaged 6.7 touchpoints over 18 days. Same customers, different behavior patterns. Using the same attribution settings for both periods was distorting their channel performance data.

Strategy 3: Build a Holdout Test for Your Biggest Spend

Attribution models are directional, not definitive. If you really want to know if a channel is working, you need incrementality testing.

Here's the simple version: Take your biggest holiday marketing channel (probably paid social or Google Ads). For a small percentage of your audience (5-10%), turn it off completely. Compare conversion rates between the exposed group and the holdout group.

The difference is your actual incremental impact. Everything else is modeling and assumptions.
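The arithmetic behind a holdout readout is simple enough to sanity-check by hand. A sketch with hypothetical rates and sizes; only the formula structure matters:

```python
# Incremental revenue from a holdout test: conversions beyond what the
# unexposed baseline predicts, times average order value.
def incremental_revenue(exposed_conv_rate, holdout_conv_rate,
                        exposed_size, avg_order_value):
    lift_rate = exposed_conv_rate - holdout_conv_rate
    return lift_rate * exposed_size * avg_order_value

# e.g. 3.1% vs 2.4% conversion across 500,000 exposed users at $80 AOV
print(incremental_revenue(0.031, 0.024, 500_000, 80.0))
```

If the attribution model credits the channel with far more than this number, the gap is people who would have bought anyway.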

A fashion brand did this with their Facebook retargeting during Cyber Week. Attribution models credited retargeting with $89,000 in revenue. The holdout test showed actual incremental revenue of $34,000. Still positive ROI, but 62% lower than the attribution model suggested.

Why the gap? Because a lot of people who saw retargeting ads were going to buy anyway. They were already in-market, already familiar with the brand, already planning to purchase. The retargeting ads got credit, but they didn't necessarily cause the purchase.

This is the dirty secret of attribution modeling: correlation isn't causation, and most models can't tell the difference.

The Tracking Infrastructure Nobody Wants to Fix

Let's talk about why your attribution data is probably messier than you think.

UTM parameters. Server-side tracking. iOS privacy changes. Cookie deprecation. Ad blockers. Cross-device journeys. Multiple browsers. Incognito mode.

Every one of these things breaks your attribution tracking in small ways. During Black Friday week, when traffic spikes 5-10x and customers are bouncing between devices and channels frantically, all those small breaks compound into big data gaps.

I audited one e-commerce brand's tracking after Black Friday. Here's what I found:

  • 23% of email clicks weren't properly tagged with UTM parameters
  • 31% of conversions showed as "direct" traffic (probably actually from untracked sources)
  • Cross-device journeys (starting on mobile, finishing on desktop) weren't being connected
  • Their TikTok ads weren't passing UTM parameters correctly

When a third of your data is miscategorized, your attribution model doesn't matter. Garbage in, garbage out.

The fix isn't sexy: audit your tracking infrastructure before next holiday season. Test every email template, every ad platform, every link. Make sure UTM parameters are consistent and complete. Implement server-side tracking if you haven't already. Connect Google Analytics 4 to your actual purchase data, not just pageviews.
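Part of that audit can be automated. A hedged sketch of a UTM check using the standard library; the URLs and required-parameter set are illustrative, so adjust them to your own tagging convention:

```python
# Scan outbound campaign links for missing utm_* parameters.
from urllib.parse import urlparse, parse_qs

REQUIRED = {"utm_source", "utm_medium", "utm_campaign"}

def missing_utms(url):
    """Return the required UTM parameters absent from a URL's query string."""
    params = parse_qs(urlparse(url).query)
    return sorted(REQUIRED - params.keys())

links = [
    "https://example.com/sale?utm_source=email&utm_medium=email&utm_campaign=bf2024",
    "https://example.com/sale?utm_source=tiktok",  # broken tagging
]
for url in links:
    gaps = missing_utms(url)
    if gaps:
        print(url, "-> missing:", gaps)
```

Point something like this at every link in your email templates and ad destination URLs before Q4, not after.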

Nobody wants to do this work. Everyone wants to skip to the "optimize your attribution model" part. But if your tracking is broken, your model is just sophisticated nonsense.

What to Do Right Now

You're reading this in December 2024. Black Friday data is fresh. Here's what to do before you forget and move on to January planning:

Pull your conversion paths report. In GA4, go to Advertising > Conversion paths. Look at the actual sequences of touchpoints that led to purchases. You'll see patterns. Maybe email always shows up late in the journey. Maybe paid social is consistently the first touchpoint for new customers. Maybe organic search appears right before purchase (because people are searching your brand name after seeing ads).

Those patterns tell you more than any attribution model percentage.

Compare revenue by channel across multiple attribution models. Don't just accept whatever your default model shows. Export the data using different models and see how the story changes. The channels that perform well across multiple models are probably genuinely valuable. The channels that only look good in one specific model? That's a red flag worth investigating.

Calculate your blended CAC and ROAS. Sometimes the smartest move is to stop trying to attribute every dollar to a specific channel and just look at overall efficiency. If you spent $X on all marketing during Black Friday week and generated $Y in revenue from new customers, your blended numbers tell you if the overall strategy worked. Then you can use attribution modeling for optimization, not justification.
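The blended math fits in a few lines. The spend, revenue, and customer counts below are placeholders, not real figures:

```python
# Blended efficiency: total marketing spend vs new-customer results,
# with no per-channel attribution at all.
def blended_metrics(total_spend, new_customer_revenue, new_customers):
    cac = total_spend / new_customers          # cost to acquire one customer
    roas = new_customer_revenue / total_spend  # revenue per ad dollar
    return cac, roas

cac, roas = blended_metrics(120_000, 480_000, 3_000)
print(f"blended CAC ${cac:,.2f}, blended ROAS {roas:.1f}x")
```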

Document what you learned for next year. Which channels exceeded expectations? Which disappointed? What attribution insights changed your understanding of channel performance? Write it down now, because you won't remember the nuances in 10 months when you're planning 2025's holiday strategy.

The Truth About Attribution

Here's what I've learned after analyzing attribution data for dozens of e-commerce brands over multiple holiday seasons:

No attribution model is "correct." They're all simplified versions of a messy reality where humans make decisions based on a complex mix of touchpoints, timing, context, and factors your analytics can't possibly capture (like seeing your product mentioned in a Reddit thread or hearing about it from a friend).

The goal isn't to find the perfect model. The goal is to use imperfect models to make better decisions than you would with no data at all.

Last-click attribution is wrong, but it tells you something useful about conversion triggers. First-click is wrong, but it tells you something useful about acquisition. Position-based is wrong, but it tells you something useful about the full journey. Data-driven attribution is probably the least wrong, but it's a black box that requires trust and volume.

Use multiple models. Compare them. Look for patterns. And remember that attribution modeling is a tool for optimization, not a source of absolute truth.

Your Black Friday campaign succeeded or failed based on revenue, profit, and customer acquisition. Attribution modeling just helps you understand which channels contributed what, so you can make smarter decisions about where to invest next year.

That's valuable. Just don't mistake the model for reality.
