DEV Community

Zackrag
Website Visitor Identification Match Rates: What Vendors Claim vs. What You Actually Get

Three vendors demoed the same product to my team in the same week. Warmly said 65%. RB2B said "identify your LinkedIn visitors." Dealfront said 40–60%. I ran the numbers after 90 days: we were identifying 11% of visitors, and maybe a third of those contacts were accurate enough to act on.

The vendors weren't lying, exactly. They just weren't answering the question you actually need answered.

"Match Rate" Doesn't Mean What You Think It Means

When a vendor says "we achieve a 65% match rate," they're usually counting company-level identification against traffic that includes repeat visitors, bots, and sessions already in your CRM. That number is not what lands in your sales rep's inbox.

There are four separate metrics that get collapsed into the phrase "match rate":

  1. Company-level identification rate: What percentage of sessions resolve to a company name. Typical range: 30–65% for US B2B traffic. Sounds impressive; is table stakes.
  2. Person-level identification rate: What percentage of sessions resolve to a named individual. Real-world range: 5–20%. This is the number that actually matters.
  3. Contact accuracy rate: Of the people identified, how many are correctly identified — right person, right company, reachable email. Vendors almost never disclose this.
  4. Actionable identification rate: Net-new contacts your team doesn't already have, in-ICP, not already in a sequence. This determines ROI.

A vendor claiming 65% is talking about metric #1. Your RevOps lead asking "how many net-new leads will this generate" is asking about metric #4. These numbers diverge by 5–10x in practice.
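To make the divergence concrete, here's a quick sketch with invented session counts (none of these numbers come from a real vendor — they're chosen only to illustrate the 5–10x gap):

```python
# Hypothetical session counts showing how the four "match rate" metrics
# diverge on the same traffic. All numbers are invented for illustration.
sessions = 10_000
metrics = {
    "1. company-level":      4_000,  # resolves to a company name
    "2. person-level":       1_100,  # resolves to a named individual
    "3. accurate contacts":    600,  # right person, valid email
    "4. actionable net-new":   500,  # in-ICP, not already in CRM/sequence
}
for name, count in metrics.items():
    print(f"{name}: {count / sessions:.1%}")
# The vendor's headline number is metric 1 (40% here);
# metric 4 is 8x smaller on the exact same traffic.
```

Run the same arithmetic on your own trial data and the gap is usually in this range.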

Remote Work Didn't Just Hurt These Tools — It Broke the Core Assumption

IP-to-company matching works on one premise: people browse the web from their employer's network. In 2019, that was mostly true.

By 2026, over 60% of knowledge workers are fully remote or hybrid. A marketing manager at a Series B SaaS company is browsing your pricing page from her apartment in Austin on her home Comcast connection. Her IP resolves to Comcast — not to her employer.

I tested this on a 10,000-session sample from a B2B SaaS client last quarter. Of those sessions:

  • ~3,200 came from clearly corporate IPs (data centers, known company ranges)
  • ~4,100 resolved to residential ISPs
  • ~1,800 were mobile traffic
  • ~900 were VPN exits
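Only the first slice is reliably matchable by pure IP-to-company resolution, which the mix makes obvious once you compute the shares:

```python
# Session mix from the 10,000-session sample above. Only the corporate
# slice is reliably matchable by pure IP-to-company resolution.
mix = {"corporate": 3_200, "residential": 4_100, "mobile": 1_800, "vpn": 900}
total = sum(mix.values())
for source, count in mix.items():
    print(f"{source}: {count / total:.0%}")
print(f"IP-matchable share: {mix['corporate'] / total:.0%}")  # 32%
```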

Leadfeeder's documentation is honest about this: IP matching works best when employees browse from corporate offices or corporate VPNs. That caveat quietly excludes the majority of your traffic in a post-2020 workforce.

6sense WebSights and Clearbit Breeze handle this better than pure IP-resolution tools by layering cookie data and identity graphs on top of IP matching. The improvement is real — but it's bounded. They're mitigating a structural problem, not solving it.

Why Demo Match Rates Are Meaningless

Every tool performs better in a demo environment. This isn't malicious — it's structural.

Vendors demo against:

  • Their own website traffic (self-selected, high-intent visitors)
  • Traffic pre-seeded with enriched contacts
  • Accounts already in their identity graph from other customers
  • US-only traffic (international data coverage drops sharply outside North America)

Production environments have bots, current customers, job applicants, competitors, international visitors (often 30–50% of sessions), and mobile traffic where IP identification rarely works at all.

Warmly published an acknowledgment in their own blog that "demo environments show 3–5x higher match rates than production." That's their admission. Read it before you sign anything.

What the Independent Accuracy Tests Show

The most rigorous test I've reviewed used 500 known individuals — people whose identities the auditors could verify — browsing target sites under natural conditions, with six tools running simultaneously. Scoring weighted correct person identification (30%), correct company (25%), contact info validity (25%), and contact relevance (20%).
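The weighting scheme is simple to reproduce if you want to re-score tools against your own priorities. The per-dimension scores below are hypothetical, not from the audit:

```python
# The audit's weighting scheme applied to hypothetical per-dimension
# scores (each on a 0-10 scale) for a single tool.
WEIGHTS = {
    "correct_person":    0.30,
    "correct_company":   0.25,
    "contact_validity":  0.25,
    "contact_relevance": 0.20,
}

def overall_score(scores: dict) -> float:
    # Weighted average across the four audit dimensions.
    assert set(scores) == set(WEIGHTS)
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"correct_person": 6.5, "correct_company": 7.0,
           "contact_validity": 6.0, "contact_relevance": 6.0}
print(round(overall_score(example), 2))  # 6.4
```

If your sales motion lives or dies on contact validity, bump that weight and re-rank.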

Tool                Overall Score   Correct ID Rate   Contact Relevance
6sense WebSights    6.5/10          ~65%              6.0/10
Leadfeeder          6.2/10          ~62%              5.5/10
Clearbit Breeze     5.8/10          ~58%              5.0/10
RB2B                5.2/10          ~52%              4.0/10
Warmly              4.0/10          ~40%              3.0/10

A few things stood out when I looked at these results.

RB2B scored poorly on contact relevance (4.0/10) because it routinely surfaced contacts with the wrong seniority. You configure it for VP-and-above, and individual contributors show up. The LinkedIn dependency creates a systematic blind spot too — anyone without an active LinkedIn profile is invisible to the tool.

Warmly's 4.0/10 surprised me given how aggressively they market the waterfall approach. In one test case, when a known contact visited a pricing page, Warmly identified a different person at a different organization entirely. That's not a miss — that's a false positive, which is worse, because it sends your sales team chasing the wrong person.

Dealfront isn't in this table because the auditors focused on US traffic, and that's where Dealfront is weakest. For European traffic — particularly German and Nordic companies — their data provenance gives them a real edge over Clearbit Breeze and RB2B. If 40%+ of your traffic comes from Europe, the rankings above don't reflect your reality.

Koala also sat out this particular audit, but in my own testing their strength is intent signal layering, not raw match rate. They'll show you fewer people than some competitors but tell you more about what those people did.

What 1,000 Monthly Visitors Actually Gets You

Let me run the math that vendors almost never show you.

Start with 1,000 monthly visitors:

  • Remove bots, crawlers, current customers: ~700 net-new sessions
  • Company-level identification at 40%: ~280 companies
  • Person-level identification at 12%: ~84 individuals
  • Remove out-of-ICP, wrong seniority, invalid contact data: ~40 actionable contacts
  • Remove people already in CRM or active sequences: ~25 net-new actionable contacts per month

That's 25 net-new leads from 1,000 visitors. Not 650. Not even 280.
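The funnel above reduces to a five-line calculation. The first three rates are the ones stated in the list; the last two are back-derived from its figures (40/84 and 25/40) and should be replaced with your own measured rates:

```python
# The visitor-to-lead funnel as a reproducible calculation.
visitors = 1000
sessions   = round(visitors * 0.70)    # minus bots, crawlers, customers
companies  = round(sessions * 0.40)    # company-level identification
people     = round(sessions * 0.12)    # person-level identification
actionable = round(people * 0.48)      # in-ICP, valid contact data (back-derived)
net_new    = round(actionable * 0.625) # not in CRM/sequences (back-derived)
print(sessions, companies, people, actionable, net_new)  # 700 280 84 40 25
```

Five multiplications is all it takes to turn a "65% match rate" pitch into a 2.5% lead rate.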

At a 10% outreach-to-meeting rate (generous for cold outreach to people who didn't request contact), you're booking 2–3 meetings per month from those 1,000 visitors. At a typical B2B ACV of $15–25K, you need to close roughly one deal every few months from this channel to break even on a $500–$1,500/month subscription.
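The subscription-only side of that break-even is easy to check. The midpoint values here are assumptions, not vendor figures, and real break-even must also cover the cost of the outreach motion itself:

```python
# Subscription-only break-even under assumed midpoints. Real break-even
# is higher because it must also fund the outreach motion that works
# the data (SDR time, sequencing tools).
acv = 20_000          # assumed midpoint of the $15-25K ACV range
monthly_cost = 1_000  # assumed midpoint of $500-$1,500/month
deals_per_year_to_cover_tool = 12 * monthly_cost / acv
print(deals_per_year_to_cover_tool)  # 0.6 deals/year for the tool alone
```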

That math can work — but only if you build an outreach motion around the data. Most teams buy the tool, get the data, and then have no one running outreach.

A 30-Day Trial Framework That Gives You Honest Numbers

Before signing an annual contract, run this test:

Week 1 — Baseline: Install the tracker, don't tell your sales team. Let it run. Record total sessions, identified companies, identified individuals.

Week 2 — Accuracy audit: Pull a random sample of 50 identified individuals. Look each one up manually. Score on three dimensions: correct person (does this person match the session source?), valid contact info (does the email bounce? Is the phone reachable?), correct seniority (does it match your ICP filter?). Calculate a percentage for each.
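Once the manual review of the 50-contact sample is done, scoring it is trivial to automate. The field names and the sample rows below are illustrative, not a real export:

```python
# Week 2 accuracy audit: each row is one manually reviewed contact,
# scored pass/fail on the three dimensions. Data is invented.
sample = [
    {"correct_person": True,  "valid_contact": True,  "right_seniority": True},
    {"correct_person": True,  "valid_contact": False, "right_seniority": True},
    {"correct_person": False, "valid_contact": False, "right_seniority": False},
    # ... the other 47 manually reviewed rows go here
]

for dim in ("correct_person", "valid_contact", "right_seniority"):
    rate = sum(row[dim] for row in sample) / len(sample)
    print(f"{dim}: {rate:.0%}")
```

Keep the spreadsheet of manual judgments — it's your evidence when the vendor disputes your numbers at renewal.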

Week 3 — CRM deduplication: Export all identified contacts. Match against your CRM. What percentage are existing customers, existing leads, or in active sequences? Subtract them — they're not leads.
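The Week 3 subtraction is set arithmetic. Emails stand in for whatever match key your CRM uses; all addresses here are invented:

```python
# Week 3 CRM deduplication: subtract contacts already in the CRM or in
# active sequences from the tool's identified contacts. Data is invented.
identified   = {"a@x.com", "b@y.com", "c@z.com", "d@w.com"}
in_crm       = {"a@x.com", "d@w.com"}
in_sequences = {"b@y.com"}

net_new = identified - in_crm - in_sequences
print(f"net-new: {len(net_new)} of {len(identified)} "
      f"({len(net_new) / len(identified):.0%})")
```

In production you'd fuzzy-match on domain and name as well as email, but even the exact-match version usually removes a large share of the "leads."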

Week 4 — Outreach pilot: Take your remaining net-new, accurate, in-ICP contacts and run a simple three-step sequence. Measure reply rate and meetings booked. Compare against your existing cold outreach baseline.

Now you have an actual ROI number, derived from your traffic, your ICP, and your sales motion — not a demo environment your vendor curated.

What I Actually Use

My current stack depends on the traffic profile and what we're trying to do with the data.

For accounts-first work — matching sessions to named accounts already in my pipeline — 6sense WebSights is the most reliable I've tested. The account-level accuracy holds up across different traffic profiles, and the intent signal layer helps prioritize which accounts to contact this week versus next quarter.

For companies with predominantly European traffic, Dealfront is underrated. The data quality on German and Nordic companies is meaningfully better than Clearbit Breeze or RB2B for those geographies, and most vendor comparisons are written by US-centric teams who miss this entirely.

For social profile cross-referencing — when I need to identify visitors who came through a Twitter or Facebook campaign and match their social identity to contact data — Ziwa has been faster for me than People Data Labs' direct API for that specific lookup type. Narrow use case, but one where the tool genuinely earns its keep.

The honest answer is that no single tool achieves what its demo suggests. What works is pairing a solid identity graph (6sense or Clearbit) with a disciplined, resourced outreach motion. The data is only worth anything if someone follows up on it within 24–48 hours of the visit. Without the second half, you're paying for a very expensive dashboard.

Run the 30-day test before you commit. The number that comes out of that test is the only match rate that matters for your business.
