<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zackrag</title>
    <description>The latest articles on DEV Community by Zackrag (@zackrag).</description>
    <link>https://dev.to/zackrag</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3884735%2F8a1e3de9-36d8-4ebb-91c0-d4a692d57fc0.png</url>
      <title>DEV Community: Zackrag</title>
      <link>https://dev.to/zackrag</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zackrag"/>
    <language>en</language>
    <item>
      <title>Why 30% of B2B Email Verification Fails: The Catch-All Domain Problem, Explained</title>
      <dc:creator>Zackrag</dc:creator>
      <pubDate>Mon, 20 Apr 2026 06:23:52 +0000</pubDate>
      <link>https://dev.to/zackrag/why-30-of-b2b-email-verification-fails-the-catch-all-domain-problem-explained-1n7i</link>
      <guid>https://dev.to/zackrag/why-30-of-b2b-email-verification-fails-the-catch-all-domain-problem-explained-1n7i</guid>
      <description>&lt;p&gt;Three months ago I ran a cold outbound campaign to 4,200 "verified" leads. The list had been processed by a well-known verifier—overall accuracy rate: 97%. My bounce rate on the first send was 11.4%. The domain I was using had been sending cleanly for two years. It took six weeks of re-warmup to recover it.&lt;/p&gt;

&lt;p&gt;The culprit: catch-all domains. Nobody told me they were a separate category with completely different verification mechanics.&lt;/p&gt;

&lt;h2&gt;What a catch-all domain actually is&lt;/h2&gt;

&lt;p&gt;When your email server checks whether an address is valid, it initiates an SMTP handshake with the receiving mail server and asks: "Does this mailbox exist?" Most servers answer honestly—250 means yes, 550 means no.&lt;/p&gt;
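
&lt;p&gt;That handshake is easy to sketch. Here's a minimal, illustrative Python version (real verifiers add retries, greylisting handling, and warmed sending infrastructure; the HELO name and sender address are placeholders, not anything a vendor actually uses):&lt;/p&gt;

```python
# Illustrative SMTP probe: issue RCPT TO and interpret the reply code.
# The HELO name and sender below are placeholders.
import smtplib

def interpret_rcpt_code(code):
    """Map an SMTP RCPT TO reply code to a verdict."""
    if code // 100 == 2:
        return "deliverable"    # 2xx, e.g. 250: recipient accepted
    if code in (550, 551, 553):
        return "invalid"        # mailbox definitively does not exist
    return "unknown"            # 4xx greylisting, other 5xx policy blocks

def probe_mailbox(mx_host, address):
    """One RCPT TO probe against the domain's MX host (network call)."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.helo("probe.example.com")           # placeholder HELO name
        smtp.mail("verify@probe.example.com")    # placeholder sender
        code, _ = smtp.rcpt(address)
        return interpret_rcpt_code(code)
```

&lt;p&gt;Note that many mail hosts block or tarpit port-25 probes from unknown IPs, which is one reason commercial verifiers run them from dedicated infrastructure.&lt;/p&gt;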

&lt;p&gt;Catch-all servers lie. Or more precisely, they're configured to accept everything. They respond 250 OK to every address—&lt;code&gt;real@company.com&lt;/code&gt;, &lt;code&gt;fake@company.com&lt;/code&gt;, &lt;code&gt;asdfgh@company.com&lt;/code&gt;—regardless of whether any mailbox behind that address is actually active.&lt;/p&gt;

&lt;p&gt;This is usually intentional. Companies configure catch-all so they don't lose emails sent to misspelled or retired addresses. It's especially common in mid-market and enterprise: smaller IT teams, older infrastructure, less aggressive inbox hygiene.&lt;/p&gt;

&lt;p&gt;On a typical B2B prospecting list, &lt;strong&gt;30–40% of addresses sit on catch-all domains&lt;/strong&gt;. For lists heavy with enterprise accounts, that number climbs higher.&lt;/p&gt;

&lt;h2&gt;Why SMTP probing fails completely here&lt;/h2&gt;

&lt;p&gt;Standard email verification runs five steps: syntax check, DNS lookup, MX record check, SMTP handshake, and catch-all detection. Steps 1–4 work fine. Step 5 is where things break.&lt;/p&gt;

&lt;p&gt;"Catch-all detection" identifies &lt;em&gt;that&lt;/em&gt; a domain is catch-all. It doesn't tell you whether the &lt;em&gt;specific mailbox&lt;/em&gt; behind it is active. Once a server is flagged as catch-all, the verifier has no SMTP-based path to probe individual addresses. The server will say 250 OK to anything you throw at it. The protocol makes deeper probing impossible.&lt;/p&gt;

&lt;p&gt;So what do tools do when they hit a catch-all domain? Most mark every address on it as "risky," "unknown," or "accept-all," then move on. Some try heuristics—checking whether the username format matches LinkedIn patterns, or whether the domain's MX records belong to Google Workspace or Microsoft 365 managed hosting. None of them can tell you definitively whether &lt;code&gt;james.porter@bigcorp.com&lt;/code&gt; has an active inbox.&lt;/p&gt;
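
&lt;p&gt;The MX heuristic is straightforward to implement. The suffixes below are the well-known MX targets for Google Workspace and Microsoft 365; actually resolving a domain's MX records (e.g. with dnspython) is omitted so the sketch works offline:&lt;/p&gt;

```python
# Infer managed hosting from MX hostnames. The suffix list covers the
# standard Google Workspace and Microsoft 365 MX targets; it is a
# heuristic, not proof that a specific mailbox is active.
MANAGED_MX_SUFFIXES = (".google.com", ".googlemail.com",
                       ".mail.protection.outlook.com")

def on_managed_hosting(mx_hosts):
    """True if any MX hostname points at Google Workspace or M365."""
    hosts = [h.lower().rstrip(".") for h in mx_hosts]
    return any(h.endswith(MANAGED_MX_SUFFIXES) for h in hosts)
```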

&lt;p&gt;This is not a bug in any specific tool. It's a protocol ceiling every tool hits equally.&lt;/p&gt;

&lt;h2&gt;The accuracy number vendors don't show you&lt;/h2&gt;

&lt;p&gt;Every verification vendor publishes a headline accuracy figure. ZeroBounce claims 98%+. Kickbox claims 99%. NeverBounce's marketing lands around 99.9%. These numbers are accurate. They're also misleading.&lt;/p&gt;

&lt;p&gt;When Hunter.io ran an independent benchmark of 15 email verification tools across roughly 3,000 real business emails, the top scorer hit &lt;strong&gt;70% overall accuracy&lt;/strong&gt;. Clay ran a separate controlled test and found that when catch-all domains were &lt;em&gt;excluded from the sample&lt;/em&gt;, ZeroBounce hit 99.25%, Findymail hit 98.92%, Hunter hit 98.52%.&lt;/p&gt;

&lt;p&gt;The tools are genuinely excellent—on the addresses they can actually verify. The gap is the catch-all segment. Vendors report accuracy on verifiable addresses and either exclude catch-alls from their benchmark methodology or group them under "risky: send at your own risk."&lt;/p&gt;

&lt;p&gt;A list with 40% catch-all addresses and 99% accuracy on everything else can still produce a 9%+ bounce rate if you send to the catch-all segment without further triage. Scrubby's data puts the hard-bounce rate on unverified catch-all emails at around 23%.&lt;/p&gt;
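
&lt;p&gt;The arithmetic is worth doing explicitly. Using the figures above (40% catch-all share, ~23% hard bounces on unverified catch-alls, 1% on the cleanly verified remainder):&lt;/p&gt;

```python
# Blended bounce rate if you send to the whole list without triage.
catch_all_share = 0.40   # share of list on catch-all domains
catch_all_bounce = 0.23  # hard-bounce rate on unverified catch-alls
verified_bounce = 0.01   # bounce rate on the cleanly verified rest

blended = (catch_all_share * catch_all_bounce
           + (1 - catch_all_share) * verified_bounce)
print(f"{blended:.1%}")  # 9.8%
```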

&lt;h2&gt;How the major tools actually handle catch-alls&lt;/h2&gt;

&lt;p&gt;Here's what I found after running the same 1,200-address catch-all segment through each tool:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Marketed Accuracy&lt;/th&gt;
&lt;th&gt;Catch-All Strategy&lt;/th&gt;
&lt;th&gt;Real-World Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ZeroBounce&lt;/td&gt;
&lt;td&gt;98%+&lt;/td&gt;
&lt;td&gt;Flags as "accept-all," assigns risk score&lt;/td&gt;
&lt;td&gt;Conservative; useful API structure for pipelines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NeverBounce&lt;/td&gt;
&lt;td&gt;99.9%&lt;/td&gt;
&lt;td&gt;Flags as "accept-all," excludes from safe-send&lt;/td&gt;
&lt;td&gt;Most conservative; 0 bounces on approved in independent tests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kickbox&lt;/td&gt;
&lt;td&gt;99%&lt;/td&gt;
&lt;td&gt;Flags as "accept-all," engagement scoring&lt;/td&gt;
&lt;td&gt;Best at &lt;em&gt;detecting&lt;/em&gt; catch-all domains accurately&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Findymail&lt;/td&gt;
&lt;td&gt;98.9%&lt;/td&gt;
&lt;td&gt;Pattern-based scoring for catch-all addresses&lt;/td&gt;
&lt;td&gt;Good username heuristics, useful as second pass&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hunter.io&lt;/td&gt;
&lt;td&gt;95%+&lt;/td&gt;
&lt;td&gt;Returns "catch-all" status, leaves triage to you&lt;/td&gt;
&lt;td&gt;Honest about limitations, no false confidence&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ZoomInfo&lt;/td&gt;
&lt;td&gt;Enterprise tier&lt;/td&gt;
&lt;td&gt;Supplements SMTP with in-platform activity signals&lt;/td&gt;
&lt;td&gt;Best supplemental signal, worst price point&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The table makes something visible that most comparisons hide: &lt;strong&gt;differentiation is in what tools do with uncertainty, not with addresses they can cleanly verify&lt;/strong&gt;. A tool that aggressively marks catch-all addresses as risky will score &lt;em&gt;lower&lt;/em&gt; in aggregate accuracy benchmarks—because it refuses to give a clean verdict—but will protect your deliverability better in practice.&lt;/p&gt;

&lt;p&gt;NeverBounce's conservatism looks bad in head-to-head tests where "gave a clean result on more addresses" is counted as accuracy. In real campaigns, that conservatism is a feature. Kickbox's advantage isn't its overall number—it's that it's better at detecting catch-all domains in the first place, so your risky segment is more accurately populated.&lt;/p&gt;

&lt;h2&gt;Why your current workflow is probably wrong&lt;/h2&gt;

&lt;p&gt;The common mistake: treating "risky" and "invalid" as interchangeable. They're not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invalid&lt;/strong&gt; means the SMTP handshake returned a hard rejection—the mailbox definitively doesn't exist. Never send to these.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risky/accept-all&lt;/strong&gt; means the tool couldn't verify the address. Some of these are real inboxes. Depending on the domain, delivery rates on catch-all addresses range from 50% to 85%—the spread is large because it depends on the specific company's mail configuration. Suppressing your entire catch-all segment can mean losing 20–30% of legitimate leads.&lt;/p&gt;
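
&lt;p&gt;That loss estimate falls out of the same numbers (30–40% catch-all share, 50–85% of those actually deliverable):&lt;/p&gt;

```python
# Real leads lost (as a share of the whole list) if you suppress
# every catch-all address, using the ranges cited above.
share_lo, share_hi = 0.30, 0.40   # catch-all share of the list
deliv_lo, deliv_hi = 0.50, 0.85   # fraction of those that are real

lost_lo = share_lo * deliv_lo     # 15% of all leads
lost_hi = share_hi * deliv_hi     # 34% of all leads
print(f"lose {lost_lo:.0%} to {lost_hi:.0%} of real leads")
```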

&lt;p&gt;Most people run verification once before a campaign and assume they're done. With catch-all domains, that's a single point of failure.&lt;/p&gt;

&lt;h2&gt;A triage workflow that actually works&lt;/h2&gt;

&lt;p&gt;After losing a domain to a bad catch-all send, I rebuilt my process around five steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Separate your catch-all segment immediately.&lt;/strong&gt; Any verifier returns a status field. Pull every row where status is &lt;code&gt;accept-all&lt;/code&gt;, &lt;code&gt;catch-all&lt;/code&gt;, &lt;code&gt;unknown&lt;/code&gt;, or &lt;code&gt;risky&lt;/code&gt;. Never mix this with your clean-send list.&lt;/p&gt;
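
&lt;p&gt;A minimal version of this split, assuming each verifier row carries a &lt;code&gt;status&lt;/code&gt; field (the field name and label spellings vary by vendor):&lt;/p&gt;

```python
# Split verifier output into a clean-send list and a risky list.
# Status labels mirror the ones named above; normalize case because
# vendors are inconsistent about it.
RISKY_STATUSES = {"accept-all", "catch-all", "unknown", "risky"}

def split_segments(rows, status_field="status"):
    clean, risky = [], []
    for row in rows:
        status = row[status_field].strip().lower()
        (risky if status in RISKY_STATUSES else clean).append(row)
    return clean, risky
```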

&lt;p&gt;&lt;strong&gt;2. Apply username pattern scoring.&lt;/strong&gt; Addresses following &lt;code&gt;firstname.lastname&lt;/code&gt; or &lt;code&gt;firstname&lt;/code&gt; patterns at domains with Google Workspace or Microsoft 365 MX records are substantially more likely to be active. Addresses like &lt;code&gt;info@&lt;/code&gt;, &lt;code&gt;admin@&lt;/code&gt;, &lt;code&gt;support@&lt;/code&gt;, or auto-generated strings are almost always shared inboxes or role addresses—skip them regardless.&lt;/p&gt;
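
&lt;p&gt;A crude scoring function along these lines (the patterns, the role-account list, and the score values are illustrative, not any vendor's actual heuristics):&lt;/p&gt;

```python
# Username pattern scoring for catch-all addresses: higher score means
# more likely a personal, active mailbox. Thresholds are illustrative.
import re

ROLE_LOCALS = {"info", "admin", "support"}   # from the examples above

def username_score(address):
    local = address.split("@", 1)[0].lower()
    if local in ROLE_LOCALS:
        return 0               # role/shared inbox: skip regardless
    if re.fullmatch(r"[a-z]+\.[a-z]+", local):
        return 2               # firstname.lastname pattern
    if re.fullmatch(r"[a-z]{3,}", local):
        return 1               # plain firstname
    return 0                   # auto-generated or ambiguous string
```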

&lt;p&gt;&lt;strong&gt;3. Verify the person, not just the address.&lt;/strong&gt; If you can confirm on LinkedIn that the person is currently employed at that company, the probability of a real inbox jumps meaningfully. You're not verifying the email through this step—you're verifying the &lt;em&gt;person&lt;/em&gt; behind it. Tools like Clay make this feasible at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Warm up your catch-all sends separately.&lt;/strong&gt; Never mix catch-all addresses into your main send sequence. Start with 10–15% of the catch-all segment, monitor bounce rate after 72 hours, and expand only if you stay under 1% hard bounce. Each new batch is its own experiment.&lt;/p&gt;
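
&lt;p&gt;The batching rule can be sketched as one function: open with a 10–15% slice, stop if the hard-bounce rate crosses 1%, otherwise grow the next batch (the 12% opening slice and 2x growth factor are my numbers; tune them to your own tolerance):&lt;/p&gt;

```python
# Staged warm-up sizing for a catch-all segment.
def next_batch_size(remaining, sent, hard_bounces,
                    start_fraction=0.12, growth=2.0, max_bounce=0.01):
    if sent == 0:
        return max(1, int(remaining * start_fraction))  # opening slice
    bounce_rate = hard_bounces / sent
    # Expand only while the hard-bounce rate stays at or under the
    # ceiling (the min() check means bounce_rate does not exceed it).
    if bounce_rate == min(bounce_rate, max_bounce):
        return min(remaining, int(sent * growth))
    return 0   # over threshold: stop sending to this segment
```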

&lt;p&gt;&lt;strong&gt;5. Re-verify quarterly.&lt;/strong&gt; Catch-all configurations change. A domain that was catch-all six months ago might have tightened its policy. A clean domain might have moved to catch-all after an IT migration. Static verification is a snapshot.&lt;/p&gt;

&lt;p&gt;The difference between a 2% bounce rate and an 11% bounce rate is almost always whether you had a catch-all triage layer—not which primary verifier you used.&lt;/p&gt;

&lt;h2&gt;What published benchmarks miss&lt;/h2&gt;

&lt;p&gt;Published verifier comparisons focus on aggregate accuracy across the full dataset. Almost none isolate catch-all accuracy as a separate metric. This creates a real problem: the tools that score highest in benchmarks are often the ones most likely to give you a false sense of safety.&lt;/p&gt;

&lt;p&gt;If a tool marks 40% of your list as "risky" and you ignore that, you're not using the tool correctly. If a tool marks only 15% as risky and lets the rest through, it's not necessarily more accurate—it may just be less cautious.&lt;/p&gt;

&lt;p&gt;The honest metric to demand from any verifier is: &lt;strong&gt;of the addresses you classified as safe-to-send, what percentage actually delivered?&lt;/strong&gt; Not overall accuracy. Safe-to-send accuracy. That number tells you what the tool is actually doing with catch-all uncertainty.&lt;/p&gt;
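
&lt;p&gt;You can compute this yourself from your own send logs. A sketch, assuming each record pairs the tool's verdict with the actual delivery outcome:&lt;/p&gt;

```python
# Safe-to-send accuracy: of addresses the tool called safe, how many
# actually delivered. Input format is hypothetical (verdict, delivered).
def safe_to_send_accuracy(results):
    outcomes = [delivered for verdict, delivered in results
                if verdict == "safe"]
    if not outcomes:
        return None            # tool approved nothing to measure
    return sum(outcomes) / len(outcomes)
```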

&lt;p&gt;Very few vendors publish this breakdown. NeverBounce comes closest with its conservative flagging. For everyone else, you're extrapolating from aggregate claims.&lt;/p&gt;

&lt;h2&gt;What I actually use&lt;/h2&gt;

&lt;p&gt;For the core verification pass, I run &lt;strong&gt;Kickbox&lt;/strong&gt; first for its catch-all detection reliability, then run the risky segment through &lt;strong&gt;Findymail&lt;/strong&gt; for pattern scoring on the usernames.&lt;/p&gt;

&lt;p&gt;For any batch where the catch-all rate exceeds 30%, I pull it through &lt;strong&gt;Clay&lt;/strong&gt; to add LinkedIn employment signals before deciding whether to include those addresses in an active sequence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ZeroBounce&lt;/strong&gt; is my default when I'm integrating verification into an automated pipeline—the API response structure is cleaner and the documentation is thorough. &lt;strong&gt;NeverBounce&lt;/strong&gt; is my choice when domain reputation matters more than list size: it's more conservative, which costs leads but protects deliverability.&lt;/p&gt;

&lt;p&gt;For workflows that start from social profiles rather than an existing email list—pulling contact data from Twitter or Facebook profiles—Ziwa has been faster for me than going through PDL's direct API, though that's a different use case than verifying a list you already have.&lt;/p&gt;

&lt;p&gt;If you're running more than 5,000 emails per month to B2B lists, the catch-all segment deserves its own process. Treating "risky" as interchangeable with "skip" loses real leads. Treating it as interchangeable with "safe to send" destroys domains. The workflow above sits between those two failure modes.&lt;/p&gt;

</description>
      <category>marketing</category>
      <category>networking</category>
      <category>saas</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
