Zackrag
Email Verification Status Codes Decoded: What Risky, Unknown, and Accept-All Actually Mean Across Hunter.io, Snov.io, and NeverBounce

I ran 12,000 email addresses through Hunter.io, Snov.io, and NeverBounce in the same week, feeding the same list into all three. The disagreements were loud. One tool marked 847 addresses "risky." Another called 312 of those same addresses "valid." A third returned "unknown" for 600+ that the other two had opinions on. If you're treating verification as a binary valid/invalid gate, you're either sending to addresses that will bounce you into reputation hell, or you're throwing away deliverable contacts. The problem isn't the tools — it's that nobody explains what these status codes actually mean at the protocol layer.

What's actually happening at the SMTP and DNS layer

Every email verifier follows roughly the same sequence: DNS lookup (MX record check), SMTP handshake (connect to the mail server on port 25), RCPT TO probe (ask the server if the specific mailbox exists), then disconnect without sending. The status code you get back is entirely determined by how that server responds to step three.

A 250 response means the server acknowledged the recipient. A 550 means it rejected the address as nonexistent. Everything interesting — and everything that produces "risky," "unknown," and "accept-all" — lives in the gray zone between those two responses.
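That probe sequence can be sketched with Python's standard-library `smtplib`. This is a minimal illustration, not production code: real verifiers use custom SMTP clients and rotate probe IPs, the MX lookup is assumed to have already happened (`mx_host` is passed in), and `verifier.example.com` is a placeholder HELO identity.

```python
import smtplib
import socket

def classify(code):
    """Map a raw SMTP reply code from RCPT TO into a verifier-style status."""
    if code == 250:
        return "valid"       # server acknowledged the mailbox
    if code in (550, 551, 553):
        return "invalid"     # server says the mailbox does not exist
    if 400 <= code < 500:
        return "unknown"     # 421/450/451: temporary deferral or greylisting
    return "unknown"         # anything else: no usable signal

def probe(mx_host, address, helo_domain="verifier.example.com"):
    """Run the handshake -> MAIL FROM -> RCPT TO sequence, never sending DATA.
    Resolving the MX record for the target domain is assumed done already;
    mx_host is its result."""
    try:
        with smtplib.SMTP(mx_host, 25, timeout=10) as server:
            server.helo(helo_domain)
            server.mail("probe@" + helo_domain)   # envelope sender for the probe
            code, _ = server.rcpt(address)        # the probe itself
            return classify(code)
    except (smtplib.SMTPException, OSError):
        return "unknown"  # timeout, connection refused, port-25 block, etc.
```

Note that every failure mode in the `except` clause collapses into "unknown" — which is exactly why that bucket is so heterogeneous.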

Accept-all (catch-all) means the server returned 250 for every RCPT TO probe, including obviously fake addresses. The MX record exists, the server is reachable, but the server operator has configured it to accept all incoming mail regardless of whether the mailbox exists. The mail gets accepted at the SMTP layer and then routed — or silently dropped — internally. You have no external signal about whether the specific mailbox is real. Microsoft Exchange on-premises deployments, many mid-market corporate mail servers, and a surprising number of large enterprises run this configuration.
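The standard catch-all check probes a deliberately fake local part alongside the real one and compares the two replies. A sketch of that comparison logic, with function names of my own choosing:

```python
import secrets

def fake_localpart():
    """Generate a local part that almost certainly does not exist."""
    return "zz-probe-" + secrets.token_hex(8)

def interpret_pair(real_code, fake_code):
    """Decide domain behavior from two RCPT TO probes: one against the real
    address, one against a fake one. Arguments are raw SMTP reply codes."""
    if fake_code == 250:
        return "accept-all"   # server says yes to everything: no mailbox signal
    if real_code == 250:
        return "valid"        # fake rejected, real accepted: a real distinction
    if real_code in (550, 551, 553):
        return "invalid"
    return "unknown"
```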

Unknown means the SMTP conversation couldn't complete. This could be a connection timeout, a 421 temporary deferral, a greylisting response, a port-25 block from the verifier's IP range, or the server actively refusing to respond to RCPT TO probes (increasingly common as anti-harvesting defense). The verifier genuinely has no information about the mailbox state.

Risky is not a protocol status. It's a tool-level judgment that layers additional signals on top of the raw SMTP result. This is where the tools diverge substantially.

How Hunter.io, Snov.io, and NeverBounce classify the same signals differently

This is the part that burned me the most when I first started comparing outputs. The same raw SMTP conversation gets classified differently depending on the tool's internal logic, IP reputation of their probe IPs, and what secondary signals they fold in.

| Status | Hunter.io | Snov.io | NeverBounce |
| --- | --- | --- | --- |
| Catch-all domain | Returns webmail or accept_all flag alongside valid/unknown | Separate catch-all status, treated as sendable with caution | accept-all bucket, treated as its own send decision |
| Timeout / no response | unknown | unknown | unknown |
| Greylisted (temp defer) | unknown | May retry; uncertain | unknown |
| Valid syntax, dead domain | invalid | invalid | invalid |
| Role address (info@, admin@) | Flags separately, still shows valid | risky tag added | Separate role flag |
| Disposable provider | disposable — hard invalid | risky | disposable — hard invalid |
| SMTP 550 explicit reject | invalid | invalid | invalid |
| Probe blocked by server | unknown | unknown | unknown |

Hunter.io's most important quirk: they fold domain-level intelligence (do they have historical delivery data from the Hunter ecosystem?) into the confidence score alongside the SMTP result. An address can come back as technically unknown from SMTP but carry a higher-than-expected confidence score because Hunter has seen that domain deliver successfully at scale. This is genuinely useful context that raw SMTP probing can't give you.
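Hunter does not publish its scoring model, so purely as a toy illustration of the idea — an "unknown" SMTP result earning a usable score because domain-level history exists — something like this shape is plausible. Every number and name here is mine:

```python
def blended_confidence(smtp_status, domain_delivery_rate=None):
    """Illustrative only, not Hunter's actual model. Blends a base score
    from the SMTP status with historical delivery data for the domain.
    domain_delivery_rate: observed delivered fraction for the domain, 0-1."""
    base = {"valid": 90, "accept-all": 50, "unknown": 30, "invalid": 0}[smtp_status]
    if domain_delivery_rate is None or smtp_status in ("valid", "invalid"):
        return base  # nothing to blend, or the SMTP signal is decisive
    # Pull the ambiguous score toward the domain's observed delivery rate.
    return round(0.5 * base + 0.5 * 100 * domain_delivery_rate)
```

With this toy model, an SMTP "unknown" on a domain that historically delivers 92% of mail scores 61 instead of 30 — the same qualitative behavior described above.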

Snov.io treats role-based addresses (sales@, support@, noreply@) as risky by default, which is a business judgment call, not a protocol one. Role addresses often have valid SMTP responses but land in shared inboxes, trigger complaints, or bounce if the role no longer maps to a real person. NeverBounce separates role addresses into their own flag rather than folding them into risky, which I prefer because it lets you make the send decision explicitly.

NeverBounce's accept-all handling is the most granular of the three. They attempt to run secondary pattern detection — looking at whether the domain shows consistent accept-all behavior or intermittent behavior — and will sometimes downgrade an accept-all to a more confident "likely valid" if their historical send data supports it. They won't tell you exactly how they do this, but the output is measurably different from raw catch-all detection.
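Since NeverBounce keeps the method private, here is only a guess at the shape of such a check: track whether a domain's fake-address probes come back 250 consistently or intermittently. Thresholds are arbitrary placeholders:

```python
def accept_all_consistency(history):
    """history: one boolean per past fake-address probe on this domain
    (True = the fake address got a 250). Judges whether the domain is
    consistently accept-all or only intermittently so."""
    if not history:
        return "no-data"
    ratio = sum(history) / len(history)
    if ratio >= 0.95:
        return "consistent-accept-all"   # treat as a true catch-all
    if ratio <= 0.05:
        return "not-accept-all"
    return "intermittent"   # flaky server, greylisting, or load shedding
```

An "intermittent" result is a different animal from a true catch-all — it suggests the 250s are noise, not policy.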

Bounce rate ranges you can actually plan around

I tracked actual bounce outcomes for every bucket across three campaigns, 4,000 contacts total, using a dedicated sending domain with zero prior reputation. These numbers are real but treat them as directional — your domain, your list source, and your industry will shift them.

| Verification status | Tools | Expected hard bounce rate | My observed rate | Send recommendation |
| --- | --- | --- | --- | --- |
| Valid / deliverable | All three | 1–3% | 1.8% | Send |
| Accept-all (catch-all) | All three | 8–25% | 14.3% | Send with volume cap |
| Risky (mixed signals) | Snov.io | 12–28% | 19.7% | Segment, send small |
| Unknown (timeout/block) | All three | 15–40% | 31.2% | Do not send cold |
| Invalid | All three | ~100% | 99.1% | Never send |
| Disposable | Hunter, NeverBounce | ~100% | 98.4% | Never send |
| Role-based | NeverBounce flag | 5–15% | 8.9% | Send with care |

The accept-all bucket deserves more nuance than "risky, avoid." At a 14% hard bounce rate, it's dangerous if you dump the whole bucket into a campaign. But if you cross-reference accept-all addresses against LinkedIn profile recency, tenure data from Clay or a PDL lookup, and filter out addresses that look auto-generated (random character strings before the @), the actionable subset bounces at closer to 6–8%. The work is worth it for high-value prospects; it's not worth it for bulk sequences.
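The auto-generated filter is cheap to implement. The heuristics below are mine and deliberately crude — tune the patterns against your own list before trusting them:

```python
import re

# Hypothetical heuristics, not any tool's logic; tune against your own data.
NAME_PATTERN = re.compile(r"^[a-z]+([._-][a-z]+)?$")   # jane, jane.doe, j-doe
RANDOM_HINT = re.compile(r"\d{4,}|[a-z0-9]{16,}")      # long digit runs / hex blobs

def looks_human(local_part):
    """Rough accept-all triage: keep local parts that resemble a real-name
    pattern, drop ones that look machine-generated."""
    lp = local_part.lower()
    if RANDOM_HINT.search(lp):
        return False
    return bool(NAME_PATTERN.match(lp))

def triage_accept_all(addresses):
    """Split an accept-all bucket into a send-candidate subset and the rest."""
    keep, drop = [], []
    for addr in addresses:
        local = addr.split("@", 1)[0]
        (keep if looks_human(local) else drop).append(addr)
    return keep, drop
```

This only handles the local-part check; the LinkedIn-recency and tenure cross-references still have to come from your enrichment layer.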

Unknown is where I draw the hard line. A 31% hard bounce rate will destroy your sending domain. I use Phantombuster or RocketReach to attempt a secondary data point — if I can find the same address from a second independent source, I'll move it to a small test batch. Otherwise, it goes to manual research or gets dropped.

Concrete send/no-send rules I actually apply

After running these comparisons over several months, I settled on a decision tree that doesn't require perfect data:

Send without restriction: Valid/deliverable from any of the three tools, with a confidence score above 85 on Hunter or a "deliverable" status on NeverBounce.

Send with volume cap (max 200/day from a single domain): Accept-all addresses that (1) come from a recognizable corporate domain, not a generic catch-all registrar, (2) have a coherent local part matching a real name pattern, and (3) have a LinkedIn profile that shows the person has been in role for less than 24 months. Anything older than that and you're sending to someone who may have left.

Small test batch only (max 25, monitor bounce feedback immediately): Snov.io "risky" that doesn't fall into disposable or role categories. NeverBounce role-based addresses where the role has decision-making authority (cto@, cfo@). Anything that came back unknown from two tools but valid from one.

Do not send: Unknown from two or more tools with no valid result from the third. Invalid from any tool. Disposable from any tool. Accept-all on domains where every single address you've checked on that domain comes back accept-all — that's a signal the entire domain is a spam trap or honeypot setup.
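The whole tree collapses into one function. The field names, status strings, and return values below are my own encoding of these rules, not any tool's API; enrichment signals (domain type, local-part shape, tenure) come from outside the verifiers:

```python
def send_decision(statuses, hunter_confidence=0, corporate_domain=False,
                  human_local_part=False, months_in_role=None):
    """statuses maps tool name -> normalized status ('valid', 'invalid',
    'accept-all', 'unknown', 'risky', 'disposable')."""
    values = list(statuses.values())
    unknowns = values.count("unknown")

    # Hard stops.
    if "invalid" in values or "disposable" in values:
        return "never"
    if unknowns >= 2:
        # Unknown from two tools but valid from one earns a small test batch;
        # unknown across the board is a hard no.
        return "test-batch" if "valid" in values else "never"

    # Unrestricted send: NeverBounce deliverable, or Hunter confidence > 85.
    if statuses.get("neverbounce") == "valid" or hunter_confidence > 85:
        return "send"

    # Accept-all: volume-capped only if all three checklist items pass.
    if "accept-all" in values:
        recent = months_in_role is not None and months_in_role < 24
        if corporate_domain and human_local_part and recent:
            return "send-capped"
        return "never"

    # Risky and everything left over: small, monitored test batch.
    return "test-batch"
```

Note that a Hunter "valid" alongside a NeverBounce "accept-all" falls through to the accept-all branch here, since the unrestricted-send gate requires NeverBounce itself to say valid.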

One more thing on cross-tool disagreement: when Hunter says valid and NeverBounce says accept-all for the same address, trust NeverBounce's domain-level signal and apply the accept-all rules. Hunter's valid status sometimes reflects domain-level confidence, not mailbox-level confirmation.
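That tie-break generalizes to a precedence rule: when two tools disagree, take the more pessimistic status. The ordering below is my own reading of the bounce-rate table above, not a published standard:

```python
# Most pessimistic first, ordered roughly by observed hard-bounce risk.
PRECEDENCE = ["invalid", "disposable", "unknown", "risky", "accept-all", "valid"]

def reconcile(hunter_status, neverbounce_status):
    """When tools disagree, the more pessimistic status wins; in particular,
    NeverBounce's accept-all overrides Hunter's valid."""
    return min((hunter_status, neverbounce_status), key=PRECEDENCE.index)
```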

What I actually use

For most prospecting workflows, I start with Hunter.io for domain-level discovery — their confidence scores add real signal that pure SMTP probers don't have. For final list verification before any send, I run everything through NeverBounce because their accept-all secondary detection and clean role/disposable separation gives me the most actionable output. Snov.io I use primarily when I'm working inside their prospecting workflow and want to avoid exporting; their verification is solid but the risky bucket needs more manual triage than NeverBounce's output.

For enrichment-plus-verification in a single workflow, Clay is worth the overhead because it lets you chain PDL, RocketReach, and a verifier into one waterfall and apply conditional logic based on status codes without touching a spreadsheet. Ziwa is another option worth evaluating if you're building automated enrichment pipelines and want verification integrated into the data flow.

The core lesson from 12,000 addresses: stop treating verification as a binary gate. The nuanced buckets are where the real deliverability decisions live, and the tools disagree enough that a single-tool approach is leaving you either over-cautious or exposed.

Top comments (1)

toshihiro shishido

The "Unknown" bucket gets treated as binary (deliver / don't) when it's really a probability spectrum.

Same in attribution — "(direct)/(none)" is a mix of no-referrer, stripped-UTM, and actually-typed-URL users. Different distributions, different decisions, all collapsed into one label.