I ran 500 Apollo-sourced "verified" contacts through a dedicated re-verification stack before sending a cold campaign last quarter. Forty-one came back as hard bounces. That's an 8.2% hard-bounce rate on emails Apollo had stamped with its green checkmark — right in line with the 9% figure buried in their own support docs, but nowhere near the "97% accuracy" number they surface in marketing copy. Those two numbers coexist in Apollo's documentation, and the gap between them is what this article is about.
What Apollo's Verification Pipeline Actually Does
Apollo's verification runs three mechanisms in sequence. First, a syntax and domain-level check — does the MX record exist, does the domain accept mail at all. Second, SMTP pinging (what Apollo calls "SMTP tickling" in their knowledge base): the system initiates an SMTP handshake with the recipient's mail server and listens for a positive or negative response without actually sending a message. Third, cross-referencing against third-party signals — delivery statistics from other senders, data aggregators, and Apollo's own historical send data.
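The SMTP-ping step can be sketched in a few lines of stdlib Python. This is an illustration of the general technique, not Apollo's implementation: `smtp_ping` and `classify_rcpt_code` are hypothetical names, and a real pipeline would resolve `mx_host` from a DNS MX lookup (e.g. with dnspython) and handle greylisting retries.

```python
import smtplib

def classify_rcpt_code(code: int) -> str:
    """Map an SMTP RCPT TO reply code to a coarse verdict.
    250/251 means the mailbox was accepted, the 550 class means
    rejected, and anything else (4xx greylisting, temporary
    failures) is inconclusive."""
    if code in (250, 251):
        return "deliverable"
    if 550 <= code <= 559:
        return "undeliverable"
    return "unknown"

def smtp_ping(mx_host: str, address: str, helo_domain: str = "example.com") -> str:
    """Open an SMTP session, issue MAIL FROM / RCPT TO, and hang up
    before DATA, so the mailbox is probed but no message is sent.
    `mx_host` would normally come from a DNS MX lookup."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as server:
        server.helo(helo_domain)
        server.mail("probe@" + helo_domain)
        code, _reply = server.rcpt(address)
        return classify_rcpt_code(code)
```

The session is dropped before DATA, so the server's reply to RCPT TO is the entire signal; that is also exactly why catch-all servers defeat this check, as covered below.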
None of that is unique to Apollo. Hunter.io, Snov.io, and Neverbounce all run similar pipelines. What matters is the timestamp problem.
When Apollo marks an email verified, that verification has a date attached to it. Apollo's own documentation confirms that the badge reflects a point-in-time check, not a continuous signal. There is no published re-verification cadence in their help center that guarantees how often a specific contact is re-checked. Their marketing page says contacts are "continuously re-verified," but that claim covers the database in aggregate — a contact added nine months ago and never exported or triggered by system logic may not have been re-touched since initial ingestion.
Email data decays at roughly 2–3% per month across most B2B databases. Do the math on nine months and you're looking at potential decay of 18–27% before you even factor in the structural edge cases.
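The 18–27% range is the simple linear estimate (monthly rate times nine months); compounding the monthly decay is slightly gentler but lands in the same territory:

```python
def decayed_fraction(monthly_rate: float, months: int) -> float:
    """Fraction of a list expected to have gone bad after `months`,
    compounding the monthly decay rate."""
    return 1 - (1 - monthly_rate) ** months

# 2-3% monthly decay over nine untouched months:
low = decayed_fraction(0.02, 9)   # ~16.6% compounded
high = decayed_fraction(0.03, 9)  # ~24.0% compounded
# The linear estimate (rate * months) gives the 18% and 27% in the text.
```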
The Three Edge Cases Where 'Verified' Still Hard-Bounces
**Catch-all domains that accepted the SMTP ping.** This is the most common failure mode. A catch-all mail server returns a positive SMTP response for any address at that domain, regardless of whether the specific mailbox exists. Apollo's system records a positive response, marks the email verified, and moves on. At send time, the internal mail server rejects the non-existent mailbox after accepting it through the gateway. I've seen this pattern repeatedly in manufacturing, mid-market SaaS, and government contractors — all sectors that run catch-all configurations. Apollo does tag some addresses as "catch-all" rather than "verified," but the detection isn't perfect; some catch-alls pass the SMTP ping and get the green badge.
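The standard countermeasure is to probe a randomized mailbox that almost certainly doesn't exist at the same domain: if the server accepts it, every address there will pass an SMTP ping, and a positive result for the real address carries no information. A sketch under that assumption; the `probe` callable stands in for any RCPT TO check or verification API, and the names are mine, not any vendor's:

```python
import uuid
from typing import Callable

# `probe` returns True when the server accepts RCPT TO for the address.
Prober = Callable[[str], bool]

def is_catch_all(domain: str, probe: Prober) -> bool:
    """Probe a randomized mailbox that almost certainly doesn't exist.
    If the server accepts it, a positive result for the real address
    at this domain proves nothing."""
    fake = f"zz-{uuid.uuid4().hex}@{domain}"
    return probe(fake)

def interpret(domain: str, address_accepted: bool, probe: Prober) -> str:
    """Downgrade a positive ping to 'risky' on catch-all domains."""
    if not address_accepted:
        return "undeliverable"
    return "catch-all (risky)" if is_catch_all(domain, probe) else "deliverable"
```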
**Role changes and departures.** The average B2B job tenure sits around 2.5 years, but the practical churn rate at the director-and-above level in high-growth sectors is much faster. An email verified when the contact was VP of Marketing at a Series B company may be cold by the time that person has moved to a new role at a different company. Apollo updates records when it has signals — LinkedIn data, email bounces from other senders — but that feedback loop has lag. There's no SLA on how quickly a departure is reflected in the badge.
**Domain migrations.** A company rebrands, gets acquired, or moves from a legacy domain to a new one. The old domain may still accept mail for a transition period (so historical bounces don't trigger a record update), then goes dark. I've hit this specifically with companies acquired by private equity — email infrastructure consolidations happen six to eighteen months post-close, and Apollo's records rarely track that timeline accurately.
The Real Accuracy Number vs. What You're Buying
Apollo publishes two accuracy figures that appear to contradict each other, plus a refund policy that tacitly concedes the gap:
| Source | Claim |
|---|---|
| Apollo knowledge base (bounce rate article) | 91% (9% bounce rate acknowledged) |
| Apollo marketing/insights page | 97% across 230M+ database |
| Apollo credit refund policy | Refund if verified email bounces within 30 days |
The 97% number likely reflects a broader dataset metric — possibly including guessed or pattern-inferred emails that weren't individually SMTP-verified — while the 91% reflects the subset of emails carrying the verified badge. Neither number is current for any specific contact you're about to send to.
The 30-day refund policy is worth noting but doesn't solve the deliverability problem. A hard bounce that triggers a credit refund has already happened. If you're running campaigns at volume, even a 5% hard-bounce rate can push your domain into negative reputation territory with Gmail and Outlook. Credits don't un-ring that bell.
For context, industry consensus on acceptable hard-bounce rates for cold outbound is under 2%, and ESP abuse thresholds typically sit around 0.1% for spam complaints. Apollo's acknowledged 9% on verified emails is more than four times the safe operational ceiling.
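Concretely, at a 2% ceiling a 500-contact campaign has a budget of 10 hard bounces, while Apollo's acknowledged 9% predicts about 45:

```python
import math

def bounce_budget(list_size: int, ceiling: float = 0.02) -> int:
    """Maximum hard bounces that keep a campaign under the ceiling."""
    return math.floor(list_size * ceiling)

def expected_bounces(list_size: int, rate: float) -> float:
    """Hard bounces to expect at a given badge bounce rate."""
    return list_size * rate

# 500-contact campaign: budget of 10 at the 2% ceiling,
# ~45 expected at the acknowledged 9% rate.
```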
What Layered Re-Verification Actually Adds
The workflow I now run before any campaign that pulls from Apollo: export the contact list, then run it through a dedicated verification layer before touching sequence enrollment.
Tools I've used for this second pass:
| Tool | Method | Catch-all handling | Speed on 500 records |
|---|---|---|---|
| Neverbounce | SMTP + proprietary signals | Flags separately | ~4 minutes |
| Zerobounce | SMTP + AI scoring | Risk score on catch-alls | ~6 minutes |
| Snov.io verifier | SMTP + pattern matching | Tags as risky | ~5 minutes |
| Wiza | LinkedIn-sourced verification | Strong on recent job data | Slower, per-profile |
| Hunter.io verifier | SMTP + domain analysis | Explicit catch-all tag | ~3 minutes |
What dedicated verification adds on top of Apollo's badge: a fresh SMTP handshake at the time of your campaign (not nine months ago), a separate catch-all risk signal that sometimes diverges from Apollo's classification, and in the case of tools with LinkedIn enrichment like Wiza, a cross-check against current employment data.
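Whichever tool you pick, the second pass reduces to the same triage: send only fresh "valid" verdicts, hold catch-alls and unknowns for review, drop invalids. A sketch with a generic `verify` callable; the verdict strings are illustrative, not any vendor's actual API:

```python
from typing import Callable, Iterable

# Verdicts a second-pass tool might return; each vendor has its own
# vocabulary, so map theirs onto these before filtering.
Verdict = str  # "valid" | "invalid" | "catch_all" | "unknown"

def second_pass(contacts: Iterable[dict], verify: Callable[[str], Verdict]):
    """Split an exported list into send / hold / drop buckets before
    sequence enrollment. Only fresh 'valid' verdicts get enrolled;
    catch-alls and unknowns are held for review, invalids dropped."""
    send, hold, drop = [], [], []
    for contact in contacts:
        verdict = verify(contact["email"])
        if verdict == "valid":
            send.append(contact)
        elif verdict == "invalid":
            drop.append(contact)
        else:
            hold.append(contact)
    return send, hold, drop
```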
In my 500-contact test, re-verification downgraded 63 emails that Apollo had marked verified. Of those, 41 hard-bounced in the actual send; Neverbounce had flagged the other 22 as risky, but they delivered (likely catch-all positives). Without the re-verification pass, my domain would have absorbed those 41 hard bounces in a single campaign.
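Expressed as rates, those numbers say the downgrades were mostly real and the avoided damage matches the opening figure:

```python
# Numbers from the 500-contact test above.
flagged, bounced, list_size = 63, 41, 500

hit_rate = bounced / flagged    # ~65% of the downgrades really bounced
avoided = bounced / list_size   # 8.2% bounce rate kept off the domain
```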
The cost of running 500 records through Neverbounce is trivial — a few dollars at most. The cost of a domain reputation hit is not.
For enrichment-first workflows where you need to pull new contacts rather than re-verify an existing list, tools like Clay let you chain Apollo lookups with a verification step in the same workflow, which reduces the operational friction considerably. PDL (People Data Labs) offers an API-based approach worth considering if you're building internal enrichment pipelines where you want raw data signals rather than a pre-packaged badge.
Phantombuster can automate LinkedIn profile scraping to cross-reference current employment status before sending — useful for the job-change edge case specifically. Maigret is more of an OSINT username profiler and less relevant for bulk email verification, but worth knowing if you're doing deep single-subject research.
What I Actually Use
For most outbound workflows, I pull from Apollo for volume and initial filtering, then run the export through Neverbounce before enrolling anyone in a sequence. That two-step approach has held my hard-bounce rate under 1.5% consistently. For smaller, higher-value lists where I want current employment verification layered in, I'll add a Wiza pass or use Clay to chain the enrichment and verification in one step. I've also tested RocketReach as an alternative data source to Apollo for industries where Apollo's coverage is thin — mid-market finance and legal in particular.
The core principle: treat Apollo's verified badge as a starting condition, not a deliverability guarantee. The badge tells you the email was valid at some point in the past under conditions that don't cover catch-all edge cases, domain migrations, or the job change that happened last Tuesday. A second verification pass before send is cheap enough that skipping it doesn't make sense at any campaign scale.
Apollo is a good database. It's not a real-time verification service, and their own documentation says so if you read past the marketing copy.