<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Min8T</title>
    <description>The latest articles on DEV Community by Min8T (@min8t).</description>
    <link>https://dev.to/min8t</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3909404%2F86f218e2-885d-42f9-a873-b58f0dea85b9.png</url>
      <title>DEV Community: Min8T</title>
      <link>https://dev.to/min8t</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/min8t"/>
    <language>en</language>
    <item>
      <title>Why email verification is 21 checks, not 1 (and why MCP makes it agent-ready)</title>
      <dc:creator>Min8T</dc:creator>
      <pubDate>Sat, 02 May 2026 18:14:17 +0000</pubDate>
      <link>https://dev.to/min8t/why-email-verification-is-21-checks-not-1-and-why-mcp-makes-it-agent-ready-3me6</link>
      <guid>https://dev.to/min8t/why-email-verification-is-21-checks-not-1-and-why-mcp-makes-it-agent-ready-3me6</guid>
<description>&lt;h2&gt;"Valid email format" isn't enough.&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;hello@gmail.com&lt;/code&gt; and &lt;code&gt;hello@gmial.com&lt;/code&gt; both parse as RFC-5322 valid. One reaches a real inbox. The other bounces, hurts your sender reputation, and might land you on a blocklist.&lt;/p&gt;

&lt;p&gt;Real email-deliverability work is 20+ checks, not 1. Most "email verifier" services are a black box that returns &lt;code&gt;valid&lt;/code&gt; or &lt;code&gt;invalid&lt;/code&gt; and you get to trust them. I wanted something I could reason about end-to-end and that an AI agent could orchestrate piece by piece.&lt;/p&gt;

&lt;p&gt;This is what's actually inside the pipeline I shipped (&lt;code&gt;@deliveriq/mcp&lt;/code&gt; — open source, MIT, on npm), broken down stage by stage. If you've ever wondered why a single email check takes ~3 seconds and why each piece matters, this is the long version.&lt;/p&gt;




&lt;h2&gt;The 5 stages&lt;/h2&gt;

&lt;p&gt;The verification flow runs &lt;strong&gt;5 stages with 21 checks total&lt;/strong&gt;, in priority order. Checks within a stage run in parallel where possible, and a failure at an early stage short-circuits the rest (you don't run SMTP if the syntax is broken).&lt;/p&gt;

&lt;h3&gt;Stage 1 — Email format validation (2 checks)&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1.1 Syntax validation (RFC 5322).&lt;/strong&gt; Local-part length, character set, proper &lt;code&gt;@&lt;/code&gt; placement, domain format. Catches things like &lt;code&gt;@@domain.com&lt;/code&gt; or 80-character local parts before any network calls.&lt;/p&gt;
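&lt;p&gt;As a sketch of the kind of pre-network check this stage performs (this is illustrative, not the library's actual implementation; the full RFC 5322 grammar, with quoted local parts and comments, is much larger):&lt;/p&gt;

```javascript
// Simplified pre-network syntax check. A sketch only: real RFC 5322
// also allows quoted local parts, comments, and more.
function checkSyntax(email) {
  const at = email.indexOf("@");
  if (at <= 0 || at !== email.lastIndexOf("@")) return false; // exactly one "@", non-empty local part
  const local = email.slice(0, at);
  const domain = email.slice(at + 1);
  if (local.length > 64 || domain.length > 253) return false; // RFC length limits
  if (!/^[A-Za-z0-9!#$%&'*+\/=?^_`{|}~.-]+$/.test(local)) return false;
  if (local.startsWith(".") || local.endsWith(".") || local.includes("..")) return false;
  // Domain: dot-separated labels, each starting/ending alphanumeric, TLD of 2+ letters.
  return /^([A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)+[A-Za-z]{2,}$/.test(domain);
}
```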

&lt;p&gt;&lt;strong&gt;1.2 Typo detection (Sift3 fuzzy match).&lt;/strong&gt; Compares the domain against a database of 1,200+ known misspellings. &lt;code&gt;gmial.com&lt;/code&gt; → &lt;code&gt;gmail.com&lt;/code&gt;. &lt;code&gt;outlokk.com&lt;/code&gt; → &lt;code&gt;outlook.com&lt;/code&gt;. Sift3 is a string-distance algorithm that's faster than Levenshtein for short strings — important when you're running this on every check.&lt;/p&gt;
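&lt;p&gt;For reference, Sift3 fits in about a dozen lines. This is a straight transcription of the published algorithm (not DeliverIQ's own code); a distance of ~2 or less against a popular domain is a strong typo signal:&lt;/p&gt;

```javascript
// Sift3 string distance: walk both strings in lockstep, tolerating small
// positional offsets (maxOffset), and subtract the matched count from the
// average length. Cheaper than Levenshtein's full DP table for short strings.
function sift3(s1, s2, maxOffset = 5) {
  if (!s1) return s2 ? s2.length : 0;
  if (!s2) return s1.length;
  let c = 0, offset1 = 0, offset2 = 0, lcs = 0;
  while (c + offset1 < s1.length && c + offset2 < s2.length) {
    if (s1[c + offset1] === s2[c + offset2]) {
      lcs++;
    } else {
      offset1 = 0;
      offset2 = 0;
      // Look ahead up to maxOffset characters for a re-sync point.
      for (let i = 0; i < maxOffset; i++) {
        if (c + i < s1.length && s1[c + i] === s2[c]) { offset1 = i; break; }
        if (c + i < s2.length && s1[c] === s2[c + i]) { offset2 = i; break; }
      }
    }
    c++;
  }
  return (s1.length + s2.length) / 2 - lcs;
}
```

For example, `sift3("gmial.com", "gmail.com")` comes out to 2, while identical strings score 0.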

&lt;p&gt;Total cost so far: zero network calls. If syntax is invalid, the pipeline returns immediately.&lt;/p&gt;




&lt;h3&gt;Stage 2 — Domain &amp;amp; provider checks (5 checks)&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;2.1 Disposable domain detection.&lt;/strong&gt; A continuously maintained list of 164,000+ throwaway domains (Mailinator, Guerrilla Mail, 10minutemail, and friends). Disposable addresses self-destruct, so they're flagged high-risk by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2 Role-based detection.&lt;/strong&gt; 130+ role prefixes — &lt;code&gt;admin@&lt;/code&gt;, &lt;code&gt;info@&lt;/code&gt;, &lt;code&gt;support@&lt;/code&gt;, &lt;code&gt;noreply@&lt;/code&gt;, &lt;code&gt;postmaster@&lt;/code&gt;, &lt;code&gt;webmaster@&lt;/code&gt;, etc. These typically have lower engagement and higher bounce rates than personal addresses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.3 Free-provider check.&lt;/strong&gt; 200+ consumer email domains worldwide. Useful when the use case requires corporate addresses (B2B prospecting, etc.). Not a hard fail — just a signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.4 Alias normalization.&lt;/strong&gt; Resolves &lt;code&gt;+tag&lt;/code&gt; and dot-aliases to canonical form. &lt;code&gt;user+newsletter@gmail.com&lt;/code&gt; → &lt;code&gt;user@gmail.com&lt;/code&gt;. &lt;code&gt;u.s.e.r@gmail.com&lt;/code&gt; → &lt;code&gt;user@gmail.com&lt;/code&gt;. This is how you collapse duplicates that look different.&lt;/p&gt;
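&lt;p&gt;A minimal sketch of that normalization (provider rules differ, which is the subtle part: dots are only insignificant on Gmail, and sub-addressing separators vary by provider):&lt;/p&gt;

```javascript
// Collapse +tag and dot aliases to a canonical form. Illustrative only:
// a real implementation needs per-provider rules.
const DOT_INSENSITIVE = new Set(["gmail.com", "googlemail.com"]);

function normalizeAlias(email) {
  let [local, domain] = email.toLowerCase().split("@");
  local = local.split("+")[0];          // strip "+tag" sub-addressing
  if (DOT_INSENSITIVE.has(domain)) {
    local = local.replace(/\./g, "");   // Gmail ignores dots in the local part
  }
  return `${local}@${domain}`;
}
```

So `user+newsletter@gmail.com` and `u.s.e.r@gmail.com` both collapse to `user@gmail.com`, while dots on other domains are preserved.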

&lt;p&gt;&lt;strong&gt;2.5 Email-pattern analysis.&lt;/strong&gt; Scores the local part for entropy, keyboard walks (&lt;code&gt;qwerty12@&lt;/code&gt;), and bot-style sequences. Distinguishes human-typed from auto-generated addresses.&lt;/p&gt;
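&lt;p&gt;One ingredient of that scoring is plain Shannon entropy over the local part (thresholds here are illustrative; the real analyzer combines several signals):&lt;/p&gt;

```javascript
// Shannon entropy in bits per character. High-entropy local parts like
// "x7kq9z2vb1" read as generated; "jsmith" reads as human-typed.
function shannonEntropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  return Object.values(counts).reduce((h, n) => {
    const p = n / s.length;
    return h - p * Math.log2(p);
  }, 0);
}
```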




&lt;h3&gt;Stage 3 — Mailbox verification (3 checks)&lt;/h3&gt;

&lt;p&gt;This is where it gets interesting. The previous 7 checks are local — Stage 3 actually talks to the mail server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 MX record resolution.&lt;/strong&gt; DNS MX lookup with A-record fallback. Highest-priority MX is used for SMTP. Capped at the top 2 servers to prevent timeout stacking when a domain has 8 MX records.&lt;/p&gt;
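&lt;p&gt;The selection logic is simple once the records are in hand. The record shape below matches what Node's &lt;code&gt;dns.promises.resolveMx()&lt;/code&gt; returns; the sample hostnames are made up:&lt;/p&gt;

```javascript
// Sort MX records by priority (lower = preferred) and cap at the top 2
// hosts so a domain with 8 MX records can't stack 8 timeouts.
function pickMxTargets(records, cap = 2) {
  return [...records]
    .sort((a, b) => a.priority - b.priority)
    .slice(0, cap)
    .map((r) => r.exchange);
}

const sample = [
  { exchange: "alt1.example-mx.com", priority: 10 },
  { exchange: "primary.example-mx.com", priority: 5 },
  { exchange: "alt2.example-mx.com", priority: 20 },
];
// pickMxTargets(sample) -> ["primary.example-mx.com", "alt1.example-mx.com"]
```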

&lt;p&gt;&lt;strong&gt;3.2 ISP identification.&lt;/strong&gt; MX-pattern matching against known ISPs (Gmail, Outlook, Yahoo, Apple iCloud, etc.). This matters because some major providers aggressively block SMTP probes — for those, the pipeline skips Stage 3.3 and uses heuristic scoring instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3 SMTP handshake verification.&lt;/strong&gt; This is the real one. The pipeline opens a TCP connection on port 25 and runs the conversation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; EHLO verifier.example.com
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; MAIL FROM: &amp;lt;neutral@verifier.example.com&amp;gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; RCPT TO: &amp;lt;hello@target.com&amp;gt;     &lt;span class="c"&gt;# the email we're checking&lt;/span&gt;
&amp;lt; 250 OK                            &lt;span class="c"&gt;# accepted = exists&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; RCPT TO: &amp;lt;random-1234567@target.com&amp;gt;  &lt;span class="c"&gt;# catch-all probe&lt;/span&gt;
&amp;lt; 250 OK                            &lt;span class="c"&gt;# also accepted = catch-all server&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; QUIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A second &lt;code&gt;RCPT TO&lt;/code&gt; with a random address detects catch-all servers (where the server accepts every address regardless). Each connection gets a 5-second timeout. &lt;strong&gt;No email is ever sent&lt;/strong&gt; — &lt;code&gt;MAIL FROM&lt;/code&gt; and &lt;code&gt;RCPT TO&lt;/code&gt; are setup commands; we close before &lt;code&gt;DATA&lt;/code&gt;.&lt;/p&gt;
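&lt;p&gt;Turning the two probes' reply codes into a verdict looks roughly like this (a sketch; real servers also differ in enhanced status codes and reply text):&lt;/p&gt;

```javascript
// Interpret the SMTP reply codes from the target probe and the random
// catch-all probe. 2xx = accepted, 4xx = temporary failure, 5xx = rejected.
function interpretProbe(targetCode, randomCode) {
  const klass = Math.floor(targetCode / 100);
  if (klass === 5) return "invalid";   // 550 etc.: mailbox rejected
  if (klass === 4) return "unknown";   // 4xx: greylisted, try again later
  if (klass === 2) {
    // Target accepted. If the random address was accepted too, the server
    // is a catch-all and acceptance proves nothing about this mailbox.
    return Math.floor(randomCode / 100) === 2 ? "catch_all" : "deliverable";
  }
  return "unknown";
}
```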

&lt;p&gt;This is the slowest part of the pipeline. ~1–2 seconds typical, sometimes more if the receiving server greylists.&lt;/p&gt;




&lt;h3&gt;Stage 4 — Reputation &amp;amp; intelligence (7 checks)&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;4.1 Gravatar lookup.&lt;/strong&gt; Whether the address has a registered Gravatar profile picture. Positive trust signal — someone set up a public identity with this address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 DNSBL blacklist check.&lt;/strong&gt; Queries the domain across &lt;strong&gt;50 categorized DNSBL zones&lt;/strong&gt;: Spamhaus (SBL/XBL/PBL), SpamCop, SURBL, URIBL, SORBS, Barracuda BRBL, UCEProtect, DroneBL, and many more. Includes domain-based lists and IP-based lists. A single listing isn't a death sentence; multiple listings are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.3 Domain age via RDAP.&lt;/strong&gt; Registration date, expiration, registrar, DNSSEC status, nameservers. Domains under 30 days are high-risk; under a year, elevated. Pulled from the registry, not WHOIS — RDAP is the modern replacement and returns structured JSON.&lt;/p&gt;
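&lt;p&gt;The age tiering itself is a one-liner once RDAP hands back the registration timestamp, using the thresholds above:&lt;/p&gt;

```javascript
// Tier a domain by age from its RDAP registration date:
// under 30 days = high risk, under a year = elevated.
function domainAgeRisk(registrationISO, now = new Date()) {
  const ageDays = (now - new Date(registrationISO)) / 86_400_000; // ms per day
  if (ageDays >= 365) return "normal";
  if (ageDays >= 30) return "elevated";
  return "high";
}
```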

&lt;p&gt;&lt;strong&gt;4.4 HIBP breach check.&lt;/strong&gt; Have I Been Pwned database lookup. Compromised addresses are more likely to be abandoned or used by spammers leveraging credential dumps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.5 DKIM record check.&lt;/strong&gt; Probes 15 common DKIM selectors (&lt;code&gt;google&lt;/code&gt;, &lt;code&gt;default&lt;/code&gt;, &lt;code&gt;selector1&lt;/code&gt;, &lt;code&gt;s1&lt;/code&gt;, &lt;code&gt;mail&lt;/code&gt;, &lt;code&gt;k1&lt;/code&gt;, etc.) in parallel. Returns key type (RSA / Ed25519) and estimated key size. DKIM presence indicates the domain signs outbound email — a strong legitimacy signal.&lt;/p&gt;
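&lt;p&gt;Each selector is queried as a TXT record at &lt;code&gt;selector._domainkey.example.com&lt;/code&gt;. Parsing the answer is mostly tag splitting (a sketch; real records can be split across multiple TXT strings, which this ignores):&lt;/p&gt;

```javascript
// Parse a DKIM TXT record into tag=value pairs, recovering the key type
// and an approximate DER key length from the base64 "p=" value.
function parseDkim(txt) {
  const tags = Object.fromEntries(
    txt.split(";").map((part) => {
      const [k, ...rest] = part.trim().split("=");
      return [k, rest.join("=")]; // rejoin: base64 padding contains "="
    })
  );
  const p = (tags.p || "").replace(/\s/g, "");
  return {
    keyType: tags.k || "rsa",                 // "k" defaults to rsa per RFC 6376
    derBytes: Math.floor((p.length * 3) / 4), // ~294 bytes for a 2048-bit RSA key
  };
}
```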

&lt;p&gt;&lt;strong&gt;4.6 Infrastructure analysis.&lt;/strong&gt; Evaluates 6 email-authentication standards in one composite score:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SPF record presence + syntax&lt;/li&gt;
&lt;li&gt;DKIM key availability&lt;/li&gt;
&lt;li&gt;DMARC policy strength (&lt;code&gt;none&lt;/code&gt; / &lt;code&gt;quarantine&lt;/code&gt; / &lt;code&gt;reject&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;MTA-STS for transport security&lt;/li&gt;
&lt;li&gt;BIMI for verified-sender brand indicators&lt;/li&gt;
&lt;li&gt;TLS-RPT for TLS reporting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Domains with the full stack score highest.&lt;/p&gt;
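&lt;p&gt;A toy version of that composite, with illustrative weights (not the pipeline's actual ones), shows why a full stack with &lt;code&gt;p=reject&lt;/code&gt; tops the scale:&lt;/p&gt;

```javascript
// Composite score over the six authentication signals. Weights are made up
// for illustration; only the signal list comes from the pipeline.
function infraScore({ spf, dkim, dmarcPolicy, mtaSts, bimi, tlsRpt }) {
  const dmarcPts = { reject: 30, quarantine: 20, none: 10 }[dmarcPolicy] ?? 0;
  return (
    (spf ? 25 : 0) +
    (dkim ? 25 : 0) +
    dmarcPts +
    (mtaSts ? 10 : 0) +
    (bimi ? 5 : 0) +
    (tlsRpt ? 5 : 0)
  );
}
```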

&lt;p&gt;&lt;strong&gt;4.7 MX server reputation.&lt;/strong&gt; Reverse-DNS (PTR) and IP-DNSBL on the MX server itself. Servers without PTR records or with blacklisted IPs indicate lower-quality infrastructure.&lt;/p&gt;




&lt;h3&gt;Stage 5 — Scoring &amp;amp; classification (4 checks)&lt;/h3&gt;

&lt;p&gt;The last stage rolls everything up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.1 Spam-trap heuristic scoring.&lt;/strong&gt; 12 weighted signals:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Domain age (younger = more suspicious)&lt;/li&gt;
&lt;li&gt;Role-based address&lt;/li&gt;
&lt;li&gt;No Gravatar&lt;/li&gt;
&lt;li&gt;No SMTP deliverability&lt;/li&gt;
&lt;li&gt;Disposable domain&lt;/li&gt;
&lt;li&gt;DNSBL listing&lt;/li&gt;
&lt;li&gt;Catch-all server&lt;/li&gt;
&lt;li&gt;Email-pattern entropy&lt;/li&gt;
&lt;li&gt;MX server reputation&lt;/li&gt;
&lt;li&gt;Local-part entropy&lt;/li&gt;
&lt;li&gt;Email-pattern type&lt;/li&gt;
&lt;li&gt;MX IP blacklist status&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Plus classification of the trap type: &lt;strong&gt;Pristine&lt;/strong&gt; (ISP-created, never used by a real person), &lt;strong&gt;Recycled&lt;/strong&gt; (formerly active, repurposed after abandonment), or &lt;strong&gt;Typo&lt;/strong&gt; (misspelling of a legitimate domain). Each comes with a confidence score.&lt;/p&gt;
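&lt;p&gt;The general shape of weighted-signal scoring can be sketched with a reduced set of the signals above (the weights here are invented for illustration; the real pipeline weighs all 12):&lt;/p&gt;

```javascript
// Each signal is a 0..1 value; weights sum to 1; output is a 0-100 risk
// score. Signal names follow the article's list, weights are illustrative.
const WEIGHTS = {
  youngDomain: 0.15,
  roleBased: 0.05,
  noGravatar: 0.05,
  notDeliverable: 0.2,
  disposable: 0.2,
  dnsblListed: 0.15,
  catchAll: 0.1,
  patternEntropy: 0.1,
};

function spamTrapRisk(signals) {
  let score = 0;
  for (const [name, w] of Object.entries(WEIGHTS)) {
    score += w * (signals[name] ?? 0); // missing signals contribute nothing
  }
  return Math.round(score * 100);
}
```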

&lt;p&gt;&lt;strong&gt;5.2 Domain trust score.&lt;/strong&gt; A 0–100 composite: age (25 pts) + infrastructure (25 pts) + reputation (25 pts) + trust signals like DNSSEC and registrar lock (25 pts). Maps to five trust levels: Trusted / Positive / Neutral / Suspicious / Malicious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.3 Deliverability score.&lt;/strong&gt; All preceding signals roll into a single 0–100 score weighing SMTP deliverability, infrastructure quality, spam-trap probability, and address characteristics. The model gets one actionable number instead of a dump of 21 booleans.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.4 Reachability classification.&lt;/strong&gt; Score maps to &lt;strong&gt;Safe&lt;/strong&gt; (80–100, high confidence inbox) / &lt;strong&gt;Risky&lt;/strong&gt; (40–79, may bounce or land in spam) / &lt;strong&gt;Invalid&lt;/strong&gt; (1–39, almost certain bounce) / &lt;strong&gt;Unknown&lt;/strong&gt; (0, couldn't be determined due to greylisting / catch-all / timeout).&lt;/p&gt;
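&lt;p&gt;The band boundaries above reduce to a trivial classifier:&lt;/p&gt;

```javascript
// Map the 0-100 deliverability score to the four reachability verdicts.
function classifyReachability(score) {
  if (score >= 80) return "safe";     // high confidence inbox
  if (score >= 40) return "risky";    // may bounce or land in spam
  if (score >= 1) return "invalid";   // almost certain bounce
  return "unknown";                   // greylisting / catch-all / timeout
}
```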




&lt;h2&gt;Why MCP, not "another dashboard"?&lt;/h2&gt;

&lt;p&gt;Email deliverability is one of those problems where &lt;strong&gt;the right answer needs 6–8 API calls in sequence&lt;/strong&gt;, not one call.&lt;/p&gt;

&lt;p&gt;You ask "is this list safe to send to?" and the right answer involves: verify each email; for each invalid result, check whether the domain is disposable; for each catch-all, check sender reputation; for each unknown, run DNSBL checks on the MX; then compute an aggregate spam-trap probability for the whole list, and only then make the recommendation.&lt;/p&gt;

&lt;p&gt;A dashboard makes you click through that. The model orchestrates it as tool calls — and explains the verdict in plain English at the end.&lt;/p&gt;

&lt;p&gt;That's why I exposed it as MCP rather than just a REST API + dashboard. The API is also there, but the MCP layer is what makes it agent-ready.&lt;/p&gt;

&lt;p&gt;The 12 MCP tools map to the pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_verify_email&lt;/code&gt; — runs all 5 stages on a single address&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_batch_verify&lt;/code&gt; — same, async, up to 100K addresses per job&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_blacklist_check&lt;/code&gt; — Stage 4.2 only&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_infrastructure_check&lt;/code&gt; — Stage 4.6 only&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_spam_trap_analysis&lt;/code&gt; — Stage 5.1 only&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_domain_intel&lt;/code&gt; — Stages 4 + 5.2 (composite)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_find_email&lt;/code&gt; — pattern-based discovery&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deliveriq_org_intel&lt;/code&gt; — cached organization patterns&lt;/li&gt;
&lt;li&gt;(+ batch status, batch download, list jobs, check credits)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When Claude is asked "audit this list before we send", it composes calls across these tools rather than stuffing 21 questions into a single prompt.&lt;/p&gt;




&lt;h2&gt;Install&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add to ~/Library/Application Support/Claude/claude_desktop_config.json&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"mcpServers"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"deliveriq"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"command"&lt;/span&gt;: &lt;span class="s2"&gt;"npx"&lt;/span&gt;,
      &lt;span class="s2"&gt;"args"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;, &lt;span class="s2"&gt;"@deliveriq/mcp"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
      &lt;span class="s2"&gt;"env"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"DELIVERIQ_API_KEY"&lt;/span&gt;: &lt;span class="s2"&gt;"lc_your_key"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free tier with no credit card at &lt;a href="https://min8t.com/deliveriq" rel="noopener noreferrer"&gt;https://min8t.com/deliveriq&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/@deliveriq/mcp" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/@deliveriq/mcp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Repo: &lt;a href="https://github.com/Davison-Francis/min8t-sdks" rel="noopener noreferrer"&gt;https://github.com/Davison-Francis/min8t-sdks&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Glama: &lt;a href="https://glama.ai/mcp/servers/Davison-Francis/min8t-sdks" rel="noopener noreferrer"&gt;https://glama.ai/mcp/servers/Davison-Francis/min8t-sdks&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;Two things I learned building this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SMTP catch-all servers are why "valid email" is unprovable.&lt;/strong&gt; A server that returns &lt;code&gt;250 OK&lt;/code&gt; to every &lt;code&gt;RCPT TO&lt;/code&gt; is correct from a protocol standpoint — RFC 5321 doesn't require accuracy. So you can never be 100% certain &lt;code&gt;hello@catch-all-domain.com&lt;/code&gt; reaches a real person. The best you can do is detect the catch-all behavior and weight it accordingly. Stage 5's "Risky" category is mostly catch-all results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP is genuinely better than a dashboard for this workflow.&lt;/strong&gt; Composing 6–8 tools in sequence to debug "why isn't this email getting through?" is the kind of agent-task that's incredibly tedious in a dashboard but feels natural when an LLM is driving it.&lt;/p&gt;

&lt;p&gt;What MCP tools are you missing for email or deliverability work? Curious what others would add to a pipeline like this.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>javascript</category>
      <category>mcp</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
