I Wrote 5 Cold Emails About AI Search Blind Spots — Here's the Engineering Behind Each Sentence

Cold outreach is a compression problem. You have ~100 words to transmit a signal through a noisy channel to a receiver who's actively filtering. Most emails fail not because the offer is bad, but because the signal-to-noise ratio is catastrophic.

I was building outreach for Topify.ai, a tool that fixes AI search visibility gaps. The ICP (ideal customer profile) list had five distinct personas, each with a different failure mode in how their brand shows up (or doesn't) in ChatGPT, Perplexity, and Claude responses. So I treated each email like a targeted patch: same underlying bug, different stack trace.

Here's the teardown.


The Setup

Each template has two subject line variants — hypothesis A (curiosity-driven) versus hypothesis B (specificity-driven) — a body capped at 120 words, and inline annotations on why each component is load-bearing.
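
To make that concrete, here's a minimal sketch of the template structure in Python. The names (EmailTemplate, MAX_BODY_WORDS) are mine for illustration, not anything from Topify.ai's stack:

from dataclasses import dataclass

MAX_BODY_WORDS = 120  # the hard cap every body below respects

@dataclass
class EmailTemplate:
    persona: str    # the ICP this variant targets, e.g. "Founder"
    subject_a: str  # hypothesis A: curiosity-driven
    subject_b: str  # hypothesis B: specificity-driven
    body: str       # the copy being annotated

    def __post_init__(self) -> None:
        # Enforce the word cap at construction time, not review time.
        words = len(self.body.split())
        if words > MAX_BODY_WORDS:
            raise ValueError(f"body is {words} words; cap is {MAX_BODY_WORDS}")
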


1. Founder

Subject A: Your brand isn't in the answer
Subject B: ChatGPT mentioned 3 competitors. Not you.

Hi [Name],

67% of AI-generated answers cite sources indexed for old query patterns —
not what users are actually searching today. If your brand isn't optimized
for how LLMs retrieve and surface content, you're invisible in the channel
your customers are increasingly starting in.

Topify.ai runs a 90-second audit that shows exactly where you're missing
from AI-generated results — and why.

Worth a look? Drop me your domain and I'll send the snapshot.

— [Sender]

Annotations:

  • Opening stat: Functions as an error log, not a claim. It reframes the problem as a known, measurable bug — not a vague threat.
  • "Channel your customers are increasingly starting in": Maps the risk to revenue without saying "revenue."
  • CTA: One input (domain), one output (snapshot). Mirrors a low-commitment API call — give input, receive value, no obligation.

2. CMO

Subject A: Your share of voice in AI answers is 0%
Subject B: How LLMs are rewriting your brand narrative without you

Hi [Name],

LLMs don't pull from your brand guidelines. They pull from whatever's
been indexed, cited, and reinforced in their training window. Right now,
that means your category story is being written by whoever optimized
for AI retrieval first — probably not you.

We've seen brands recover 40%+ of AI answer share within 60 days of
fixing their retrieval signal. Happy to show you the before/after from
a comparable brand.

Curious what your current AI footprint looks like? Reply with your domain.

— [Sender]

Annotations:

  • "Brand guidelines... training window": Speaks the CMO's language (brand control) through the LLM's actual mechanism. Precision over persuasion.
  • Proof element: "40%+ answer share" with a timeframe — deliberately incomplete (no company name yet) to create pull rather than close.
  • "Curious": A low-pressure qualifier. Not requesting a meeting. Requesting one string.

3. SEO Lead

Subject A: Your domain ranks. Your brand doesn't answer.
Subject B: Perplexity is citing your competitor 6x more than you.

Hi [Name],

Traditional SEO gets you on page 1. AI SEO gets you into the answer.
Those are different problems with different signals.

We analyzed 1,200 brand queries across Perplexity, ChatGPT, and Gemini.
Brands with strong traditional rankings showed up in AI answers less than
30% of the time for their own category keywords.

Your domain is probably in that gap. Takes 90 seconds to check.

Send me your domain — I'll pull your AI visibility score.

— [Sender]

Annotations:

  • "Different problems, different signals": SEO leads think in systems. This acknowledges their expertise rather than dismissing it — it's an extension, not an indictment.
  • Stat with methodology: "1,200 queries across three platforms" sounds like a data pull. Credibility through specificity, not assertion.
  • "Probably in that gap": Hedged language on purpose. SEO leads distrust certainty. Hedging builds trust.

4. Head of Growth

Subject A: Your AI acquisition channel has a zero-click problem
Subject B: LLMs are sending traffic. Just not to you.

Hi [Name],

AI-generated answers now influence 38% of product discovery journeys
before a user ever hits a search result. If your brand isn't present
in those answers, you're paying to acquire at the bottom of a funnel
you're not even entering at the top.

One brand we worked with recovered $120K in monthly pipeline by closing
their AI visibility gap in a single quarter.

Worth 90 seconds to see where you stand? Reply with your domain.

— [Sender]

Annotations:

  • "Zero-click problem": Growth leads already know zero-click search. This maps a familiar pain point to a new surface — reduces cognitive load for buy-in.
  • Pipeline figure: "$120K in monthly pipeline" — revenue-framed proof for a revenue-obsessed ICP. No percentages, just dollars.
  • "90 seconds": Time-boxing the CTA removes commitment anxiety. It's an estimate, not a contract.

5. AI PM

Subject A: Your product's AI citations are hallucinated
Subject B: What LLMs say about your product when no one's watching

Hi [Name],

LLMs hallucinate product features, pricing, and use cases at a rate
most PMs don't track — because there's no monitoring layer for it.
We've detected AI-generated misinformation about SaaS products
appearing in up to 22% of category-level queries.

Topify.ai gives you visibility: what models say about your product,
where they're wrong, and how to correct the retrieval signal.

Want to see what the models currently believe about your product?
Drop your domain.

— [Sender]

Annotations:

  • "No monitoring layer": AI PMs live in observability. Framing missing AI brand data as a missing alert system maps directly to how they think about production gaps.
  • "22% of category-level queries": Alarming enough to trigger action, specific enough to sound measured. The ceiling hedge ("up to") keeps it defensible.
  • "What models currently believe": Anthropomorphizing LLMs is standard shorthand in AI PM discourse. It makes a complex retrieval problem intuitively graspable.

Postmortem: What Held Across All 5

Three patterns survived intact across every variant:

Stats as stack traces, not claims. Every figure includes a methodology hint — query count, platform list, timeframe. Readers don't trust marketing statistics. They trust the appearance of measurement.

CTAs as single-input functions. "Reply with your domain" is a one-argument call with a defined return value. It works because it matches the reader's instinct for low-risk systems interactions: input in, value out, no side effects.

Subject line A/B tests one variable. Variant A probes curiosity framing ("Your brand isn't in the answer"). Variant B probes specificity ("ChatGPT mentioned 3 competitors. Not you."). You're not testing copy — you're testing which underlying anxiety is louder for that ICP. Open rate is your signal.
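
If you want more than an eyeball read on that signal, a two-proportion z-test is enough to tell whether A and B actually differ. This is a sketch with made-up numbers, not Topify.ai tooling:

from math import sqrt

def subject_test_z(opens_a: int, sends_a: int, opens_b: int, sends_b: int) -> float:
    """Two-proportion z-test on open rates for subject A vs. subject B."""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return 0.0 if se == 0 else (p_a - p_b) / se

# |z| > 1.96 is roughly 95% confidence the two anxieties really differ.
print(subject_test_z(opens_a=62, sends_a=400, opens_b=91, sends_b=400))

At low send volumes the test will rarely clear 1.96. That's a signal to keep sending, not to declare a winner.
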

What I'd test next: swapping the anonymous stat for a named case study teaser in a second round. My hypothesis is that named social proof outperforms anonymous stats for Founders and CMOs, but not for SEO Leads or AI PMs, who are more likely to treat brand-name citations as marketing noise.


Reusable Architecture

Strip these down and the skeleton is four components:

  1. Anomaly — a measurable problem the reader doesn't know they have
  2. Mechanism — why it's happening, with enough specificity to be credible
  3. Proof — a stat or teaser with methodology attached
  4. CTA — one input, one output, no friction

Personalization isn't in the words. It's in mapping each component to the ICP's specific failure mode. The Founder fears invisibility. The SEO Lead fears irrelevance. The AI PM fears misinformation propagating silently. Same product. Different stack trace.
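
As data, the skeleton is small enough to sketch. The field names and the naive assembly are mine, and the strings are paraphrases of the Founder template above, not generated copy:

from dataclasses import dataclass

@dataclass
class Skeleton:
    anomaly: str    # measurable problem the reader doesn't know they have
    mechanism: str  # why it's happening, specific enough to be credible
    proof: str      # stat or teaser with methodology attached
    cta: str        # one input, one output, no friction

SKELETONS = {
    "Founder": Skeleton(
        anomaly="67% of AI-generated answers cite sources indexed for old query patterns.",
        mechanism="LLMs surface whatever is optimized for their retrieval, which isn't you yet.",
        proof="A 90-second audit shows exactly where you're missing, and why.",
        cta="Drop me your domain and I'll send the snapshot.",
    ),
    # ...one entry per remaining ICP, swapping in its failure mode
}

def render(persona: str) -> str:
    # Naive assembly on purpose: the real work is the per-ICP mapping, not the glue.
    s = SKELETONS[persona]
    return "\n\n".join([f"{s.anomaly} {s.mechanism}", s.proof, s.cta])
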

Topify.ai audits the stack trace: topify.ai
