LLMs don't crawl your site the way Googlebot does. They pull from entity graphs, training corpora, and citation patterns — and if your brand is thin on all three, you're invisible in AI-generated answers. That's a real pipeline problem, and almost no one in outbound is using it as a hook yet.
I went through 114 cold email submissions targeting an AI SEO tool. Most failed for the same reasons: vague openers, one-size-fits-all pain framing, CTAs that read like calendar spam. The ones that held up shared a structure I'll break down here — all five templates annotated so the why is as visible as the what.
The five ICPs aren't arbitrary. Founders care about brand survival. CMOs care about budget ROI. SEO leads care about ranking logic breaking. Heads of growth care about acquisition gaps. AI PMs care about ground-truth accuracy. Same product, five completely different entry points.
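If you're generating these at scale, the ICP-to-entry-point mapping above is worth encoding once. A minimal sketch in Python — the five role/pain pairs are the ones from this post; the lookup helper and its default are purely my own illustration:

```python
# Map each ICP to the entry point its template leads with.
# The five pairs mirror the breakdown above; the helper and
# its fallback behavior are illustrative, not prescriptive.
ICP_PAIN = {
    "founder": "brand survival in AI answers",
    "cmo": "content budget ROI and attribution gaps",
    "seo_lead": "ranking logic diverging from citation logic",
    "head_of_growth": "untracked top-of-funnel acquisition loss",
    "ai_pm": "ground-truth accuracy about the product",
}

def pick_entry_point(role: str) -> str:
    """Return the pain framing for a role, defaulting to founder."""
    return ICP_PAIN.get(role, ICP_PAIN["founder"])
```

The point of the table isn't automation for its own sake; it forces you to write down, per role, the one framing the email is allowed to open with.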
The Templates
1. Founder
# SUBJECT A (curiosity trigger): Your brand isn't in ChatGPT's answers — here's why
# SUBJECT B (specificity trigger): [Company] has zero entity coverage in AI search
Hey [Name],
# Hook: names the mechanism before pitching anything
Roughly 65% of searches are now zero-click: the query terminates
inside the AI answer. If ChatGPT or Perplexity summarizes your space
without mentioning [Company], that's not a ranking problem. It's an
entity coverage problem.
# Proof element: teaser, not a data dump
We mapped AI citation patterns for 200 SaaS brands. Companies with structured
entity coverage got referenced 4x more often in LLM answers than those
relying on traditional SEO alone.
# Soft CTA: asks for curiosity, not commitment
Worth a 10-minute look at where [Company] currently stands?
— Ed
2. CMO
# SUBJECT A (curiosity): Your content budget is funding competitors' AI citations
# SUBJECT B (specificity): The attribution gap no one's reporting to your board
Hi [Name],
# Hook: frames the pain as budget waste, not technical jargon
You're producing content. It ranks. But when a buyer asks ChatGPT for a
[category] recommendation, that same content helps train the model, and
the citations go to competitors who built entity graphs earlier.
# Proof element: case study teaser with a number
One SaaS CMO found 40% of their target queries returned competitor names —
not theirs — despite having stronger domain authority.
# Soft CTA: positions the ask as intelligence, not a sales call
Happy to pull a quick AI visibility snapshot for [Company] if that's
useful context for your next planning cycle.
Ed
3. SEO Lead
# SUBJECT A (curiosity): Rank #1, still invisible to Perplexity — here's the gap
# SUBJECT B (specificity): Why your top-ranking pages aren't getting LLM citations
[Name],
# Hook: speaks their language — pivots from ranking to citation logic
Traditional ranking signals and LLM citation signals are diverging. Pages
with strong backlink profiles but weak entity markup aren't making it into
AI-generated answers — even when they're #1 on Google.
# Proof element: specific, testable claim
In a crawl of 500 top-ranking pages across 10 SaaS verticals, only 23%
had entity structures that LLMs could reliably pull from.
# Soft CTA: low friction, technically framed
I can share the exact attributes that correlate with AI citation — no pitch,
just the data. Want me to send it over?
Ed
4. Head of Growth
# SUBJECT A (curiosity): Pipeline leaking before users hit your site
# SUBJECT B (specificity): The acquisition channel your funnel doesn't track yet
Hey [Name],
# Hook: reframes AI search invisibility as a funnel gap
If a buyer asks an LLM for [category] options and [Company] doesn't appear
in the answer, you've lost them before they ever hit your tracking pixel.
Top-of-funnel loss with zero attribution signal.
# Proof element: quantified framing that creates urgency
AI-assisted discovery is influencing roughly 1 in 4 B2B software purchases
in 2025 — most growth teams have no visibility into that channel yet.
# Soft CTA: frames it as a diagnostic, not a demo
Curious whether your current entity footprint covers your top buyer queries.
I can run a quick check if you want the baseline.
— Ed
5. AI PM
# SUBJECT A (curiosity): Your product's knowledge graph has a hole in it
# SUBJECT B (specificity): LLMs are generating wrong answers about [Product] — verifiable
[Name],
# Hook: speaks to ground truth — the AI PM's core obsession
LLMs hallucinate about products when entity coverage is sparse. If [Product]'s
feature set, pricing model, or integration list isn't well-represented in
structured knowledge sources, models fill the gap with plausible-sounding
wrong answers.
# Proof element: concrete, recognizable scenario
We've seen this with three AI-native products — users getting confidently
wrong capability descriptions from ChatGPT because the entity graph hadn't
been updated post-launch.
# Soft CTA: appeals to accuracy instinct, not vanity
Want to see what LLMs currently say about [Product]? I can pull a summary.
Ed
What Makes These Work
- The hook anchors to a specific mechanism, not a vague problem. "AI search might hurt you" is noise. "Entity coverage determines citation likelihood" is something the reader can go verify themselves — that specificity builds trust before you've asked for anything.
- The proof element teases rather than dumps. A case study with a number and no hyperlink creates a curiosity gap, and the reader has to reply to close it.
- The CTA asks for information exchange, not calendar access. "Want me to send it over?" has measurably lower friction than "Book a 30-minute demo" — and higher reply rates in the batch I reviewed.
- Subject line pairing targets two different decision states. The A variant (curiosity) works on people mid-scroll. The B variant (specificity) works on people in triage mode scanning for relevance to a current problem.
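The structure those four bullets describe — paired subjects, then hook, proof, and soft CTA — can be sketched as a small render step. This is a minimal sketch assuming you send from a script; the `Template` fields mirror the structure above, but every name here, and the hash-based A/B split, is my own hypothetical:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Template:
    subject_a: str  # curiosity variant, for the mid-scroll reader
    subject_b: str  # specificity variant, for the triage-mode reader
    hook: str       # names the mechanism before pitching
    proof: str      # teaser with a number, no link
    cta: str        # asks for information exchange, not calendar access

def assign_subject(t: Template, email: str) -> str:
    """Deterministic 50/50 split: hash the recipient address so the
    same person always gets the same variant across sends."""
    digest = hashlib.sha256(email.encode()).digest()
    return t.subject_a if digest[0] % 2 == 0 else t.subject_b

def render(t: Template, name: str, company: str, email: str) -> str:
    """Assemble subject + body, filling the [Company] placeholder."""
    subject = assign_subject(t, email)
    body = "\n\n".join([f"Hey {name},", t.hook, t.proof, t.cta])
    full = f"Subject: {subject}\n\n{body}"
    return full.replace("[Company]", company)
```

Hashing the address (rather than random assignment) keeps the variant stable per recipient, so follow-ups don't contradict the first touch.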
One thing I'd test next: removing the sender name from the sign-off entirely. In several of these, it adds nothing — the email could close on the CTA and might convert better without the formality. Curious if anyone else has run that variant.