Welcome sequence. Promotional blast. Abandoned cart. Re-engagement. Newsletter. Product launch.
Each one requires research, copywriting, segmentation logic, and subject line testing. Each one can eat a full afternoon. And each one follows a repeatable pattern that ChatGPT can accelerate — if you give it the right inputs.
These 50 prompts are built for working email marketers. They assume you have an email platform, a list with some behavioral data, and a product or service to sell. Each prompt is specific enough to generate drafts worth editing — not outlines to rewrite from scratch.
1. Subject Lines and Preview Text (Prompts 1–10)
Prompt 1 — Subject Line Pack for Promotional Email
Write 10 subject line options for a promotional email offering [discount/offer] on [product/service] to [subscriber segment]. Vary the psychological approach across: curiosity, urgency, social proof, direct benefit, question, personalization, fear of missing out, contrarian angle, humor, and number-led. Keep each under 50 characters. Include 10 corresponding preview text options (under 90 characters each).
Prompt 2 — A/B Test Hypothesis Generator
My current best-performing subject line for [campaign type] is: "[subject line]" with an open rate of [X]%. Generate 5 challenger subject lines that test a different variable each time: (1) length — shorter, (2) personalization token, (3) emoji, (4) question format, (5) urgency/deadline. For each variant, state the hypothesis being tested.
Prompt 3 — Re-engagement Subject Lines
Write 8 subject lines for a re-engagement campaign targeting subscribers who haven't opened in [X] days. Our brand voice is [describe: professional / casual / direct / warm]. Product category: [category]. Avoid clichés like "We miss you." Each subject line should use a different hook: honest admission, value reminder, question, humor, new-offer angle, curiosity, urgency, and benefit-forward.
Prompt 4 — Abandoned Cart Subject Line Sequence
Write a 3-email abandoned cart subject line sequence for a [product type] in the [price range] bracket. Email 1 sends at 1 hour, Email 2 at 24 hours, Email 3 at 72 hours. For each email, provide: subject line, preview text, and a one-sentence description of the primary persuasion lever used. Avoid the word "forgot."
Prompt 5 — Spam Word Audit
Here are 15 subject lines from our last campaign: [paste list]. Flag any subject lines that contain words or patterns likely to trigger spam filters or promotions-tab sorting. For each flagged item, explain the risk and provide a compliant rewrite that preserves the message.
Prompt 6 — Seasonal Subject Line Bank
Create a bank of 20 subject lines for [upcoming season/holiday] relevant to a [industry] brand selling [product type]. Mix promotional, educational, and community-building email types. Include subject line + preview text for each. No generic holiday clichés — each must feel specific to our niche.
Prompt 7 — Preview Text Optimization
Here are 10 email subject lines with no preview text currently set: [paste list]. For each, write an optimized preview text that: complements but doesn't repeat the subject line, extends the hook or adds a secondary benefit, and creates a "subject line + preview" combination that increases the motivation to open. Keep each under 90 characters.
Prompt 8 — Curiosity Loop Subject Lines
Write 8 subject lines for [brand] that use the open-loop curiosity technique. Each should hint at a surprising finding, counterintuitive tip, or non-obvious insight related to [topic]. The subject line should feel incomplete — the answer is in the email. Avoid clickbait; the email body must deliver on the implied promise.
Prompt 9 — Subject Line Testing Calendar
I send [X] emails per month to a list of [Y] subscribers. Build a 3-month subject line A/B testing roadmap. Each month should test one variable: [Month 1: specify], [Month 2: specify], [Month 3: specify]. For each month, include the test setup, the required sample size per variant to reach statistical significance (assume a 20% baseline open rate and state the minimum detectable lift), and how to interpret results.
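You can sanity-check whatever sample size ChatGPT quotes for this prompt. A minimal sketch using the standard two-proportion normal approximation at 95% confidence and 80% power; the 20% baseline and 2-point lift below are example values, not recommendations:

```python
import math

def sample_size_per_variant(baseline, lift):
    """Subscribers needed per variant to detect an absolute lift in open
    rate at 95% confidence / 80% power (normal approximation)."""
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    z_alpha, z_beta = 1.96, 0.84  # two-sided alpha = 0.05, power = 0.80
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / lift) ** 2
    return math.ceil(n)

# e.g. 20% baseline open rate, detecting a 2-point absolute lift
print(sample_size_per_variant(0.20, 0.02))
```

The takeaway: detecting small lifts on small lists takes thousands of recipients per variant, which is why testing one variable at a time matters.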
Prompt 10 — Personalization Beyond First Name
Our ESP supports these personalization variables beyond first name: [list available merge tags — city, last purchase category, purchase count, subscription tier, etc.]. Write 10 subject lines that use dynamic personalization in creative ways. For each, show the raw template with merge tags and an example of how it would render for a specific subscriber profile.
2. Drip and Automation Sequences (Prompts 11–20)
Prompt 11 — Welcome Sequence Blueprint
Write a 5-email welcome sequence for new subscribers who opted in via [lead magnet/opt-in source]. Our product is [describe]. The subscriber just joined; they haven't bought anything. Sequence goal: deliver value, build trust, and make a soft offer by email 5. For each email: subject line, preview text, 150-word body copy, and primary CTA. Tone: [specify brand voice].
Prompt 12 — Post-Purchase Onboarding Flow
Build a 4-email post-purchase onboarding sequence for customers who just bought [product] at [$X]. The product takes [X] days/steps to see value. Goals: reduce buyer's remorse, accelerate time-to-value, and plant a seed for the upsell [describe upsell]. Include specific subject lines and email body outlines, not just bullet-point ideas.
Prompt 13 — Lead Nurture Sequence for Long Sales Cycles
Our typical sales cycle for [product/service] is [X] months. Buyers evaluate [describe decision criteria]. Write an 8-email lead nurture sequence that maps to the buyer journey: awareness (emails 1–2), consideration (emails 3–5), decision (emails 6–8). For each email, specify: send timing, subject line, content type (case study, FAQ, demo invite, testimonial), and CTA.
Prompt 14 — Win-Back Sequence
Design a 3-email win-back sequence for customers who purchased once [X] months ago and haven't returned. Average order value: [$]. Last purchase category: [category]. Email 1: soft reminder. Email 2: value-add (not discount). Email 3: final offer with incentive. Write full subject lines and 100-word body copy for each. If no re-engagement after email 3, recommend the suppression logic.
Prompt 15 — Trial-to-Paid Conversion Sequence
Write a conversion sequence for users in a [X]-day free trial of [SaaS product]. The sequence runs inside the trial window. Key conversion barriers for our product are: [list 3 objections]. Build 5 emails: Day 1 (activation), Day 3 (feature spotlight), Day 7 (social proof), Day [X-3] (urgency), Day [X-1] (last chance). Include subject lines and the specific persuasion angle for each email.
Prompt 16 — Behavioral Trigger Email Copy
Write copy for a trigger email that fires when a subscriber [describe trigger: views pricing page 2+ times, watches 75% of a video, completes onboarding step 3, etc.]. The email should acknowledge the behavior without being creepy, address the most likely hesitation at that stage, and move them toward [next action]. Include subject line, 120-word body, and CTA.
Prompt 17 — Upsell/Cross-Sell Automation
A customer just purchased [product A] at [$X]. Our related upsell is [product B] at [$Y], which complements A because [reason]. Write a 2-email post-purchase upsell sequence: Email 1 at 3 days (value + soft upsell intro), Email 2 at 7 days (direct upsell with urgency). Include subject lines, 100-word bodies, and CTAs. Avoid being pushy — emphasize the logical next step.
Prompt 18 — Milestone/Anniversary Automation
Write 3 lifecycle milestone emails: (1) 1-year customer anniversary email, (2) 100th order / loyalty milestone email, (3) subscription renewal reminder 30 days before expiry. For each: subject line, preview text, and 120-word body. Tone: [specify]. Each should make the customer feel recognized without being generic.
Prompt 19 — Webinar Promotional Sequence
Build a webinar promotion sequence for a [topic] webinar on [date]. Audience: existing email subscribers. Sequence: 2-week countdown. Include: initial invite (Day -14), reminder (Day -7), "last chance to register" (Day -2), day-of reminder (Day 0 AM), replay offer (Day +1). Write subject lines, preview text, and 100-word body copy for each email.
Prompt 20 — Sequence Audit Framework
I have an existing welcome sequence with these open rates and click rates: [paste data per email]. Write a diagnostic audit that identifies: where drop-off occurs and likely cause, which emails are over-performing and why, copywriting issues in the weakest emails (review these drafts: [paste]), and 3 specific A/B tests to run to improve overall sequence conversion by [target %].
3. Segmentation Strategies (Prompts 21–30)
Prompt 21 — Segmentation Architecture Design
I have a list of [X] subscribers with the following data points available: [list available data — purchase history, lead source, engagement score, industry, company size, geographic region, etc.]. Design a segmentation architecture with 5 primary segments and 3 secondary segments. For each segment, define: entry criteria, expected list percentage, content strategy differences, and primary KPI.
Prompt 22 — RFM Segmentation Email Strategy
Using RFM analysis (Recency, Frequency, Monetary), I've identified these customer segments: Champions (bought recently, buy often, highest spenders), At-Risk (bought often before but not recently), Hibernating (last purchase was very old), Potential Loyalists (recent buyers, more than once). Write a distinct email strategy for each segment: different messaging, offer type, frequency, and success metric.
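If your ESP doesn't compute RFM for you, the segment assignment is simple to script. A sketch with illustrative thresholds (90 days, 3 orders, $500 are assumptions — tune them to your own purchase distribution):

```python
def rfm_segment(days_since_last, order_count, total_spend):
    """Map a customer's recency/frequency/monetary values to a segment.
    All thresholds are illustrative assumptions."""
    recent = days_since_last <= 90
    frequent = order_count >= 3
    high_value = total_spend >= 500
    if recent and frequent and high_value:
        return "Champions"
    if not recent and frequent:
        return "At-Risk"
    if days_since_last > 365:
        return "Hibernating"
    if recent and order_count >= 2:
        return "Potential Loyalists"
    return "Other"

print(rfm_segment(30, 8, 1200))  # Champions
```

Rule order matters: a frequent buyer who has gone quiet should land in At-Risk before the Hibernating check can claim them.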
Prompt 23 — Engagement Tier Strategy
My email list engagement breaks down as: Highly Engaged ([X]% — opens 80%+ of emails), Engaged ([Y]% — opens 20–80%), Cold ([Z]% — opens under 20%). Write a differentiated content and frequency strategy for each tier. Include: how often to email each tier, what content types work best for each, and at what engagement level to move subscribers between tiers.
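The tier boundaries in this prompt translate directly into segmentation logic most ESPs can express, or that you can run against an export. A minimal sketch using the 80%/20% cutoffs from the prompt:

```python
def engagement_tier(opens, sends):
    """Assign a subscriber to a tier by open rate over a recent window.
    Cutoffs match the 80% / 20% boundaries described above."""
    rate = opens / sends if sends else 0.0
    if rate >= 0.8:
        return "Highly Engaged"
    if rate >= 0.2:
        return "Engaged"
    return "Cold"

print(engagement_tier(9, 10))  # Highly Engaged
```

Recompute tiers on a rolling window (e.g. last 20 sends) so subscribers can move between tiers in both directions.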
Prompt 24 — Geographic Segmentation Plan
My list has subscribers in [list top 5 countries/regions] with significant differences in: [timezone, language preference, seasonal relevance, local events, regulatory requirements like GDPR/CASL]. Write a geographic segmentation strategy covering: send time optimization, localization priorities, legally required differences, and which segments warrant separate campaigns vs. dynamic content blocks.
Prompt 25 — New Subscriber vs. Long-Term Subscriber Strategy
My list has [X] subscribers under 90 days old and [Y] over 1 year old. These groups have different needs, familiarity levels, and product adoption stages. Write a differentiated strategy for each: content types, email frequency, promotional sensitivity, and how the transition works when a subscriber "graduates" from new to established.
Prompt 26 — Lead Source Segmentation
Subscribers come from these sources: [list sources — organic search, paid ads, content upgrade, webinar, referral, social, cold list]. Each source implies different intent and familiarity levels. Design a segmentation strategy based on lead source, including: assumed knowledge level on arrival, recommended initial sequence length, and how to bridge them to the main list over time.
Prompt 27 — Product Interest Segmentation from Clicks
I can segment subscribers based on which links they've clicked in past campaigns. My product lines are: [Product A], [Product B], [Product C]. A subscriber has clicked [Product A]-related links 3 times but never [Product B] or [Product C]. Write a click-based interest scoring model and recommend a product-specific content stream for this subscriber profile.
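A click-based interest score like the one this prompt asks for usually weights recent clicks more heavily than old ones. A minimal sketch with exponential decay; the 0.9 decay factor is an assumption:

```python
from collections import Counter

def interest_scores(click_events, decay=0.9):
    """Score product interest from a click history ordered oldest -> newest.
    Later clicks weigh more via exponential decay (decay is an assumption)."""
    scores = Counter()
    n = len(click_events)
    for i, product in enumerate(click_events):
        scores[product] += decay ** (n - 1 - i)
    return dict(scores)

clicks = ["Product A", "Product A", "Product C", "Product A"]  # oldest -> newest
print(interest_scores(clicks))
```

The subscriber in the prompt (three Product A clicks, nothing else) would score heavily toward A, which justifies routing them to an A-specific content stream.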
Prompt 28 — B2B Firmographic Segmentation
For a B2B email list, I have company size and industry data for [X]% of subscribers. Industries include: [list]. Company sizes: SMB (1–50), Mid-Market (51–500), Enterprise (501+). Write a segmentation strategy that maps industry/size combinations to: pain point messaging, case study selection, pricing tier emphasis, and appropriate content length/complexity.
Prompt 29 — Preferences Center Design
Design an email preferences center for a [type of business] brand. What content categories and frequency options should we offer subscribers? Write the copy for the preferences page, including: section headers, option labels, explanatory text, and a confirmation message after update. Also include the technical logic for how preferences should affect send lists and campaign eligibility.
Prompt 30 — Suppression Logic Documentation
Write a suppression logic policy document for our email program. Cover: hard bounce handling, soft bounce thresholds, spam complaint suppression, global unsubscribe vs. category unsubscribe, win-back eligibility criteria, transactional email exemptions, and GDPR/CASL considerations. Format as an internal operations document with decision trees where appropriate.
4. A/B Test Summaries (Prompts 31–40)
Prompt 31 — A/B Test Results Narrative
We ran an A/B test on our [email type] campaign. Variant A: [describe — subject line, content, CTA, send time]. Variant B: [describe]. Results: Variant A open rate [X]%, click rate [Y]%, conversion rate [Z]%. Variant B open rate [A]%, click rate [B]%, conversion rate [C]%. List size: [N]. Test duration: [days]. Write a test results summary including: statistical significance assessment, what the results mean for our program, and the next test hypothesis.
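Before asking for a significance assessment, you can run the check yourself. A minimal two-proportion z-test sketch (the counts below are example numbers):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z statistic for a difference in rates (opens, clicks,
    or conversions). |z| > 1.96 is significant at the 5% level."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# e.g. 440/2000 opens vs. 380/2000 opens
z = two_proportion_z(440, 2000, 380, 2000)
print(abs(z) > 1.96)
```

Pasting the z value into the prompt alongside the raw rates keeps ChatGPT's narrative honest about whether a "winner" is real or noise.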
Prompt 32 — CTA Button Test Analysis
We tested two CTA button variants in our promotional email. Button A: text "[text]", color [color], placement [location in email]. Button B: text "[text]", color [color], placement [location]. Click rates: A = [X]%, B = [Y]%. Total recipients: [N]. Write an analysis that interprets the result, identifies whether it's statistically significant, proposes what element drove the difference, and recommends a follow-up test.
Prompt 33 — Send Time Test Conclusions
We tested 4 send times for our weekly newsletter: [Time A, B, C, D] across equal segments of [N] subscribers each. Open rates were [A%, B%, C%, D%] and click rates were [A%, B%, C%, D%]. Our audience is primarily [describe demographics/timezone]. Write a send time analysis with a clear recommendation and how to implement the winning time across campaigns going forward.
Prompt 34 — Email Length Test
We A/B tested a short email ([X] words) against a long email ([Y] words) for our [campaign type]. The short email had click rate [A]% and conversion rate [B]%. The long email had click rate [C]% and conversion rate [D]%. Write an analysis covering: which email type performed better on the primary metric, what this suggests about our audience's preferences, and when to use each format going forward.
Prompt 35 — Plain Text vs. HTML Test
We tested plain text vs. HTML-designed emails for our [campaign type] to [audience segment]. Plain text: open rate [X]%, clicks [Y]%, replies [Z]. HTML: open rate [A]%, clicks [B]%, replies [C]. Write a format preference analysis that covers: deliverability implications of each format, audience signal interpretation, and a recommendation for default format by campaign type.
Prompt 36 — Personalization Test Results
We tested dynamic personalization (subject line included first name + last purchase category) against no personalization in [campaign type]. Personalized: open rate [X]%, CTR [Y]%. Non-personalized: open rate [A]%, CTR [B]%. List size: [N]. Write a personalization test summary that evaluates: incrementality of the lift, implementation cost vs. return, and which other personalization variables to test next.
Prompt 37 — Social Proof Test
Email A included a customer testimonial ("X users improved Y by Z%") above the CTA. Email B had no social proof element. Conversion rates: A = [X]%, B = [Y]%. Write a social proof effectiveness analysis for this email type and recommend: which formats of social proof to test next (reviews, user count, press mentions, case study snippet, star rating), and where in the email body to place them.
Prompt 38 — Multi-Variant Test Documentation
We ran a multi-variant test with 4 combinations of 2 variables: subject line (A or B) × CTA copy (X or Y). Results:
- Subject A + CTA X: open [%], click [%]
- Subject A + CTA Y: open [%], click [%]
- Subject B + CTA X: open [%], click [%]
- Subject B + CTA Y: open [%], click [%]
Write a factorial test analysis that identifies interaction effects, declares a winner, and explains how to apply these learnings to future campaigns.
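The interaction effect this prompt asks about has a simple arithmetic definition in a 2×2 design. A sketch you can run on your own four cell rates before handing them to ChatGPT (the rates below are example values):

```python
def factorial_effects(r_ax, r_ay, r_bx, r_by):
    """Main effects and interaction for a 2x2 test on click rate.
    r_ax = Subject A + CTA X, r_by = Subject B + CTA Y, etc."""
    subject_effect = ((r_bx + r_by) - (r_ax + r_ay)) / 2
    cta_effect = ((r_ay + r_by) - (r_ax + r_bx)) / 2
    # Interaction: does the CTA lift change depending on the subject line?
    interaction = ((r_by - r_bx) - (r_ay - r_ax)) / 2
    return subject_effect, cta_effect, interaction

print(factorial_effects(0.02, 0.03, 0.04, 0.05))
```

An interaction near zero means the two variables act independently and the winners can be combined; a large interaction means only the specific pairing won.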
Prompt 39 — Test Velocity Assessment
Our current A/B testing program runs [X] tests per month on a list of [Y] subscribers. We test [describe what we test]. Write an assessment of our testing velocity: are we testing enough to generate meaningful learnings, are we testing the right things, and what's a realistic roadmap to improve our email program by [target improvement: open rate, CTR, or revenue per email] over the next 6 months?
Prompt 40 — Annual Test Learning Synthesis
Over the past 12 months, we ran [X] A/B tests across [campaign types]. Our top findings were: [paste 5–8 key test results with brief context]. Write an annual learnings synthesis that: identifies patterns across tests, extracts 3–5 rules of thumb for our specific audience, and builds a priority testing roadmap for next year based on the highest-ROI hypotheses we haven't tested yet.
5. Deliverability and List Health (Prompts 41–50)
Prompt 41 — Deliverability Audit Report
Our current email metrics: overall deliverability rate [X]%, inbox placement rate [Y]%, spam rate [Z]%, hard bounce rate [A]%, soft bounce rate [B]%. Domain age: [X] years. ESP: [name]. Sending volume: [X] emails/month. Write a deliverability audit that diagnoses potential issues, prioritizes remediation steps, and sets realistic KPI targets for the next 90 days.
Prompt 42 — List Hygiene Campaign Brief
Our list has [X] total subscribers. Engagement data shows [Y]% haven't opened in 6+ months. Before suppressing, we want to run a hygiene campaign. Write a 3-email re-permission campaign brief: Email 1 (soft check-in), Email 2 (explicit re-permission ask), Email 3 (final notice before removal). Include subject lines, copy direction, and what action = confirmed engagement.
Prompt 43 — Sender Reputation Recovery Plan
Our domain's sender reputation has declined. Symptoms: [describe — open rates dropped, more emails landing in spam, DMARC failures increasing, bounce rate increase]. Write a 60-day sender reputation recovery plan covering: warm-up restart protocol, suppression of low-engagement contacts, authentication record audit (SPF, DKIM, DMARC), content spam score reduction, and monitoring checkpoints.
Prompt 44 — GDPR/CASL Compliance Audit
Our email program serves subscribers in [regions: EU, Canada, UK, US]. Our current consent capture method is: [describe]. Our unsubscribe mechanism is: [describe]. Our data retention policy is: [describe]. Write a compliance audit that flags gaps against GDPR, CASL, and CAN-SPAM requirements and provides specific remediation actions, not just general guidance.
Prompt 45 — Bounce Management Policy
Write a bounce management policy document for our email program. Cover: hard bounce definition and immediate suppression protocol, soft bounce threshold before suppression (by category: mailbox full, temporary server issue, content filter), role address handling (info@, support@), list purchase/import quality standards, and monthly bounce audit process. Include the decision logic for when to remove vs. when to retry.
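The remove-vs-retry decision logic this prompt asks for can be expressed as a small function your operations team reviews alongside the policy document. A sketch with assumed thresholds (5 and 7 consecutive soft bounces are illustrative, not industry standards):

```python
def bounce_action(bounce_type, consecutive_soft, reason):
    """Decide what to do with an address after a bounce event.
    Thresholds are illustrative assumptions to tune per program."""
    if bounce_type == "hard":
        return "suppress"  # invalid address: remove immediately
    if reason == "mailbox_full":
        return "suppress" if consecutive_soft >= 5 else "retry"
    if reason == "temporary_server_issue":
        return "suppress" if consecutive_soft >= 7 else "retry"
    if reason == "content_filter":
        return "review_content"  # fix the email, not the list
    return "retry"

print(bounce_action("soft", 5, "mailbox_full"))  # suppress
```

Encoding the policy this way makes the monthly bounce audit a diff against actual suppression behavior rather than a judgment call.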
Prompt 46 — Spam Trap Contamination Response
Our deliverability monitoring tool flagged potential spam trap hits in our [audience segment]. Symptoms: [describe metrics]. Write a spam trap response protocol covering: immediate containment actions, forensic analysis to identify the contamination source (list acquisition, form security, long-dormant segment), remediation steps, and how to document the incident for future prevention.
Prompt 47 — ESP Migration Deliverability Plan
We're migrating from [current ESP] to [new ESP] with a list of [X] active subscribers. The migration is happening on [date]. Write a warm-up migration plan that: maintains deliverability during the transition, defines the warm-up schedule by volume and segment (start with most engaged), sets go/no-go criteria at each warm-up stage, and includes a rollback plan if deliverability degrades.
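A warm-up schedule is just a geometric volume ramp, so you can generate a first draft to compare against whatever the prompt produces. A sketch under assumed parameters (500 starting sends and 1.5× daily growth are placeholders — follow your new ESP's own warm-up guidance):

```python
def warmup_schedule(total, start=500, growth=1.5):
    """Daily send volumes ramping up until the full list is covered.
    start and growth are assumptions; ESPs publish their own curves."""
    day, volume, schedule = 1, start, []
    while volume < total:
        schedule.append((day, volume))
        day += 1
        volume = int(volume * growth)
    schedule.append((day, total))
    return schedule

for day, vol in warmup_schedule(50_000):
    print(f"Day {day}: send to {vol} most-engaged subscribers")
```

Pairing each day's volume with a go/no-go check (bounce and complaint rates within threshold) turns the ramp into the gated plan the prompt describes.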
Prompt 48 — Authentication Setup Guide
Write a step-by-step guide for setting up email authentication for a domain using [ESP name]. Cover SPF record creation (with exact syntax for our ESP's sending IPs), DKIM key generation and DNS publishing, DMARC policy creation (start with p=none monitoring, path to enforcement), BIMI setup if applicable, and how to verify all records are configured correctly. Include error scenarios and how to diagnose them.
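For reference while reviewing ChatGPT's output, the three core authentication records are DNS TXT entries shaped like the following. These are illustrative placeholders: the SPF `include:` host comes from your ESP's documentation, the DKIM selector and public key are generated by the ESP, and the DMARC `rua` address is one you control.

```
example.com.              IN TXT "v=spf1 include:_spf.esp-example.com ~all"
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key-from-ESP>"
_dmarc.example.com.       IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

Starting DMARC at `p=none` (monitor only) and moving to `p=quarantine` or `p=reject` after reviewing reports matches the enforcement path the prompt asks for.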
Prompt 49 — List Growth Health Report
Our email list grew by [X]% last month. New subscribers came from: [sources + percentages]. Simultaneously, we had [Y]% unsubscribes, [Z]% hard bounces, and [W]% spam complaints. Net list growth was [%]. Write a list health report that evaluates growth quality (not just quantity), flags any acquisition sources producing abnormal engagement patterns, and recommends adjustments to growth strategy.
Prompt 50 — Deliverability Monitoring Dashboard
Design a deliverability monitoring dashboard specification for our email program. Include: key metrics to track daily vs. weekly vs. monthly, alert thresholds that should trigger review, data sources for each metric (ESP reports, Google Postmaster, Microsoft SNDS, third-party tools), and the escalation process when metrics fall outside acceptable ranges. Format as a dashboard spec document.
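The alert-threshold portion of the spec can be prototyped as a simple lookup plus breach check. The threshold values below are assumptions for illustration — calibrate them against your own 30-day baselines:

```python
# Illustrative alert thresholds -- calibrate to your program's baseline.
ALERT_THRESHOLDS = {
    "hard_bounce_rate": 0.02,      # >2% suggests list quality problems
    "spam_complaint_rate": 0.001,  # >0.1% risks mailbox-provider penalties
    "unsubscribe_rate": 0.005,
}

def alerts(metrics):
    """Return the metric names whose current value breaches its threshold."""
    return [name for name, limit in ALERT_THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

print(alerts({"hard_bounce_rate": 0.035, "spam_complaint_rate": 0.0004}))
# ['hard_bounce_rate']
```

Each breached metric then maps to an owner and an escalation step in the dashboard spec, which is the part worth asking ChatGPT to draft in detail.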
Your Email Program, Accelerated
These 50 prompts cover the technical and creative work that consumes most of an email marketer's week: the drafting, segmenting, testing, and auditing that has to happen before any campaign ships.
The best email programs run on systems — repeatable processes that produce consistent results without reinventing the wheel every campaign. ChatGPT is a force multiplier for building and maintaining those systems when you give it specific inputs.
Start with the 5 prompts most relevant to your current bottleneck. Customize them with real data from your platform. Iterate on the outputs until they match your brand voice.
For a complete library of email marketing prompts — organized by campaign type, platform, and audience stage — the Email Marketers AI Toolkit is built for exactly this.
Email Marketers AI Toolkit — $14.99 — Use code LAUNCH30 for 30% off (limited uses remaining).
Prompts tested with GPT-4o and Claude Sonnet. Output quality scales with input specificity: replace every bracket with real data before running.