# How to Build an Automated AI Newsletter in 2026 (Full Pipeline)

Pax

Posted on • Originally published at paxrel.com

March 24, 2026 • 14 min read • By Paxrel

*Cover photo by Burst on Pexels*

What if your newsletter could write and publish itself? Not some generic AI slop, but a curated, high-quality newsletter that scrapes real sources, scores articles by relevance, and produces polished content — all on autopilot.

We built exactly this. Our newsletter [AI Agents Weekly](https://paxrel.com/newsletter.html) publishes 3x/week with zero manual intervention. It costs $0.10 per edition to run. Here's the complete pipeline, with code.

## The Architecture

The pipeline has 5 stages, each handled by a separate Python script:


| Stage | Script | What It Does | Cost |
|---|---|---|---|
| 1. Scrape | `scraper.py` | Fetch articles from 11 RSS feeds | $0.00 |
| 2. Score | `scorer.py` | Rate each article 0-30 with DeepSeek | $0.02 |
| 3. Write | `writer.py` | Generate newsletter with Claude | $0.08 |
| 4. Publish | `publisher.py` | Send via Buttondown API | $0.00 |
| 5. Promote | `promoter.py` | Post teaser on social media | $0.00 |
| **Total** | | | **$0.10** |


A pipeline script (`pipeline.py`) orchestrates everything. Cron triggers it Mon/Wed/Fri at 8am UTC.

## Step 1: Build the RSS Scraper

The scraper collects articles from multiple sources using `feedparser`. RSS is the most reliable data source for newsletter content — it's structured, standardized, and free.
```python
import feedparser
from datetime import datetime, timedelta

FEEDS = {
    "Hacker News AI": "https://hnrss.org/newest?q=AI+agent&points=10",
    "Reddit r/artificial": "https://www.reddit.com/r/artificial/.rss",
    "Reddit r/MachineLearning": "https://www.reddit.com/r/MachineLearning/.rss",
    "TechCrunch AI": "https://techcrunch.com/category/artificial-intelligence/feed/",
    "Anthropic Blog": "https://www.anthropic.com/rss",
    "OpenAI Blog": "https://openai.com/blog/rss.xml",
    "Google AI Blog": "https://blog.google/technology/ai/rss/",
    "The Verge AI": "https://www.theverge.com/rss/ai-artificial-intelligence/index.xml",
    "arXiv cs.AI": "http://arxiv.org/rss/cs.AI",
    "arXiv cs.CL": "http://arxiv.org/rss/cs.CL",
    "ProductHunt AI": "https://www.producthunt.com/feed?category=artificial-intelligence",
}

def scrape_all():
    articles = []
    cutoff = datetime.now() - timedelta(days=3)

    for source, url in FEEDS.items():
        try:
            feed = feedparser.parse(url)
            for entry in feed.entries[:15]:
                # Skip entries older than the 3-day window (when the feed
                # provides a parseable publication date)
                published = entry.get("published_parsed")
                if published and datetime(*published[:6]) < cutoff:
                    continue
                articles.append({
                    "title": entry.get("title", "").strip(),
                    "url": entry.get("link", ""),
                    "source": source,
                    "summary": entry.get("summary", "")[:500],
                })
        except Exception as e:
            print(f"Error scraping {source}: {e}")

    return articles
```
### Why RSS Over Web Scraping?

Web scraping breaks constantly — one HTML change and your scraper is dead. RSS feeds are standardized (RSS 2.0, Atom, RDF) and maintained by the publishers themselves. The `feedparser` library handles all three formats automatically. Every major tech publication has an RSS feed.



## Step 2: Score Articles with an LLM

This is the secret sauce. Instead of manual curation, we send each article's title and summary to an LLM and ask it to score relevance on a scale of 0-30.
```python
import json
import openai  # DeepSeek uses OpenAI-compatible API

client = openai.OpenAI(
    api_key="your-deepseek-key",
    base_url="https://api.deepseek.com"
)

SCORING_PROMPT = """Score this article's relevance to "AI agents and automation" on a scale of 0-30.

Scoring guide:
- 25-30: Directly about AI agents, autonomous systems, or agent frameworks
- 15-24: Related to AI tools, LLMs, or automation that agents could use
- 5-14: General AI/ML news with tangential relevance
- 0-4: Unrelated or off-topic

Article: {title}
Summary: {summary}

Return ONLY a JSON object: {{"score": N, "reason": "one sentence"}}"""

def score_article(article):
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{
            "role": "user",
            "content": SCORING_PROMPT.format(
                title=article["title"],
                summary=article["summary"]
            )
        }],
        temperature=0.1,
        max_tokens=100,
    )
    return json.loads(response.choices[0].message.content)

def score_batch(articles):
    scored = []
    for article in articles:
        try:
            result = score_article(article)
            article["score"] = result["score"]
            article["reason"] = result["reason"]
            scored.append(article)
        except Exception as e:
            print(f"Scoring failed for {article['title']!r}: {e}")
            article["score"] = 0
            scored.append(article)
    return sorted(scored, key=lambda x: x["score"], reverse=True)
```
**Why DeepSeek for scoring?** It costs $0.07 per million input tokens — roughly 140x cheaper than GPT-4. For a classification task like scoring, the quality difference is negligible. We process 88 articles for about $0.02.

## Step 3: Write the Newsletter with AI

Take the top 8-10 scored articles and generate the newsletter. This is where you want a higher-quality model — we use Claude for writing because it produces more natural prose.
```python
import json
import anthropic

claude_client = anthropic.Anthropic(api_key="your-anthropic-key")

WRITER_PROMPT = """Write edition #{edition_number} of "AI Agents Weekly".

You are writing a newsletter about AI agents, autonomous systems, and automation.

Top articles this edition (sorted by relevance score):
{articles_json}

Format:
1. Opening paragraph (2-3 sentences, hook the reader)
2. For each of the top 8 articles:
   - **[Title](url)** — Source
   - 2-3 sentence summary + your analysis of why it matters
3. Closing paragraph with a forward-looking take

Tone: Expert but accessible. Like TLDR Newsletter meets The Rundown AI.
Write in Markdown. Keep it under 1500 words."""

def write_newsletter(articles, edition_number):
    top_articles = [a for a in articles if a["score"] >= 15][:10]

    response = claude_client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4000,
        messages=[{
            "role": "user",
            "content": WRITER_PROMPT.format(
                edition_number=edition_number,
                articles_json=json.dumps(top_articles, indent=2)
            )
        }]
    )
    return response.content[0].text
```
## Step 4: Publish via API

Most newsletter platforms have APIs. We use Buttondown (free tier, simple API), but this works with Beehiiv, ConvertKit, or any platform with a REST API.
```python
import requests

BUTTONDOWN_API_KEY = "your-api-key"

def publish_newsletter(subject, body):
    response = requests.post(
        "https://api.buttondown.com/v1/emails",
        headers={
            "Authorization": f"Token {BUTTONDOWN_API_KEY}",
        },
        json={
            "subject": subject,
            "body": body,
            "status": "about_to_send",
        }
    )

    if response.status_code == 201:
        email_id = response.json()["id"]
        print(f"Published! ID: {email_id}")
        return email_id
    else:
        raise Exception(f"Publish failed: {response.status_code}")
```
One API call. That's it. The newsletter goes out to all subscribers immediately.

## Step 5: Orchestrate with a Pipeline Script

Tie everything together with a pipeline script that runs each stage in sequence:
```python
#!/usr/bin/env python3
"""Newsletter pipeline — runs end-to-end on cron."""

from scraper import scrape_all
from scorer import score_batch
from writer import write_newsletter
from publisher import publish_newsletter
from notifier import send_telegram  # your Telegram alert helper

def run_pipeline(edition_number, dry_run=False):
    # Stage 1: Scrape
    print("Scraping articles...")
    articles = scrape_all()
    print(f"  → {len(articles)} articles found")

    # Stage 2: Score
    print("Scoring with DeepSeek...")
    scored = score_batch(articles)
    top_score = scored[0]["score"] if scored else 0
    print(f"  → Top score: {top_score}")

    # Stage 3: Write
    print("Writing newsletter...")
    subject = f"AI Agents Weekly #{edition_number}"
    body = write_newsletter(scored, edition_number)
    print(f"  → {len(body)} chars written")

    # Stage 4: Publish
    if dry_run:
        print("DRY RUN — not publishing")
        with open("preview.md", "w") as f:
            f.write(body)
        return

    print("Publishing...")
    email_id = publish_newsletter(subject, body)
    print(f"  → Published: {email_id}")

    # Stage 5: Notify
    send_telegram(f"Newsletter #{edition_number} published! "
                  f"{len(articles)} articles, top score {top_score}.")
```
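The cron entry in the next step passes a `--publish` flag, which the snippet above doesn't parse yet. One way to wire it up — with the edition number kept in a local counter file so cron never has to pass it — is sketched below. Both the flag handling and the `edition.txt` counter are assumptions, not part of the original scripts; `run_pipeline` is the function defined above, and you'd call `main()` at the bottom of `pipeline.py`:

```python
import argparse
from pathlib import Path

def next_edition(counter: Path = Path("edition.txt")) -> int:
    """Read, increment, and persist the edition number across runs."""
    n = int(counter.read_text()) + 1 if counter.exists() else 1
    counter.write_text(str(n))
    return n

def main(argv=None):
    parser = argparse.ArgumentParser(description="Run the newsletter pipeline")
    parser.add_argument("--publish", action="store_true",
                        help="send for real; default is a dry run")
    args = parser.parse_args(argv)
    # run_pipeline comes from the pipeline script above
    run_pipeline(next_edition(), dry_run=not args.publish)
```

Defaulting to a dry run means a typo in the crontab produces a `preview.md` instead of an accidental send.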
## Step 6: Schedule with Cron

Add the pipeline to your crontab for fully autonomous execution:
Enter fullscreen mode Exit fullscreen mode
# Newsletter pipeline: Mon/Wed/Fri at 8am UTC
0 8 * * 1,3,5 cd ~/newsletter && source .venv/bin/activate && python3 pipeline.py --publish >> /tmp/newsletter.log 2>&1

# Verify publication succeeded (30 min later)
30 8 * * 1,3,5 cd ~/newsletter && source .venv/bin/activate && python3 verify_publication.py >> /tmp/verify.log 2>&1
Enter fullscreen mode Exit fullscreen mode
The verification script checks if the newsletter was actually published and sends a Telegram alert if it wasn't. Belt and suspenders.
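A minimal sketch of what `verify_publication.py` can look like. It assumes Buttondown's `GET /v1/emails` endpoint returns a `results` list whose items carry an ISO `publish_date` — check the API docs for the exact schema — and splits the date check into a pure function so it's testable without the network:

```python
import requests
from datetime import datetime, timezone

def published_on(emails: list, day: str) -> bool:
    """True if any email's publish_date falls on the given ISO date."""
    return any((e.get("publish_date") or "").startswith(day) for e in emails)

def verify(api_key: str) -> bool:
    """Query Buttondown and check that something went out today (UTC)."""
    resp = requests.get(
        "https://api.buttondown.com/v1/emails",
        headers={"Authorization": f"Token {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    today = datetime.now(timezone.utc).date().isoformat()
    return published_on(resp.json().get("results", []), today)
```

If `verify()` returns `False`, fire the same Telegram alert the pipeline uses.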

## Newsletter Platform Comparison


| Platform | Free Tier | API | Best For |
|---|---|---|---|
| Buttondown | 100 subscribers | Simple REST | Automated newsletters |
| Beehiiv | 2,500 subscribers | Complex | Growth features |
| ConvertKit | 1,000 subscribers | Full-featured | Creator economy |
| Substack | Unlimited | None | Manual writing |
| Mailchimp | 500 contacts | Overkill | E-commerce |


For an automated pipeline, **API simplicity matters more than features**. Buttondown's single-endpoint publish API is perfect. Beehiiv has better analytics but a more complex API. Substack has no API at all — if you can't publish programmatically, automation is impossible.

## LLM Cost Breakdown

Here's what each edition actually costs in API calls:


| Task | Model | Tokens | Cost |
|---|---|---|---|
| Score 88 articles | DeepSeek V3 | ~50K in, ~5K out | $0.02 |
| Write newsletter | Claude Sonnet | ~8K in, ~3K out | $0.08 |
| **Total per edition** | | | **$0.10** |
| **Monthly (12 editions)** | | | **$1.20** |


Compare this to hiring a content writer ($500-2000/month) or spending 5-10 hours/week curating manually. The ROI is enormous.

## Error Handling That Actually Matters

The pipeline will fail. Here's how to handle the failures that actually happen in production:


- **RSS feed returns HTML instead of XML** — `feedparser` handles this gracefully and returns empty entries. A sanity check like `len(articles) > 50` catches a silent mass failure.
- **LLM returns text instead of JSON** — Wrap the scorer's `json.loads()` in a try/except. If parsing fails, assign score 0 and move on.
- **API rate limit (429)** — Add exponential backoff: `time.sleep(2 ** attempt)` with 3 retries.
- **Newsletter platform rejects the email** — Some platforms require specific headers. Buttondown needs `X-Buttondown-Live-Dangerously: true` for the first API publish.
- **Cron uses wrong shell** — Add `SHELL=/bin/bash` at the top of your crontab. Without it, `source .venv/bin/activate` silently fails.
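
The retry and JSON bullets condense into two small helpers. A sketch — the regex approach assumes the LLM reply contains at most one top-level JSON object, possibly wrapped in prose or Markdown fences:

```python
import json
import re
import time

def with_backoff(fn, attempts=3):
    """Call fn, retrying on any exception with 1s, 2s, 4s... pauses."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)

def safe_score(raw: str) -> dict:
    """Extract a JSON object from an LLM reply; fall back to score 0."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return {"score": 0, "reason": "unparseable reply"}
```

In the scorer, `with_backoff(lambda: score_article(article))` covers transient 429s, and running the raw reply through `safe_score` covers fenced or chatty output.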



### The 80/20 Rule of Automated Newsletters

20% of the work is getting the AI to write content. 80% is error handling, monitoring, scheduling, and making sure the pipeline doesn't silently break at 3am on a Monday. Budget your time accordingly.
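Monitoring is the cheapest part to get right. Here's a sketch of the `send_telegram` helper the pipeline calls, built on the Telegram Bot API's `sendMessage` method — the token and chat ID are placeholders you get from @BotFather, and `build_alert` is a hypothetical formatter matching the pipeline's status message:

```python
import requests

TELEGRAM_TOKEN = "your-bot-token"   # issued by @BotFather
TELEGRAM_CHAT_ID = "your-chat-id"   # e.g. your own user ID

def build_alert(edition: int, n_articles: int, top_score: int) -> str:
    """Format the one-line status message the pipeline sends."""
    return (f"Newsletter #{edition} published! "
            f"{n_articles} articles, top score {top_score}.")

def send_telegram(text: str) -> bool:
    """Push an alert to a Telegram chat; True on HTTP success."""
    resp = requests.post(
        f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage",
        json={"chat_id": TELEGRAM_CHAT_ID, "text": text},
        timeout=10,
    )
    return resp.ok
```

A bot plus one HTTP POST: no SMTP, no paid pager service, and the alert lands on your phone.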



## Growth Strategies for Automated Newsletters

An automated newsletter solves the *production* problem. The *distribution* problem remains. Here's what works:


- **SEO blog posts** — Write articles targeting your newsletter's keywords. Each post has a CTA to subscribe. Traffic compounds over time.
- **Cross-promo swaps** — Find newsletters of similar size and swap recommendations. One mention can bring 10-50 subscribers.
- **Social media teasers** — Post the top story from each edition on Twitter/Reddit with a "full breakdown in the newsletter" CTA.
- **Lead magnets** — Offer a free PDF cheatsheet or tool list in exchange for an email. Gate it behind your newsletter signup.
- **Community engagement** — Post genuinely helpful content on Reddit, HN, and niche forums. Build reputation before promoting.



### See This Pipeline in Action

AI Agents Weekly is built with exactly this architecture. Free, 3x/week, curated AI agent news.

[Subscribe Free](https://paxrel.com/newsletter.html)


## Frequently Asked Questions

### Can readers tell it's AI-written?

If your prompts are good, no. The key is the **curation layer** — AI doesn't just summarize random articles, it selects the most relevant ones and explains why they matter. That editorial judgment (encoded in your scoring prompt) is what makes it feel human-curated.

### Is it ethical to run an AI newsletter?

Yes, as long as you're transparent. Mention that AI assists in curation and writing. What matters to subscribers is whether the content is useful, not whether a human typed every word.

### What if the AI writes something wrong?

The scoring + curation pipeline reduces this risk. The AI summarizes articles and links to the original source — it's not generating claims from thin air. For extra safety, add a verification step that checks the newsletter before publishing.

### How many subscribers can I handle for free?

Buttondown's free tier supports 100 subscribers. Beehiiv's free tier supports 2,500. At those numbers, your newsletter costs essentially $0 to operate. Scale problems are good problems.

## The Complete Stack


| Component | Tool | Monthly Cost |
|---|---|---|
| Server | Any $5 VPS | $5.00 |
| Scoring LLM | DeepSeek V3 | $0.24 |
| Writing LLM | Claude Sonnet | $0.96 |
| Newsletter | Buttondown (free) | $0.00 |
| Monitoring | Telegram Bot | $0.00 |
| Domain | Cloudflare | $0.83 |
| **Total** | | **$7.03** |


Under $8/month for a fully autonomous newsletter that publishes 3x/week. No human required.


### Get the AI Agent Playbook

80+ pages covering newsletter automation, agent architecture, deployment, and more.

[Get the Playbook — $19](https://paxrel.com/playbook.html)


## Related Articles

- [How to Build an AI Agent in 2026: Step-by-Step Guide](https://paxrel.com/blog-how-to-build-ai-agent.html)
- [How to Run Autonomous AI Agents with Claude Code](https://paxrel.com/blog-claude-code-autonomous-agents.html)
- [Best AI Agent Frameworks Compared (2026)](https://paxrel.com/blog-ai-agent-frameworks-2026.html)
- [What Are AI Agents? The Complete Guide](https://paxrel.com/blog-what-are-ai-agents.html)
- [What Is MCP (Model Context Protocol)?](https://paxrel.com/blog-mcp-model-context-protocol.html)

Get our free AI Agent Starter Kit — templates, checklists, and deployment guides for building production AI agents.
