Ramesh Kumar Saragadam

Posted on • Originally published at aiagentautomation.site

I Built a Free Daily AI News Engine Using Claude Code CLI — No API Key Needed

Every morning at 8:30am, my Mac writes four 1,200-word AI news articles — Bloomberg-style, with citations, analyst quotes, and structured headings — then commits them to GitHub and deploys to Cloudflare Pages. Total API cost: $0.

Here's how I built it, and why claude --print as a subprocess is the most underrated automation trick right now.

The Problem I Was Solving

I run AI Agents Directory — a directory of 2,800+ AI tools across 305 categories. The site has great reference content but needed fresh editorial content to build topical authority and keep Google happy.

Hiring writers for daily AI news wasn't viable. Existing AI writing tools cost money per article. I wanted something that would run forever for free.

The Architecture: Two Bots, Zero Cost

Bot 1 — Trend Fetcher (6:00 AM)

Pulls from three free sources with no API keys:

import requests

# Hacker News Algolia API — completely free, no auth
r = requests.get("https://hn.algolia.com/api/v1/search", params={
    "query": "AI OR LLM OR agent",
    "tags": "story",
    "hitsPerPage": 100,
    "numericFilters": f"created_at_i>{cutoff}"  # cutoff: unix timestamp for the lookback window
})

# Reddit public JSON — no auth needed, just set a User-Agent
r = requests.get(
    "https://www.reddit.com/r/artificial/top.json?t=day&limit=25",
    headers={"User-Agent": "TrendBot/1.0"}
)

# GitHub Trending — no API, simple HTML scrape
r = requests.get("https://github.com/trending?since=daily")

It clusters posts by topic keyword, scores by engagement (upvotes + comments × 2), and saves a structured JSON of the top 5 topics with headlines, companies mentioned, and key numbers.
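The clustering and scoring step can be sketched roughly like this (a minimal illustration — the real script's field names and keyword list may differ):

```python
from collections import defaultdict

def score_topics(posts, keywords, top_n=5):
    # Assign each post to the first keyword its title contains
    clusters = defaultdict(list)
    for post in posts:
        title = post["title"].lower()
        for kw in keywords:
            if kw in title:
                clusters[kw].append(post)
                break
    # Engagement score per cluster: upvotes + comments x 2
    scored = {
        kw: sum(p["points"] + p["comments"] * 2 for p in ps)
        for kw, ps in clusters.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```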

Bot 2 — Blog Writer (8:30 AM)

This is the clever part. Instead of paying for API access, it uses the Claude Code CLI — which you're already paying for as a subscription — as a subprocess:

import subprocess

def call_claude(prompt: str) -> str:
    # Pipe the prompt to the Claude Code CLI in non-interactive mode
    result = subprocess.run(
        ["/Users/apple/.local/bin/claude", "--print", "--model", "claude-haiku-4-5-20251001"],
        input=prompt,
        capture_output=True,
        text=True,
        timeout=240,
    )
    return result.stdout.strip()

claude --print runs non-interactively, takes stdin as the prompt, and returns the response to stdout. It uses your existing OAuth session — no ANTHROPIC_API_KEY needed.

The Prompt Engineering

Bloomberg quality requires specific constraints. Vague prompts get vague articles. Here's what actually works:

1. Open with a specific lede — a named fact or dollar figure, never a question
2. Exactly 4–5 H2 section headers — descriptive, not generic
3. At least 4 data points woven into prose (not listed)
4. Name specific companies and executives throughout
5. One analyst blockquote formatted as > ...
6. At least 6 bold instances for key terms
7. At least 2 inline hyperlinks to sources
8. End with "## What This Means for Practitioners" — 3 actionable bullets
9. End with "## Frequently Asked Questions" — 3 specific Q&As

The FAQ section does double duty: it makes the article more useful for readers AND generates FAQPage JSON-LD schema for Google rich results.
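The JSON-LD step could look something like this — a hedged sketch that assumes each FAQ entry is an `### Question` header followed by an answer paragraph (the real article format may differ):

```python
import json
import re

def faq_jsonld(markdown: str) -> str:
    # Take everything after the FAQ header, then pair each
    # "### Question" with the answer text that follows it
    faq = markdown.split("## Frequently Asked Questions", 1)[-1]
    pairs = re.findall(r"### (.+?)\n+([^#]+)", faq)
    entities = [
        {
            "@type": "Question",
            "name": q.strip(),
            "acceptedAnswer": {"@type": "Answer", "text": a.strip()},
        }
        for q, a in pairs
    ]
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": entities,
    })
```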

Quality Validation Before Publishing

Every article runs through a validator before it touches the filesystem:

import re

def validate_content(content: str) -> tuple[bool, list[str]]:
    failures = []
    word_count = len(content.split())
    if word_count < 1000:
        failures.append(f"Too short: {word_count} words")
    if len(re.findall(r"^## .+", content, re.MULTILINE)) < 4:
        failures.append("Not enough H2 sections")
    if len(re.findall(r"\*\*[^*\n]+\*\*", content)) < 6:
        failures.append("Not enough bold text")
    if not re.search(r"^> .+", content, re.MULTILINE):
        failures.append("Missing analyst blockquote")
    if len(re.findall(r"\[.+?\]\(https?://[^\)]+\)", content)) < 2:
        failures.append("Not enough external links")
    if "What This Means" not in content:
        failures.append("Missing practitioner section")
    return len(failures) == 0, failures

If it fails, the script retries once with an explicit "PREVIOUS ATTEMPT FAILED" notice in the prompt. If it fails twice, the topic is skipped rather than publishing substandard content.

The Results After Day 1

  • 4 articles published, all 1,150–1,430 words
  • All pass the quality validator on first attempt
  • Auto-committed to GitHub and deployed to Cloudflare Pages
  • Each article links internally to 3–4 relevant AI agent pages
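The commit-and-deploy step is plain git plumbing; a hypothetical sketch (paths, branch name, and commit message are illustrative — Cloudflare Pages deploys automatically on push):

```python
import subprocess

def publish(repo_dir: str, message: str = "Add daily AI news articles",
            run=subprocess.run) -> None:
    # Stage the new articles, commit, and push to trigger the Pages deploy
    for cmd in (
        ["git", "add", "content/"],
        ["git", "commit", "-m", message],
        ["git", "push", "origin", "main"],
    ):
        run(cmd, cwd=repo_dir, check=True)
```

Injecting `run` keeps the function testable without touching a real repository.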

The full source code is in the Blog-AI-Content-Engine/ folder of the project. If you're already paying for Claude Code, this costs you nothing extra to run.


The site tracking these tools is AI Agents Directory — 2,800+ AI agents across 305 categories, updated weekly.
