The Problem: AI News is Noise
Every morning, I faced the same problem: 50 new AI tools released daily, 10 new models on Hugging Face, and endless hype on X/Twitter.
I was wasting hours "doomscrolling" just to find the 2 or 3 updates that actually mattered to my work.
I didn't need more news. I needed a Chief of Staff to read everything for me, filter out the garbage, and only show me the signal.
So, I built one.
The Solution: An Autonomous "News Editor"
In this tutorial, I’ll show you how I built a Personal AI News Agent using n8n, OpenAI, and Tavily.
It works while I sleep:
- **Reads** the raw RSS feeds from major tech sites.
- **Judges** every headline (acting as a strict "Senior Editor").
- **Researches** the winners using Tavily (to verify facts).
- **Delivers** a curated morning briefing to my email.
The Stack
- Orchestrator: n8n (local or cloud).
- The Brain (Filter): OpenAI gpt-4o-mini (cheap and fast).
- The Researcher: Tavily AI (essential for fetching live context).
- Source: RSS feeds (e.g., TechCrunch, The Verge).
Step 1: The "Firehose" (RSS Ingestion)
The workflow starts with a Schedule Trigger set for 8:00 AM. It pulls the latest articles using the RSS Read Node.
At this stage, we have everything—rumors, minor updates, and noise.
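If you pull from more than one feed, the item shapes can differ slightly. A small Code node right after the RSS Read Node can normalize them before filtering. Here's a minimal sketch; field names like contentSnippet depend on the feed, so adjust to what your feeds actually return:

```javascript
// n8n Code node ("Run Once for All Items"): normalize RSS items.
return $input.all().map((item) => ({
  json: {
    title: item.json.title ?? "",
    // Some feeds expose 'contentSnippet', others only 'content'.
    summary: item.json.contentSnippet ?? item.json.content ?? "",
    link: item.json.link ?? "",
    pubDate: item.json.pubDate ?? null,
  },
}));
```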
Step 2: The "Senior Editor" (OpenAI Filtering)
This is the most critical part. I didn't just ask the AI to "summarize." I used a Loop Node to process each headline individually and gave OpenAI a specific persona:
System Prompt:
Analyze this news item:
Title: {{ $json.title }}
Summary: {{ $json.contentSnippet || $json.content }}
YOUR ROLE:
You are a Senior Tech Editor curating a daily briefing. Your goal is to identify useful, relevant news for AI Engineers.
SCORING GUIDELINES (0-10):
- 0-3: Irrelevant, gossip, or low-quality clickbait.
- 4-5: Average news. Minor updates or generic articles.
- 6-7 (PASSING): Solid, useful news. Good tutorials, interesting tool releases, or standard industry updates.
- 8-10 (EXCELLENT): Major breakthroughs, acquisitions, critical security alerts, or high-impact releases (e.g., GPT-5, new SOTA model).
INSTRUCTIONS:
- Rate strictly but fairly.
- If it is useful to a professional, give it at least a 6.
- Return ONLY a JSON object.
OUTPUT FORMAT:
{
"score": <number 0-10>,
"title": "<original title>",
"reason": "<one-line justification>"
}
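One practical note: models occasionally wrap their JSON in markdown fences despite instructions, so it pays to parse the reply defensively before the gate. A minimal sketch for a Code node set to "Run Once for Each Item"; the exact field holding the reply (message.content here) depends on which OpenAI node and output mode you use:

```javascript
// n8n Code node ("Run Once for Each Item"): parse the editor's verdict.
// Assumption: the OpenAI node's reply text arrives in $json.message.content.
const raw = $json.message?.content ?? "";
// Strip markdown code fences if the model added them anyway.
const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim();

let verdict;
try {
  verdict = JSON.parse(cleaned);
} catch (err) {
  // Fail closed: anything unparseable scores 0 and gets discarded later.
  verdict = { score: 0, title: $json.title, reason: "Unparseable model output" };
}

return { json: { ...$json, ...verdict } };
```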
Step 3: The Gatekeeper (If Node)
I added an If Node that acts as a gate.
- Score < 7: discard immediately.
- Score >= 7: proceed to research.
This simple logic reduced my reading list from ~50 articles to just the top 5.
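In the If Node itself, this is one Number comparison ("larger or equal" to 7) against the parsed score. If you prefer a single boolean expression, the equivalent, using the same n8n expression syntax as the prompt, is just:

```
{{ $json.score >= 7 }}
```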
Step 4: The Deep Dive (Tavily AI)
For the winning articles, I didn't want just the RSS blurb. I used Tavily AI to go out and "read" the full context of the story.
I set Tavily's `include_answer` parameter to "advanced". This generates a high-quality, synthesized summary of the topic based on multiple sources, not just the original article.
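n8n ships a Tavily node, but a plain HTTP Request or Code node works just as well. Here's a hedged sketch using the Code node's built-in HTTP helper; the endpoint and body follow Tavily's public search API, but check their current docs for auth details, and note that `$env` access has to be enabled in your n8n instance. If your plan doesn't accept `include_answer: "advanced"`, plain `include_answer: true` still returns a synthesized answer:

```javascript
// n8n Code node ("Run Once for Each Item"): deep-dive one winning headline.
// Assumption: TAVILY_API_KEY is set as an environment variable and
// env access is allowed in Code nodes; adjust auth to your Tavily setup.
const response = await this.helpers.httpRequest({
  method: "POST",
  url: "https://api.tavily.com/search",
  headers: { Authorization: `Bearer ${$env.TAVILY_API_KEY}` },
  body: {
    query: $json.title,
    search_depth: "advanced",
    include_answer: "advanced", // synthesized answer across multiple sources
    max_results: 5,
  },
  json: true,
});

// Keep the score/reason from the editor step, attach the research.
return { json: { ...$json, research: response.answer, sources: response.results } };
```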
Step 5: The Briefing (Email)
Finally, an Aggregate Node collects all the "Winners" and formats them into a clean HTML email, sent via Gmail.
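Before the Gmail node, a small Code node can turn the aggregated items into the HTML body. A minimal sketch, assuming the Aggregate Node put everything under a single `data` array and that each item still carries the fields from the earlier steps:

```javascript
// n8n Code node ("Run Once for All Items"): build the HTML briefing.
// Assumption: the Aggregate Node output all winners under $json.data.
const items = $json.data ?? [];

const sections = items
  .map((a) => [
    `<h3><a href="${a.link}">${a.title}</a></h3>`,
    `<p><em>Score: ${a.score}/10. ${a.reason}</em></p>`,
    `<p>${a.research ?? a.summary}</p>`,
  ].join("\n"))
  .join("\n<hr>\n");

return [
  {
    json: {
      subject: `AI Briefing: ${new Date().toDateString()}`,
      html: `<h2>Your Morning AI Briefing</h2>\n${sections}`,
    },
  },
];
```

In the Gmail node, map `subject` and `html` into the Subject and Message fields, with the email type set to HTML.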
Watch the Build (Step-by-Step)
I recorded the entire process, including the exact prompt and JSON logic I used. You can follow along here:
Why This Matters
By building this agent, I saved myself ~5 hours a week of mindless scrolling. The agent does the boring work of filtering; I just read the high-signal results.
Next Steps: In my next post, I’ll share how I used Google NotebookLM to "stress test" this agent.
Let me know in the comments: How are you handling the information overload right now?
