*This is a submission for the AI Agents Challenge powered by n8n and Bright Data.*
## What I Built
I built an AI-powered News-to-Email Agent that automatically:
- Fetches the latest news articles for the categories you specify.
- Uses an LLM to generate a professional, table-based HTML email newsletter with multiple articles formatted for email clients like Gmail, Outlook, and Apple Mail.
- Sends the final email to subscribers with proper preheader text, article summaries, and "Read more" links.
This solves the problem of turning raw RSS/news data into a polished daily newsletter—without manual formatting.
## Demo
### n8n Workflow
## Technical Implementation
**System Instructions:** The system prompt was carefully crafted to enforce compatibility with email clients (table-based layout, inline CSS, no external assets). It also ensures each article is formatted consistently with a title, source, summary, and link.
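To make that concrete, here is a rough sketch of the kind of constraints the prompt enforces; the wording below is illustrative, not the exact prompt used in the workflow:

```javascript
// Illustrative system prompt (paraphrased, not the exact text from the workflow)
const systemInstruction = `
You generate an HTML email newsletter from a list of news articles.
Rules:
- Use a table-based layout only (no flexbox, no grid, no positioned <div>s).
- Use inline CSS only; no <style> blocks, external stylesheets, fonts, or images.
- Start the body with a hidden preheader <div> that summarizes the top story.
- For each article, render: title, source, a 2-3 sentence summary, and a "Read more" link.
- Output raw HTML only, with no markdown and no commentary.
`;
```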
**Model Choice:** `meta-llama/Llama-3.1-8B-Instruct` via the Hugging Face Inference API. It is a lightweight instruction-tuned model that handles text generation well, which makes it a good fit for generating the newsletter template.
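A minimal sketch of the request the HTTP Request node sends to the Inference API is below; the parameter values are assumptions, not the exact settings from my workflow:

```javascript
// Builds the body the HTTP Request node sends to the Hugging Face Inference API.
// Endpoint: POST https://api-inference.huggingface.co/models/meta-llama/Llama-3.1-8B-Instruct
// Header:   Authorization: Bearer <your Hugging Face token>
// Parameter values below are illustrative.
function buildInferenceRequestBody(systemInstruction, articles) {
  return {
    inputs: `${systemInstruction}\n\nArticles (JSON):\n${JSON.stringify(articles, null, 2)}`,
    parameters: {
      max_new_tokens: 2048,    // leave room for a full HTML newsletter
      temperature: 0.3,        // keep the layout consistent between runs
      return_full_text: false, // return only the generated HTML, not the echoed prompt
    },
  };
}
// For text-generation models, the API responds with an array like [{ "generated_text": "<table>…" }].
```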
**Memory:** Stateless; each workflow execution processes the latest batch of news articles.
**Tools Used:**
- Bright Data Node (Google News Scraper) → get latest articles.
- HTTP Request Node → send articles + system prompt to Hugging Face API.
- Function Node → structure JSON payloads (e.g., `{ instruction, articles: [...] }`); a sketch follows this list.
- SMTP Email Node → deliver the final HTML newsletter.
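For reference, here is a minimal sketch of what that Function node can look like. The field names on the incoming items (`title`, `source`, `snippet`, `link`) are assumptions about the scraper output, so map them to whatever your Bright Data node actually returns:

```javascript
// n8n Function node: reshape scraped items into the payload sent to the LLM.
// Incoming field names are illustrative; adjust them to match the Bright Data output.
const articles = items
  .map((item) => ({
    title: (item.json.title || '').trim(),
    source: item.json.source || 'Unknown source',
    summary: (item.json.snippet || '').trim(),
    url: item.json.link,
  }))
  .filter((article) => article.title && article.url); // drop incomplete entries

return [
  {
    json: {
      instruction: 'Generate a table-based HTML newsletter from these articles.',
      articles,
    },
  },
];
```

The single item this node returns then becomes the request body built by the HTTP Request node described above.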
## Bright Data Verified Node
I used the Bright Data Verified Node to fetch clean, reliable, and structured news data from various web sources. This ensured the agent always received up-to-date, accurate articles to include in the newsletter without scraping issues or inconsistent data.
## Journey
This was my first time exploring n8n and building a workflow that integrates with LLMs. At the start, I had to learn how n8n handles data, especially:
- Data cleaning and transformations: writing Function nodes to reshape and sanitize incoming JSON so that the model could consume it properly.
- Understanding data flow between nodes: getting used to how n8n passes input/output made me rethink how to structure each step in the pipeline.
One of the biggest challenges came when I tried to connect the Hugging Face Inference API using n8n’s basic LLM Chain node. The node always defaulted to a conversational mode, which wasn’t suitable for structured HTML generation. After multiple attempts, I switched to using the HTTP Request node to directly access Hugging Face’s text-generation models. While this approach worked, it also raised concerns about security (exposing API keys when sharing workflows).
## Final Thoughts
This workflow works end-to-end, but there’s still room for improvement:
- The AI model doesn’t always generate a perfect email template.
- Optimizations could reduce the running cost of the workflow.
- Exploring safer and more scalable ways to integrate Hugging Face APIs in n8n would make this even more robust.