Maxwell Jensen
LLM Aggregator: aggregate RSS feeds and summarise them with LLMs

I’d like to share a tool I’ve been developing for my own workflow: llm_aggregator.

What is it? A free-software CLI tool written in Go that fetches articles from multiple RSS feeds, optionally filters them by date or keywords, and then sends them as a query to any LLM through an OpenAI-compatible API to produce a concise summary, an analysis, or whatever you prompt it for.

Why I built it: I like some news sources, but I don’t care to keep up with hundreds of articles a day. I wanted something that:

  • Works completely from the terminal.
  • Does one thing well: fetch, filter, summarise; the Linux way.
  • Works with any LLM provider.

How it works: a quick example

# Feed file (one URL per line)
$ cat feeds.txt
https://news.ycombinator.com/rss
https://lwn.net/headlines/newrss
https://opensource.com/feed

# Basic usage: summarise recent tech news
$ llm_aggregator --api-key <API_KEY> --base-url <API_URL> \
  --feeds-file feeds.txt \
  --prompt "What are the latest trends in open-source AI?"

# Power-user mode: filter, limit, output to JSON
$ llm_aggregator -f feeds.txt \
  -p "Summarize Linux kernel news" \
  --include-keywords linux,kernel \
  --max-days-old 2 \
  --max-total-articles 15 \
  --output json \
  --output-file kernel_summary.json

# Bonus: a bubbletea TUI with live progress bars!
$ llm_aggregator --feeds-file feeds.txt --prompt "Tech highlights" --tui

Technical highlights

  • Written in Go: a single binary for every major platform (Linux, macOS, Windows) with zero runtime dependencies; go build ./cmd/llm_aggregator.go is all you need to build from source.
  • Feed parsing via gofeed: handles RSS, Atom, and JSON Feed.
  • LLM integration via openai-go: point it at any OpenAI-compatible endpoint (DeepSeek, Claude, Ollama, etc.) by changing a few parameters.
  • Filtering & processing pipeline: articles are fetched, filtered (date/keywords), content extracted (with goquery fallback when feeds are snippet-only), and assembled into a context-aware prompt.
  • Flexible output: plain text, Markdown, or structured JSON (optionally including the original articles).
  • Sensible defaults: silent by default, verbose logging behind -v/--verbose, environment variable for API key.
  • TUI: built with bubbletea & lipgloss. Still rough, but should be serviceable.

Configuration

All options can be set via command-line flags, a TOML file at ~/.config/llm_aggregator/config.toml, or environment variables prefixed with LLM_AGGREGATOR_. More information on this is in the repository; I explicitly designed it to fit into any Linux workflow.
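For illustration, a config file might look like the sketch below. The key names here are guesses that mirror the CLI flags; check the repository for the exact names, and note that the API key is best kept in an environment variable rather than on disk.

```toml
# ~/.config/llm_aggregator/config.toml
# Illustrative keys mirroring the CLI flags shown above.
base-url = "https://api.deepseek.com/v1"
feeds-file = "~/feeds.txt"
max-days-old = 2
max-total-articles = 15
output = "markdown"
```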

What I’d love feedback on

  • The TUI (-t/--tui) experience: is it genuinely useful, and if so, what would improve it?
  • Your personal use case, and whether anything is missing that would help your workflow.

I haven’t had anyone else try this software yet, so expect bugs or obvious things I might have missed. That said, I have already used it successfully to build a personal daily digest from about 25 feeds, with a Python script that compiles the output into a LaTeX newspaper.

Interested? Check out releases in the repository and grab a binary for your platform.

Happy to answer questions. I want this program to benefit as many people as possible.
