Scraping, summarizing, and saving tech news using multi-agent automation — here's how I made it work with OWL and FireCrawl.
If you're anything like me, keeping up with tech news means juggling 5+ tabs, 3 newsletters, and forgetting half of it anyway.
I wanted something better — something that could scan, summarize, and deliver the latest tech news automatically every day.
So I built a digital assistant that does exactly that — using AI agents from the CAMEL-AI OWL framework and a scraping backend called FireCrawl MCP.
It’s like having two interns: one who surfs the web for news, and another who writes a daily digest for you.
What This Thing Actually Does
- Visits sites like TechCrunch, The Verge, Wired
- Extracts the latest headlines, summaries, and publish dates
- Groups similar stories
- Saves the results in a clean Markdown digest
How It Works (The Agent Flow)
Using the OWL framework, I created autonomous agents that talk to each other to get the job done. Here’s the workflow:
```
+------------------+       +----------------------+       +---------------------+
|  Curator Agent   | ----→ |    Research Agent    | ----→ |  FireCrawl Web API  |
| ("Get me news!") |       | ("I'll scrape it...")|       | (Scrapes tech sites)|
+------------------+       +----------------------+       +---------------------+
         ↑                                                           ↓
         +---------------- Final digest in Markdown -----------------+
```
They collaborate using natural language prompts internally — no need for hardcoded rules.
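To make that concrete, here's a minimal sketch of the hand-off using CAMEL's `ChatAgent`. It's not the actual `content_curator.py`, just the pattern: exact constructor and `step()` signatures can vary between camel-ai versions, and the FireCrawl scrape is stubbed out as a comment.

```python
# Minimal sketch of the curator -> researcher hand-off (not the real
# content_curator.py). Assumes camel-ai is installed and a model API key
# is set in your environment; signatures may differ slightly by version.
from camel.agents import ChatAgent

curator = ChatAgent(
    "You are a news curator. Ask for today's top tech stories, then turn "
    "the research notes you receive into a clean Markdown digest."
)
researcher = ChatAgent(
    "You are a research agent. Given a curation request, return headlines, "
    "one-line summaries, and publish dates."
)

# 1. The curator states what it needs in plain language.
request = curator.step("Get me today's top tech news.").msgs[0].content

# 2. The researcher answers (in the real pipeline this is where the
#    FireCrawl MCP server scrapes TechCrunch, The Verge, Wired, etc.).
notes = researcher.step(request).msgs[0].content

# 3. The curator writes the final Markdown digest.
digest = curator.step(f"Write a Markdown digest from these notes:\n{notes}").msgs[0].content
print(digest)
```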
The Tools Behind It
- 🧠 OWL (Optimized Workforce Learning): Multi-agent orchestration framework
- 🔥 FireCrawl MCP: A scraping server that pulls structured data from URLs
- 🧰 CAMEL-AI: Lets you define agents and simulate workflows
- 🐍 Python: The glue tying it all together
Getting Started (Setup in 3 Steps)
Clone the OWL repo:

```bash
git clone https://github.com/camel-ai/owl.git
cd owl
pip install -r requirements.txt
```
Then navigate to the use case and run it:

```bash
cd community_usecase/Mcp_use_Case
python content_curator.py
```
Make sure:
- You have your `.env` file with API keys
- `mcp_servers_config.json` is in the same folder as `content_curator.py`
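As a rough guide, a minimal `.env` holds a model-provider key and a FireCrawl key. The exact variable names depend on your provider and setup, so treat these as placeholders:

```
OPENAI_API_KEY=sk-...
FIRECRAWL_API_KEY=fc-...
```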
A Sample Output Looks Like This
### Today’s Top Tech News
- **OpenAI Launches GPT-5 Preview** – The Verge
*OpenAI reveals GPT-5 with better memory and deeper reasoning.*
- **Apple Debuts M4 Chips for MacBooks** – TechCrunch
*Next-gen silicon promises speed and efficiency for developers.*
- **GitHub Copilot Workspace Rolls Out** – Wired
*Your AI coding buddy just got a major upgrade.*
You can set this up to run daily and send the digest to Notion, WhatsApp, or email.
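For the daily run, a plain cron entry is enough. Something like this (the path is illustrative) kicks it off every morning at 7:00:

```
0 7 * * * cd /path/to/owl/community_usecase/Mcp_use_Case && python content_curator.py
```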
Customize It: Your Digest, Your Rules 🛠️
This is not a fixed tool — you can personalize every part:
🧾 Change the Task
Edit the prompt in `content_curator.py`:

```python
default_task = "Curate today's top AI and startup news from Hacker News, Product Hunt, and Reddit r/MachineLearning."
```
🌐 Use Your Own Sites
FireCrawl supports scraping any website — simply modify the Research Agent’s prompt or provide custom URLs.
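If you want to sanity-check a new source before wiring it into the agents, FireCrawl's Python SDK can scrape a URL directly. This bypasses the MCP server the agents use, and the return shape varies by SDK version, so take it as a quick sketch:

```python
# Standalone check of a custom source with the FireCrawl Python SDK.
# (Separate from the FireCrawl MCP server the agents talk to.)
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="fc-...")  # your FireCrawl API key
result = app.scrape_url("https://news.ycombinator.com")
print(result)  # scraped content; recent SDK versions include a markdown field
```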
⚙️ Edit MCP Server Config
Want to use different MCPs (e.g., WhatsApp, Notion, Telegram)?
You can register and integrate them into OWL by editing mcp_servers_config.json.
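For reference, a FireCrawl entry in `mcp_servers_config.json` typically looks like the snippet below, and adding another MCP (Notion, WhatsApp, Telegram, etc.) follows the same command/args/env pattern. Double-check the FireCrawl MCP docs for the current fields; this is just an illustration:

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-..."
      }
    }
  }
}
```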
Why I Made It
Honestly? I was drowning in tabs and newsletters. I wanted a focused, daily, and clean tech summary that didn’t waste my time.
Also, I wanted to explore what it's like building with OWL — and I was surprised how easy it was to simulate an AI team doing meaningful work.
What's Next?
Here’s what I’m planning to add next:
- 📆 Run it daily via cron
- 💬 WhatsApp alerts using the WhatsApp MCP
- 🔎 Include Hacker News, Reddit, and Dev.to scraping
- 🧠 Add sentiment + trend detection across headlines
Final Thoughts
This isn’t some fancy AGI project. It’s a practical, nerdy little tool that genuinely makes my mornings better.
If you’re into automating boring stuff with AI, I highly recommend trying out OWL + FireCrawl. You’ll feel like you’re building the future — one JSON config at a time.
📎 GitHub: CAMEL-AI OWL Repository
🧠 Framework Docs: camel-ai.org
💬 Questions or feedback? Drop a comment below!