I’m reevaluating a deep-research workflow I built earlier and would love some advice.
My previous design used a static tree workflow (fixed width and depth; each node = search → extract → summarize → generate follow-ups), similar to the popular deep-research repo on GitHub. But newer projects like deer-flow and open_deep_research seem to favor a different style: clear multi-agent roles plus dynamic tool-call loops, instead of a fixed search tree.
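To make the contrast concrete, here is a minimal sketch of the two control-flow patterns as I understand them. All function names (`search`, `extract`, `summarize`, `follow_ups`, `decide_next`) are hypothetical stubs standing in for real tool/LLM calls, not APIs from any of the repos mentioned:

```python
# Stubs standing in for real retrieval / LLM calls (hypothetical names).
def search(q): return [f"doc:{q}"]
def extract(doc): return f"facts({doc})"
def summarize(facts): return f"summary({facts})"
def follow_ups(summary, n): return [f"{summary}/q{i}" for i in range(n)]

# Pattern A: static tree — fixed width and depth, every node runs the
# same search → extract → summarize → follow-ups pipeline.
def static_tree(query, depth=2, width=2):
    notes = []
    def node(q, d):
        s = summarize([extract(doc) for doc in search(q)])
        notes.append(s)
        if d < depth:                       # depth/width fixed up front
            for fq in follow_ups(s, width):
                node(fq, d + 1)
    node(query, 1)
    return notes

# Pattern B: dynamic tool-call loop — a policy (typically an LLM call)
# inspects the notes so far and picks the next action each turn.
def tool_loop(query, decide_next, max_steps=8):
    notes, step = [], 0
    action = {"tool": "search", "arg": query}
    while action["tool"] != "finish" and step < max_steps:
        if action["tool"] == "search":
            notes.append(summarize([extract(d) for d in search(action["arg"])]))
        action = decide_next(notes)         # model chooses: search again or finish
        step += 1
    return notes
```

The structural difference is that Pattern A's cost and shape are known before the run starts, while Pattern B's trajectory depends on intermediate results, which is what makes it harder to evaluate and debug.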
I’m trying to understand:
- Is moving from static workflows to tool-call loops the current trend? What are the concrete advantages, and is it worth refactoring?
- How do you evaluate these systems? From output alone it’s hard to tell which is “better,” and static workflows are still very popular. Is there actually a meaningful performance gap today?
- For a practical open-source project, what principles guide iteration? If the goal isn’t just scoring well on benchmarks (e.g., HLE), how would you think about evolving a deep-research agent?

Any thoughts or experience would be really helpful. Thanks!