deep research APIs are a new category. OpenAI, Perplexity, Google, and startups like Parallel are all shipping systems that can browse the web, synthesize sources, and return cited answers in a single API call. these tools are powerful, but choosing between them right now is not easy.
over the last couple of weeks, i kept running into the same set of problems:
- pricing scattered across docs
- capabilities buried in changelogs
- benchmarks inconsistent, missing, or not public
- “it returns citations” used as a proxy for quality
so instead of guessing, i decided to actually compare them. this post is about what i learned, how i think about evaluation now, and why i built a public index to track this space.
## how i approached evaluation
instead of reading docs in isolation, i did three things in parallel:
- ran real queries: i tested market analysis, technical research, competitive intel, and product strategy prompts across providers, not just trivia or synthetic benchmarks (there's a rough harness sketch after this list)
- read provider docs + benchmarks side-by-side: this made inconsistencies obvious very quickly, such as different definitions of “grounding”, benchmarks that aren’t comparable, and pricing models that hide real cost drivers
- tracked tradeoffs: there isn’t a single “best” deep research api, only tradeoffs: depth vs latency, cost vs accuracy, batch workflows vs interactive use. once i framed it that way, the space became much clearer
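
to make the “same prompt, many providers” part concrete, here's a minimal harness sketch. the names (`run_prompt`, `RunResult`, `call_provider`) are my own, and the provider callables are placeholders, since every vendor has its own endpoint, auth, and response shape. treat it as a template, not working client code.

```python
# minimal sketch: run one prompt across several providers, record latency and
# citation counts. the provider call is a stub you wire up per vendor.
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RunResult:
    provider: str
    latency_s: float
    answer: str
    citations: list[str] = field(default_factory=list)

def run_prompt(
    prompt: str,
    providers: dict[str, Callable[[str], tuple[str, list[str]]]],
) -> list[RunResult]:
    """Send the same prompt to every provider and time each call."""
    results = []
    for name, call_provider in providers.items():
        start = time.perf_counter()
        # each callable returns (answer_text, citation_urls); real clients go here
        answer, citations = call_provider(prompt)
        results.append(RunResult(name, time.perf_counter() - start, answer, citations))
    return results

if __name__ == "__main__":
    # fake provider for illustration only; swap in real API clients
    fake = lambda p: ("stub answer", ["https://example.com/source"])
    for r in run_prompt("size the vector database market", {"fake-provider": fake}):
        print(f"{r.provider}: {r.latency_s:.2f}s, {len(r.citations)} citations")
```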
## why i built the deep research api index
after doing this manually, i realized there was no neutral, centralized place to:
- compare providers side-by-side
- run the same prompt across them
- track how things change over time
so i built one at deep-research-index.vercel.app
it’s an independent index that:
- compares providers across concrete metrics
- lets you run identical prompts across multiple apis
- makes tradeoffs explicit instead of hiding them (a rough metrics sketch follows below)
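
for what “concrete metrics” means in practice, here's an illustrative record shape. this is not the site's actual schema and the field names are my own, but they capture the axes i keep comparing: latency, normalized cost, citations, and batch support.

```python
# illustrative per-provider record; keeps the tradeoff axes side by side
from dataclasses import dataclass

@dataclass
class ProviderMetrics:
    provider: str
    median_latency_s: float      # interactive feel vs batch-only jobs
    cost_per_query_usd: float    # normalized, including per-search fees
    avg_citations: float         # quantity of sources, not a quality guarantee
    supports_async_batch: bool   # fire-and-forget workflows vs request/response

def rank_by(metrics: list[ProviderMetrics], key: str, reverse: bool = False) -> list[ProviderMetrics]:
    """Sort providers on a single axis; no single axis wins overall."""
    return sorted(metrics, key=lambda m: getattr(m, key), reverse=reverse)
```

sorting on any one axis is trivial; the point is that the axes stay visible together instead of being collapsed into a single “best” score.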
this isn’t a startup pitch; it’s a reference i wanted to exist.
