🧠 Human Intelligence vs. LLMs

Nadine on September 30, 2025

I was doing investigative research and found a crucial bit of information in a single article. I then used an LLM to perform Deep Research on the s...
 
Mark Whitman

This is a fascinating breakdown of where LLMs fall short in genuine deep research. The point about statistical text matching bias really stood out—how models tend to dismiss rare but potentially crucial findings in favor of what’s most common or “authoritative.” It makes me wonder: as RAG + NER systems become more widely adopted, could they finally bridge this gap between surface-level synthesis and true contextual inference? Also curious how human-guided annotation could scale in practice without losing the depth of reasoning you’ve highlighted here.

Nadine

Yes, NER can help LLMs create associations or form a mental picture closer to that of humans. Human-guided annotation does not necessarily need to scale because a small NER model can train an LLM. I’ll explore the NER layer more in another post!
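In the meantime, here's a rough sketch of what I mean, assuming spaCy's small en_core_web_sm model stands in for the NER layer (the prompt shape is purely illustrative, not from the article):

```python
# Minimal sketch: a small NER model extracts entities first, and the LLM
# receives them as explicit structure next to the raw passage, so the
# associations a human would form are spelled out rather than left to
# statistical matching. spaCy's en_core_web_sm is assumed here.
import spacy

nlp = spacy.load("en_core_web_sm")

def annotate_for_llm(passage: str) -> str:
    doc = nlp(passage)
    # Group entities by type, e.g. {"ORG": ["UN"], "GPE": ["Geneva"]}
    entities: dict[str, list[str]] = {}
    for ent in doc.ents:
        entities.setdefault(ent.label_, []).append(ent.text)
    entity_lines = "\n".join(
        f"- {label}: {', '.join(values)}" for label, values in entities.items()
    )
    # The structured entity list travels with the passage into the prompt,
    # so a rare but crucial name cannot be averaged away.
    return f"Passage:\n{passage}\n\nNamed entities:\n{entity_lines}"

print(annotate_for_llm("The committee met in Geneva before the UN vote in 2019."))
```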

Admin Zibly

I find this particularly troubling socially. If you use Deep Research for anything political, you will inherently receive only the establishment narrative, leading to technocratic groupthink.

Nadine

Correct, the LLM's deep research strategy, based on frequency and authority, makes it inherently biased. This appeal to authority is a logical fallacy!

Admin Zibly

100%! Which is particularly worrying given how our institutions are so partisan nowadays. I don't think anyone in their right mind would consider either the UN or the US State Department to be impartial non-partisan sources, but the models certainly think they are. Humans know that most of these "institutions" are activists masquerading as experts, but when the lay person treats ChatGPT as a "meta-expert" and ChatGPT launders activism as fact, what happens to the nature of dissent away from institutions?

Nadine

That's a fair statement. LLMs regard those institutions, including all governmental authorities, as high-authority sources. Models like ChatGPT show this statistical bias clearly, while Gemini seems to provide more balanced, centrist views on political topics despite its appeal to authority 🤷🏻‍♀️

Rajesh Patel

Great write-up — you nailed the core weakness of LLMs as statistical engines rather than reasoning engines. The point on NER-based RAG is especially important: converting unstructured text into structured entities/facts is what lets retrieval become precise instead of probabilistic.
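Roughly what I have in mind, as a toy sketch (the entity sets here would come from an NER pass at index time; all names and data are made up):

```python
# Toy sketch of entity-filtered retrieval: each chunk is indexed with the
# entities an NER pass found in it, and retrieval first requires an exact
# entity match instead of relying only on embedding similarity.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    entities: set[str] = field(default_factory=set)  # filled by NER at index time

def retrieve(chunks: list[Chunk], query_entities: set[str]) -> list[Chunk]:
    # Precise first pass: keep only chunks that actually mention the entities
    # the question asks about; similarity ranking can run on this subset later.
    return [c for c in chunks if query_entities & c.entities]

corpus = [
    Chunk("Widely cited overview of the policy debate.", {"OECD"}),
    Chunk("Single niche article naming the key actor.", {"Acme Holdings"}),
]

hits = retrieve(corpus, {"Acme Holdings"})
print([c.text for c in hits])  # the rare source is kept, not averaged away
```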

Nadine

💯 - thanks for reading!

CapeStart

The point about LLMs biasing towards common info and missing out on niche data is so true. I’ve noticed that too; sometimes the more unique, less popular sources just get skipped. I’m also a fan of the RAG/NER combo. It’s like giving the model a better map for finding the good stuff.

Amit Nudel

Thank you for sharing this. I’ve found that having a human in the loop is always essential, and providing context is a key part of effective collaboration. Understanding the strengths and limits of each side, human and machine, only helps us move forward together.

Shubham Gujarathi

Amazing observation