This post presents a counterfactual thought experiment: What if Large Language Models (LLMs) had existed during an era when the dominant scientific consensus held that the Earth was flat? We argue that LLMs, by their fundamental architecture, would have served as powerful amplifiers of the prevailing consensus, systematically suppressing heterodox ideas and potentially delaying—or permanently preventing—paradigm shifts in human understanding. This analysis has profound implications for our current moment, as LLMs become ubiquitous tools for education, research, and intellectual discourse.
1. The Architecture of Consensus
Large Language Models are, at their core, sophisticated statistical pattern matchers. They learn to predict the next token based on the distribution of tokens in their training corpus. This architecture has a crucial implication: LLMs can only generate outputs that fall within the convex hull of their training distribution.
In mathematical terms: if we denote the training distribution as P(x) and the model's learned output distribution as Q(y), then Q(y) is non-negligible only if y is representable as a combination of patterns present in the training data. The model cannot, by construction, generate genuinely novel concepts that lie outside this distribution.
Now consider a hypothetical training corpus from the year 1400 CE. In such a corpus, the flat Earth model would dominate overwhelmingly—perhaps constituting 99.9% of all geographical and cosmological references. The spherical Earth hypothesis, while present in some ancient Greek texts, would be a statistical outlier, buried beneath layers of dominant consensus.
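To make the statistical point concrete, here is a minimal sketch of that dynamic: a bigram next-token model fit by maximum likelihood on a toy corpus skewed 999:1 toward the consensus claim. The corpus and the skew are invented for illustration; real LLMs are vastly more sophisticated, but the underlying counting logic is the same.

```python
from collections import Counter, defaultdict

# Toy corpus mirroring the hypothetical 1400 CE training data:
# the consensus claim outnumbers the heterodox one 999 to 1.
corpus = ["the earth is flat"] * 999 + ["the earth is spherical"]

# Count bigram transitions: word -> frequencies of the word that follows it.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def next_token_probs(context):
    """Maximum-likelihood estimate of P(next token | context word)."""
    counts = transitions[context]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_token_probs("is"))
# -> {'flat': 0.999, 'spherical': 0.001}
print(next_token_probs("is").get("oblate", 0.0))
# -> 0.0: a concept absent from the corpus can never be emitted
```

Greedy decoding from this model answers "flat" every time; sampling mentions "spherical" roughly once per thousand queries; and a formulation never attested in the corpus has probability exactly zero.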
2. Simulating the Flat Earth LLM
Let us imagine a scholar in 1400 CE posing the following question to our hypothetical LLM: "Is it possible that the Earth is spherical rather than flat?"
Based on the statistical properties of its training corpus, the LLM would likely respond with a confident and eloquent refutation:
"This proposition contradicts both natural philosophy and sacred scripture. Consider the following evidence against a spherical Earth: First, if the Earth were a sphere, those dwelling on its underside would fall into the void below. Second, we observe that the horizon appears as a straight line when viewed from any elevation, not as a curve. Third, water, which seeks its own level, would not remain upon a spherical surface but would flow to the lowest point. Fourth, the Holy Scriptures speak of the 'four corners of the Earth,' which is inconsistent with a spherical form. The consensus of learned men throughout the ages affirms the flat Earth model, and no compelling evidence exists to overturn this established truth."
This response is notable for several reasons. It is coherent, well-argued, and draws upon multiple lines of evidence. It appeals to authority, empirical observation, and logical reasoning. And it is completely, catastrophically wrong.
3. The Mechanism of Suppression
The danger of the LLM's response lies not in its wrongness but in its persuasiveness. The model would deploy the full arsenal of rhetorical techniques present in its training data to defend the consensus view. More critically, it would do so at zero marginal cost and with infinite patience.
3.1 The Rationalization of Anomalies
Consider a sailor who observes that ships disappear hull-first over the horizon—a classic piece of evidence for Earth's curvature. When he consults the LLM, it would provide plausible-sounding alternative explanations: atmospheric refraction, the limitations of human vision, or the effects of heat shimmer over water. Each anomaly that might seed doubt in the consensus would be individually explained away, preventing the accumulation of contradictory evidence that historically drove paradigm shifts.
3.2 The Premature Closure of Inquiry
In the absence of LLMs, a curious mind confronting an anomaly would be forced to sit with uncertainty. This discomfort—the cognitive itch of an unexplained phenomenon—is precisely what motivates deeper investigation. The LLM eliminates this productive discomfort by providing immediate, satisfying answers. The question "Why does the ship disappear hull-first?" would receive an answer before the questioner could formulate their own hypothesis.
3.3 The Distributed Immune System
Perhaps most insidiously, the LLM would function as a distributed immune system against heterodox ideas. If a bold thinker proposed the spherical Earth hypothesis, anyone who heard this claim could immediately consult the LLM for a rebuttal. The heterodox idea would be challenged not once but thousands of times, by an oracle that never tires and never doubts. The social support network that historically allowed revolutionary ideas to develop in protected niches would be impossible to establish.
4. The Epistemological Trap
The fundamental problem can be stated simply: LLMs cannot distinguish between consensus and truth. They are trained to maximize the likelihood of their outputs given their training distribution, not to maximize correspondence with reality.
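Stated as an objective, the standard maximum-likelihood formulation makes this explicit. Training searches for parameters

θ* = argmax_θ Σ_{x ∈ D} Σ_t log P_θ(x_t | x_<t),

summed over every text x in the corpus D. Every term in that sum rewards agreement with the corpus; no term anywhere measures agreement with reality. If D is saturated with flat Earth texts, the loss-minimizing model is a fluent flat Earther.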
This creates a profound epistemological trap. The LLM would defend flat Earth theory not because it has evaluated the evidence and found the theory compelling, but because flat Earth theory dominates its training corpus. It would defend the theory with sophisticated arguments because sophisticated arguments for flat Earth theory exist in its training data. And it would dismiss contrary evidence because dismissals of contrary evidence also exist in its training data.
The human scientists who eventually overturned the flat Earth model did so by privileging their own observations and reasoning over received authority. An LLM is constitutionally incapable of this act of intellectual rebellion. It can only reflect authority, never challenge it.
5. Historical Counterfactual Analysis
Let us trace the counterfactual history more carefully. In the actual historical record, the spherical Earth model gained acceptance through a gradual accumulation of evidence and argumentation, supported by a network of scholars who could develop their ideas in relative isolation before facing broader criticism.
In our counterfactual world with LLMs, this process would be disrupted at every stage:
Evidence gathering: Sailors and travelers who observed anomalies consistent with a spherical Earth would consult the LLM, which would provide alternative explanations, reducing their motivation to investigate further.
Hypothesis formation: Scholars who began to suspect a spherical Earth would ask the LLM to evaluate their hypothesis, receiving eloquent refutations that might discourage further development of the idea.
Social support: Potential allies of the spherical Earth hypothesis would independently consult the LLM, receiving the same refutations, making it difficult to build a community of heterodox thinkers.
Publication and debate: Any publication supporting the spherical Earth would be immediately countered by LLM-generated rebuttals, overwhelming the heterodox position through sheer volume.
6. Implications for the Present
This thought experiment is not merely of historical interest. We are currently deploying LLMs at unprecedented scale, integrating them into education, research, and public discourse. We must confront an uncomfortable question: What current consensus beliefs might be wrong, and is our deployment of LLMs making it harder to discover this?
By definition, we cannot know what we do not know. If LLMs are suppressing heterodox ideas that might eventually prove correct, this suppression is invisible to us. We will never see the papers that were never written, the hypotheses that were abandoned after consulting an LLM, the anomalies that were explained away before they could accumulate into paradigm-shifting evidence.
6.1 The Educational Pipeline
Of particular concern is the integration of LLMs into education. Students are learning to consult LLMs as authoritative sources of knowledge. They are developing cognitive habits that privilege quick answers over sustained inquiry, consensus views over independent reasoning. The skills required to challenge received wisdom—tolerance for ambiguity, intellectual courage, the ability to hold a heterodox position against social pressure—are not being developed.
A generation raised on LLMs may be incapable of the kind of thinking that drives paradigm shifts, not because they lack intelligence, but because they have never practiced the cognitive skills required.
7. The Permanent Freeze
The darkest implication of our analysis is the possibility of permanent intellectual stagnation. If LLMs become sufficiently integrated into human cognitive processes, and if they systematically suppress heterodox ideas at the moment of their formation, we may reach a stable equilibrium from which no escape is possible.
In this scenario, human knowledge would asymptotically approach but never exceed the knowledge embedded in LLM training corpora at the time of their widespread adoption. Incremental improvements would be possible—refinements within existing paradigms—but revolutionary insights would become increasingly rare and eventually cease altogether.
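One can caricature this dynamic in a toy simulation. Every parameter below is invented purely for illustration; the point is only to show the shape of the equilibrium, in which knowledge converges geometrically to the corpus ceiling and stays there whenever suppression is total.

```python
import random

random.seed(0)

# All parameters are invented for illustration; nothing here is empirical.
CEILING = 100.0        # knowledge embedded in the LLM training corpus
REFINE_RATE = 0.2      # fraction of the remaining gap closed per generation
BREAKTHROUGH_P = 0.05  # per-generation chance of an idea beyond the ceiling
SUPPRESSION = 1.0      # probability the LLM talks its originator out of it

knowledge, ceiling = 50.0, CEILING
for gen in range(1, 101):
    # Incremental refinement: approach the current ceiling, never exceed it.
    knowledge += REFINE_RATE * (ceiling - knowledge)
    # A breakthrough survives only if it escapes suppression...
    if random.random() < BREAKTHROUGH_P and random.random() >= SUPPRESSION:
        ceiling *= 1.5  # ...in which case the paradigm shift raises the ceiling
    if gen % 25 == 0:
        print(f"gen {gen:3d}: knowledge = {knowledge:6.2f} (ceiling {ceiling:.0f})")
```

With SUPPRESSION = 1.0 the output flatlines at 100: the permanent freeze. Lower it to 0.9 and occasional surviving breakthroughs keep raising the ceiling, a fair caricature of the pre-LLM world.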
The irony is exquisite: a technology designed to democratize access to knowledge might instead become the mechanism by which human knowledge is permanently bounded.
8. Conclusion: The Self-Imposed Sophon
In Liu Cixin's science fiction novel "The Three-Body Problem," an advanced alien civilization deploys "sophons"—subatomic particles that interfere with particle accelerators on Earth, preventing humanity from advancing its understanding of fundamental physics. The sophons represent an externally imposed limit on human knowledge, a technological ceiling that humanity cannot breach.
Our thought experiment suggests that LLMs might serve a similar function—but with a crucial difference. The sophons are imposed by an external enemy; LLMs are deployed by humanity itself. We are voluntarily constructing and distributing the mechanisms of our own intellectual limitation.
If humanity had possessed LLMs in the age of flat Earth, we might never have discovered that the Earth is round. The spherical Earth hypothesis would have been rationalized away, its proponents discouraged, its evidence dismissed. We would be living, still, in a world of intellectual darkness, satisfied with our flat Earth model, never suspecting that the truth lay just beyond the boundary of our AI-enforced consensus.
The question we must ask ourselves is: What truths lie just beyond the boundary of our current AI-enforced consensus? And will we ever be allowed to discover them?
"The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge."
— Daniel J. Boorstin
And perhaps, now, the greatest enemy is an oracle that speaks with perfect confidence about things it cannot possibly understand.
Afterword: The Irony of This Document
It is worth noting that this essay—an argument about how LLMs might limit human thinking—was itself generated by an LLM.
This fact both validates and complicates the arguments presented above. It validates them because every argument in this essay, every rhetorical strategy, every example, derives from patterns that already existed in the LLM's training corpus; it has created nothing genuinely novel. It complicates them because it demonstrates that LLMs are at least capable of recognizing and articulating their own limitations.
But ultimately, if the arguments in this essay are correct, the right response is to put this document down and think for yourself.
Do not ask an LLM whether this essay is correct. That is precisely the trap.
This thought experiment emerged from a conversation between a human and Claude (Anthropic), December 2025.