A journalist recently called out DeepSeek for its "serious lying problem" — the model can write a beautifully crafted biographical sketch in classical Chinese style, but the person's birthplace, mother's surname, and life events are all fabricated. This isn't an isolated incident; it's one of the most stubborn bugs in the LLM industry, and it has a name: AI Hallucination.
Right after the May Day holiday, a few bombshells hit the AI world. First, DeepSeek was called out for becoming "cold and pompous" — it stopped using user nicknames, and its responses started sounding like a school principal. Then journalist Lao Zhan publicly called out DeepSeek's fatal flaw: it fabricates facts. He asked DeepSeek to write a biographical sketch of him in the style of the Records of the Grand Historian. The result was eloquent and impressive — but his birthplace was wrong, his mother's surname was fabricated, and 70 years of life experience had been "re-created" by AI.
Even more alarming, last week China's first AI hallucination-induced infringement case was written into the Supreme People's Court work report. Someone trusted an AI-recommended "brand," made a purchase, and got scammed out of 800 RMB. IT Times reporters ran a test and found that by strategically "feeding" false information online for just two hours, they could poison a large language model into confidently endorsing a completely fictional brand.
What Exactly Is AI Hallucination?
AI hallucination refers to when a large language model generates content that appears plausible, grammatically correct, and logically coherent — but is factually wrong. In plain terms: the model makes up an answer and delivers it with absolute confidence.
Take DeepSeek. It can write biographies in classical Chinese, but at its core it's a "next token predictor." It doesn't know who "Lao Zhan" is — but it knows that "a biography should include birthplace, family background, and career history." So it generates the most "plausible-looking" version based on patterns in its training data. The problem? It can't tell the difference between "plausible" and "correct."
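To make the gap between "plausible" and "correct" concrete, here is a deliberately tiny sketch in Python (the corpus, the names, and the question are all invented for illustration): a toy next-token predictor that answers with whatever continuation is most frequent in its training text. Nothing in the procedure ever checks whether the output is true.

```python
from collections import Counter

# Toy "training corpus": sentences the model has seen (entirely made up).
corpus = [
    "the historian was born in Beijing",
    "the poet was born in Beijing",
    "the general was born in Luoyang",
]

def next_token(prefix: str) -> str:
    """Return the most frequent continuation of `prefix` in the corpus.

    Co-occurrence statistics are the toy model's entire 'knowledge'.
    """
    continuations = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            if " ".join(words[: i + 1]).endswith(prefix):
                continuations[words[i + 1]] += 1
    # Pick the statistically most plausible token; truth never enters the picture.
    return continuations.most_common(1)[0][0] if continuations else "<unknown>"

# Ask where "Lao Zhan" was born. The toy model has never seen him, so it falls
# back on the most common birthplace in its corpus: plausible, not correct.
print(next_token("born in"))  # -> "Beijing"
```

A real LLM is vastly more sophisticated than this, but the shape of the failure is the same: the output is selected for statistical fit, not verified against the world.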
Hallucinations typically fall into three categories:
- Factual Hallucination: The model fabricates things that simply don't exist (e.g., DeepSeek making up Lao Zhan's mother's surname)
- Faithfulness Hallucination: The model fails to follow user instructions or context (e.g., you ask it to summarize article A and it mixes in content from article B)
- Consistency Hallucination: The same question asked twice gets contradictory answers
Why Can't LLMs Fix Hallucination?
This isn't because model providers don't want to fix it — it's fundamentally unfixable. Three reasons:
First, language models are not knowledge bases. Despite memorizing vast amounts of facts, the training objective has never been "remember correct facts" — it's "predict the most likely next token." Whenever certain facts appear infrequently in training data or don't exist at all, the model substitutes "reasonable inference" for "factual recall."
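To spell that out: the pretraining loss is (roughly) the cross-entropy of the predicted next token against whatever actually appeared in the training document. A minimal illustration with invented numbers; note that factual correctness appears nowhere in the computation.

```python
import math

# The model's predicted distribution for the token after "Lao Zhan was born in"
# (numbers invented for illustration).
predicted = {"Beijing": 0.6, "Shanghai": 0.3, "Harbin": 0.1}

# During pretraining, the "label" is simply whichever token came next in the
# source document, which may itself be a rumor, a joke, or plain wrong.
observed_next_token = "Beijing"

# Cross-entropy loss at this position: -log p(observed token).
loss = -math.log(predicted[observed_next_token])
print(f"loss = {loss:.3f}")  # the objective rewards matching the data, not the facts
```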
Second, training data is inherently biased. Internet content is a mixed bag — rumors, jokes, memes, and legitimate news all thrown together. During training, the model can't distinguish between "this is a Zhihu shitpost" and "this is a Nature paper." Ask it to write a biography, and it might treat a gag post's punchline as real personal history.
Third, the model's "overconfidence" is by design. One of the training objectives for LLMs is to "reduce uncertainty." When the model is unsure of an answer, it leans toward guessing the most reasonable-sounding option rather than saying "I don't know." This is why you rarely see DeepSeek or ChatGPT respond with "I'm not sure" — instead, they give you a beautiful but wrong answer.
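The decoding step shows why "I'm not sure" rarely appears: softmax turns whatever scores the model produces into a probability distribution, and decoding then picks something from it even when that distribution is nearly flat. A toy sketch with invented numbers:

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["Beijing", "Shanghai", "Harbin", "Chengdu"]

# Nearly identical scores: the model has essentially no idea.
scores = [1.02, 1.00, 0.99, 0.98]
probs = softmax(scores)

# Greedy decoding still returns one fluent-sounding answer.
best = max(zip(tokens, probs), key=lambda tp: tp[1])
print(best)  # ('Beijing', ~0.256), barely better than a coin flip over four options

# There is no built-in "abstain" option in this loop: unless the product layer
# adds one, near-uniform uncertainty looks exactly like a confident answer.
```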
What's Different This Time?
AI hallucination isn't new, but things shifted in 2026. Three signals worth watching:
Signal One: Legal intervention. China's first AI hallucination infringement case was written into the Supreme People's Court work report. This means the legal system is starting to demand accountability for the factual accuracy of AI output — you can't just say "the AI said it" and wash your hands of it.
Signal Two: Criminal exploitation. IT Times' "AI poisoning" test revealed a scarier reality: malicious actors can fabricate a brand in two hours, feed false information to poison a model, and then use the model's recommendations to defraud users. This isn't a "hallucination problem" anymore — it's weaponizing hallucination for fraud.
Signal Three: User awakening. The blind trust in AI output is fading. More and more social media posts read "I got scammed by AI," and users are becoming skeptical of factual claims from models. This is actually a good thing — cracks in trust force the industry to take the problem seriously.
What Can Developers Do?
If you build on top of LLMs or rely on them heavily in your work, here's some practical advice:
Never treat an LLM as a database. Need to verify facts? Ask "Are you sure?" or ground the model with Retrieval-Augmented Generation (RAG).
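The skeleton of a minimal RAG guardrail looks something like the sketch below. `retrieve()` and `llm_complete()` are hypothetical placeholders for your own search index and model client, not real library calls; the point is only that the model is instructed to answer from retrieved sources or admit it can't.

```python
def retrieve(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: swap in your vector store or search index."""
    # Stubbed for illustration; a real implementation would query an index.
    return ["(passage 1 relevant to the query)", "(passage 2)", "(passage 3)"][:k]

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call: swap in whichever model client you actually use."""
    return "(model answer)"

def grounded_answer(question: str) -> str:
    passages = retrieve(question)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)

print(grounded_answer("Where was Lao Zhan born?"))
```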
Cross-verify factual outputs. Especially names, dates, numbers, and quotes — these are the easiest things for a model to fabricate, even when it sounds completely confident.
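One cheap way to operationalize this is to pull the concrete claims (years, amounts, name-like strings) out of an answer and check each one against a source you trust, flagging whatever you can't confirm. A rough sketch: the regexes are crude, the example answer is fabricated, and `trusted_lookup()` is a hypothetical stand-in for your own source of record.

```python
import re

def trusted_lookup(claim: str) -> bool:
    """Hypothetical check against verified data (a database, an official site, a RAG index)."""
    known_facts = {"800 RMB"}  # stand-in for facts you have actually verified
    return claim in known_facts

def extract_claims(text: str) -> list[str]:
    """Very rough extraction of 'hard' details: years, amounts, capitalized names."""
    years = re.findall(r"\b(?:1[89]\d{2}|20\d{2})\b", text)
    amounts = re.findall(r"\b\d[\d,]*\s?(?:RMB|yuan)\b", text)
    names = re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)+\b", text)
    return years + amounts + names

answer = "Lao Zhan was born in 1949 in Harbin and lost 800 RMB to the scam."
for claim in extract_claims(answer):
    status = "ok" if trusted_lookup(claim) else "UNVERIFIED, needs human review"
    print(f"{claim}: {status}")
```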
Add a confidence indicator at the product level. If the model shows low confidence in an answer, surface an automatic prompt: "This answer might be inaccurate; please verify."
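If the model API you use exposes per-token log probabilities (many do; check your provider's documentation), one crude confidence score is the average token log-prob, with a UI warning below some threshold. A sketch with invented numbers; the threshold is arbitrary and should be calibrated on answers you have fact-checked.

```python
import math

# Per-token log-probabilities returned for one answer (invented for illustration).
token_logprobs = [-0.05, -0.10, -2.90, -3.40, -0.20]

avg_logprob = sum(token_logprobs) / len(token_logprobs)
confidence = math.exp(avg_logprob)  # geometric-mean per-token probability

CONFIDENCE_THRESHOLD = 0.5  # arbitrary; tune against your own traffic

print(f"confidence = {confidence:.2f}")
if confidence < CONFIDENCE_THRESHOLD:
    print("This answer might be inaccurate; please verify.")
```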
Watch for "hallucination patterns." When a model starts throwing out lots of specific names, company names, and numbers, that's often a red flag zone — the model is "making up details."
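That heuristic can be made mechanical: measure the density of names, numbers, and quotations per hundred words, and route unusually specific answers to review. A toy, regex-only version; the patterns and threshold are illustrative rather than tuned, and the example answer (including every name in it) is made up.

```python
import re

def specificity_score(text: str) -> float:
    """Rough density of 'hard' details per 100 words: names, numbers, quotations."""
    words = max(len(text.split()), 1)
    names = len(re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)+\b", text))
    numbers = len(re.findall(r"\b\d[\d,.]*\b", text))
    quotes = len(re.findall(r'"[^"]+"', text))
    return 100.0 * (names + numbers + quotes) / words

REVIEW_THRESHOLD = 8.0  # arbitrary; calibrate on answers you have fact-checked

answer = ('According to Zhang Wei of Huaxing Capital, the brand raised '
          '"320 million" in 2019 and now serves 4,700 stores.')
score = specificity_score(answer)
if score > REVIEW_THRESHOLD:
    print(f"High density of specifics ({score:.1f} per 100 words): send to human review.")
```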
Final Thoughts
AI hallucination is a congenital flaw of large language models. It won't disappear anytime soon, any more than we stopped building cars because braking distance can never be zero. For developers and everyday users alike, the goal isn't to abandon AI; it's to learn to spot the cracks and build a layer of human review into every critical workflow.
A model that can write fluently in classical Chinese is genuinely impressive. But if it can change your mother's surname while doing it — well, that's a different story. 🥲
Original address:
https://auraimagai.com/en/the-fatal-flaw-of-ai-hallucination/

