Ayesha Shahzad

Hallucinations and AI: Scary or Not?

The usual definition of a hallucination is believing or saying something that never happened, that nobody said, and that nobody lived through. Now imagine someone with this condition becoming influential, with millions of followers who take their words as truth or ultimate knowledge. That's a scary thought.

AI hallucinates when it creates or invents information that never happened or existed: making up a fact, a conclusion, or a solution that doesn't reflect reality. One of the reasons this happens is flawed training data. Predictive models are built on patterns learned from the data they're given, and gaps or errors in that data can lead to inaccurate results.

Most people are using GPT as their new search engine, and even if they are not, we have Gemini popping up on every Google search now.

So what's wrong with that? Well, since GPT is an LLM, it's trained to predict the next word in the text, not to fact-check it. For example, you may ask it to write a fact-based article, and it might include invented details that never happened because it couldn't find enough data on that topic. Similarly, you may ask it questions with little context or grounding, which is exactly what most people around me are doing.

GPT doesn't know facts; it generates responses based on statistical patterns in massive datasets, and it never checks whether what it says is true.
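To make that concrete, here's a minimal sketch (assuming the Hugging Face `transformers` library and the small GPT-2 model, which are my choices for illustration, not something from this post) of what "predicting the next word" actually looks like: the model ranks candidate next tokens by likelihood, with no notion of whether the completion is accurate.

```python
# Minimal sketch: an LLM scores "what token comes next", not "what is true".
# Assumes `pip install torch transformers`; GPT-2 is just an illustrative model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Ranked purely by statistical likelihood, not by factual accuracy.
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```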

How Can We Prevent Hallucinations (Prompt Level)?

Always ask questions with context: structure your prompts with the issue, the solution you're looking for, and some backstory (yes, you shouldn't give up on learning just because we have LLMs now, put in a bit of work! 🙂).
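As a rough illustration (the scenario and wording below are invented for the example, not a fixed template), a structured prompt might look like this:

```python
# Illustrative only: a context-rich prompt with the issue, what was tried,
# and the kind of answer wanted. The scenario itself is made up.
prompt = """
Context: I'm building a Flask API that reads user profiles from PostgreSQL.
Issue: the /users endpoint times out once the table grows past ~1M rows.
What I've tried: adding an index on the email column; it didn't help.
What I need: likely causes and how to diagnose the slow query.
If you're not sure about something, say so instead of guessing.
"""
print(prompt)
```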

Prevention at the Engineering Level:

There's a lot that goes into this; here's a general overview of some of those techniques:

Using RAG:

Combining an LLM with Retrieval-Augmented Generation (RAG), which grounds answers by referring to a corpus of verified, relevant documents.
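Here's a minimal sketch of the idea (the corpus, the keyword-overlap retriever, and the prompt wording are all placeholders; real systems typically use an embedding model and a vector database):

```python
# Toy RAG flow: retrieve relevant documents, then force the model to answer
# only from them. Everything here is a simplified placeholder.
corpus = [
    "Policy doc: refunds are issued within 14 days of purchase.",
    "Policy doc: support is available Monday to Friday, 9am to 5pm.",
]

def retrieve(question: str, k: int = 1) -> list:
    # Toy retriever: rank documents by word overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. "
        "If the answer isn't there, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The built prompt is what would be sent to the LLM.
print(build_prompt("How long do refunds take?"))
```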

Fine-Tuning with Verified Data:

Continue training the model on a curated, fact-checked dataset so its answers lean on verified information.

Contrastive Learning / Denoising Objectives:

Show the model contrasting pairs (a correct answer and an incorrect one) and train it to score the correct answer higher.
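As a toy sketch (assuming PyTorch; the random tensors stand in for real answer representations, and real setups score full model outputs rather than fixed embeddings):

```python
# Toy contrastive objective: push scores of correct answers above scores of
# hallucinated ones by at least a margin. All inputs here are placeholders.
import torch
import torch.nn as nn

scorer = nn.Linear(768, 1)                    # toy "answer quality" scorer
loss_fn = nn.MarginRankingLoss(margin=1.0)

correct = torch.randn(8, 768)                 # stand-in: embeddings of factual answers
hallucinated = torch.randn(8, 768)            # stand-in: embeddings of made-up answers

score_pos = scorer(correct).squeeze(-1)
score_neg = scorer(hallucinated).squeeze(-1)
target = torch.ones_like(score_pos)           # +1 means "first input should rank higher"
loss = loss_fn(score_pos, score_neg, target)  # penalizes when it doesn't
loss.backward()
```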

Chain-of-Thought (CoT) Prompting:

Ask the model to break the problem down into explicit intermediate steps; working through the steps improves its reasoning on multi-step questions.
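For example (the question and wording are invented for illustration), here is the same request with and without a step-by-step instruction:

```python
# Illustrative only: the same question, plain vs. with a chain-of-thought cue.
question = "A train leaves at 14:40 and the trip takes 1 hour 35 minutes. When does it arrive?"

plain_prompt = question

cot_prompt = (
    f"{question}\n"
    "Think step by step: add the hours first, then the minutes, "
    "carry over if the minutes pass 60, and only then state the final time."
)
```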


This was a very basic overview of AI hallucinations. Many people (myself included) are too lazy to cross-check or fact-check these things, so take this as a reminder not to lose your academic integrity, especially when sharing content with a broader audience.


✍️ Originally published on [Medium](https://medium.com/@ayeshashahzad2800/hallucinations-and-ai-scary-or-not-8132fb537c84)

