Cassidy Williams for Contenda

Originally published at blog.brainstory.ai

AI Hallucinations are still around

Did y'all see the GPT-4 announcement? Our team got to play around with it a bunch and it's been really interesting to see the results.

Initial impressions

We started off by asking it for some fun things, like "name the top 5 spots we should check out in Barcelona," and the results looked pretty good (and descriptive)!

  1. La Sagrada Familia: This iconic basilica, designed by the famous architect Antoni Gaudí, is a must-visit destination in Barcelona. Its unique architecture, stunning facades, and intricate details make it a remarkable landmark.
  2. Park Güell: Another masterpiece by Antoni Gaudí, Park Güell is a beautiful public park featuring colorful mosaic sculptures, fascinating buildings, and panoramic views of the city. It is a UNESCO World Heritage Site and a perfect place to relax and explore.
  3. La Rambla: This popular pedestrian street stretches for 1.2 kilometers and is a bustling hub of activity, lined with shops, cafes, street performers, and markets. Stroll down La Rambla to experience the lively atmosphere and visit La Boqueria, the famous food market.
  4. Gothic Quarter (Barri Gòtic): This historic neighborhood is the heart of Barcelona, featuring narrow streets, medieval architecture, and beautiful squares. Key attractions include the Barcelona Cathedral, Plaça Reial, and the remains of the Roman wall.
  5. Casa Batlló and Casa Milà (La Pedrera): Both of these unique buildings designed by Antoni Gaudí are located on Passeig de Gràcia, a famous shopping street. Casa Batlló is known for its colorful facade and imaginative design, while Casa Milà, also known as La Pedrera, features stunning rooftop chimneys and a wavy stone exterior.

We loved asking it to write songs about Mario Kart and having it explain simple concepts, and we were really happy with some of its accuracy improvements over its predecessors.

Buuuut... hallucinations still happen

I admit, though the output is definitely more "human-like" than that of earlier versions of GPT, there are still a lot of "hallucinations" that occur.

What are hallucinations, exactly?

AI hallucinations refer to instances where an AI generates outputs that aren't grounded in reality or based on the provided input. These outputs seem to come from the AI's "imagination," similar to how human hallucinations don't accurately represent the reality around us.

We jokingly say, "oh, it's just when robots lie to you," but there's more to it than that. When you're building an artificial intelligence tool, the AI you build is trained on a large amount of data and learns patterns or associations from that data. Your AI will do its tasks based on the knowledge it has gained, and only the knowledge you've trained it with (so, for example, AlphaGo is very specifically trained to play Go, not something like poker).

After training, hallucinations can occur when your AI generates outputs that aren't based on the patterns or associations it learned from the data, but are instead created by the system's internal biases. This sort of thing happens when an AI is exposed to incomplete, ambiguous, or just plain noisy input, or, in unfortunate cases, a biased dataset. Hallucinations look like, for example, broken URLs, inaccurate or outdated information, or output that looks plausible but simply isn't true.

Do these hallucinations happen often?

While playing more deeply with GPT-4, I asked it for artsy things like guitar tabs and art instructions, and a lot of the output it gave me was very cool-looking but definitely not accurate to what I asked for. I was really excited about a technical tutorial it output for me at one point, but after a couple of hours of following it and learning a technology I had never used before, I realized that it was teaching me a very outdated practice with the API I was learning. Oops.

Another example of this comes from a gal named Christy I spoke with recently, who asked, "Could I ask GPT-4 to write a 200-word article about the 10 most visited beaches in California in a given year, and return it with reference and source materials as hyperlinks at the end of the article?"

You totally can do this, of course, but when I plopped that prompt into GPT-4, about half of the links it output were not real websites (or were broken, or pointed to the wrong place). The AI is pulling only from its training data and what it thinks sounds right, not from a real source of truth!
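If you want to sanity-check links like that yourself, one low-tech approach is to just try fetching every URL the model hands you. Here's a minimal sketch in TypeScript (the URLs below are made-up placeholders, not GPT-4's actual output, and this naive version skips plenty of real-world edge cases):

```typescript
// Naive link checker: flags AI-generated URLs that don't resolve.
async function checkLinks(urls: string[]): Promise<void> {
  for (const url of urls) {
    try {
      // A HEAD request is cheap: we only care whether the page exists.
      const res = await fetch(url, { method: "HEAD" });
      console.log(res.ok ? `OK  ${url}` : `BAD ${url} (status ${res.status})`);
    } catch {
      // DNS failure, dead domain, malformed URL, etc.
      console.log(`BAD ${url} (unreachable)`);
    }
  }
}

// Placeholder URLs for illustration only:
checkLinks([
  "https://www.parks.ca.gov/",            // a real site
  "https://example.com/top-beaches-2021", // the kind of plausible-but-fake link GPT can invent
]);
```

Some servers reject HEAD requests outright, so a production version would fall back to a GET, but even this rough check catches the completely invented domains.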

What's the point then?

That being said, asking for smaller things still works really well.

Here's an example of a function I asked it to write, and it's pretty amazing to see how it wrote the code, commented it, and also included instructions on how to use it.
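(The original post embedded the output directly. As a stand-in, here's an illustrative sketch of the kind of thing it produced, assuming a prompt like "write me a debounce function in TypeScript"; the specific function, comments, and usage notes below are my reconstruction, not GPT-4's verbatim output.)

```typescript
// Debounce utility: delays calling `fn` until `waitMs` milliseconds have
// passed without another call. Useful for search boxes, resize handlers, etc.
function debounce<T extends (...args: any[]) => void>(
  fn: T,
  waitMs: number
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (...args: Parameters<T>) => {
    // Reset the timer on every call so `fn` only fires after a quiet period.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: log the search term only after the user stops typing for 300ms.
const logSearch = debounce((term: string) => console.log("Searching:", term), 300);
logSearch("ha");
logSearch("hallu");
logSearch("hallucinations"); // only this call actually logs
```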

Not all prompts will produce hallucinations, but if you want to avoid them completely, what you do need is a source of actual truth.
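One common pattern for wiring in that source of truth is to paste your actual content into the prompt and instruct the model to answer only from it. Here's a minimal sketch, assuming you're calling OpenAI's chat completions endpoint from Node; the source text, system prompt wording, and variable names are all placeholders:

```typescript
// Ground the model in a source of truth: pass your own content in the
// prompt and instruct the model to answer ONLY from that content.
const sourceText = "..."; // your actual article, docs, or transcript goes here

async function groundedAnswer(question: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        {
          role: "system",
          content:
            "Answer using ONLY the provided source text. " +
            "If the answer isn't in the source, say you don't know.",
        },
        { role: "user", content: `Source:\n${sourceText}\n\nQuestion: ${question}` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

This doesn't make hallucinations impossible, but it gives the model something real to lean on instead of whatever "sounds right."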

Okay Cassidy, time to be a shill

Thank you, heading text, I will: at Contenda, we use your content as our source of truth so that the output that you get is as accurate and hallucination-free as possible!

This approach is what we've found to be the most effective for generating real, valuable content for the folks who want to hear from you, rather than SEO fluff pieces, like this one (juuuust kidding).

Have you seen weird hallucinations in your AI projects? We'd love to see 'em!

Top comments (2)

Paco Hope

These anthropomorphised terms like "hallucinations" imply something that isn't there. The tool has literally no way to know if what it writes is true or false. It doesn't care about truth. It strings together words that are probabilistically likely. Likely combinations of words often are true. But sometimes a phrase is perfectly likely and isn't true. The AI doesn't know and doesn't care. It does not "see." It can't "hallucinate." All we are seeing, when its probabilistically likely sentences are false, is that likelihood has nothing to do with truth. Articles like this confuse people. They don't help people understand the tech or put the right kind of skepticism around the tech to use it safely in their lives.

Philip John Basile

I see characters.