Master LLM Hallucinations πŸ’­

Building with AI and LLMs is now a must-know for every developer, and every application is trying to integrate AI models. But hallucinations, where AI models generate incorrect or unverified information, remain an unsolved problem.

"Ughh ChatGPT - I told you to NOT make stuff up!"


Andrej Karpathy recently shared his take on hallucinations on Twitter:

"The LLM has no "hallucination problem". Hallucination is not a bug, it is LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it."

So how do we fix it?

Chain-of-Verification (CoVe), a technique introduced by researchers at Meta, is one way. Let's dive into the high-level process of CoVe and then explore how we implemented a CoVe prompt template using AIConfig that you can use to reduce hallucinations in your LLM-powered apps.

The Chain-of-Verification (CoVe) Process πŸ”—

As documented in the paper, the process involves four crucial steps (sketched in code below):

1️⃣ Generate Baseline: Given a query, the Large Language Model (LLM) generates a response.
2️⃣ Plan Verification(s): With the query and baseline response, the system formulates a list of verification questions that help surface potential inaccuracies in the original response.
3️⃣ Execute Verification(s): Each verification question is answered, and then cross-checked against the original response to discern inconsistencies or flaws.
4️⃣ Generate Final Response: If inconsistencies are found, a revised response is generated, factoring in the results from the verification process.
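
Before we get to AIConfig, here's a minimal sketch of those four steps in plain Python using the official openai client. The prompt wording, helper names, and model choice are illustrative assumptions, not the exact prompts from the paper:

```python
# Minimal Chain-of-Verification sketch with the openai Python client.
# Prompt wording and model choice are illustrative, not the paper's exact prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Single-turn helper around the chat completions API."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def chain_of_verification(query: str) -> str:
    # 1. Generate baseline response
    baseline = ask(query)

    # 2. Plan verification questions about the baseline
    questions = ask(
        f"Question: {query}\nAnswer: {baseline}\n"
        "List verification questions that would check the facts in this answer."
    )

    # 3. Execute verifications independently of the baseline
    verifications = ask(questions)

    # 4. Generate a final, revised response using the verification results
    return ask(
        f"Original question: {query}\nOriginal answer: {baseline}\n"
        f"Verification Q&A:\n{verifications}\n"
        "Rewrite the answer, correcting anything the verifications contradict."
    )


print(chain_of_verification("Name some politicians who were born in New York."))
```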

Integrate CoVe into your App with AIConfig 💡

We've brought the CoVe technique to life using AIConfig, streamlining the process to help reduce hallucinations in your LLM applications.

Using AIConfig, we can separate the core application logic from the model components (prompts, model routing parameters, etc.). Here's what the prompt template looks like:

1️⃣ GPT-4 + Baseline Generation prompt: This sets the foundation by generating the initial response using GPT-4.
2️⃣ GPT-4 + Verification prompt: This prompt creates a series of verification questions based on the initial response.
3️⃣ GPT-4 + Final Response Generation prompt: Leveraging the findings from the verification stage, this prompt generates a final, more reliable response.

πŸ”— AIConfig CoVe: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification
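
Here's a rough sketch of driving those three prompts with the aiconfig Python SDK. The config filename, prompt names, and parameter key below are assumptions for illustration; check the cookbook linked above for the exact ones:

```python
# Sketch of running the CoVe prompt chain through AIConfig.
# The config path, prompt names, and param key are assumptions; see the cookbook.
import asyncio

from aiconfig import AIConfigRuntime, InferenceOptions


async def run_cove(query: str) -> str:
    config = AIConfigRuntime.load("cove.aiconfig.json")  # hypothetical filename
    options = InferenceOptions(stream=False)

    # 1. Baseline generation (prompt and param names are hypothetical)
    await config.run("baseline_response_gen", params={"question": query}, options=options)

    # 2. Verification questions + answers
    await config.run("verification", options=options)

    # 3. Final, verified response
    await config.run("final_response_gen", options=options)
    return config.get_output_text("final_response_gen")


print(asyncio.run(run_cove("Name some politicians who were born in New York.")))
```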

Want to see it in action? πŸ‘€ Try out our demo on Streamlit!

🎁 Streamlit App: https://chain-of-verification.streamlit.app/


Are you already using AIConfig or CoVe in your projects? Feel free to share your experiences in the comments below.

Liked the post?

Show your support by starring our project on GitHub! ⭐️ https://github.com/lastmile-ai/aiconfig


Top comments (1)

Himanshu Bamoria β€’

Checked out the open source library @tanyarai! Looks pretty good!
We know how powerful these techniques are for improving the quality of LLM-generated outputs.

We've built Athina AI, an open source platform for monitoring and evaluating LLM responses.

I believe we can collaborate on Development X Evaluation workflows. Let me know what you think.

More info - Launch Post
