
Matheus Lima

Posted on • Originally published at verossim.hashnode.dev

How to Stop LLM Hallucinations from Crashing Your React UI (Fixing AI_JSONParseError)

Building Generative UIs with tools like the Vercel AI SDK is incredibly powerful until it suddenly isn't.

You've set up your tool calls, your streaming is fast, and everything works flawlessly 99% of the time. But LLMs are inherently non-deterministic. Eventually, the model will drop a quote mark in the middle of a JSON stream, or decide to wrap its raw output in a markdown code fence (```json).

When that happens, your application doesn't just show a typo. It throws a synchronous parsing error. And in React, an unhandled error means one thing: The White Screen of Death (WSOD).
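To make the failure mode concrete, here is a minimal sketch of what that crash looks like at the JavaScript level. The payloads are invented for illustration; the fence string is built with `repeat()` so the example itself stays fence-safe:

```javascript
// Built with repeat() so this snippet doesn't contain a literal markdown fence.
const fence = '`'.repeat(3);

// What the model *should* return:
const goodPayload = '{"symbol": "AAPL", "prices": [182.3, 185.1]}';

// What it sometimes returns instead: the same JSON wrapped in a markdown fence.
const hallucinated = fence + 'json\n' + goodPayload + '\n' + fence;

JSON.parse(goodPayload); // parses fine

let crash = null;
try {
  JSON.parse(hallucinated); // throws synchronously, mid-render
} catch (err) {
  crash = err;
}
console.log(crash instanceof SyntaxError); // true
```

Because `JSON.parse` throws synchronously, the error surfaces during render, which is exactly the kind of error React escalates to an unmount.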

If you look at the Vercel AI SDK GitHub repository, you'll see developers fighting this in the trenches:

  • Issue #13514: Tool streaming causes malformed JSON.

  • Issue #4906: The LLM randomly outputs Markdown blocks instead of pure objects, crashing the preflight parsing.

  • Issue #1167: RSC stream aborts completely unmount the React tree.

Let's look at why standard React tools fail here, and how to elegantly quarantine these hallucinations.

The Trap of Standard Error Boundaries

When a standard React component throws an error, the official advice is to wrap it in an <ErrorBoundary>. But Generative UI is different.

If you are building an AI Chat interface, the tool component (let's say, a financial chart generated by the LLM) is a child of the message list. If the LLM hallucinates the schema (e.g., returns a string instead of an array) and your chart component tries to .map() over it, it crashes.
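Stripped of React, the schema-hallucination crash is just this (sample data invented for illustration):

```javascript
// The chart expects an array of data points:
const expected = { prices: [182.3, 185.1, 181.7] };
const doubled = expected.prices.map((p) => p * 2); // works

// The model hallucinated the schema: a string where an array should be.
const hallucinated = { prices: '182.3, 185.1, 181.7' };

let crash = null;
try {
  hallucinated.prices.map((p) => p * 2); // strings have no .map
} catch (err) {
  crash = err; // TypeError: hallucinated.prices.map is not a function
}
console.log(crash instanceof TypeError); // true
```

Inside a render function, that `TypeError` propagates up to the nearest error boundary, or, if there isn't one, takes down the whole tree.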

If you rely on a generic Error Boundary higher up in your tree, React unmounts the entire chat history to show the fallback UI. Your user just lost their entire 20-minute context because the AI forgot a comma. That is unacceptable UX.

The Ugly Fix

To prevent this, developers end up writing massive, defensive try/catch blocks inside every single component, or attempting to write regex-based "JSON cleaners" to sanitize strings mid-stream. It pollutes your codebase with error-handling logic that obscures the actual business logic.
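For the curious, here is a minimal sketch of the kind of defensive "cleaner" this paragraph describes. `parseLLMJson` is a hypothetical helper, not part of any SDK, and it illustrates the fragility: it only handles the markdown-fence case, not missing quotes, truncated streams, or trailing commas:

```javascript
// Hypothetical defensive parser: the kind of noise that ends up
// copy-pasted into every component that touches LLM output.
function parseLLMJson(raw) {
  try {
    return JSON.parse(raw);
  } catch {
    // Strip a leading ```json fence and a trailing ``` fence, if present.
    const stripped = raw
      .replace(/^\s*`{3}(?:json)?\s*/i, '')
      .replace(/\s*`{3}\s*$/, '');
    try {
      return JSON.parse(stripped);
    } catch {
      return null; // still broken: missing quote, truncated stream, etc.
    }
  }
}
```

Every caller now has to branch on `null`, and the regex silently fails on the next hallucination variant the model invents.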

The Elegant Fix: Quarantining Generative UI

We need a surgical approach: a boundary specifically designed for the unpredictable nature of AI payloads. That's why I built <AIBoundary /> as part of the CogniCatch open-source library.

Instead of generic error handling, AIBoundary creates an isolated quarantine zone around your GenUI components. If the LLM hallucinates or spits out malformed JSON, the boundary intercepts the crash, keeps the rest of the application (like the chat history) perfectly intact, and gracefully handles the failure in-place.

How to use it

First, install the package:

npm install @cognicatch/react

Then, simply wrap the component responsible for rendering the LLM's output:

import { AIBoundary } from '@cognicatch/react';
import { FinancialChart } from './components/FinancialChart';

export function ChatMessage({ toolInvocation }) {
  // If toolInvocation.args comes back malformed,
  // FinancialChart will crash. AIBoundary protects the app.

  return (
    <AIBoundary
      mode="manual"
      showRawData={true} 
      rawPayload={toolInvocation.args} 
      onRecover={() => triggerRetryInChatStream()}
    >
      <FinancialChart data={toolInvocation.args} />
    </AIBoundary>
  );
}

What happens under the hood?

  1. Isolation: If FinancialChart throws TypeError: undefined is not a function because of a schema hallucination, the error is stopped dead in its tracks. The parent component (the chat window) doesn't even flinch.

  2. Contextual Fallback: Instead of a generic "Something went wrong" message, CogniCatch displays an elegant, purpose-built Generative UI fallback.

  3. Recovery: By providing the rawPayload, developers (or users, depending on your setup) can actually see the hallucinated output, and the onRecover action allows you to instantly trigger a retry to the LLM without reloading the page.

Open-Source vs. Auto Mode

CogniCatch is an open-core project.

The mode="manual" (open-source) is completely free and runs locally. It isolates the crash, prevents the White Screen of Death, and exposes the rawPayload so your users (or you) can see the hallucinated output, allowing them to hit "Try Again" via the onRecover callback.

But I also built a Pro Tier (mode="auto") for a fully hands-off UX:

  1. Automatically translates the error into an empathetic message in the user's native browser language.

  2. Visually parse and beautifully format the malformed JSON so developers and users can easily read what went wrong.

Join the Early Adopter Waitlist at cognicatch.dev (Early birds get 50% off the Pro tier for the first 6 months!).

Stop letting LLMs dictate your app's stability

We shouldn't let the non-deterministic nature of AI degrade the deterministic reliability of our React applications. By treating LLM outputs as highly volatile external data and wrapping them in specialized boundaries, you guarantee that an AI's mistake never becomes a user's frustration.

If you are tired of building AI wrappers that crash on unpredictable streams, give CogniCatch a try. It's open-source, beautifully styled out of the box, and plays nicely with your telemetry.

Happy (and safe) prompting!


(Note: If you need to silently log these hallucinations to Sentry or Datadog without breaking the UI, the package also exposes an onError callback designed specifically to avoid blinding your observability tools. You can read my previous article about that here).
