If you are building an AI wrapper, a RAG pipeline, or working with Agentic AI, you already know the dirtiest secret in the space:
LLMs are terrible at returning reliable JSON.
You can use OpenAI's JSON mode. You can write strict system prompts. But eventually, in a production environment, the model will hallucinate a trailing comma, use single quotes, or just forget a closing bracket.
And when that hits your Node backend?
```javascript
JSON.parse(llmOutput); // 🔥 Crashes your Express server thread.
```
I got tired of writing brittle Regex hacks to catch trailing commas, so I built a dedicated API and open-sourced the SDKs to fix it permanently.
Here is how I built a "Reliability Layer" that auto-repairs JSON and validates it against a schema before it ever touches your database.
Why "Just use Regex" is a trap
When you first hit the `SyntaxError: Unexpected token` bug, the reflex is to write a utility function:
- Strip the ```` ```json ```` markdown fences.
- Regex out the trailing commas.
This works until the LLM nests a broken object inside an array, or completely omits a required key that your Postgres database is expecting. Wrapping it in a try/catch just means you drop the data entirely and have to waste tokens re-prompting the AI.
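To make the failure mode concrete, here is a sketch of that "quick fix" utility (`naiveRepair` is a hypothetical name for illustration, not code from the repo). It handles the easy cases and still dies on the nested one:

```javascript
// A typical "quick fix" utility: strip markdown fences,
// delete trailing commas, hope for the best.
function naiveRepair(raw) {
  const stripped = raw.replace(/```(?:json)?/g, "").trim();
  const noTrailingCommas = stripped.replace(/,\s*([}\]])/g, "$1");
  return JSON.parse(noTrailingCommas);
}

// Works for the easy case...
console.log(naiveRepair('```json\n{"name": "Harsh",}\n```')); // { name: 'Harsh' }

// ...but still throws the moment the model drops a closing bracket.
try {
  naiveRepair('{"user": {"name": "Harsh", "tags": ["a", "b"}');
} catch (err) {
  console.log("Still crashes:", err.name); // SyntaxError
}
```

Every new hallucination pattern means another regex, and you are slowly reimplementing a JSON parser in your utils folder.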
Large Language Models generate probabilistic text. Backends require deterministic structure. You can't mix the two without a shield in the middle.
The Architecture: llm-json-guard
I built llm-json-guard (available on NPM and PyPI).
Instead of doing the heavy lifting on the client device or main server thread (which is terrible for performance), it routes the broken string to a fast RapidAPI backend I deployed called LLM JSON Sanitizer & Schema Guard.
Instead of:
LLM → JSON.parse() → Runtime Failure
You do this:
LLM → llm-json-guard → Business Logic
Dropping it into Express
I wanted the SDK to be basically zero-friction. In my demo repo, I have a file called 03-express-integration.js that shows how it works as middleware.
```javascript
import { LLMJsonGuard } from "llm-json-guard";

const guard = new LLMJsonGuard({ apiKey: process.env.RAPIDAPI_KEY });

// The LLM hallucinated single quotes and a trailing comma
const brokenAIOutput = "{'name': 'Harsh', 'age': 21,}";

const result = await guard.sanitize(brokenAIOutput);
```
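As Express middleware, the pattern is a thin wrapper that sanitizes the payload before any route handler runs. The sketch below is my own illustration of the idea (the field names `req.body.llmOutput` and `req.llmData` are hypothetical, not necessarily the demo repo's exact shape):

```javascript
// Hypothetical middleware factory: sanitize the raw LLM string
// on req.body before any route handler touches it.
function guardMiddleware(guard) {
  return async (req, res, next) => {
    const result = await guard.sanitize(req.body.llmOutput);
    if (!result.success) {
      // Reject up front instead of letting JSON.parse blow up downstream.
      return res
        .status(422)
        .json({ error: "Unrepairable LLM output", details: result.errors });
    }
    req.llmData = result.data; // clean, parsed object for the handler
    next();
  };
}
```

Wire it up like any other middleware: `app.post("/ingest", guardMiddleware(guard), handler)`, and your handler only ever sees parsed, repaired objects.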
The "Confidence Score"
If you look at the terminal output when you run this, it doesn't just hand you the JSON. It tells you exactly what it did.
```json
{
  "success": true,
  "stage": "parsed_only",
  "meta": {
    "repaired": true,
    "confidence": 0.88
  },
  "data": {
    "name": "Harsh",
    "age": 21
  },
  "errors": []
}
```
Notice the confidence: 0.88.
If the AI completely loses its mind and outputs a massive block of text, the repair engine has to violently alter the string to extract a JSON object. The confidence score drops. If the score is too low, your app knows to reject the payload, even if it's technically valid JSON now.
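A gate on that score is a few lines of app code. The response fields come from the shape above; the 0.75 cutoff is an arbitrary choice for illustration, tune it for your own risk tolerance:

```javascript
// Reject repaired payloads whose confidence falls below an app-chosen threshold.
const MIN_CONFIDENCE = 0.75; // arbitrary cutoff for illustration

function acceptPayload(result) {
  if (!result.success) return false;
  // Untouched JSON is fine; repaired JSON is judged by its score.
  if (result.meta.repaired && result.meta.confidence < MIN_CONFIDENCE) return false;
  return true;
}

console.log(acceptPayload({ success: true, meta: { repaired: true, confidence: 0.88 } })); // true
console.log(acceptPayload({ success: true, meta: { repaired: true, confidence: 0.41 } })); // false
```

Low-confidence payloads are the ones worth re-prompting for; high-confidence repairs can flow straight through.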
Enforcing Strict Schemas
Repairing the syntax is only step one. If your database requires an age field, and the LLM just forgot to include it, valid JSON won't stop your app from breaking later.
You can pass a standard JSON Schema into the .guard() method.
```javascript
const userSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "number" }
  },
  required: ["name", "age", "role"] // Enforce the contract
};

// Returns { success: false, stage: 'validation_failed' } because 'role' is missing
const result = await guard.guard(brokenAIOutput, userSchema);
```
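For intuition, the `required` keyword is the classic JSON Schema contract check. A stripped-down local version (my own sketch, not the SDK's implementation) shows exactly what the validation stage is catching:

```javascript
// Minimal local version of a JSON Schema `required` check, for intuition only.
function missingRequiredKeys(schema, data) {
  return (schema.required || []).filter((key) => !(key in data));
}

const userSchema = { required: ["name", "age", "role"] };

console.log(missingRequiredKeys(userSchema, { name: "Harsh", age: 21 })); // ["role"]
```

Catching the missing `role` here, before the insert, beats a NOT NULL violation bubbling up from Postgres.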
The Benchmark
I also ran this through a standard performance test (04-benchmark.js in the repo). Because the heavy AST parsing and schema compilation are cached on the API layer, it adds minimal overhead to your request cycle while still enforcing your schema on every payload.
Try it out
If you are dealing with broken LLM outputs, don't write another regex parser.
- 📦 Node.js: `npm i llm-json-guard`
- 🐍 Python: `pip install llm-json-guard`
- 🛠️ Demo Repo: Check out the Express & benchmark examples here
- 🔑 API Key: You can grab a free tier key on RapidAPI here to test it out.
Let me know what edge cases your LLMs are failing on; I'm actively updating the parsing engine to catch weirder hallucinations!