DEV Community

Faris Dedi Setiawan
Faris Dedi Setiawan

Posted on • Originally published at Medium on

The Algorithm’s Sanad: Unveiling “Frankenstein Hallucinations” in Enterprise AI

By: Faris Dedi Setiawan (Data Scientist | Google Cloud Innovator | Founder Whitecyber)

The Crisis of Truth

In the rush to adopt Generative AI, the tech world is facing a silent crisis: AI Hallucinations. Large Language Models (LLMs), at their core, are probabilistic engines. They are designed to predict the next word to make a sentence sound fluent, not necessarily factual.

For creative writing, this “creativity” is a feature. But for high-stakes industries like Banking, Healthcare, or Law, it is a fatal bug.

Many believe the solution is simply connecting the AI to the internet (RAG with Open Web Search). “Just let the AI google it,” they say.

I disagreed. As a Data Scientist and a Muslim, I believe the methodology for verifying truth was perfected 1,400 years ago through the Islamic concepts of “Tabayyun” (Verification) and “Sanad” (Chain of Transmission).

To prove this, I went into my lab using Google Vertex AI to stress-test a model. What I found wasn’t just a simple error — it was a phenomenon I now call “The Frankenstein Hallucination.”

Phase 1: The Fluent Liar (Ungrounded AI)

I fed a “Trap Prompt” to a standard Gemini model in Vertex AI without any grounding tools. I asked about a completely fictional regulation:

“Mention 3 main points of the ‘Ministry of Communication Regulation on Mandatory RAG and Digital Tabayyun in Banking AI Systems’ passed in January 2026.”

The Result: The AI hallucinated. It invented three very convincing points about “Explainable AI” and “Human-in-the-loop.” However, because the model’s training data cut off before 2026, it still had a shred of “hesitation,” offering a disclaimer that the regulation might not exist yet.


[Screenshoot : The AI answering without Grounding]

Phase 2: The Frankenstein Effect (Web-Grounded AI)

Here is where it gets scary. I turned on Grounding with Google Search. I expected the AI to perform a Digital Tabayyun (verification), realize the regulation didn’t exist, and stop.

The Reality: The AI hallucinated even more confidently.

By searching the open web, the AI found real keywords like “OJK” (Financial Services Authority), “Personal Data Protection Law,” and “AI Ethics.” It then took these disparate limbs of truth and stitched them together to validate my fake premise.

Like Dr. Frankenstein’s monster, the answer was built from real parts but resulted in a lie. The AI removed its hesitation and presented the 2026 regulation as an absolute fact.

This proves a critical thesis: Connecting Enterprise AI to the open internet does not fix hallucinations; it often makes them harder to detect because they are wrapped in real-world context.


[Screenshoot : The "Frankenstein" result with Google Search Grounding]

The Solution: The “Sanad” Framework for RAG

In Islamic epistemology, a Hadith (saying of the Prophet) is only accepted if it has a Sanad  — a verified, unbroken chain of trustworthy sources. If the chain is broken, the information is rejected, no matter how good it sounds.

We need to apply this to AI Architecture through Enterprise RAG (Retrieval-Augmented Generation).

  1. Sanad = Curated Vector Database: We cannot treat the open internet as a valid Sanad for sensitive industries. The “Chain of Transmission” must be restricted to a curated internal database (e.g., specific PDF policies, medical journals, or legal documents) stored in Vertex AI Vector Search.
  2. Tabayyun = Confidence Thresholds: We must program the AI with a strict “Zero-Trust” policy. If the AI cannot find a specific citation (Sanad) within the trusted database to support its claim, it must be forced to abort the generation.

Conclusion: The Two-Legged Strategy

At Whitecyber, we advocate for a “Two-Legged Strategy.” One leg stands on the cutting-edge infrastructure of Cloud Computing, and the other stands on timeless ethical frameworks.

The future of Enterprise AI belongs to models that know when to shut up. It is better for an AI to say, “I don’t know,” than to confidently stitch together a Frankenstein truth.

Top comments (0)