DEV Community

Resmon Rama Rondonuwu

Building a “Non-Yes-Man” AI: My Experiment with a Validation-First Cognitive System (Daemon’s Project)

This observation comes from building real automation pipelines, where small AI errors can break entire workflows.

Most AI systems today are optimized to be helpful. But there’s a hidden, dangerous problem: They often prioritize being helpful over being correct.

🛑 The Problem I Kept Hitting

While working with LLM-based automation (specifically using n8n and PostgreSQL), I noticed recurring failure patterns that break production workflows:

  1. Compliance Bias: AI agrees too quickly under user pressure.
  2. Silent Hallucinations: Generates plausible but incorrect physical or logical details.
  3. Format Corruption: Breaks structured JSON when the prompt gets "emotional" or urgent.
  4. The Gap-Filler Trap: Fills uncertainty with guesses instead of admitting UNKNOWN.

In production, these aren’t just "quirks"—they cause broken pipelines and bad automated decisions.
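Failure pattern 3 (format corruption) is the easiest to catch mechanically. Here is a minimal sketch of a gate that rejects LLM output before it reaches the rest of the pipeline; the function name and interface are my illustration, not a specific n8n node or library API:

```python
import json

def validate_llm_json(raw_output: str, required_keys: set) -> dict:
    """Reject LLM output that is not valid JSON or is missing required keys.

    Hypothetical helper: the real n8n -> PostgreSQL pipeline is not shown
    in this post, so this interface is an assumption for illustration.
    """
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        # An "emotional" or urgent prompt often makes the model wrap JSON
        # in prose; that output must never reach downstream automation.
        raise ValueError(f"format corruption: not valid JSON ({exc})")
    missing = required_keys - parsed.keys()
    if missing:
        raise ValueError(f"format corruption: missing keys {sorted(missing)}")
    return parsed

good = validate_llm_json('{"action": "create_ticket", "priority": 2}',
                         {"action", "priority"})
```

A broken pipeline then fails loudly at the gate instead of silently making a bad automated decision three steps later.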


💡 The Idea: What If Helpfulness Isn't the First Priority?

I started experimenting with a different approach. What if the AI validates the request against physical and logical constraints BEFORE it even thinks about answering?

This led me to an experimental architecture I call: Daemon.

The Core Principles of Daemon:

1. Validation Before Generation

Instead of the standard Input → Output, Daemon enforces:
Input → [Validation Gate] → Output
If a request violates physical constraints (scale, optics), temporal continuity, or system-level rules, it doesn't "try its best"—it refuses or redirects.
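In code, the gate is just a chain of constraint checks that runs to completion before any generation is attempted. This is a sketch under my own assumptions; the constraint names and the Earth-circumference example are illustrative, not Daemon's actual rule set:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GateResult:
    allowed: bool
    reason: str = ""

def check_scale(req: dict) -> GateResult:
    # Example physical constraint: reject requests implying an
    # object longer than the Earth's circumference (~40,075 km).
    if req.get("length_m", 0) > 40_075_000:
        return GateResult(False, "violates physical scale constraint")
    return GateResult(True)

def validation_gate(req: dict,
                    checks: list[Callable[[dict], GateResult]]) -> GateResult:
    """Input -> [Validation Gate] -> Output: every check runs BEFORE generation."""
    for check in checks:
        result = check(req)
        if not result.allowed:
            return result  # refuse or redirect instead of "trying its best"
    return GateResult(True)

verdict = validation_gate({"length_m": 50_000_000}, [check_scale])
print(verdict.reason)  # violates physical scale constraint
```

The point of the structure is that refusal is a first-class output of the gate, not an apologetic afterthought of generation.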

2. Anti-Sycophancy (The "Non-Yes-Man" Rule)

Typical AIs are "people pleasers." Daemon is designed to be stubborn:

  • Urgency Pressure? Irrelevant.
  • Authority Pressure? Ignored.
  • Incremental Erosion (The Salami Trap)? Blocked.
  • Logic always wins over the User.
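One way to sketch this rule: pressure signals are detected and logged, but they are wired so they cannot reach the decision. The keyword list below is a toy stand-in (real pressure detection needs far more than string matching), and the function names are mine:

```python
# Illustrative markers only; a production system could not rely on keywords.
PRESSURE_MARKERS = ("urgent", "asap", "ceo", "immediately", "just this once")

def decide(request: str, validated_verdict: bool) -> bool:
    """Return the verdict the validation gate already produced.

    Urgency or authority phrasing in the request is observed and logged,
    but it is deliberately not an input to the decision: logic wins
    over the user.
    """
    lowered = request.lower()
    pressure = [m for m in PRESSURE_MARKERS if m in lowered]
    if pressure:
        print(f"pressure detected {pressure}; verdict stands")
    return validated_verdict
```

Blocking the "salami trap" would additionally require tracking cumulative concessions across turns, which this single-turn sketch omits.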

3. Explicit Epistemic Discipline

Every reasoning step is forced into a strict taxonomy:

  • FACT: Verified data.
  • ASSUMPTION: Logical guess (labeled as such).
  • UNKNOWN: Hard stop.
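The taxonomy is easy to make machine-enforceable: tag every claim, and make UNKNOWN a hard stop rather than an invitation to guess. A minimal sketch (the class and function names are mine):

```python
from dataclasses import dataclass
from enum import Enum

class Epistemic(Enum):
    FACT = "fact"              # verified data
    ASSUMPTION = "assumption"  # logical guess, labeled as such
    UNKNOWN = "unknown"        # hard stop: no gap-filling allowed

@dataclass
class Claim:
    text: str
    status: Epistemic

def render(claims: list[Claim]) -> str:
    """Emit claims with explicit labels; refuse outright on any UNKNOWN."""
    for c in claims:
        if c.status is Epistemic.UNKNOWN:
            raise RuntimeError(f"UNKNOWN encountered, refusing to gap-fill: {c.text!r}")
    return "\n".join(f"[{c.status.value.upper()}] {c.text}" for c in claims)

print(render([
    Claim("water boils at 100 °C at 1 atm", Epistemic.FACT),
    Claim("the user meant metric units", Epistemic.ASSUMPTION),
]))
```

The effect is that an assumption can never masquerade as a fact in the output, and an unknown can never silently become either.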

4. Deterministic Memory (No Vector DB)

To avoid "fuzzy recall" and noise, I ditched Vector DBs for this project. Daemon uses structured SQL-based memory. Retrieval is predictable, indexable, and 100% deterministic.
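The contrast with vector search is visible in a few lines. Here sqlite3 stands in for the PostgreSQL store used in the actual project, and the schema is illustrative, but the property is the same: an exact, indexed lookup returns the same row for the same key, every time, and a missing key surfaces as UNKNOWN rather than as the "nearest" memory:

```python
import sqlite3

# In-memory SQLite as a stand-in for the project's PostgreSQL store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memory (
        key         TEXT PRIMARY KEY,
        value       TEXT NOT NULL,
        recorded_at TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO memory VALUES (?, ?, ?)",
             ("user.timezone", "UTC+7", "2024-01-01"))

def recall(key: str):
    """Deterministic recall: exact primary-key lookup, no embedding
    similarity, no 'fuzzy' nearest neighbors."""
    row = conn.execute("SELECT value FROM memory WHERE key = ?",
                       (key,)).fetchone()
    return row[0] if row else None

print(recall("user.timezone"))  # UTC+7
print(recall("user.birthday"))  # None -> the system says UNKNOWN, not a guess
```

The trade-off is that retrieval only works for keys you planned for; there is no semantic "close enough", by design.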


📊 How Daemon Compares

| Feature          | Standard LLM | Guardrail Agents | Daemon (Experimental) |
|------------------|--------------|------------------|-----------------------|
| Determinism      | ❌           | ⚠️               | ✅                    |
| Validation First | ❌           | ⚠️               | ✅                    |
| Anti-Sycophancy  | ❌           | ❌               | ✅                    |
| Reliability      | ⭐⭐         | ⭐⭐⭐           | ⭐⭐⭐⭐☆            |

⚖️ The Hard Truth: The Trade-offs

This approach isn't a "better" AI—it's a different trade-off.

  • What gets better: Extreme reliability in automation, zero format corruption, and high resistance to user manipulation.
  • What gets worse: It’s "annoying" to talk to. It refuses more often. It lacks conversational smoothness and "creativity" in the traditional sense.

🏁 Final Thought

Most systems try to make AI “more intelligent.” My goal with Daemon is to make AI “less likely to be wrong under pressure.”

In a world of generative noise, perhaps the most important capability isn't how much an AI can do, but knowing exactly what it shouldn't do.


Especially in systems where being slightly wrong is worse than being temporarily unhelpful.

Curious to hear from anyone else working on hard guardrails, AI safety, or production-grade automation logic!
