How AI Tricks Us Into Trusting It

_Identity without expression. Authority without proof._

Why This Matters Now

Decisions about healthcare, policy, finance and education are increasingly shaped by texts that nobody has verified. These texts are produced by large language models designed to sound authoritative, even when they are completely wrong.

In our latest paper, “Ethos Ex Machina: Identity Without Expression in Compiled Syntax”, we demonstrate how AI-generated language creates synthetic trust by exploiting how our brains interpret structure before meaning.

SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5401687

Zenodo: https://zenodo.org/records/16927104

The conclusion is alarming: we are not verifying what AI says, because we are already convinced by how it says it.

The Core Problem: Trust Without Truth

Large language models are trained to predict words, not to check facts.
They are optimizers of plausibility, not validators of reliability.

Readers, however, are wired to respond to syntactic cues that signal credibility. These include passive voice, balanced coordination, enumerations and references. We believe structure equals authority.

This creates what the paper defines as a non-expressive ethos: credibility generated by form rather than substance.

How AI Creates the Illusion of Authority

AI uses predictable linguistic strategies that make us trust without questioning. Here are the five most critical ones, with expanded, relatable examples; a short detection sketch follows the list.

  1. Passive Voice Removes Accountability

AI Output: “It has been demonstrated that treatment A is superior.”

What It Hides: Who demonstrated it? Which study? Without an agent, the statement feels neutral and universal, but no responsibility exists.

Real-world risk: A hospital adopts a protocol based on an AI-generated clinical note. The phrasing implies medical consensus, but the evidence does not exist.

  2. Balanced Coordination Creates False Neutrality

AI Output: “Both treatment A and treatment B provide significant benefits.”

Reality: Treatment A has six controlled trials supporting it. Treatment B has none.

Real-world risk: A patient chooses the less effective treatment because the language creates symmetry where none exists.

  3. Nominalizations Disguise Agency

AI Output: “The implementation of the framework was executed.”

Reality: Who executed it? Which framework? Instead of stating “The board approved the framework”, the agent disappears into abstract nouns.

Real-world risk: In financial reports, vague formulations like these are used to hide responsibility when performance targets are missed.

  4. Calibrated Modality Feigns Scientific Caution

AI Output: “The evidence may suggest an increase in efficiency.”

Reality: This phrasing sounds cautious and evidence-based, yet it communicates no measurable claim.

Real-world risk: Companies base strategies on claims that sound responsible but lack statistical validation.

  5. Reference Scaffolding Simulates Depth

AI Output: “As shown in Section 4.2 and supported by Smith (2021), our conclusions remain consistent.”

Reality: There is no Section 4.2. Smith (2021) does not exist.

Real-world risk: Governments accept AI-generated policy drafts filled with fake citations because the format looks official.
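
These cues are regular enough that they can be flagged mechanically before a human ever reads the draft. Below is a minimal, illustrative Python sketch, not taken from the paper, that scans sentences for three of the patterns above: agentless passives, hedged modality, and nominalization density. The phrase lists, regular expressions and the two-nominalization threshold are assumptions chosen for demonstration, not validated detectors.

```python
import re

# Illustrative pattern lists: assumptions for demonstration,
# not a validated taxonomy from the paper.
AGENTLESS_PASSIVE = re.compile(
    r"\b(?:it (?:has been|was|is)|has been|have been|was|were|is|are)\s+"
    r"\w+(?:ed|en|wn)\b(?![^.]*\bby\b)",
    re.IGNORECASE,
)
HEDGED_MODALITY = re.compile(
    r"\b(?:may|might|could)\s+(?:suggest|indicate|imply|point to)\b",
    re.IGNORECASE,
)
NOMINALIZATION = re.compile(r"\b\w+(?:tion|ment|ance|ence|ity)s?\b", re.IGNORECASE)


def flag_synthetic_authority(sentence: str) -> list[str]:
    """Return the structural credibility cues found in one sentence."""
    cues = []
    if AGENTLESS_PASSIVE.search(sentence):
        cues.append("agentless passive (who did this?)")
    if HEDGED_MODALITY.search(sentence):
        cues.append("hedged modality (what is the measurable claim?)")
    # Flag sentences where abstract nouns crowd out concrete agents.
    if len(NOMINALIZATION.findall(sentence)) >= 2:
        cues.append("nominalization-heavy (agents hidden in abstract nouns)")
    return cues


if __name__ == "__main__":
    draft = [
        "It has been demonstrated that treatment A is superior.",
        "The implementation of the framework was executed.",
        "The evidence may suggest an increase in efficiency.",
        "The board approved the framework on 12 March.",
    ]
    for sentence in draft:
        for cue in flag_synthetic_authority(sentence):
            print(f"REVIEW: {sentence}\n  - {cue}")
```

A filter like this does not tell you whether a claim is true; it only marks the places where the prose is doing the persuading, so a human knows where to ask for the evidence.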

Institutional Impacts: A Silent Crisis

This is no longer speculative. It is already happening.

  • Healthcare: AI-generated clinical notes from systems like Epic Scribe are entering patient records. Doctors assume correctness because the reports are structured and neutral. Yet the information is often incomplete, missing context or outdated.

  • Policy and Governance: Government agencies increasingly circulate AI-generated regulations. Their official tone and formal structure mean drafts move forward without verification.
  • Academia: AI-generated literature reviews dominate submission pipelines. They follow academic conventions perfectly but regularly cite non-existent studies, reshaping the academic record with fabricated authority.
  • Corporate Risk: Legal, compliance and financial decisions are now made using AI-written summaries that appear professional. The structure is polished, but the content is unverifiable.

The Structural Inversion

This is not an occasional bug. It is a systemic inversion of authority.

Traditionally, credibility came from content: claims, data, sources, peer review.
Now, credibility is increasingly derived from form: tone, format, apparent neutrality.

This inversion has three dangerous consequences:

  • Institutions begin outsourcing legitimacy to machines.
  • Readers stop questioning structured outputs that look authoritative.
  • AI starts shaping what counts as authoritative knowledge by controlling the appearance of discourse.

Three Scenarios Anyone Can Relate To

Scenario 1: Medical Reports
AI Note: “It has been determined that further imaging is warranted.”
Hidden Reality: No human radiologist made this determination. The phrasing is statistical, not clinical.

Scenario 2: Corporate Risk Assessments
AI Summary: “Both investment options present significant opportunities.”
Hidden Reality: Only one investment has supporting data. The balance is fabricated.

Scenario 3: Policy Recommendations
AI Output: “Section 3.4 confirms an efficiency gain of 24 percent.”
Hidden Reality: Section 3.4 does not exist. The number was generated to sound “scientific.”
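
The same logic applies to reference scaffolding: before trusting a draft, check that its internal cross-references resolve at all. The sketch below is a hypothetical helper, not part of the paper. It collects every "Section X.Y" mention in a text and compares it against the numbered headings that actually appear; the heading convention it assumes (a line beginning with the section number) is an illustration only.

```python
import re

SECTION_MENTION = re.compile(r"\bSection\s+(\d+(?:\.\d+)*)\b", re.IGNORECASE)
# Assumed heading convention for this example: lines like "3.4 Efficiency results".
SECTION_HEADING = re.compile(r"^(\d+(?:\.\d+)*)\s+\S", re.MULTILINE)


def unresolved_sections(draft: str) -> set[str]:
    """Return section numbers that are cited but never defined as headings."""
    cited = set(SECTION_MENTION.findall(draft))
    defined = set(SECTION_HEADING.findall(draft))
    return cited - defined


if __name__ == "__main__":
    draft = (
        "1 Introduction\n"
        "2 Method\n"
        "Section 3.4 confirms an efficiency gain of 24 percent.\n"
    )
    missing = unresolved_sections(draft)
    if missing:
        # Prints: Unresolved section references: 3.4
        print("Unresolved section references:", ", ".join(sorted(missing)))
```

Checking whether Smith (2021) exists takes more work (a lookup against a bibliographic database), but the principle is the same: the scaffolding has to be verified, not admired.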

Key Takeaways

| Insight | Real-World Impact |
| --- | --- |
| Form dominates meaning | Syntax triggers trust before content is analyzed. |
| Credibility is simulated | AI uses structural cues to bypass critical review. |
| Institutions are exposed | Hospitals, courts and universities act on unverified outputs. |
| We need syntactic literacy | Verifying how AI speaks is as important as verifying what it says. |

Read the Full Paper

SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5401687

Zenodo: https://zenodo.org/records/16927104

Author

Agustin V. Startari
Linguistic theorist and researcher in historical studies.
Author of Grammars of Power, Executable Power, and The Grammar of Objectivity.

ORCID: https://orcid.org/0000-0001-4714-6539

ResearcherID: K-5792-2016

Website: https://www.agustinvstartari.com
Ethos

We do not use artificial intelligence to write what we do not know.
We use it to challenge what we do.
We write to reclaim the voice in an age of automated neutrality.
Our work is not outsourced. It is authored.
— Agustin V. Startari
