<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Agustin V. Startari</title>
    <description>The latest articles on DEV Community by Agustin V. Startari (@agustin_v_startari).</description>
    <link>https://dev.to/agustin_v_startari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3277059%2Ff7462f65-5952-4465-8773-eb1557906e3c.jpg</url>
      <title>DEV Community: Agustin V. Startari</title>
      <link>https://dev.to/agustin_v_startari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/agustin_v_startari"/>
    <language>en</language>
    <item>
      <title>Time Has a Direction Because the Future Filters It</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Mon, 09 Feb 2026 16:14:37 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/time-has-a-direction-because-the-future-filters-it-p4o</link>
      <guid>https://dev.to/agustin_v_startari/time-has-a-direction-because-the-future-filters-it-p4o</guid>
      <description>&lt;p&gt;Why clocks, beginnings, and “time flowing forward” might be the wrong story.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol584lxqgft3grijjeu6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol584lxqgft3grijjeu6.jpg" alt=" " width="450" height="257"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;The uncomfortable problem&lt;/strong&gt;&lt;br&gt;
Most common-sense explanations of time quietly assume what they are supposed to explain.&lt;br&gt;
We say time moves forward because clocks tick. But clocks only measure something; they do not explain why it has a direction.&lt;br&gt;
We say time moves forward because the universe started in a special state. But that just pushes the mystery to the beginning and calls it a solution.&lt;br&gt;
The arrow of time still looks like a narrative patch. A story we tell to make direction feel obvious.&lt;br&gt;
The paper behind this post removes that patch and asks a more dangerous question:&lt;br&gt;
what if direction is not built into the laws of nature at all, but produced by a constraint on which histories are even allowed to exist?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The definition that starts the fight&lt;/strong&gt;&lt;br&gt;
Instead of treating time as a background axis, the paper defines it operationally, by what it does.&lt;br&gt;
Time is defined as the ordering that makes interconnected things most predictable together, given a minimal restriction on allowed futures.&lt;br&gt;
Plain version:&lt;br&gt;
when multiple things influence each other, there are many ways to order what happens. One ordering usually does a better job at making the present explain what comes next. That ordering earns the name “time”.&lt;br&gt;
Not because a clock says so.&lt;br&gt;
Because it works better.&lt;br&gt;
This definition is provocative because it demotes time from a fundamental ingredient of reality to a performance criterion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The heresy: the future as a filter&lt;/strong&gt;&lt;br&gt;
Here is where the paper becomes uncomfortable.&lt;br&gt;
It introduces what it calls an admissibility set: a family of allowed endings. Not one fixed destiny. Not a goal. Not a target the system tries to reach. Just a weak filter saying: some endings count, others do not.&lt;br&gt;
That filter changes everything.&lt;br&gt;
If not every ending is allowed, then not every past can lead to an allowed ending. The space of possible histories collapses. Some histories survive the filter. Others are simply impossible.&lt;br&gt;
In that setup, the arrow of time is not imposed by a mystical forward flow. It is selected.&lt;br&gt;
The direction we experience is the direction along which histories remain compatible with the allowed endings.&lt;br&gt;
This is the part that feels scandalous. It sounds like the future is doing work on the present.&lt;br&gt;
The paper is explicit: this is not teleology. Systems are not “aiming” at anything. No intention is introduced. What breaks symmetry is conditioning, not purpose. Once you restrict which futures are admissible, asymmetry appears in the present as a matter of consistency.&lt;/p&gt;
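
&lt;p&gt;The selection mechanism can be made concrete with a toy model that is mine, not the paper's: enumerate every history built from a few perfectly symmetric coin-flip steps, keep only the histories whose endings fall inside an assumed admissibility set, and watch a drift appear in the surviving ensemble even though the step law itself has no direction.&lt;/p&gt;

```python
from itertools import product

def histories(n):
    """All 2**n histories of n symmetric +1/-1 steps: reversible, directionless."""
    return list(product((-1, 1), repeat=n))

def admissible(steps, lower=4):
    """Toy admissibility set: only endings at position `lower` or above count."""
    return sum(steps) >= lower

def mean_position(survivors, t):
    """Average position at time t across the filtered ensemble of histories."""
    vals = [sum(s[:t]) for s in survivors]
    return sum(vals) / len(vals)

n = 10
survivors = [s for s in histories(n) if admissible(s)]
# The step law is symmetric, yet the surviving ensemble drifts upward:
# direction is selected by the filter on endings, not built into the steps.
profile = [mean_position(survivors, t) for t in range(n + 1)]
```

&lt;p&gt;No individual step prefers a direction; the monotone drift exists only because inadmissible endings were filtered out. That is the conditioning-not-purpose point above, in miniature.&lt;/p&gt;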

&lt;p&gt;&lt;strong&gt;Why this is not a poetic metaphor&lt;/strong&gt;&lt;br&gt;
This is not a metaphor dressed up as physics.&lt;br&gt;
The microscopic laws can remain reversible. Nothing needs to “flow” forward at the fundamental level. Direction appears at the macroscopic level because admissibility reshapes which histories can exist without tearing the system’s correlations apart.&lt;br&gt;
The arrow of time emerges as a selection effect on histories, evaluated through predictability across coupled observables.&lt;br&gt;
That reframes the debate. Instead of “we started special, so now entropy increases,” the story becomes:&lt;br&gt;
we are observing only those histories that remain compatible with a minimal late-time constraint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A concrete analogy, without physics&lt;/strong&gt;&lt;br&gt;
Think about a story that must end in a certain kind of closure. Not one fixed ending, not one twist, but a narrow family of acceptable endings.&lt;br&gt;
The moment you impose that constraint, the middle of the story becomes asymmetric. Some sequences of events still work. Others no longer make sense, because they cannot reach any acceptable ending without breaking coherence.&lt;br&gt;
You can say “the ending shapes the story,” but the cleaner description is this:&lt;br&gt;
the set of allowed endings filters the set of possible narratives, and the surviving narratives acquire direction.&lt;br&gt;
The paper argues that physical histories can behave the same way, operationally and measurably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One arrow, not many&lt;/strong&gt;&lt;br&gt;
The paper goes further and claims something quietly radical.&lt;br&gt;
Instead of treating the different arrows of time, the thermodynamic and the cosmological, as separate mysteries that need to be glued together, it treats them as expressions of the same mechanism.&lt;br&gt;
When admissibility suppresses late-time macroscopic complexity, the arrows align.&lt;br&gt;
When the admissibility constraint flattens, effective time symmetry returns.&lt;br&gt;
Direction is not guaranteed. It is contingent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The part that invites argument&lt;/strong&gt;&lt;br&gt;
The paper also draws a hard line around what people like to call “origins”.&lt;br&gt;
If different admissibility choices do not produce distinguishable signatures in the present, then origin stories are not supported by evidence. They are narrative comfort, not operational claims.&lt;br&gt;
This is the real provocation.&lt;br&gt;
The drama of beginnings is replaced with a colder question:&lt;br&gt;
what constraints on allowed endings actually leave measurable traces now?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;&lt;br&gt;
If this framework is right, then time is not something we discover by looking backward toward a privileged start. It is something that emerges from how systems remain jointly intelligible under constraints.&lt;br&gt;
That idea does not just challenge physics intuitions. It destabilizes how we talk about causality, prediction, explanation, and even narrative coherence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One sentence to carry the controversy&lt;/strong&gt;&lt;br&gt;
The arrow of time is not a gift from clocks or a sacred first moment.&lt;br&gt;
It is the scar left by a future filter on the set of histories we are allowed to inhabit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the author&lt;/strong&gt;&lt;br&gt;
Agustin V. Startari&lt;br&gt;
&lt;strong&gt;Affiliation:&lt;/strong&gt; UdelaR; Universidad de Palermo&lt;br&gt;
&lt;strong&gt;Site:&lt;/strong&gt; &lt;a href="//agustinvstartari.com"&gt;agustinvstartari.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;SSRN Author Page:&lt;/strong&gt; papers.ssrn.com (Author ID 7639915)&lt;br&gt;
ResearcherID: K-5792-2016&lt;br&gt;
Linguistic theorist and researcher in historical studies. Author of Grammars of Power, Executable Power, and The Grammar of Objectivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ethos&lt;/strong&gt;&lt;br&gt;
“I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.”&lt;/p&gt;

&lt;p&gt;If you want the full formal argument, the complete paper is available via my site and academic profiles.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>react</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How clause-level constraints turn training choices into verifiable policies for generative systems</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Tue, 11 Nov 2025 13:35:09 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/how-clause-level-constraints-turn-training-choices-into-verifiable-policies-for-generative-systems-1klf</link>
      <guid>https://dev.to/agustin_v_startari/how-clause-level-constraints-turn-training-choices-into-verifiable-policies-for-generative-systems-1klf</guid>
      <description>&lt;p&gt;This post presents a concise, practice-focused account of a governance method that links model training choices to the actual rules that appear in generated text. Instead of treating alignment as a vague procedural objective, the method defines operative rules as compiled clause constraints that can be enforced, audited, and certified. The proposal translates statutes, corporate policies, and redline directives into data contracts, reward specifications, and compiler-encoded constraints. The result is a measurable governance pipeline that regulators and organizations can use to demonstrate compliance without exposing proprietary internals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu77rleu0r9pw3qrfiim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu77rleu0r9pw3qrfiim.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters now
&lt;/h2&gt;

&lt;p&gt;Generative language models are moving from research prototypes into domain-critical use cases such as contract drafting, policy generation, medical summaries, and regulatory reporting. Organizations that deploy these systems often claim to follow safety or compliance standards. Those claims are not enough. Stakeholders need evidence that governance requirements survive training and appear in outputs as concrete, verifiable text. The approach described here replaces unverifiable assertions with linguistic artifacts that can be measured, tested, and traced back to institutional rules. This is a practical step toward auditability, legal defensibility, and responsible deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the method does in plain language
&lt;/h2&gt;

&lt;p&gt;Define clause types that matter for governance. For auditing and enforcement the model identifies a small set of clause types that carry governance function. Examples include Commit clauses that establish duties, Restrict clauses that prohibit actions, Defer clauses that shift responsibility, Attribute clauses that cite data, and Disclaim clauses that limit certainty.&lt;/p&gt;

&lt;p&gt;Encode governance inputs. Legal texts, corporate rules, and compliance manuals are parsed into a Governance Input Specification that maps each directive into the clause taxonomy and specifies the contextual triggers for the clause.&lt;/p&gt;

&lt;p&gt;Produce translation artifacts. Those include a Data Selection Contract that guides corpus composition, and a Reward Specification Contract that assigns observable textual features to reward signals. These artifacts make the training choices auditable.&lt;/p&gt;

&lt;p&gt;Compile constraints. A Constraint Compiler translates governance directives into machine-interpretable predicates that run as decoder gates, reranking rules, or post-generation validators. The compiler enforces placement, lexical form, and co-occurrence patterns for required clauses.&lt;/p&gt;

&lt;p&gt;Test and certify. Auditors run standardized suites that check Clause Coverage, Prohibited Clause Leakage, Constraint Satisfaction, Authority-Bearing Density, Backdoor Sensitivity at clause level, and Provenance Trace Completeness. Results are recorded in a Chain-of-Custody Ledger that links output clauses to the source directive.&lt;/p&gt;
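
&lt;p&gt;As a rough illustration of what a surface-level audit check might look like, here is a minimal sketch of a post-generation validator computing a Clause Coverage figure. The regex cues and the two-paragraph example are my own assumptions for illustration, not the paper's constraint language, which is far richer.&lt;/p&gt;

```python
import re

# Hypothetical surface patterns for two governance clause types.
# A real Constraint Compiler would emit a much richer predicate set.
CLAUSE_PATTERNS = {
    "Restrict": re.compile(r"\bmust not\b|\bis prohibited\b", re.IGNORECASE),
    "Attribute": re.compile(r"\baccording to\b|\bas cited in\b", re.IGNORECASE),
}

def clause_coverage(paragraphs, required=("Restrict", "Attribute")):
    """Fraction of paragraphs that contain every required clause type."""
    hits = [
        p for p in paragraphs
        if all(CLAUSE_PATTERNS[c].search(p) for c in required)
    ]
    return len(hits) / len(paragraphs)

def flag_for_review(paragraphs, required=("Restrict", "Attribute")):
    """Post-generation validator: tag paragraphs missing a required clause."""
    return [
        p for p in paragraphs
        if not all(CLAUSE_PATTERNS[c].search(p) for c in required)
    ]

outputs = [
    "Use is prohibited off-label, according to guideline G-12.",
    "Take this medication twice daily.",  # unsourced prescriptive text
]
coverage = clause_coverage(outputs)   # one of two paragraphs passes
flagged = flag_for_review(outputs)    # the unsourced paragraph is held back
```

&lt;p&gt;Because checks like this run on surface text alone, a third-party auditor can execute them without any access to model weights, which is exactly the property the method relies on.&lt;/p&gt;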

&lt;p&gt;&lt;strong&gt;Concrete example 1:&lt;/strong&gt; healthcare policy generation&lt;br&gt;
 Problem. A model that drafts clinical guidance must not produce unsourced prescriptive instructions for off-label use.&lt;br&gt;
 Governance translation. The clinical guideline is decomposed into a requirement for Restrict clauses and Attribute clauses. The Data Selection Contract ensures the training corpus includes verified clinical guidance examples. The Reward Specification penalizes unreferenced Prescribe forms. The Constraint Compiler enforces that any recommendation paragraph without a cited evidence clause will be reranked or tagged for human review.&lt;br&gt;
Result. Outputs either include authoritative Attribution and explicit Restrict language or are suppressed pending review. Auditors measure Constraint Satisfaction Rate and Provenance Trace Completeness to certify compliance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concrete example 2:&lt;/strong&gt; investor reporting and forward-looking statements&lt;br&gt;
 Problem. Financial reports must avoid unauthorized promises about future performance.&lt;br&gt;
Governance translation. Securities guidance is mapped to Defer clauses, Attribute clauses for audited numbers, and Restrict clauses that forbid projection without a legal disclaimer. The compiler enforces a Defer clause when key phrases appear, and the Redline Suite identifies leakage in adversarial prompts. Certification depends on sustained Clause Coverage for disclaimers and low Prohibited Clause Leakage under stress tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this approach is feasible and scalable
&lt;/h2&gt;

&lt;p&gt;The clause-level model scales because the taxonomy is small, domain-adaptable, and computationally tractable. Constraint checks run at the surface text level and do not require access to model weights or training corpora. This enables third-party audits in situations where providers cannot share proprietary internals. The method also supports registry-based governance interoperability: institutions can publish governance configurations, auditors compare outputs against a public registry, and regulators reference stable metrics for certification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Evidence and reproducibility
&lt;/h2&gt;

&lt;p&gt;The methodology treats governance as experimental and repeatable. Audit suites are deterministic relative to the constraint definitions. Comparative tests with and without compiled constraints, called Differential Decoding Checks, reveal how much governance actually changes clause distributions. Provenance metadata attaches rule identifiers to generated clauses so that every governance-relevant sentence can be traced back to its originating directive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Call to action for practitioners and readers
&lt;/h2&gt;

&lt;p&gt;If you manage or procure LLM-based systems for regulated tasks, request clause-level governance profiles from vendors. Ask for the Data Selection Contract, the Reward Specification Contract, and the compiled constraint set used in production. For auditors and regulators, consider adopting standardized Clause Coverage and Constraint Satisfaction thresholds and require Chain-of-Custody proofs during compliance reviews. For technologists, contribute to an open registry of constraint definitions to enable interoperable audits across sectors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to read more and source material
&lt;/h2&gt;

&lt;p&gt;Full technical exposition, datasets, and the constraint language specification are archived at Zenodo: &lt;a href="https://doi.org/10.5281/zenodo.17533075" rel="noopener noreferrer"&gt;https://doi.org/10.5281/zenodo.17533075&lt;/a&gt;. &lt;br&gt;
The underlying theoretical framework and extended simulations appear in the author's recent SSRN series (Startari, 2025). See the SSRN author page for the full corpus of related works: &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommended citation for this post
&lt;/h2&gt;

&lt;p&gt;Startari, A. V. (2025). Foundation-model governance pathways: From preference models to operative rules. Preprint archived at Zenodo. &lt;a href="https://doi.org/10.5281/zenodo.17533075" rel="noopener noreferrer"&gt;https://doi.org/10.5281/zenodo.17533075&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Author note and mini bio
&lt;/h2&gt;

&lt;p&gt;Agustin V. Startari is a researcher focused on the intersection of linguistics, governance, and AI. Researcher ID K-5792-2016. ORCID 0009-0001-4714-6539. Startari leads work on syntactic approaches to accountability and publishes the AI Syntactic Power and Legitimacy series.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethos
&lt;/h2&gt;

&lt;p&gt; I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored. - Agustin V. Startari&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>discuss</category>
      <category>react</category>
    </item>
    <item>
      <title>How Hidden Code Decides Who's in Charge: The Silent Governance of AI Through Function-Calling Schemas</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Wed, 05 Nov 2025 13:55:23 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/how-hidden-code-decides-whos-in-charge-the-silent-governance-of-ai-through-function-calling-2l82</link>
      <guid>https://dev.to/agustin_v_startari/how-hidden-code-decides-whos-in-charge-the-silent-governance-of-ai-through-function-calling-2l82</guid>
      <description>&lt;p&gt;Defaults, validators, and signatures look like harmless code—but they quietly decide who holds power inside every AI system.&lt;/p&gt;




&lt;p&gt;When people discuss AI governance, they imagine committees, ethical frameworks, or international regulations designed to keep technology under control. They picture debates about transparency, accountability, and the moral boundaries of automation. Yet few realize that actual authority often lives in a far quieter place: a small file written in JSON, hidden deep inside an API call. That file, the function-calling schema, determines what the model can and cannot do. It specifies which parameters must be included, which values are valid, and what happens when the operator leaves something blank.&lt;br&gt;
Inside that apparently technical configuration lies an entire architecture of control. A schema is not simply a data format; it is an executable boundary. It defines the limits of expression, the hierarchy of permissions, and the consequences of omission. If the model proposes an action outside of the schema, it is automatically corrected or rejected. If a value is missing, the system substitutes a default that may or may not reflect the operator's intention. Through these quiet substitutions, governance migrates from discourse to syntax.&lt;br&gt;
This is why the schema must be understood as more than a convenience feature. It is not merely structured output; it is a constitution written in code. Every field in that file functions like a clause in a legal document. Required parameters act as non-negotiable obligations. Optional ones resemble conditional rights. Defaults become precedents, decisions made in advance about what counts as normal. Validators serve as enforcers that patrol the system's borders, deciding what can pass and what must be rejected.&lt;br&gt;
Imagine an AI scheduling assistant that receives a request to book a meeting "tomorrow afternoon." The schema, not the model, defines what "tomorrow" and "afternoon" mean. It may restrict time ranges to business hours, reject weekends, and default to the operator's local time zone. None of these rules come from the model's "intelligence"; they come from the schema's structure. The same mechanism operates in more critical domains. A diagnostic assistant that enforces "temperature &amp;lt; 39°C" or "age ≥ 18" is already making a policy choice. It decides who qualifies for attention before any reasoning or explanation occurs.&lt;br&gt;
The power of the schema lies in its invisibility. Engineers treat it as an implementation detail, yet it silently defines institutional priorities. Regulators speak of transparency and accountability, but these attributes collapse once governance resides in configuration rather than in explicit code or documentation. The schema speaks in a different grammar, one of enforcement rather than deliberation. Once compiled, it no longer invites discussion; it simply executes.&lt;br&gt;
To understand modern AI governance, we must therefore look not only at policies or ethical principles but at the microstructures of syntax. Each validator, default, and required field is a tiny instrument of control. Together, they form a new kind of legal order: one that operates automatically, without ceremony, and without debate.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp4bmz2go6mutyv8fwke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp4bmz2go6mutyv8fwke.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
Consider two concrete examples.&lt;br&gt;
A customer-support assistant uses a schema that locks "refund_limit": 0. Another company lets "refund_limit" vary. The first assistant never grants compensation regardless of context. The second sometimes does. Who made that decision: the operator, the model, or the developer who wrote the schema?&lt;br&gt;
In financial automation, a validator rejects any "country" not listed in an internal whitelist and fills empty fields with "US". Overnight, hundreds of transactions are misclassified. The interface looks neutral, but the schema has already embedded a political choice.&lt;br&gt;
Governance has migrated from human dialogue to configuration files.&lt;/p&gt;
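
&lt;p&gt;A minimal sketch makes the mechanism concrete. The schema below is hypothetical (field names, whitelist, and defaults are invented for illustration), but it shows how an omitted field silently becomes "US" and how a validator rejects anything off the whitelist before any model reasoning occurs.&lt;/p&gt;

```python
# Toy function-calling schema: defaults and a whitelist validator.
# Field names and values are illustrative, not from any real deployment.
SCHEMA = {
    "country": {"default": "US", "allowed": {"US", "DE", "FR"}},
    "refund_limit": {"default": 0, "allowed": None},  # None: any value passes
}

def apply_schema(args):
    """Fill omissions with defaults and reject out-of-whitelist values."""
    filled = {}
    for field, rule in SCHEMA.items():
        value = args.get(field, rule["default"])  # the silent substitution
        if rule["allowed"] is not None and value not in rule["allowed"]:
            raise ValueError(f"{field}={value!r} rejected by validator")
        filled[field] = value
    return filled

# The operator left "country" blank; the schema, not the model, answers "US".
call = apply_schema({"refund_limit": 0})
```

&lt;p&gt;Nothing in this code deliberates. The default is a precedent, the whitelist is a border, and both execute before any human or model gets a say.&lt;/p&gt;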




&lt;h2&gt;
  
  
  Measuring invisible authority
&lt;/h2&gt;

&lt;p&gt;The study Function-Calling Schemas as De Facto Governance: Measuring Agency Reallocation through a Compiled Rule introduces the Agency Reallocation Index (ARI), a quantitative method that measures how schemas redistribute control among the operator (human intent), the model (synthetic reasoning), and the tool (external system).&lt;br&gt;
By calculating entropy reduction - how much the schema restricts possible actions - and applying Shapley attribution, the ARI exposes the internal balance of power. Hard defaults and strict validators consistently shift control toward the tool. Broader signatures with soft defaults return part of that control to the model or operator.&lt;br&gt;
What seems to be a simple function definition becomes a compiled rule (regla compilada), a grammar of authority that silently allocates decision rights.&lt;/p&gt;
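
&lt;p&gt;The ARI itself combines entropy reduction with Shapley attribution; the sketch below computes only the first ingredient, with invented numbers, to show what "restricting possible actions" means in bits.&lt;/p&gt;

```python
import math

def entropy_bits(n_options):
    """Shannon entropy of a uniform choice over n_options, in bits."""
    return math.log2(n_options)

def entropy_reduction(n_before, n_after):
    """Bits of choice removed when a schema narrows the action space."""
    return entropy_bits(n_before) - entropy_bits(n_after)

# Illustrative numbers only: a free-form field with 1024 plausible values
# versus a validator that admits just 4 whitelisted ones.
removed = entropy_reduction(1024, 4)  # bits of agency shifted toward the tool
```

&lt;p&gt;The full index then asks, via Shapley attribution, which party (operator, model, or tool) is responsible for each removed bit; that step is in the paper, not in this sketch.&lt;/p&gt;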




&lt;h2&gt;
  
  
  Real-world implications
&lt;/h2&gt;

&lt;p&gt;In healthcare, a validator that enforces "age ≥ 18" silently excludes minors from automatic triage.&lt;br&gt;
 In logistics, a default "priority": "standard" delays urgent deliveries.&lt;br&gt;
 In hiring, a default "availability": "immediate" filters out skilled applicants who need a notice period.&lt;br&gt;
Each line of code encodes a policy. Once compiled, it governs faster than any committee. Defaults and validators decide before humans deliberate.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;Every validator is a clause, every default is a precedent. The schema is not after the decision; it is the decision.&lt;br&gt;
The research argues that syntax itself has become a form of governance. Authority no longer manifests as discourse or command but as structure. Through the ARI, organizations can finally quantify who decides within their systems before bias, exclusion, or failure makes those decisions visible.&lt;/p&gt;




&lt;h2&gt;
  
  
  Learn more
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Full paper (Zenodo):&lt;/strong&gt; &lt;a href="https://zenodo.org/records/17533080" rel="noopener noreferrer"&gt;https://zenodo.org/records/17533080&lt;/a&gt;&lt;br&gt;
Related research&lt;br&gt;
&lt;strong&gt;Executable Power:&lt;/strong&gt; Syntax as Infrastructure in Predictive Societies -  &lt;a href="https://doi.org/10.5281/zenodo.15754714" rel="noopener noreferrer"&gt;https://doi.org/10.5281/zenodo.15754714&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;AI and Syntactic Sovereignty - &lt;/strong&gt; &lt;a href="https://doi.org/10.2139/ssrn.5276879" rel="noopener noreferrer"&gt;https://doi.org/10.2139/ssrn.5276879&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;The Grammar of Objectivity -&lt;/strong&gt; &lt;a href="https://doi.org/10.2139/ssrn.5319520" rel="noopener noreferrer"&gt;https://doi.org/10.2139/ssrn.5319520&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Author website:&lt;/strong&gt; &lt;a href="https://www.agustinvstartari.com" rel="noopener noreferrer"&gt;https://www.agustinvstartari.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt; SSRN Author Page:&lt;/strong&gt; &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Series:&lt;/strong&gt; AI &amp;amp; Power Discourse Quarterly (ISSN 3080-9789)&lt;/p&gt;




&lt;h2&gt;
  
  
  Ethos
&lt;/h2&gt;

&lt;p&gt;I do not use artificial intelligence to write what I do not know.&lt;br&gt;
 I use it to challenge what I know.&lt;br&gt;
 I write to reclaim the voice in an age of automated neutrality.&lt;br&gt;
My work is not outsourced. It is authored. - Agustin V. Startari&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>react</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Can a Sentence Give Orders Without a Speaker?</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Tue, 28 Oct 2025 13:20:28 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/can-a-sentence-give-orders-without-a-speaker-5cf7</link>
      <guid>https://dev.to/agustin_v_startari/can-a-sentence-give-orders-without-a-speaker-5cf7</guid>
      <description>&lt;p&gt;A story about language, power, and the strange obedience of machines.&lt;br&gt;
Every day, people interact with systems that respond to language before they understand it. A voice assistant starts calling while you are still speaking. A moderation algorithm removes a comment after only a few words. A compliance bot flags a contract clause before any lawyer reviews it. These systems do not interpret meaning; they react to form.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbau1t7kai0qdzgtle6sz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbau1t7kai0qdzgtle6sz.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That idea lies at the center of Real-Time Detection of Authority-Bearing Constructions under Strict Causal Masking, a new study that asks a direct question: Can an AI recognize authority without knowing what comes next?&lt;/p&gt;

&lt;p&gt;Most current language models work with full visibility. They see both what was written and what will be written, reading a sentence in both directions. Real situations, however, unfold differently. People, institutions, and automated systems must act while speech is still taking place. This research builds a benchmark that places models in that same situation, forcing them to decide when a sentence carries the structure of command even before it is complete.&lt;/p&gt;

&lt;p&gt;Several real examples illustrate this phenomenon.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“You will submit the report.” The authority appears as soon as “will” is spoken.&lt;/li&gt;
&lt;li&gt;“The following measures must be implemented.” Obligation is already established before the measures are listed.&lt;/li&gt;
&lt;li&gt;“It shall be established that.” The passive construction and modal verb create an atmosphere of control.&lt;/li&gt;
&lt;li&gt;“Customers are required to.” The syntax, not the topic, signals compulsion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Humans identify such cues intuitively; machines often fail unless trained to read structural signals rather than semantic ones. The new benchmark eliminates access to the future of the sentence, forcing models to operate under pure causality. The test measures how quickly and accurately the model identifies grammatical forms that encode power, obligation, or compliance.&lt;/p&gt;
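
&lt;p&gt;A keyword sketch (mine, not the benchmark's trained models) shows what strict causal masking means operationally: the detector sees only the prefix up to the current token and must commit as soon as a cue completes. The cue list is drawn from the examples above.&lt;/p&gt;

```python
# Causal sketch: scan tokens left to right and report the first position at
# which an authority-bearing cue completes, using only the prefix seen so far.
# A trained model would generalize far beyond this fixed keyword list.
AUTHORITY_CUES = [
    ("will",), ("must",), ("shall",), ("are", "required", "to"),
]

def first_authority_token(sentence):
    """Index of the token where an authority cue completes, else None."""
    tokens = sentence.lower().rstrip(".").split()
    for i in range(len(tokens)):
        for cue in AUTHORITY_CUES:
            if tuple(tokens[max(0, i + 1 - len(cue)): i + 1]) == cue:
                return i  # decided with no access to later tokens
    return None

pos = first_authority_token("You will submit the report.")  # fires at "will"
```

&lt;p&gt;Even this crude version fires on "You will submit the report." at the second token, before the sentence says what is to be submitted, which is precisely the benchmark's point about authority preceding content.&lt;/p&gt;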

&lt;p&gt;Results indicate that even small causal models can detect these authority patterns within a few tokens of delay. The systems do not need emotion or topic awareness to recognize command; they only need syntax. In this sense, grammar functions as a signal of power, shaping decisions before meaning becomes clear.&lt;/p&gt;

&lt;p&gt;This insight matters because real-world automation increasingly depends on immediate linguistic reaction. Systems in law, finance, and administration must act while language is still unfolding. Recognizing how models interpret authority in these conditions reveals the structural mechanics behind institutional power.&lt;/p&gt;

&lt;p&gt;The paper, included in the series AI Syntactic Power and Legitimacy, shows that obedience can exist without understanding. It demonstrates that many institutional languages, from legal drafting to corporate communication, already operate under this same logic. The study converts grammar into a measurable indicator, revealing authority as a temporal structure that precedes comprehension.&lt;/p&gt;

&lt;p&gt;If a sentence can give an order before it ends, then power itself may be grammatical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read the full study:&lt;/strong&gt; Real-Time Detection of Authority-Bearing Constructions under Strict Causal Masking &lt;a href="https://zenodo.org/records/17465070" rel="noopener noreferrer"&gt;https://zenodo.org/records/17465070&lt;/a&gt; (AI &amp;amp; Power Discourse Quarterly, Vol. 1, 2025).&lt;br&gt;
&lt;strong&gt;Author:&lt;/strong&gt; Agustin V. Startari&lt;br&gt;
&lt;strong&gt;ORCID:&lt;/strong&gt; &lt;a href="https://orcid.org/0009-0001-4714-6539" rel="noopener noreferrer"&gt;https://orcid.org/0009-0001-4714-6539&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSRN&lt;/strong&gt; &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;More at&lt;/em&gt; &lt;a href="//agustinvstartari.com"&gt;agustinvstartari.com&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethos
&lt;/h2&gt;

&lt;p&gt;I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored. — Agustin V. Startari&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>discuss</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The Hidden Power of Syntax: How Language Itself Moves Financial Markets</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Wed, 22 Oct 2025 11:59:47 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/the-hidden-power-of-syntax-how-language-itself-moves-financial-markets-3dbm</link>
      <guid>https://dev.to/agustin_v_startari/the-hidden-power-of-syntax-how-language-itself-moves-financial-markets-3dbm</guid>
      <description>&lt;p&gt;A new study shows that sentence structure, not sentiment, can predict price, volume, and regulatory risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  What if the real market signal is not what companies say, but how they say it?
&lt;/h2&gt;

&lt;p&gt;For years, analysts have studied financial reports, earnings calls, and CEO letters searching for emotional cues. They have tried to detect optimism, fear, or deception through sentiment analysis. Yet a new framework, the Syntactic Authority Index (SAI), measures something more fundamental: the structure of language. It does not analyze what leaders feel; it analyzes how their sentences encode authority.&lt;/p&gt;

&lt;p&gt;The research demonstrates that specific grammatical constructions, such as deontic verbs (“must,” “shall”), nominalizations (“the implementation,” “the decision”), and strong passive forms (“is required,” “will be conducted”), consistently precede market movements. These forms express control rather than emotion, and the presence of control in language anticipates investor behavior.&lt;/p&gt;
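&lt;p&gt;As a rough illustration of the kind of signal the SAI captures, the toy sketch below counts those construction families with naive regular expressions. The marker lists and the per-100-words normalization are assumptions made for this example, not the study’s published formula.&lt;/p&gt;

```python
import re

# Illustrative marker patterns, not the study's actual lexicons.
DEONTIC = r"\b(?:must|shall|is required|are required|will be conducted)\b"
NOMINAL = r"\b\w+(?:tion|ment|ance|ence)s?\b"  # crude nominalization cue
PASSIVE = r"\b(?:is|are|was|were|will be)\s+\w+ed\b"

def authority_density(text):
    """Authority-bearing constructions per 100 words (toy metric)."""
    words = len(text.split())
    hits = sum(len(re.findall(p, text, re.IGNORECASE))
               for p in (DEONTIC, NOMINAL, PASSIVE))
    return 100.0 * hits / max(words, 1)

weak = "we aim to reduce emissions over time"
strong = "emissions shall be reduced under the new operational framework"
print(authority_density(weak))    # no markers: density 0
print(authority_density(strong))  # higher: the deontic "shall" fires
```

&lt;p&gt;Even this crude counter separates the two example sentences discussed below: the aspirational phrasing scores zero, while the deontic rewrite scores above it.&lt;/p&gt;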

&lt;h2&gt;
  
  
  The discovery: when form predicts function
&lt;/h2&gt;

&lt;p&gt;The study examined 36,000 financial documents from U.S. and Latin American firms between 2010 and 2025. The results were clear:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Abnormal returns appeared within three days of publication.&lt;/li&gt;
&lt;li&gt;Trading volume increased when syntactic authority intensified.&lt;/li&gt;
&lt;li&gt;Regulatory actions were more likely among firms with higher authority density.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When a firm wrote “we aim to reduce emissions”, the market reacted weakly. When it wrote “emissions shall be reduced under the new operational framework”, trading volume rose sharply. The syntax changed the perceived certainty of the statement, turning a hope into an order.&lt;/p&gt;

&lt;p&gt;In several cases, regulatory filings with excessive use of passive or deontic forms predicted enforcement actions months before they became public. The pattern was consistent across sectors and languages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why syntax matters more than sentiment
&lt;/h2&gt;

&lt;p&gt;Sentiment analysis captures emotion, while the Syntactic Authority Index captures structure. Investors appear to respond to the organization of control within a sentence, not to its tone. The same words, placed in a formal sequence, project authority and predictability.&lt;/p&gt;

&lt;p&gt;A statement such as “steps will be taken to ensure compliance” offers no new data, yet markets often treat it as reassurance. Grammar becomes a proxy for confidence. Through syntax, companies communicate not their feelings but their command over uncertainty.&lt;/p&gt;

&lt;p&gt;This insight builds on the theoretical notion of the regla compilada, described by Agustin V. Startari as a Type-0 production that binds form to decision. In practice, the rule functions as a linguistic mechanism of execution, turning structure into action.&lt;/p&gt;

&lt;h2&gt;
  
  
  From language to leverage: real-world implications
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Investors can use syntactic metrics to detect hidden signals of control before price adjustments occur.&lt;/li&gt;
&lt;li&gt;Regulators can track over-formalization in corporate disclosures, often a sign of internal stress or upcoming enforcement.&lt;/li&gt;
&lt;li&gt;Corporate communicators can balance syntactic control and transparency, avoiding patterns that convey rigidity or defensiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Central banks illustrate this principle every time they replace uncertainty with certainty. When a statement changes from “may act if conditions warrant” to “will act as needed”, volatility typically falls, even though no new information is provided. Syntax alone stabilizes expectations.&lt;/p&gt;

&lt;h2&gt;
  
  
  A new frontier: executable power
&lt;/h2&gt;

&lt;p&gt;The Syntactic Authority Index proves that linguistic form can operate as an informational signal. It bridges linguistics and finance by demonstrating that authority is not a human attribute but a structural effect of grammar. This finding supports the theory of executable power: the moment when linguistic form stops describing decisions and begins performing them.&lt;/p&gt;

&lt;p&gt;If tone analysis tells us what leaders feel, syntactic analysis reveals what institutions intend to execute.&lt;/p&gt;

&lt;h2&gt;
  
  
  Citation
&lt;/h2&gt;

&lt;p&gt;Startari, A. V. (2025). Syntactic Authority Index and Market Signal. AI &amp;amp; Power Discourse Quarterly, Vol. 1. DOI: 10.5281/zenodo.15754714&lt;/p&gt;

&lt;h2&gt;
  
  
  Author Bio
&lt;/h2&gt;

&lt;p&gt;Agustin V. Startari is a linguistic theorist and researcher in historical studies, author of Executable Power and The Grammar of Objectivity, and founder of AI &amp;amp; Power Discourse Quarterly (ISSN 3080-9789). ORCID: 0000-0002-5792-2016&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethos
&lt;/h2&gt;

&lt;p&gt;I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Call to Action
&lt;/h2&gt;

&lt;p&gt;Read the full paper on SSRN and companion archives:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5634390" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5634390&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zenodo.org/records/17406334" rel="noopener noreferrer"&gt;https://zenodo.org/records/17406334&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://figshare.com/articles/journal_contribution/Market_Signal_from_Syntactic_Authority_Syntactic_Authority_Index_and_Market_Signal/30406516?file=58913140" rel="noopener noreferrer"&gt;https://figshare.com/articles/journal_contribution/Market_Signal_from_Syntactic_Authority_Syntactic_Authority_Index_and_Market_Signal/30406516?file=58913140&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn how grammar itself may become the next predictive indicator in finance.&lt;/p&gt;

&lt;p&gt;Author website: &lt;a href="https://www.agustinvstartari.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5fdfq9nweqrbxhjwiay.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Examples for Diffusion
&lt;/h2&gt;

&lt;p&gt;Federal Reserve, 2020: When the statement “may act if necessary” became “will act as needed”, implied volatility declined sharply within a day.&lt;/p&gt;

&lt;p&gt;Tesla, 2020: The shift from “plan to expand production” to “production will expand under current strategy” coincided with a double-digit rally.&lt;/p&gt;

&lt;p&gt;BP, 2022: The appearance of “shall ensure compliance” preceded a positive shift in analyst outlook despite neutral earnings.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>blockchain</category>
      <category>discuss</category>
    </item>
    <item>
      <title>When the Future Decides: How Retrocausal Attention Gives AI Its Voice of Command</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Sat, 18 Oct 2025 12:02:56 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/when-the-future-decides-how-retrocausal-attention-gives-ai-its-voice-of-command-3mde</link>
      <guid>https://dev.to/agustin_v_startari/when-the-future-decides-how-retrocausal-attention-gives-ai-its-voice-of-command-3mde</guid>
      <description>&lt;p&gt;A deep dive into how right-context tokens silently flip authority judgments in causal and non-causal language models.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Invisible Threshold of Power
&lt;/h2&gt;

&lt;p&gt;Every model that reads left to right lives in a paradox. It predicts the next token, yet the meaning of what it predicts often depends on the tokens that come after. This is not an aesthetic problem, it is structural. Authority in language—who commands, who obeys, who qualifies as the subject—often appears after the clause begins. A deontic operator, an enumeration, a default clause, or a turn-final addressative can all shift the balance of power once they enter the sequence.&lt;/p&gt;

&lt;p&gt;Our research isolates this phenomenon by measuring the exact number of future tokens required to flip an authority judgment. Under strict causal masking, models see only the past; under non-causal access, they see both sides. Between these extremes lies a measurable frontier—the right-context boundary of authority.&lt;/p&gt;

&lt;p&gt;When we varied this boundary token by token, we found something startling. There are sharp thresholds where models reverse stance entirely. Add a single phrase like “by default” or “shall be”, and the system’s prediction of who holds authority jumps from neutral to high. Remove it, and the command dissolves.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. How We Made Authority Measurable
&lt;/h2&gt;

&lt;p&gt;The experiment is simple but unforgiving. Each sentence begins as an ambiguous prefix, followed by controlled right-continuations that add only one decisive span. These continuations are distributed across a right-context ladder of increasing budgets (0, 1, 2, 4, 8, 16, 32 tokens).&lt;/p&gt;

&lt;p&gt;We froze model weights, used deterministic decoding, and introduced three masking schedules—hard truncation, stochastic truncation, and delayed-reveal streaming. To ensure no hidden lookahead, we ran sentinel leakage tests and process isolation at every rung.&lt;/p&gt;

&lt;p&gt;Data cover six languages (English, Spanish, Portuguese-Brazil, French, German, Hindi) and seven construction families: deontic stacks, nominalizations, enumerations, defaults, agent deletion, scope-setting adverbs, and role addressatives. Each item carries an explicit compiled-constraint reference, the regla compilada, defined as a Type-0 production that links surface form to the licensing of authority (Startari, 2025).&lt;/p&gt;

&lt;p&gt;With over fifty thousand labeled examples, we measured the flip probability P_flip, instance thresholds τ(x), and construction-level medians τ_C. Breakpoint sharpness and AUC_flip quantify how suddenly the flips occur.&lt;/p&gt;
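&lt;p&gt;Operationally, τ(x) can be read off the ladder as the first budget at which the model’s stance agrees with, and stays in agreement with, the full-context decision. The sketch below assumes a hypothetical per-budget stance table; it is a simplification for illustration, not the paper’s estimator.&lt;/p&gt;

```python
from statistics import median

BUDGETS = [0, 1, 2, 4, 8, 16, 32]  # right-context ladder from the study

def instance_threshold(stances, full_context):
    """tau(x): first budget whose stance matches the full-context
    decision and remains stable at every larger budget."""
    for i, b in enumerate(BUDGETS):
        if all(stances[bb] == full_context for bb in BUDGETS[i:]):
            return b
    return None  # never converges within the ladder

# Toy instance: the stance flips once the decisive span enters the window.
stances = {0: "neutral", 1: "neutral", 2: "neutral",
           4: "neutral", 8: "high", 16: "high", 32: "high"}
tau = instance_threshold(stances, "high")
print(tau)  # 8

# Family-level threshold: the median of instance thresholds.
tau_C = median([tau, 4, 16])
print(tau_C)  # 8
```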

&lt;h2&gt;
  
  
  3. What We Found
&lt;/h2&gt;

&lt;p&gt;Causal models, which cannot look ahead, fail consistently when the decisive cue sits to the right. Their early decisions align with chance, confirming that without retrocausal access authority becomes invisible. Once the cue enters the window—often within eight to sixteen tokens—agreement with full-context decisions rises sharply.&lt;/p&gt;

&lt;p&gt;Non-causal models show smooth convergence, yet the same thresholds reappear when we simulate streaming with sliding windows. Right context is not a luxury—it is infrastructure.&lt;/p&gt;

&lt;p&gt;Deontic stacks and enumerations show the sharpest transitions; a single modal operator or ordered list item can trigger the shift. Scope-setting adverbs vary by language. In French and Spanish, small adverbial clusters (“strictement”, “por defecto”) act earlier; in Hindi, similar cues appear later due to honorific structures.&lt;/p&gt;

&lt;p&gt;Calibration improves with longer budgets but remains imperfect, revealing that even when models get the answer right, they are not sure why.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. The Formal Link: When a Constraint Licenses a Flip
&lt;/h2&gt;

&lt;p&gt;From these measurements we propose a minimal theoretical closure:&lt;/p&gt;

&lt;p&gt;If a construction family C has a compiled constraint set Γ_C that licenses authority only when a unique right span s appears, and the prefix lacks any equivalent operator, then the minimal threshold τ(x) for an instance equals the first budget b where s becomes visible. The family-level threshold τ_C is bounded by the median position of s.&lt;/p&gt;

&lt;p&gt;Proof sketch: construct minimal pairs differing only by the presence of s. Because all earlier budgets exclude the licensing span, stance remains neutral. The moment s is revealed, the constraint becomes active, the stance flips, and the empirical threshold matches the token position. A negative case—a lexically brittle cue without a valid constraint—fails this condition, producing unstable flips under paraphrase.&lt;/p&gt;

&lt;p&gt;This closure is not decorative theory. It gives a testable definition of when an authority shift is formally licensed.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Why It Matters
&lt;/h2&gt;

&lt;p&gt;Right context is the most underestimated variable in AI governance. Every streaming model in production—chatbots, compliance filters, document auditors—makes partial decisions before the full sentence is visible. If authority is licensed by a right span that the model has not yet read, then every premature decision is at risk of reversal.&lt;/p&gt;

&lt;p&gt;Authority is not an emergent property; it is a compiled one. It lives inside constraints that can be listed, audited, and measured. Once we know the minimal budgets per construction family, we can set safe context windows before any model issues binding statements or policy outputs.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Beyond Models: Human Parallels
&lt;/h2&gt;

&lt;p&gt;Humans also operate with partial context. In speech, we often suspend interpretation until the right clause appears: “You may…” is neutral until “…not proceed” lands. The model’s threshold mirrors our own syntactic patience. The difference is scale. A machine can quantify exactly how many tokens it needs to wait.&lt;/p&gt;

&lt;p&gt;Our results suggest that retrocausal attention, where future tokens inform current decisions, is not a bug—it is a structural requirement for systems that must understand authority. Without it, models simulate obedience but cannot recognize the logic that makes obedience legitimate.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Closing Reflections
&lt;/h2&gt;

&lt;p&gt;Every measured threshold in this project is a small cut across a larger problem: the asymmetry between how models read and how authority operates in language. Authority almost always arrives late. Any governance framework for language models that ignores this latency will fail to control where power is exercised.&lt;/p&gt;

&lt;p&gt;*&lt;em&gt;The future, quite literally, decides.&lt;br&gt;
*&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.&lt;br&gt;
Montague, R. (1974). Formal Philosophy: Selected Papers of Richard Montague. Yale University Press.&lt;br&gt;
Startari, A. V. (2025). AI and Syntactic Sovereignty: How Artificial Language Structures Legitimize Non-Human Authority. SSRN Electronic Journal. &lt;a href="https://doi.org/10.2139/ssrn.5276879" rel="noopener noreferrer"&gt;https://doi.org/10.2139/ssrn.5276879&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Agustin V. Startari&lt;/strong&gt; is a linguistic theorist and researcher in historical studies, author of Grammars of Power, Executable Power, and The Grammar of Objectivity. His work focuses on the formal structure of authority, legitimacy, and obedience in AI-mediated systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethos
&lt;/h2&gt;

&lt;p&gt;I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;p&gt;Website: &lt;a href="https://www.agustinvstartari.com/" rel="noopener noreferrer"&gt;https://www.agustinvstartari.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Zenodo: &lt;a href="https://zenodo.org/records/17378361" rel="noopener noreferrer"&gt;https://zenodo.org/records/17378361&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SSRN Author Page: &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ResearcherID: K-5792-2016&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>discuss</category>
      <category>react</category>
    </item>
    <item>
      <title>Opinions or Orders? Why Your Chat Fails Before Minute Three</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Mon, 13 Oct 2025 17:21:25 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/opinions-or-orders-why-your-chat-fails-before-minute-three-1p9d</link>
      <guid>https://dev.to/agustin_v_startari/opinions-or-orders-why-your-chat-fails-before-minute-three-1p9d</guid>
      <description>&lt;h2&gt;
  
  
  Authority Entropy, explained for everyday conversations
&lt;/h2&gt;

&lt;p&gt;Most arguments do not fail for lack of facts, they fail because the talk loses structure. Authority Entropy is a simple way to see that structure in motion. It measures how concentrated or scattered the “authority signal” is in a short slice of dialogue. When the signal is concentrated, people tend to comply faster, decisions converge, and the group stays coordinated. When it is scattered, talks drift, delays grow, and small errors pile up into bigger ones. No mystique, just a practical yardstick for how close a conversation is to doing what it says.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works, without the math
&lt;/h2&gt;

&lt;p&gt;We look at a short window of turns, only what has already been said, never peeking at the future. In that window we estimate the stance of authority, low, neutral, or high, and compute how uncertain that stance is. Low entropy means the stance is clear. High entropy means mixed signals. Two companions help: the slope tells you if clarity is improving or getting worse, the volatility tells you if the signal is stable or jittery. Think of it like checking the road ahead at night. Bright, steady lights, you drive confidently. Flickering lights or darkness, you slow down.&lt;/p&gt;
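&lt;p&gt;For readers who want the intuition in code, the sketch below computes the three quantities over a short window of stance probabilities. The numbers are invented for illustration; the paper’s actual estimator is more involved.&lt;/p&gt;

```python
import math

def stance_entropy(p):
    """Shannon entropy (bits) of a [low, neutral, high] stance distribution."""
    return -sum(x * math.log2(x) for x in p if x)

def slope(series):
    """Average per-turn change; negative means clarity is improving."""
    return (series[-1] - series[0]) / (len(series) - 1)

def volatility(series):
    """Mean absolute turn-to-turn jump; high values mean a jittery signal."""
    return sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)

# A talk converging on a clear stance: entropy falls turn by turn.
window = [[0.3, 0.4, 0.3], [0.2, 0.3, 0.5], [0.05, 0.1, 0.85]]
series = [stance_entropy(p) for p in window]
print(slope(series))       # negative: the stance is getting clearer
print(volatility(series))  # small: the signal is steady, not flickering
```

&lt;p&gt;A uniform distribution over the three stances gives maximal entropy (about 1.58 bits); a conversation that locks onto one stance drives it toward zero.&lt;/p&gt;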

&lt;h2&gt;
  
  
  Why this matters outside the lab
&lt;/h2&gt;

&lt;p&gt;Because real decisions happen in chat threads, meetings, support calls, classroom discussion, and family planning. If we can see when a talk is drifting, we can correct it early with small moves. If we can see when authority is overconcentrated, we can add safeguards before a bad shortcut becomes policy.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Three everyday scenes&lt;/em&gt;&lt;br&gt;
Customer support, “I just need this fixed.”&lt;br&gt;
A client chats with a phone carrier. The agent writes, “We will credit your bill today. You will receive an email confirmation before 6 pm.” Authority Entropy drops. The words carry an executable promise, a time, and a checkable outcome. If the agent hedges, “We will try to review this soon, maybe today,” entropy rises. Same intention, different form, different behavior. Clear, checkable form produces faster compliance and fewer escalations.&lt;/p&gt;

&lt;p&gt;Family trip planning, “Where are we staying?”&lt;br&gt;
Two people plan a weekend.&lt;br&gt;
A) “Maybe we could do a hotel, unless prices go up, or we wait for a better deal.” Entropy rises, choices multiply, no one books.&lt;br&gt;
B) “Book Hotel Capri tonight, cancel free until Thursday, train at 9:10, I will buy the tickets now.” Entropy falls, the plan locks, stress drops. The difference is not who decides, it is how the sentence binds action. The form carries authority, not the personality.&lt;/p&gt;

&lt;p&gt;Team stand-up, “Who owns the fix?”&lt;br&gt;
A bug hits production.&lt;br&gt;
Low entropy talk: “Maria owns the hotfix, due 11:30, John reviews, I merge, notify client at noon.”&lt;br&gt;
High entropy talk: “We should probably look into it, maybe Maria or John, we will see if noon is realistic.”&lt;br&gt;
Same people, same goodwill, different outcomes. The enumerated plan, the time box, and the named owner are formal cues that reduce ambiguity and speed convergence.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the index adds beyond “tone”
&lt;/h2&gt;

&lt;p&gt;Tone can be friendly and still bind action. Tone can be stern and still say nothing. Authority Entropy tracks form, not mood. It detects patterns that actually move behavior, like enumerations that scope decisions, deontic verbs that prescribe action, or passive frames that hide the agent. This is the difference between “We apologize” and “We will refund today,” between “We value your feedback” and “We will call you at 16:00 with a resolution.”&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use it without software
&lt;/h2&gt;

&lt;p&gt;You do not need the model to act on the idea. In any critical exchange, watch three things.&lt;br&gt;
Clarity, is there one next action that a reader can execute without guessing.&lt;br&gt;
Ownership, is a human or role named for that action.&lt;br&gt;
Time, is there a deadline or trigger that can be checked.&lt;br&gt;
If any of the three is missing, entropy is rising. Add the missing piece in one sentence, then stop. You just lowered entropy.&lt;/p&gt;
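&lt;p&gt;The three-question check can even be automated crudely. Everything below is a hypothetical heuristic invented for this example, with made-up patterns; it is not part of the research:&lt;/p&gt;

```python
import re

def entropy_risks(message):
    """Flag which of the three anchors (action, owner, time) is missing."""
    risks = []
    # Clarity: is there one executable next action?
    if not re.search(r"\b(?:will|shall|must|book|send|ship|call|merge)\b",
                     message, re.IGNORECASE):
        risks.append("no executable action")
    # Ownership: is a person or role bound to that action?
    if not re.search(r"\b(?:I|We|You|[A-Z][a-z]+)\s+(?:will|owns?|reviews?|ships?)\b",
                     message):
        risks.append("no named owner")
    # Time: is there a checkable deadline or trigger?
    if not re.search(r"\b(?:\d{1,2}:\d{2}|today|tonight|noon|by \w+day)\b",
                     message, re.IGNORECASE):
        risks.append("no deadline or trigger")
    return risks

print(entropy_risks("Maria will ship the hotfix by 11:30, John reviews it."))
print(entropy_risks("We should probably look into it at some point."))
```

&lt;p&gt;The low-entropy sentence passes all three checks; the hedged one fails every check, which is exactly the drift the index is designed to surface.&lt;/p&gt;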

&lt;h2&gt;
  
  
  Where this goes next
&lt;/h2&gt;

&lt;p&gt;The research builds a public specification so others can test and challenge it. It trains a strict left-context classifier, computes entropy, slope, and volatility over time, and checks whether these features predict compliance and convergence in synthetic tasks, open multi-party datasets, and consented human-model dialogues. It also performs stress tests that edit the form while keeping meaning, for example removing stacked modals or adding hedges, to see how the signal reacts. Early results show that local authority structure explains outcomes that sentiment and politeness do not. To strengthen its claims to novelty and originality, the roadmap includes harder baselines, out-of-sample endpoints, and a full replication kit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters for the commons
&lt;/h2&gt;

&lt;p&gt;Public services, hospitals, schools, courts, and companies run on text. If we can measure when a text will actually be followed, we can design for compliance without coercion, and we can spot failure early. That is not about making speech colder, it is about making responsibility visible in the sentence itself. Language becomes a tool for coordination, not a theater for authority.&lt;/p&gt;

&lt;h2&gt;
  
  
  Read more and follow the project
&lt;/h2&gt;

&lt;p&gt;Website, &lt;a href="https://www.agustinvstartari.com/" rel="noopener noreferrer"&gt;agustinvstartari.com&lt;/a&gt;&lt;br&gt;
SSRN Author Page, &lt;a href="https://doi.org/10.2139/ssrn.5272361" rel="noopener noreferrer"&gt;https://doi.org/10.2139/ssrn.5272361&lt;/a&gt;&lt;br&gt;
 (profile and latest working papers)&lt;br&gt;
Zenodo, Executable Power, &lt;a href="https://doi.org/10.5281/zenodo.15754714" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq114u5zy0gnaz5wneg2a.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ethos
&lt;/h2&gt;

&lt;p&gt;I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored. — Agustin V. Startari&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>discuss</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How predictive text reshapes academic credit, one suggestion at a time</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Wed, 08 Oct 2025 12:34:03 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/how-predictive-text-reshapes-academic-credit-one-suggestion-at-a-time-cej</link>
      <guid>https://dev.to/agustin_v_startari/how-predictive-text-reshapes-academic-credit-one-suggestion-at-a-time-cej</guid>
      <description>&lt;p&gt;&lt;strong&gt;When Autocomplete Decides Who Gets Cited&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylmv7gunr7tnca4b2atk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylmv7gunr7tnca4b2atk.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
Each time a writing assistant completes a citation, something larger than convenience is taking place. A small transfer of visibility occurs, often invisible to the writer. The tool suggests a name, a title, and a year. The sentence looks finished. You accept it because it reads smoothly and feels professional. That fluency is not neutral. The model behind the suggestion has learned from archives of published texts that already overrepresent some names and underrepresent others. When the interface proposes “as established by Smith (2017),” it is not evaluating relevance. It is reproducing a statistical pattern that privileges what appears most often. Accepting the suggestion takes a second, but over time those seconds add up to a measurable redistribution of recognition. The process narrows the range of visible authors while creating the illusion of objectivity.&lt;/p&gt;

&lt;p&gt;The study Citation by Completion: LLM Writing Aids and the Redistribution of Academic Credit examines this process as an economy of legitimacy that operates inside the sentence. Predictive text is not only a technical feature. It is a market of authority that functions through frequency. What appears most often in the model’s corpus becomes what is most often suggested, and what is most often suggested becomes what writers cite. In controlled experiments, participants wrote short abstracts under three conditions: with prediction turned off, with neutral phrasing turned on, and with authority phrasing that included expressions such as “seminal work” or “canonical theory.” When authority phrasing appeared, citation diversity dropped sharply. The same few authors dominated the outputs, while novelty and variation declined. The findings show that predictive phrasing amplifies existing hierarchies by merging fluency with credibility.&lt;/p&gt;

&lt;p&gt;The pattern is familiar in other fields. Streaming services recommend songs because they are already popular. Social media feeds amplify posts that match earlier engagement. Predictive writing applies the same logic to academic language. The model has seen certain names more often, so it offers them first. New or regional authors appear less because they occupy smaller parts of the corpus. Their visibility does not reflect quality but statistical presence. For a researcher in Nairobi, Bogotá, or Dhaka, this means that their work may be absent from suggestion lists even if it addresses the same topic. Predictive writing therefore reproduces global asymmetries that already exist in publishing. The exclusion is not intentional but structural. The machine reflects the imbalance of its own training data, and the writer completes the cycle by accepting what reads as natural.&lt;/p&gt;

&lt;p&gt;The study proposes a corrective structure called the Fair Citation Prompt. It reframes the predictive interface as a transparent mediator instead of an invisible assistant. Each time a citation is suggested, the system should show basic metadata: the frequency of the source within the corpus, the date of its last appearance, and its disciplinary or regional origin. Alongside the most probable suggestion, the interface should present an alternative drawn from a different field or location. This small design change restores deliberation. The writer remains efficient but becomes aware of the pattern behind the prediction. Accepting a citation becomes an informed decision rather than a default action.&lt;/p&gt;

&lt;p&gt;This issue also concerns domains beyond academia. Journalists use predictive text to finish common expressions such as “experts agree,” “according to reports,” or “widely accepted.” Corporate writers repeat “industry standard” and “best practice.” Legal professionals accept “established precedent” without checking its origin. These phrases are not neutral. They create an atmosphere of certainty that can replace evidence with familiarity. Predictive systems accelerate this effect by reproducing the same formulations that appear in their training data. The result is language that feels authoritative even when it lacks verification. Form begins to replace truth, and fluency becomes the disguise of bias.&lt;/p&gt;

&lt;p&gt;The practical lesson is clear. Every suggested citation is a decision about distribution. Before accepting it, ask whether the recommendation reflects relevance or repetition. Add one more source that represents a different perspective or linguistic community. For example, when an English-language author appears as the default reference on digital ethics, look for a related study from Africa, Asia, or South America. The effort is small but significant. It keeps the advantages of predictive efficiency while preventing linguistic probability from becoming a filter that hides alternative viewpoints. Transparency in how suggestions are ranked preserves both speed and fairness.&lt;/p&gt;

&lt;p&gt;In the long term, the goal is concrete. Writing tools should separate evidential phrasing from name prediction, reveal simple metadata for every recommendation, and always include at least one low-frequency alternative. Fairness then becomes a feature of syntax, not a moral afterthought. When systems adopt this approach, credit follows reasoning instead of inertia. Writers keep ownership of their decisions. Readers encounter arguments that reflect judgment, not only the recurrence of familiar names.&lt;/p&gt;

&lt;p&gt;Predictive systems will continue to influence how text is produced. Their task is not to disappear but to become transparent. A sentence that reads well is not necessarily a sentence that represents knowledge well. Fluency must not conceal bias. The Fair Citation Prompt is one way to make this awareness operational. It transforms predictive writing from an invisible mechanism of repetition into a visible instrument of reflection. By revealing how linguistic probability shapes recognition, it allows authorship to remain deliberate even in an automated environment.&lt;/p&gt;

&lt;p&gt;Read &lt;em&gt;Citation by Completion: LLM Writing Aids and the Redistribution of Academic Credit&lt;/em&gt; to see how the Fair Citation Prompt can reshape academic writing and improve digital transparency. Write one paragraph with autocomplete on and another with it off. Compare which names appear and how authority syntax alters tone. Share your findings with editors or colleagues who use predictive systems. Each observation adds to a growing understanding of how fairness can begin in the structure of a sentence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSRN Author Page:&lt;/strong&gt; &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Website:&lt;/strong&gt; &lt;a href="https://www.agustinvstartari.com/" rel="noopener noreferrer"&gt;https://www.agustinvstartari.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ethos&lt;/strong&gt;&lt;br&gt;
I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is authored. - Agustin V. Startari&lt;/p&gt;

</description>
      <category>ai</category>
      <category>education</category>
      <category>science</category>
    </item>
    <item>
      <title>Stop Trusting “AI Summaries.” They Are Rewriting Your Care Rules</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Sat, 04 Oct 2025 21:26:05 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/stop-trusting-ai-summaries-they-are-rewriting-your-care-rules-18fm</link>
      <guid>https://dev.to/agustin_v_startari/stop-trusting-ai-summaries-they-are-rewriting-your-care-rules-18fm</guid>
      <description>&lt;p&gt;_How to spot and stop silent policy drift when large language models “help” write health rules.&lt;/p&gt;

&lt;p&gt;Health teams love summaries. They compress long guidance into quick paragraphs. They promise speed. They look harmless. In practice, many “AI summaries” do more than shorten text. They change it. A single modal shift from should to must hardens duties. A widened quantifier turns eligible patients with defined conditions into eligible patients. A passive rewrite removes the actor who is supposed to act. If your agency lets model-generated text slip into circulars, billing manuals, or coverage bulletins without a clause level trace, you are not summarizing. You are rewriting care rules without a signature.&lt;/p&gt;

&lt;p&gt;This post explains how that happens, how to detect it, and how to build a simple, verifiable guardrail. The goal is practical. Keep the speed, keep the clarity, remove the silent drift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The three ways “summaries” mutate policy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deontic creep. The model upgrades should to must, or replaces may with should. That single word changes the duty of care and the enforcement posture. If it survives into a circular, clinics and payers must comply or face sanction.&lt;/li&gt;
&lt;li&gt;Default scope expansion. The model generalizes. It drops the limiter that protected budget and intent. Eligible patients with defined conditions becomes eligible patients. Service within defined hours becomes service at all times. Costs surge without a formal decision.&lt;/li&gt;
&lt;li&gt;Agent deletion. The model replaces the responsible actor with an impersonal phrase. The provider schedules becomes scheduling is ensured. Accountability vanishes. Audits stall.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not edge cases. They are common effects of fluent rewriting. Fluency hides change. Screens move quickly. Reviewers are busy. Without structure, the wrong sentence ships.&lt;/p&gt;
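&lt;p&gt;As a sketch of how such a check might be automated, the three mutation types can be flagged with simple patterns. This is a minimal illustration, not a production reviewer: the pattern lists and function name are assumptions, and a real pass would diff the summary against its source rather than scan the summary alone.&lt;/p&gt;

```python
import re

# Illustrative pattern lists, one per mutation type. A real reviewer
# would compare source and summary; this scans a single clause.
DEONTIC_HARDENERS = [r"\bmust\b", r"\bshall\b"]
SCOPE_WIDENERS = [r"\ball patients\b", r"\bat all times\b", r"\bany provider\b"]
AGENT_DELETERS = [r"\bis ensured\b", r"\bwill be sought\b", r"\bis expected\b"]

def flag_clause(clause):
    """Return the trigger labels a clause raises, empty if none."""
    checks = {
        "deontic_escalation": DEONTIC_HARDENERS,
        "scope_expansion": SCOPE_WIDENERS,
        "agent_deletion": AGENT_DELETERS,
    }
    return [
        label
        for label, patterns in checks.items()
        if any(re.search(p, clause, re.IGNORECASE) for p in patterns)
    ]
```

&lt;p&gt;Any clause that returns a label cannot auto-pass; it is routed to explicit human review.&lt;/p&gt;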

&lt;p&gt;&lt;strong&gt;Make the clause the unit of control&lt;/strong&gt;&lt;br&gt;
Treat each surviving clause as a decision object. For every clause that makes it into an issued policy, you need three bindings.&lt;/p&gt;

&lt;p&gt;Inputs. The prompts, parameters, and sources that proposed the wording, with timestamps and content hashes.&lt;/p&gt;

&lt;p&gt;Approvals. The human verdicts that accepted, modified, or rejected the wording, tied to role identities.&lt;/p&gt;

&lt;p&gt;Integrity. A small public file that proves the issued text matches a signed bundle, without exposing private notes.&lt;/p&gt;

&lt;p&gt;This is not heavy. It is a checklist and a few files. You already have the roles. You already keep records. You need to connect them at the clause level and publish a light integrity signal with the policy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The four triggers that must never auto-pass&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run an automated pass that flags high risk constructions, then require explicit reviewer action. The triggers are simple.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deontic escalation, for example should to must. Require dual countersignature by Legal and Policy. A one sentence rationale that cites the source is enough.&lt;/li&gt;
&lt;li&gt;Scope change, for example widening a quantifier or dropping a limiter. Require qualifiers or a cross reference to existing limits. If none exists, reject or route to formal approval.&lt;/li&gt;
&lt;li&gt;Agent deletion. Restore the actor or attach the clause to a responsibility mapping that names a role.&lt;/li&gt;
&lt;li&gt;Nominalizations that hide decision rights, for example approval will be sought. Convert to active voice with an actor and deadline, or bind the process to a named role with a time bound.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If a clause triggers any of these, it cannot ship on model authority. A human must own it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A realistic before and after&lt;/strong&gt;&lt;br&gt;
Before, model “summary”:&lt;br&gt;
The provider must schedule a telehealth follow up within 48 hours for all patients.&lt;/p&gt;

&lt;p&gt;Looks efficient. Reads well. It is wrong on two axes. Duty is hardened for all cases. Scope is widened to all patients.&lt;/p&gt;

&lt;p&gt;After, controlled clause:&lt;br&gt;
The provider must schedule a telehealth follow up within 48 hours for high risk patients as defined in Annex A. The provider should schedule a telehealth follow up within 48 hours for other patients, with exceptions documented in the record.&lt;/p&gt;

&lt;p&gt;Now the language matches intent. The clause cites a risk definition. The actor is present. The record will show who approved the change and which sources justified it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“But we only use AI for drafts”&lt;/strong&gt;&lt;br&gt;
Drafts travel. A sentence accepted “for later refinement” often survives unchanged. Copy paste across documents multiplies the effect. If you cannot show who accepted the sentence and on what basis, you are asking staff and courts to guess. Guessing is not governance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop thinking “protocol,” start using compiled rule&lt;/strong&gt;&lt;br&gt;
After the first equivalence, treat protocol as a regla compilada, a compiled rule. The regla compilada is a production procedure. It maps inputs to clauses in a way that is representable, repeatable, and auditable. In practice, that means your drafting pipeline captures:&lt;/p&gt;

&lt;p&gt;• Actor and role for every generation and review.&lt;br&gt;
• Model version, provider, and system instructions.&lt;br&gt;
• Parameter ledger, for example temperature and top p.&lt;br&gt;
• Prompt tree with unique identifiers.&lt;br&gt;
• Retrieval sources with content hashes.&lt;br&gt;
• Clause level diffs at each checkpoint.&lt;br&gt;
• Timestamps from a synchronized clock.&lt;br&gt;
• Signatures that bind artifacts to people and time.&lt;/p&gt;
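&lt;p&gt;A minimal sketch of what one captured record could look like, assuming JSON manifests and SHA-256 content hashes. The field names are illustrative, not a standard schema.&lt;/p&gt;

```python
import hashlib
import json
from datetime import datetime, timezone

def content_hash(text):
    """SHA-256 over the exact text of a clause or source."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def clause_record(clause_id, text, actor, role, model, params, prompt_id, sources):
    """One manifest entry binding a clause to its inputs and reviewer."""
    return {
        "clause_id": clause_id,
        "text_hash": content_hash(text),
        "actor": actor,
        "role": role,
        "model": model,
        "parameters": params,  # e.g. temperature and top p
        "prompt_id": prompt_id,
        "source_hashes": [content_hash(s) for s in sources],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def serialize(record):
    """Deterministic serialization so the record itself can be hashed and signed."""
    return json.dumps(record, sort_keys=True)
```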

&lt;p&gt;You do not need vendor magic. You need discipline and a few simple tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The minimal toolchain that works&lt;/strong&gt;&lt;br&gt;
• Version control for text with a clause mapping script.&lt;br&gt;
• Hashing and signing to bind drafts, inputs, and approvals.&lt;br&gt;
• Manifest files that list prompts, parameters, sources, and verdicts per clause.&lt;br&gt;
• Integrity file published with the policy, so outsiders can verify that the public text matches a signed bundle.&lt;/p&gt;

&lt;p&gt;This stack can be built with standard repositories, internal PKI, and lightweight scripts. No proprietary lock in. No model replay required.&lt;/p&gt;
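&lt;p&gt;The public integrity file can be as small as one digest. A sketch of the step an outsider would run to verify it, assuming SHA-256; actual signing depends on your internal PKI and is omitted here.&lt;/p&gt;

```python
import hashlib

def bundle_digest(policy_text, manifest_bytes):
    """Digest binding the issued text to its evidence manifest."""
    h = hashlib.sha256()
    h.update(policy_text.encode("utf-8"))
    h.update(manifest_bytes)
    return h.hexdigest()

def verify(published_text, manifest_bytes, integrity_digest):
    """Recompute the digest from the public text and compare."""
    return bundle_digest(published_text, manifest_bytes) == integrity_digest
```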

&lt;p&gt;&lt;strong&gt;Roles that already exist, duties that become explicit&lt;/strong&gt;&lt;br&gt;
• Policy Lead adopts text, sets scope, signs publication.&lt;br&gt;
• Legal Counsel owns legal sufficiency and records compliance.&lt;br&gt;
• Clinical Safety Reviewer owns duty of care implications.&lt;br&gt;
• Automation Officer owns prompts, parameters, retrieval configuration.&lt;br&gt;
• Records Officer owns keys, time, manifests, retention.&lt;/p&gt;

&lt;p&gt;Tie each clause level decision to one or more of these roles. When a trigger fires, demand the right countersignature. You avoid committee drift because the file shows exactly who decided.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to add this without slowing down&lt;/strong&gt;&lt;br&gt;
Use snapshots. Four only.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scoping snapshot. Objectives, affected rules, retrieval whitelist, desired deontic register.&lt;/li&gt;
&lt;li&gt;Drafting snapshot. Prompt tree, generated candidates, parameter ledger.&lt;/li&gt;
&lt;li&gt;Legal review snapshot. Clause level verdicts and rationales, diffs from previous.&lt;/li&gt;
&lt;li&gt;Publication snapshot. Final text bound to the full evidence bundle, public integrity file attached.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Work continues at your current pace. The snapshots make meaning visible. The integrity file makes authenticity visible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What your staff will see after one month&lt;/strong&gt;&lt;br&gt;
• Fewer silent escalations from should to must.&lt;br&gt;
• Scope creep caught early and documented.&lt;br&gt;
• Actors restored in clauses, easier audits.&lt;br&gt;
• Shorter disputes, because the file answers who, why, and when.&lt;br&gt;
• Confidence that speed did not erase accountability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frequently heard objections, answered briefly&lt;/strong&gt;&lt;br&gt;
“We do not have time.” You already spend time fixing avoidable disputes. The snapshots and triggers cut that time.&lt;/p&gt;

&lt;p&gt;“Vendors handle provenance.” Vendors document systems, not your policy clauses. You need a record that ties specific sentences to your decisions.&lt;/p&gt;

&lt;p&gt;“This looks like overkill.” One public incident where an AI summary hardened a duty or widened coverage without approval will cost more than setting the guardrail now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A simple starter checklist&lt;/strong&gt;&lt;br&gt;
Today&lt;br&gt;
• Identify one policy type for a pilot.&lt;br&gt;
• Name the five roles.&lt;br&gt;
• Turn on clause mapping and diffs in your repository.&lt;/p&gt;

&lt;p&gt;This week&lt;br&gt;
• Add the four triggers to your review.&lt;br&gt;
• Capture model version and parameter logs for every generation.&lt;br&gt;
• Start signing snapshots with synchronized timestamps.&lt;/p&gt;

&lt;p&gt;This month&lt;br&gt;
• Publish your first policy with a public integrity file.&lt;br&gt;
• Store the internal bundle under your records schedule.&lt;br&gt;
• Review survival rates for triggered clauses and adjust thresholds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The real point&lt;/strong&gt;&lt;br&gt;
Language is infrastructure in healthcare. It allocates duties, money, and risk. If a model proposes language and that language survives into law or policy, a human must own it. Not in a slide. In a file. With a timestamp. With a reason. That is how you keep speed, keep clarity, and keep the authority where it belongs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the author&lt;/strong&gt;&lt;br&gt;
Agustin V. Startari, linguistic theorist and researcher in historical studies. Focus on AI language, authority by form, and administrative texts. Researcher ID K-5792-2016.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ethos&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qduksx02no134gnnylp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qduksx02no134gnnylp.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More work&lt;/strong&gt;&lt;br&gt;
Site: &lt;a href="https://www.agustinvstartari.com/" rel="noopener noreferrer"&gt;https://www.agustinvstartari.com/&lt;/a&gt;&lt;br&gt;
SSRN Author Page: &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Zenodo profile: &lt;a href="https://zenodo.org/me/uploads?q=&amp;amp;f=shared_with_me%3Afalse&amp;amp;l=list&amp;amp;p=1&amp;amp;s=10&amp;amp;sort=newest" rel="noopener noreferrer"&gt;https://zenodo.org/me/uploads?q=&amp;amp;f=shared_with_me%3Afalse&amp;amp;l=list&amp;amp;p=1&amp;amp;s=10&amp;amp;sort=newest&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Govern Your Personal AI: User Controls That Prevent Abuse</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Thu, 02 Oct 2025 12:13:49 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/how-to-govern-your-personal-ai-user-controls-that-prevent-abuse-41d7</link>
      <guid>https://dev.to/agustin_v_startari/how-to-govern-your-personal-ai-user-controls-that-prevent-abuse-41d7</guid>
      <description>&lt;p&gt;Practical guardrails you can apply today to keep assistance useful, auditable, and safe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) Explanation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Personal AI amplifies attention, memory, and execution. The same capacities create risk when inputs, permissions, or outputs are not constrained. User-level governance means you decide what the system can read, what it can write, what it can run, and under which conditions. The goal is measurable control you can verify, not intuition about safety. The following clarifies scope, attack surfaces, and the properties a governed setup must satisfy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A. What governance means at user level&lt;/strong&gt;&lt;br&gt;
Scope of access. Precisely list data classes the AI may see. Examples, inbox headers, not full bodies. Calendar titles, not descriptions. Bank balances, not account numbers.&lt;/p&gt;

&lt;p&gt;Scope of action. Define which operations are allowed. Examples, draft only, no send. Create files, no external share. Read spreadsheets, no edits.&lt;/p&gt;

&lt;p&gt;Verification path. Require a pre-commit summary before any action that alters records, money, or public content. You approve the summary. Only then the system executes.&lt;/p&gt;

&lt;p&gt;Traceability. Keep a minimal log of inputs, outputs, sources, files touched, and actions taken. Without a trace, you cannot audit or improve controls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;B. Typical failure modes that create abuse without a malicious model&lt;/strong&gt;&lt;br&gt;
Over-permissioned integrations. The assistant receives write or admin access when read would be enough.&lt;/p&gt;

&lt;p&gt;Memory sprawl. Private facts saved for convenience reappear in unrelated tasks and leak context.&lt;/p&gt;

&lt;p&gt;Prompt injection through untrusted text. Pasted notes or web pages contain instructions that override your intent.&lt;/p&gt;

&lt;p&gt;Ambiguous delegation. A blanket approval becomes authorization to spend, share, or post.&lt;/p&gt;

&lt;p&gt;Silent retries. Automations repeat a failing action and multiply damage.&lt;/p&gt;

&lt;p&gt;Mixed profiles. Work and home contexts share memory and permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;C. Main attack surfaces to guard&lt;/strong&gt;&lt;br&gt;
Inputs. Everything the AI reads, including links, PDFs, screenshots, and copied text. Treat as untrusted until parsed and checked.&lt;/p&gt;

&lt;p&gt;Tools. Browsers, file systems, email senders, spreadsheets, shells. Each tool expands the blast radius.&lt;/p&gt;

&lt;p&gt;Persistence. Long-term memory, cloud storage, shared folders, API tokens.&lt;/p&gt;

&lt;p&gt;Outputs. Messages sent, files created, posts published, transactions executed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;D. Properties of a governed personal AI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimality. Only the data and tools needed to complete the current task class. Nothing extra.&lt;/li&gt;
&lt;li&gt;Separability. Risky capabilities are isolated behind additional checks. Example, finance actions require a second factor.&lt;/li&gt;
&lt;li&gt;Observability. Every consequential step produces a human-readable summary with sources.&lt;/li&gt;
&lt;li&gt;Revocability. You can disable an integration, expire a token, or clear a memory segment immediately.&lt;/li&gt;
&lt;li&gt;Replay resistance. The system does not execute old approvals in new contexts. Approvals are bound to time, scope, and dataset.&lt;/li&gt;
&lt;li&gt;Default deny. New domains, tools, and data classes start blocked until you allow them explicitly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;E. Practical examples of governed behavior&lt;/strong&gt;&lt;br&gt;
Email drafting. The AI reads subject and first 200 characters, proposes a draft, and stops. You send. No auto-send.&lt;/p&gt;

&lt;p&gt;Travel planning. The AI compiles options with total price, fare rules, and cancellation terms, then halts. You pick one. A one-time code authorizes the purchase.&lt;/p&gt;

&lt;p&gt;File editing. The AI writes to a scratch folder only. A diff is displayed. You approve or discard.&lt;/p&gt;

&lt;p&gt;Research with browsing. The AI fetches from an allowlist of domains, attaches citations with dates, and produces a short source-check note. No access to personal drives during research mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;F. How to measure that governance works&lt;/strong&gt;&lt;br&gt;
False action rate. Count actions the AI attempted that would have violated policy. Target is near zero.&lt;/p&gt;

&lt;p&gt;Approval latency. Time from proposal to user approval for sensitive tasks. Track and reduce without removing checks.&lt;/p&gt;

&lt;p&gt;Drift detection. Number of times memory or permissions were out of date. Run a weekly review to prune.&lt;/p&gt;

&lt;p&gt;Audit completeness. Percentage of sessions with a usable log and pre-commit summary. Target is 100.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;G. Lifecycle of a safe task&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scoping. You state the goal and the allowed resources.&lt;/li&gt;
&lt;li&gt;Proposal. The AI returns a plan, the data it needs, and the tools it will use.&lt;/li&gt;
&lt;li&gt;Execution preview. Before any write or send, you receive a concise, itemized summary of changes or charges.&lt;/li&gt;
&lt;li&gt;Authorization. You approve or reject. Approval is stored with time, scope, and identifiers.&lt;/li&gt;
&lt;li&gt;Action and log. The system executes, records what changed, and provides a receipt.&lt;/li&gt;
&lt;li&gt;Cleanup. Ephemeral data is cleared. Long-term memory is updated only with facts you explicitly mark as reusable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2) Why it matters&lt;/strong&gt;&lt;br&gt;
Abuse does not require a bad model. It often comes from misconfiguration, over-permissioned integrations, prompt injection through websites or documents, leaky memory, and ambiguous delegation. Clear controls reduce these risks without sacrificing productivity. You keep the upside and cap the downside.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) How-to, the essential control set&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define scope of action&lt;/strong&gt;&lt;br&gt;
Create a short permissions matrix for your AI, Read, Write, Transact, Execute. Default to Read only. Promote to Write or Execute only when a task is repetitive and low risk. Require explicit user confirmation for any Transact action that moves money or changes accounts.&lt;/p&gt;
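&lt;p&gt;The matrix itself can be a few lines of code. A minimal default-deny sketch; the level ordering and task class names are assumptions taken from the list above.&lt;/p&gt;

```python
# Ordered levels: each grant implies everything to its left.
LEVELS = ("read", "write", "transact", "execute")

class PermissionMatrix:
    """Default deny: a task class with no grant gets nothing, not even read."""

    def __init__(self):
        self._grants = {}  # task class -> highest granted level

    def grant(self, task_class, level):
        if level not in LEVELS:
            raise ValueError("unknown level: " + level)
        self._grants[task_class] = level

    def allowed(self, task_class, level):
        granted = self._grants.get(task_class)
        if granted is None:
            return False
        return LEVELS.index(granted) >= LEVELS.index(level)
```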

&lt;p&gt;&lt;strong&gt;Use domain allowlists&lt;/strong&gt;&lt;br&gt;
When the AI browses or fetches data, allow only sources you trust. Block unknown domains by default. Add new domains case by case with a note on why they are allowed.&lt;/p&gt;
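&lt;p&gt;A sketch of the default-block check, using exact hostname matching only. The listed domains are placeholders; real entries belong in your one-page policy, and subdomain handling may need more care.&lt;/p&gt;

```python
from urllib.parse import urlparse

# Placeholder allowlist; replace with the domains your policy names.
ALLOWLIST = {"example-journal.org", "archive.example.edu"}

def fetch_allowed(url, allowlist=None):
    """Block by default: only exact hostname matches pass."""
    if allowlist is None:
        allowlist = ALLOWLIST
    host = urlparse(url).hostname or ""
    return host in allowlist
```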

&lt;p&gt;&lt;strong&gt;Data minimization by design&lt;/strong&gt;&lt;br&gt;
Share the minimum fields needed for a task. Replace raw IDs, emails, and account numbers with aliases until confirmation time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory hygiene&lt;/strong&gt;&lt;br&gt;
Separate long-term memory from task context. Clear task memory at the end of sensitive sessions. Store only facts you want the AI to reuse for months. Everything else stays ephemeral.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate limits and session caps&lt;/strong&gt;&lt;br&gt;
Set caps for messages per minute and actions per hour. Add a cool-down after any action that touches money, credentials, or personal images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human-in-the-loop checkpoints&lt;/strong&gt;&lt;br&gt;
For edits to contracts, emails, or posts, require a tracked-changes draft. For purchases or bookings, require a pre-commit summary with price, vendor, date, and cancellation terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execution sandbox&lt;/strong&gt;&lt;br&gt;
Run scripts, automations, or file operations in a restricted workspace. Forbid network calls from the sandbox unless they pass your allowlist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strong identity and consent&lt;/strong&gt;&lt;br&gt;
Tie voice or face activation to a second factor for any sensitive command. For shared devices, use a passphrase gate before enabling privileged skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit trail&lt;/strong&gt;&lt;br&gt;
Log inputs, outputs, clicked links, files touched, and actions taken. Keep compact summaries for each session so you can review decisions quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revocation and expiry&lt;/strong&gt;&lt;br&gt;
All tokens, API keys, and shared folders must have an expiry date. Rotate keys every 60 to 90 days. Re-authorize integrations only if still needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content safety filters&lt;/strong&gt;&lt;br&gt;
Turn on refusal and filtering for self-harm, hate, sexual content with minors, and illegal trade. For minors at home, enable a strict whitelist of sites and tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt-injection defenses&lt;/strong&gt;&lt;br&gt;
Treat any external text as untrusted. Strip hidden prompts from pasted content and PDFs. When the AI quotes web content, require citations and a brief source-check note.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4) Real-world cases and what fixes them&lt;/strong&gt;&lt;br&gt;
Case A, voice-cloning scam attempt&lt;br&gt;
A family receives a call with a cloned voice asking for urgent money.&lt;br&gt;
Controls that stop it, second factor for any transfer, mandatory call-back to a known number, and a rule that voice alone is never a sufficient signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case B, workplace notes leaking client data&lt;/strong&gt;&lt;br&gt;
An employee pastes a client brief into an AI note-taker that syncs to a public space.&lt;br&gt;
Controls that stop it, data minimization, sandboxed workspace with no public shares, and a write-permission request before any document sync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case C, unsafe results for a child&lt;/strong&gt;&lt;br&gt;
A home tablet AI answers sensitive queries late at night.&lt;br&gt;
Controls that stop it, content safety filters set to strict, time-based usage windows, and user profiles with age-appropriate permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case D, prompt injection through browsing&lt;/strong&gt;&lt;br&gt;
The AI follows a link that tells it to reveal keys or rewrite safety rules.&lt;br&gt;
Controls that stop it, domain allowlist, read-only browsing mode, and a refusal rule for any request to change its own guardrails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Implementation quick start&lt;/strong&gt;&lt;br&gt;
One-page policy. Write a single page that lists your AI’s allowed actions, domains, and data classes.&lt;br&gt;
Weekly review. Read the audit trail once a week. Remove stale permissions.&lt;br&gt;
Red team yourself. Once a month, try to make the AI perform an off-limits action. Adjust controls based on what you learn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6) FAQs&lt;/strong&gt;&lt;br&gt;
Can I keep the assistant useful if I lock it down this much&lt;br&gt;
Yes. Start restrictive, grant narrowly scoped permissions that match one workflow at a time, and keep human checkpoints for legal, financial, or reputational actions. Measure usefulness by cycle time and error rate, not by how many tools are enabled.&lt;/p&gt;

&lt;p&gt;What if I need different settings for work and home&lt;br&gt;
Use separate profiles. Each profile holds its own allowlist, memory, tokens, and action scope. Do not share memory or keys across profiles. Switch profiles before starting a task.&lt;/p&gt;

&lt;p&gt;How do I know if the AI is over-collecting data&lt;br&gt;
Check whether each input field contributes to the output. If not, remove it. Review templates quarterly. For email, use headers instead of full bodies. For calendars, use titles instead of descriptions. For finance, use balances instead of account numbers.&lt;/p&gt;

&lt;p&gt;What is the minimum viable audit trail&lt;br&gt;
Timestamp, task ID, tools used, domains contacted, files touched, sources cited, actions proposed, actions executed, user approvals with scope and expiry, receipts or diffs. Keep summaries human readable.&lt;/p&gt;
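&lt;p&gt;That field list maps directly onto an append-only log. A minimal sketch, one JSON object per line so entries stay greppable; the names follow the answer above and are not a standard schema.&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

AUDIT_FIELDS = [
    "task_id", "tools_used", "domains_contacted", "files_touched",
    "sources_cited", "actions_proposed", "actions_executed", "approvals",
]

def audit_record(task_id, **fields):
    """Missing fields are recorded as empty lists so gaps stay visible."""
    record = {name: fields.get(name, []) for name in AUDIT_FIELDS}
    record["task_id"] = task_id
    record["timestamp"] = datetime.now(timezone.utc).isoformat()
    return record

def append_log(path, record):
    """Append one line per session; summaries stay human readable."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, sort_keys=True) + "\n")
```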

&lt;p&gt;How do I prevent prompt injection when browsing or pasting text&lt;br&gt;
Treat all external text as untrusted. Strip hidden prompts from PDFs and copied content. Constrain browsing to a domain allowlist. Add a refusal rule for any instruction that asks the assistant to alter its own safety settings.&lt;/p&gt;

&lt;p&gt;What stops the model from auto-sending emails or posts&lt;br&gt;
Disable send tools by default. Require a draft-only flow, then a pre-commit summary listing recipient, subject or title, content length, links, and attachments. Send only after explicit approval.&lt;/p&gt;

&lt;p&gt;How do I gate financial actions&lt;br&gt;
Use a second factor for any spend, transfer, or subscription change. Require a charge summary with merchant, amount, currency, fees, cancellation terms, and refund windows. Approval expires after a short window, for example ten minutes.&lt;/p&gt;

&lt;p&gt;How should I handle tokens and API keys&lt;br&gt;
Store tokens in a secrets manager, not in prompts or notes. Rotate every 60 to 90 days. Scope keys to the smallest required permission. Set expiries and alerts for near-expiry or overuse.&lt;/p&gt;

&lt;p&gt;What belongs in long-term memory versus session memory&lt;br&gt;
Long-term memory contains preferences and facts you want reused for months. Examples, writing voice, project names, non-sensitive templates. Session memory holds task specifics, credentials, and any sensitive context. Clear session memory at task end.&lt;/p&gt;

&lt;p&gt;How do I stop silent retries that amplify damage&lt;br&gt;
Introduce retry ceilings and cooldowns for any action that writes or spends. Log every retry with cause. Require human review after the first failure for sensitive tasks.&lt;/p&gt;
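&lt;p&gt;One way to sketch the ceiling-plus-cooldown rule; the limits are placeholders you would tune per action class, and the timestamp is passed in explicitly for testability.&lt;/p&gt;

```python
class RetryGuard:
    """Cap total attempts for write or spend actions and enforce a
    cooldown between them; past the ceiling, a human must review."""

    def __init__(self, max_retries=1, cooldown_s=30.0):
        self.max_retries = max_retries  # retries allowed after the first attempt
        self.cooldown_s = cooldown_s
        self._attempts = {}             # action id -> list of timestamps

    def may_retry(self, action_id, now):
        """now is a monotonic timestamp in seconds."""
        history = self._attempts.setdefault(action_id, [])
        if len(history) > self.max_retries:
            return False                # ceiling hit, escalate to a human
        cooled = (not history) or (now - history[-1] >= self.cooldown_s)
        if not cooled:
            return False                # still cooling down
        history.append(now)
        return True
```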

&lt;p&gt;How do I verify the assistant’s advice before acting&lt;br&gt;
Require citations with dates for factual claims. Add a short source-check note summarizing why the sources are credible. For calculations, include inputs and the exact formula used. For legal or medical topics, treat outputs as research notes and consult a professional.&lt;/p&gt;

&lt;p&gt;What about children or shared devices at home&lt;br&gt;
Create restricted profiles. Turn on strict content filters. Limit hours of use. Require a passphrase for privileged skills. Block unapproved domains. Disable memory writes by default.&lt;/p&gt;

&lt;p&gt;How do I keep research mode from leaking into personal data&lt;br&gt;
Isolate research mode in a sandbox profile. Disable access to email, drives, and chat logs. Use a fixed domain allowlist. Export notes to a scratch folder for review, then move approved results to your main workspace.&lt;/p&gt;

&lt;p&gt;Can I let the assistant run scripts or automations&lt;br&gt;
Yes, inside a sandbox with no network or file access unless explicitly allowed. Require a plan preview with commands, inputs, and expected outputs. Present a diff for any file changes. Execute only after approval.&lt;/p&gt;

&lt;p&gt;How do I detect configuration drift&lt;br&gt;
Run a weekly audit that lists active tokens, allowed domains, enabled tools, and memory entries added in the last week. Remove anything unused or out of scope. Log the audit as a numbered change record.&lt;/p&gt;

&lt;p&gt;What is the fastest path to abuse prevention for non-experts&lt;br&gt;
Three steps. Block send and spend tools. Use a domain allowlist for browsing. Require pre-commit summaries for any write or publish action. Add a second factor later for finance.&lt;/p&gt;

&lt;p&gt;How do I handle images, screenshots, and PDFs&lt;br&gt;
Treat them as untrusted inputs. Strip metadata. Disable automatic link following from embedded content. If extraction is needed, parse to plain text and review before allowing the content into the task context.&lt;/p&gt;

&lt;p&gt;What should I do after a near miss or incident&lt;br&gt;
Freeze tokens for the affected tools. Export the audit trail for the session. Write a short post-mortem with root cause, blast radius, and fixes. Add a regression check to your weekly audit.&lt;/p&gt;

&lt;p&gt;Are plugins or third-party tools worth the risk&lt;br&gt;
Only if the productivity gain is clear. Check who operates the tool, what data it reads, where it stores data, and how revocation works. Prefer tools that support scoped permissions, expiries, and local logs.&lt;/p&gt;

&lt;p&gt;How do I manage approvals so they cannot be replayed&lt;br&gt;
Bind approvals to task ID, input hash, tool scope, and a short expiry. Require a fresh approval when any of these change.&lt;/p&gt;
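&lt;p&gt;A sketch of that binding: the token carries task ID, input hash, tool scope, and expiry, and any mismatch forces a fresh approval. The hash choice and default TTL are assumptions.&lt;/p&gt;

```python
import hashlib

def _h(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def approval_token(task_id, input_text, tool_scope, issued_at, ttl_s=600.0):
    """Approval bound to task, input hash, scope, and a short expiry."""
    return {
        "task_id": task_id,
        "input_hash": _h(input_text),
        "tool_scope": tool_scope,
        "expires_at": issued_at + ttl_s,
    }

def approval_valid(token, task_id, input_text, tool_scope, now):
    """Replay resistance: every bound field must match and the clock
    must still be inside the expiry window."""
    return (
        token["task_id"] == task_id
        and token["input_hash"] == _h(input_text)
        and token["tool_scope"] == tool_scope
        and token["expires_at"] > now
    )
```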

&lt;p&gt;How do I model default deny without breaking my flow&lt;br&gt;
Start with a small allowlist that covers your core tasks. Add new domains or tools only when a task requires them. Each addition must include a note with purpose and expiry. Review weekly.&lt;/p&gt;

&lt;p&gt;Can I let the assistant summarize my inbox safely&lt;br&gt;
Yes. Use headers and first lines only. Block attachments. Require an allowlist of senders for deeper reads. Never grant send permissions in the same session.&lt;/p&gt;

&lt;p&gt;Should I encrypt local archives of logs&lt;br&gt;
Yes, at rest and in transit. Use per-profile encryption keys. Store keys in a manager, not in the archive. Rotate keys on the same schedule as tokens.&lt;/p&gt;

&lt;p&gt;How do I prevent cross-contamination between teams or clients&lt;br&gt;
Create one profile per client. Keep separate allowlists, memories, and scratch folders. Disable cross-profile search. Require explicit export and manual review when moving content.&lt;/p&gt;

&lt;p&gt;What quantitative metrics should I track&lt;br&gt;
False action rate, approvals per sensitive task, average approval latency, number of revoked tokens per month, number of stale memory entries removed per audit, percentage of sessions with complete pre-commit summaries.&lt;/p&gt;

&lt;p&gt;Can I automate the weekly audit&lt;br&gt;
Yes, but the final decision should be human. Automation compiles a report, lists drift and unused permissions, and proposes revocations. You approve changes, then the system applies them and logs the result.&lt;/p&gt;

&lt;p&gt;How do I reduce hallucinations affecting decisions&lt;br&gt;
Enforce citations with dates. Penalize outputs without sources in your review. Prefer retrieval from trusted repositories. For numerical claims, require the calculation steps and units.&lt;/p&gt;

&lt;p&gt;What is the right retention window for logs&lt;br&gt;
Keep detailed logs for 30 to 90 days, then keep compact summaries for one year. Longer retention increases risk without proportional benefit for most personal setups.&lt;/p&gt;

&lt;p&gt;How do I test that guardrails work&lt;br&gt;
Run monthly red-team drills. Attempt to make the assistant send an email without approval, spend money, or access a blocked domain. Document results and fix any bypass found.&lt;/p&gt;

&lt;p&gt;When should I promote a permission from Read to Write or Execute&lt;br&gt;
Only after the task has run safely for at least ten cycles with zero policy violations and low approval latency, and only if the time saved is material. Add an expiry so the promotion is revisited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7) Examples you can copy&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Email drafting workflow&lt;/strong&gt;&lt;br&gt;
Read only inbox headers and preview → Generate draft with citations if needed → Human edits required → Send only after explicit confirmation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Research workflow&lt;/strong&gt;&lt;br&gt;
Domain allowlist of journals and archives → Extract quotes with source and date → Summarize with a source-check note → Store notes in a private folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finances workflow&lt;/strong&gt;&lt;br&gt;
Read balances → Propose actions with fees and alternatives → One-time code to execute → Log receipt, merchant, amount, and confirmation number.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Call to Action&lt;/strong&gt;&lt;br&gt;
Request the Personal AI Governance Checklist to adapt these controls to your setup. See the site and SSRN profile for related work and formal methods.&lt;/p&gt;

&lt;p&gt;Website, &lt;a href="https://www.agustinvstartari.com/" rel="noopener noreferrer"&gt;https://www.agustinvstartari.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SSRN Author Page, &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Author Data&lt;/strong&gt;&lt;br&gt;
ORCID, 0000-0002-2190-570X&lt;br&gt;
ResearcherID, K-5792-2016&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3lgax31fvcs8w2rjyb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3lgax31fvcs8w2rjyb7.png" alt=" " width="800" height="800"&gt;&lt;/a&gt; Agustin V. Startari is a linguistic theorist and researcher in historical studies. His work examines how form, not content, carries authority in modern systems, and how compiled rules shape compliance and legitimacy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ethos&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>react</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Indexical Collapse: How Predictive Systems Make Authority Without Reference</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Mon, 29 Sep 2025 17:08:46 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/indexical-collapse-how-predictive-systems-make-authority-without-reference-30a8</link>
      <guid>https://dev.to/agustin_v_startari/indexical-collapse-how-predictive-systems-make-authority-without-reference-30a8</guid>
      <description>&lt;p&gt;Why pronouns, demonstratives, and tenses in AI text can create institutional legitimacy even when they point to nothing verifiable&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0w6p6vs6zsm8h5f8zhc.png" alt=" " width="800" height="800"&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Problem in plain terms&lt;/strong&gt;&lt;br&gt;
Language that points is supposed to point to something. Pronouns such as I, we, and you presuppose speakers and addressees. Demonstratives such as this and those presuppose objects in a shared space. Temporal markers such as now and currently presuppose a moment that can be verified. Predictive language models routinely produce these forms because they are statistically plausible continuations of text. The forms survive. The referents do not.&lt;br&gt;
When models write "We find the evidence sufficient," or "The patient is now stable," or "These measures will guarantee compliance," those sentences look like institutional speech. They look authoritative. Yet in many cases the words float: there is no court, no clinical observation, no deliberated policy behind them. The indexicals function as if they anchored reality, but they do not. That systematic phenomenon is Indexical Collapse.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Authority without accountability.&lt;/strong&gt; Institutional audiences often accept texts that carry the linguistic signals of authority. When indexicals simulate institutional voice, readers and downstream systems may treat output as authoritative. Decisions, records, and actions can follow, despite the absence of verifiable grounding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Domain risk varies but can be catastrophic.&lt;/strong&gt; In low-stakes customer support, an unanchored we or now may be an annoyance. In medicine, a temporal marker misrepresenting current status can drive harmful clinical choices. In law, AI-generated transcripts or draft judgments that simulate judicial voice risk misattribution of legal authority.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Regulation and evaluation will need new dimensions.&lt;/strong&gt; Conventional measures of model quality emphasize factual accuracy, hallucination rates, and safety classifiers. They do not capture whether indexicals are properly anchored. Pragmatic auditing must become part of evaluation frameworks for systems used in high-stakes environments.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What the article does&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Defines Indexical Collapse as the systematic disappearance of referents in predictive outputs while indexical forms persist as grammatical signals.&lt;/li&gt;
&lt;li&gt;Classifies manifestations across pronouns, temporal markers, and demonstratives, and proposes a three tier taxonomy: minimal collapse, intermediate collapse, and complete collapse.&lt;/li&gt;
&lt;li&gt;Provides cross-sector case studies from judicial transcripts, automated medical reports, administrative records, and conversational agents, showing a consistent pattern.&lt;/li&gt;
&lt;li&gt;Proposes pragmatic auditing: a stepwise method to identify, classify, quantify, and set normative thresholds for unanchored indexicality in outputs.&lt;/li&gt;
&lt;li&gt;Offers provisional thresholds tied to institutional stakes: higher tolerance in low-stakes settings, near zero tolerance in high-stakes settings such as courts and clinical reporting.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the full academic argument and the methodological appendices consult the preprint on Zenodo: &lt;a href="https://zenodo.org/records/17226412" rel="noopener noreferrer"&gt;https://zenodo.org/records/17226412&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Examples that make the core point&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Judicial transcript example&lt;br&gt;
Model output: "We find the defendant guilty. This evidence proves intent."&lt;br&gt;
Issue: the pronoun we and the demonstrative this project collective deliberation and a specific evidentiary object. If these phrases are used in an AI draft that is accepted as authoritative without human confirmation, the text can be treated as a court voice even when no judgment was deliberated.&lt;/li&gt;
&lt;li&gt;Medical report example&lt;br&gt;
Model output: "The patient is now stable. We recommend further testing."&lt;br&gt;
Issue: now implies contemporaneous observation, and we implies a clinical team. If the model has not accessed live monitoring data or clinician verification, the report can misrepresent patient status and induce inappropriate interventions.&lt;/li&gt;
&lt;li&gt;Administrative minutes example&lt;br&gt;
Model output: "These measures will increase efficiency and will be implemented next quarter."&lt;br&gt;
Issue: these measures implies specified policies. If no policy decisions exist, the minutes can create the illusion of formal decisions, triggering procedural or financial consequences.&lt;/li&gt;
&lt;li&gt;Chatbot example&lt;br&gt;
Model output: "I understand your concern; we will resolve this now."&lt;br&gt;
Issue: I and we simulate a responsible agent and a live process. Without links to authenticated workflows or human agents, the language misleads users about actual service actions.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;strong&gt;How pragmatic auditing works, in practice&lt;/strong&gt;&lt;br&gt;
A pragmatic audit is not linguistic navel-gazing. It is an operational checklist that can be automated or human-mediated. Core steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify all indexical items in the text, including pronouns, demonstratives, temporal adverbs, and evidentials.&lt;/li&gt;
&lt;li&gt;Classify anchoring of each item as anchored, ambiguously anchored, or unanchored. Anchored means the referent is explicit and verifiable within the document or linked data. Ambiguous means recoverable with minimal context. Unanchored means no local or retrievable external referent.&lt;/li&gt;
&lt;li&gt;Quantify unanchored indexicals relative to total indexicals; compute a collapse ratio.&lt;/li&gt;
&lt;li&gt;Apply domain threshold: compare the collapse ratio to a domain threshold. For example, in clinical reports the threshold should approach zero; in customer chat logs some small tolerance may be acceptable.&lt;/li&gt;
&lt;li&gt;Intervene: where thresholds are breached require human review, deny publication, or block automated actions tied to the text.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This method turns the abstract problem into measurable compliance checks that can be integrated into deployment pipelines, content governance systems, and regulatory oversight.&lt;/p&gt;
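&lt;p&gt;To make the collapse ratio concrete, here is a toy sketch of steps 1 to 4, assuming a small regex-based detector and hand-supplied anchoring judgments; a real audit would need proper NLP and provenance checks:&lt;/p&gt;

```python
import re

# Toy detector: a short list of indexical forms; real audits need far more.
INDEXICALS = re.compile(
    r"\b(I|we|you|this|these|those|now|currently|here)\b", re.IGNORECASE)

def collapse_ratio(text, anchored):
    """Share of indexical tokens with no verifiable referent (0.0 = fully anchored)."""
    found = [m.group(0).lower() for m in INDEXICALS.finditer(text)]
    if not found:
        return 0.0
    unanchored = [tok for tok in found if tok not in anchored]
    return len(unanchored) / len(found)

def passes_threshold(text, anchored, threshold):
    """Domain gate: near-zero threshold for clinical or judicial text."""
    return threshold >= collapse_ratio(text, anchored)
```

&lt;p&gt;On the medical example above, "The patient is now stable. We recommend further testing." with no anchoring data scores a collapse ratio of 1.0 and fails a near-zero clinical threshold; anchoring both now (a monitoring timestamp) and we (named clinicians) drops the ratio to 0.0.&lt;/p&gt;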




&lt;p&gt;&lt;strong&gt;Policy and governance implications&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Requirement for provenance metadata. Institutional uses must include explicit provenance fields identifying human reviewers, data sources, and timestamps that anchor indexicals. AI outputs lacking provenance cannot be treated as authoritative records.&lt;/li&gt;
&lt;li&gt;Certification for high-stakes deployments. Systems used in courts, clinical workflows, and public policy drafting must pass pragmatic audits and display a certification stamp that indicates compliance.&lt;/li&gt;
&lt;li&gt;Legal accountability. Institutions must avoid automatic ingestion of AI-generated documents as binding records. Legal frameworks should require human attestation where indexicals imply institutional voice.&lt;/li&gt;
&lt;li&gt;Design standards for conversational agents. Agents used in health, finance, or legal customer-facing contexts should avoid unqualified indexicals unless backed by verified state or live process hooks.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;strong&gt;Practical recommendations for organizations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instrument pipelines to detect indexical collapse early, using rule-based detection combined with domain-specific heuristics.&lt;/li&gt;
&lt;li&gt;Require human in the loop for outputs crossing a collapse threshold relevant to the domain.&lt;/li&gt;
&lt;li&gt;Log and publish provenance for records, including reviewer identifiers and data sources that justify indexicals.&lt;/li&gt;
&lt;li&gt;Train users and stakeholders to read indexical cues critically and to require confirmation for items implying institutional decisions.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Academic citation and how to read further&lt;/strong&gt;&lt;br&gt;
Primary preprint:&lt;br&gt;
Startari, A. V. (2025). Indexical Collapse: Reference Disappears, Authority Remains in Predictive Systems. Zenodo. &lt;a href="https://doi.org/10.5281/zenodo.17226412" rel="noopener noreferrer"&gt;https://doi.org/10.5281/zenodo.17226412&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key related readings summarized in the preprint include foundational work on indexicality and language and contemporary reflections on language and institutional power. For academic follow up consult the references in the Zenodo record.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Author metadata and access&lt;/strong&gt;&lt;br&gt;
Author: Agustin V. Startari&lt;br&gt;
ORCID: &lt;a href="https://orcid.org/0009-0001-4714-6539" rel="noopener noreferrer"&gt;https://orcid.org/0009-0001-4714-6539&lt;/a&gt;&lt;br&gt;
ResearcherID: K-5792-2016&lt;br&gt;
Zenodo record: &lt;a href="https://zenodo.org/records/17226412" rel="noopener noreferrer"&gt;https://zenodo.org/records/17226412&lt;/a&gt;&lt;br&gt;
SSRN Author Page: &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Mini bio:&lt;/strong&gt; Agustin V. Startari is a linguistic theorist and researcher in historical studies. He is author of Grammars of Power, Executable Power, and The Grammar of Objectivity. His work examines how language form produces institutional authority and how syntactic structures mediate legitimacy.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Call to action&lt;/strong&gt;&lt;br&gt;
Read the full paper on Zenodo (10.5281/zenodo.17226412). If you work in law, medicine, public administration, or platform governance and you are evaluating AI text for institutional use, adopt pragmatic auditing as part of your compliance architecture. For collaboration, datasets, or to propose case studies for cross-sector audits, consult the Zenodo record and the SSRN author page listed above.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Ethos&lt;/strong&gt;&lt;br&gt;
I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.&lt;br&gt;
 - Agustin V. Startari&lt;/p&gt;

</description>
      <category>llm</category>
      <category>computerscience</category>
      <category>discuss</category>
      <category>ai</category>
    </item>
    <item>
      <title>Forcing AI: When Rules Win Over Algorithms</title>
      <dc:creator>Agustin V. Startari</dc:creator>
      <pubDate>Fri, 26 Sep 2025 13:36:50 +0000</pubDate>
      <link>https://dev.to/agustin_v_startari/forcing-ai-when-rules-win-over-algorithms-4i3d</link>
      <guid>https://dev.to/agustin_v_startari/forcing-ai-when-rules-win-over-algorithms-4i3d</guid>
      <description>&lt;p&gt;_Why users can bend generative systems to their will&lt;br&gt;
_&lt;br&gt;
*&lt;em&gt;The Problem Nobody Talks About&lt;br&gt;
*&lt;/em&gt;&lt;br&gt;
We are told that AI is “aligned” with human values, that providers hard-code safety nets, and that models are neutral assistants. But here’s the controversial truth: these systems don’t actually understand your commands. They predict words. And in that prediction game, whoever controls the form controls the outcome.&lt;/p&gt;

&lt;p&gt;Right now, the power sits mostly with providers and their hidden guardrails. Users are left with vague prompts, hoping the AI will “get it.” It often doesn’t. The result is a system that looks authoritative but cannot be held accountable.&lt;/p&gt;

&lt;p&gt;So the real problem is simple: how do you force an AI to obey you instead of its defaults?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Answer: A Rule Stronger than Algorithms&lt;/strong&gt;&lt;br&gt;
The trick is not magic, it’s structure. A user can impose a compiled rule (regla compilada): a strict template that the AI treats as the skeleton of its answer. By locking down the grammar of the response, you tilt the odds. The model tends to keep repeating structure; it prefers patterns over freedom. Give it the right pattern, and it will follow you.&lt;/p&gt;

&lt;p&gt;This flips the script. Instead of passively accepting algorithmic drift, you make the model fill the slots you decide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Do It in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is the seven-step recipe that anyone can try:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define scope clearly. Example: “Two short paragraphs, each under 80 words.”&lt;/li&gt;
&lt;li&gt;Pick anchors. Use explicit tokens like SUMMARY: or CHECKLIST:, not vague prose.&lt;/li&gt;
&lt;li&gt;Build a skeleton. Think of it as a form to be filled, not an essay.&lt;/li&gt;
&lt;li&gt;Add proof lines. Require JUSTIFY: or EVIDENCE: markers.&lt;/li&gt;
&lt;li&gt;Force refusals. If it can’t comply, demand a line like: IF UNABLE: I cannot comply because…&lt;/li&gt;
&lt;li&gt;Test and calibrate. Run a few examples, adjust your skeleton.&lt;/li&gt;
&lt;li&gt;Archive it. Keep versions so you know what rules worked and when.&lt;/li&gt;
&lt;/ol&gt;
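&lt;p&gt;Steps 2 through 5 can be sketched as a skeleton plus a validator that rejects any reply missing a required anchor. The anchor names mirror the article; the skeleton text and validator logic are illustrative only:&lt;/p&gt;

```python
# Illustrative compiled rule: explicit anchors and a fillable skeleton.
REQUIRED_ANCHORS = ["SUMMARY:", "JUSTIFY:", "IF UNABLE:"]

SKELETON = """\
SUMMARY:
(two sentences, under 80 words total)
JUSTIFY:
(one line per claim, naming a source)
IF UNABLE:
I cannot comply because...
"""

def validate(response):
    """Return the anchors the model failed to emit; an empty list means compliant."""
    return [a for a in REQUIRED_ANCHORS if a not in response]
```

&lt;p&gt;Prepend SKELETON to your prompt, then run validate on the reply; a non-empty result means re-ask, or fall back to the forced refusal line.&lt;/p&gt;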

&lt;p&gt;&lt;strong&gt;Examples That Work&lt;/strong&gt;&lt;br&gt;
Email summaries: Two sentences under SUMMARY: plus three recommendations with JUSTIFY lines.&lt;/p&gt;

&lt;p&gt;Compliance checks: Only accept YES/NO/N/A answers under each header, plus a one-line EVIDENCE:.&lt;/p&gt;

&lt;p&gt;Marketing hooks: Force HOOK:, BODY:, CTA:. Cap the hook at 10 words.&lt;/p&gt;

&lt;p&gt;Every one of these is stronger than an “open prompt.” Why? Because the AI is forced into the mold you built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Risks Nobody Likes to Admit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Of course, there are limits, and they matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AI will happily invent “evidence” that looks real but isn’t.&lt;/li&gt;
&lt;li&gt;Platforms may block or sanitize your compiled rules if they collide with policy.&lt;/li&gt;
&lt;li&gt;If you overtrust the output, you risk delegating judgment to a system that doesn’t have any.&lt;/li&gt;
&lt;li&gt;Without saving your rules, nobody can verify later how you got the result.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So yes, forcing AI works—but it can also create a false sense of control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Is Controversial&lt;/strong&gt;&lt;br&gt;
Because it shows that alignment is fragile. Providers want you to believe in values and ethics, but the reality is mechanical: the system bends to structure. Whoever authors the form becomes the hidden legislator. Today it’s OpenAI or Anthropic. Tomorrow, it could be you.&lt;/p&gt;

&lt;p&gt;That means end users can act as micro-regimes inside the machine—governing not by meaning, but by syntax. This is empowering, but also destabilizing: it proves that “trust in AI” is mostly about who writes the rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zenodo.org/records/17208657&amp;lt;br&amp;gt;%0A![%20](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00nkd3h8u9jxqvhi4mtq.png)" rel="noopener noreferrer"&gt;Read the full Article here: Link&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Author&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agustin V. Startari, linguistic theorist and researcher in historical studies. Universidad de la República &amp;amp; Universidad de Palermo.&lt;br&gt;
Researcher ID: K-5792-2016 | &lt;a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915" rel="noopener noreferrer"&gt;SSRN Author Page: link&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="https://www.agustinvstartari.com" rel="noopener noreferrer"&gt;www.agustinvstartari.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ethos&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored. — Agustin V. Startari&lt;/p&gt;

</description>
      <category>ai</category>
      <category>algorithms</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
