
Agustin V. Startari


Who Is Responsible When Algorithms Rule? Reintroducing Human Accountability in Executable Governance


Why predictive systems make decisions without subjects, and how accountability injection can restore responsibility in law, finance, education, and healthcare.

**Introduction**

When you are denied admission to a university, refused a credit increase, or flagged by a medical audit, the decision increasingly comes from a system rather than a person. It is not a professor who rejected your application, not a banker who cut your limit, not a doctor who reviewed your scan. Instead, the decision is produced by what I call executable governance: authority embedded in predictive models and code. These systems produce legitimacy by form, yet they displace responsibility. The question is simple: who is responsible when algorithms rule?

This article translates my academic framework on accountability injection into a public discussion. It explains why responsibility disappears in predictive societies, how the gap is already affecting lives, and what can be done to restore human accountability.

**The Disappearing Subject in Real Life**

  1. Admissions: Applicants to universities are evaluated by machine-learning models that predict “likelihood of success.” Rejection letters arrive without a name or explanation. Appeals vanish into a void.
  2. Finance: Credit scoring algorithms adjust limits based on opaque correlations. When you ask why, the bank says “the system decided.” No one is accountable for the cut.
  3. Healthcare: Automated medical audits deny coverage or flag anomalies. Patients are told “the system detected irregularities.” Doctors defer to the software. Regulators confirm compliance. Still, no responsible actor exists.
  4. DAOs and Smart Contracts: Decentralized finance executes trades or locks assets automatically. When millions vanish due to a flaw, developers and participants blame “the code.”

Each of these cases illustrates the rise of null subjects: authority without presence, decisions without decision-makers.

**The Accountability Injection Framework**

To restore responsibility, accountability cannot be treated as an external safeguard. It must be compiled into the system itself, into what I define as the regla compilada (the compiled rule). The model has three tiers:

**Human Tier:** Non-delegable decisions that directly affect rights and lives. Judges, physicians, or regulators must sign their name.

**Hybrid Tier:** Co-decision where models propose but humans validate. Structured dissent logs create traceability.

**Syntactic Supervised Tier:** Routine or low-risk tasks can remain automated, but every execution is logged in an immutable ledger. Escalation rules automatically send anomalies to higher tiers, as in the sketch below.
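To make the tiers concrete, here is a minimal sketch in Python. Everything in it is illustrative: the risk thresholds, the `Decision` fields, and the escalation rule are assumptions I chose for the example, not a normative specification of the regla compilada.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Tier(Enum):
    HUMAN = "human"                     # non-delegable: a named person decides
    HYBRID = "hybrid"                   # model proposes, a human validates
    SYNTACTIC = "syntactic_supervised"  # automated, every execution logged


@dataclass
class Decision:
    subject_id: str               # who the decision affects
    risk: float                   # estimated impact on rights, 0..1 (illustrative)
    model_output: str             # what the system proposes
    anomaly: bool = False         # flagged anomalies escalate to a higher tier
    validator: str | None = None  # the named human who signs the outcome
    dissent: str | None = None    # structured dissent if the human overrides


def route(d: Decision) -> Tier:
    """Assign a tier; high-impact cases are never delegated, anomalies escalate."""
    if d.risk >= 0.7:
        return Tier.HUMAN
    if d.risk >= 0.3 or d.anomaly:
        return Tier.HYBRID
    return Tier.SYNTACTIC


def execute(d: Decision, ledger: list[dict]) -> dict:
    """Run the decision and append a timestamped, attributable record."""
    tier = route(d)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "subject": d.subject_id,
        "tier": tier.value,
        "proposal": d.model_output,
    }
    if tier is not Tier.SYNTACTIC:
        if d.validator is None:
            # accountability injection: an unsigned decision cannot run
            raise PermissionError("no named human has signed this decision")
        record["validator"] = d.validator
        if d.dissent:
            record["dissent"] = d.dissent  # disagreement stays traceable
    ledger.append(record)
    return record
```

The point of the sketch is the `PermissionError`: a human-tier or hybrid-tier decision that no named person has signed simply cannot execute, so "the system decided" stops being a possible answer.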

**Why It Matters**

Without accountability injection, predictive governance produces authority without remedy. Citizens cannot appeal, regulators cannot attribute responsibility, and institutions lose legitimacy. With accountability injection:

  • A rejected student knows who validated the decision.
  • A patient denied coverage can appeal to a physician, not a void.
  • A DAO dispute can be traced to documented override mechanisms.

  • Regulators can audit immutable ledgers and hold institutions liable, as sketched below.
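What "audit immutable ledgers" can mean in practice: below is a minimal hash-chained, append-only log in Python. It is a deliberate simplification for illustration; a production system would add digital signatures, replication, and access control, and the class and field names are my assumptions for the example.

```python
import hashlib
import json


class AuditLedger:
    """Append-only, hash-chained log: editing any past entry breaks the chain."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, record: dict) -> str:
        """Chain each record to the hash of the previous one."""
        prev = self._entries[-1]["hash"] if self._entries else "genesis"
        body = json.dumps(record, sort_keys=True)  # deterministic serialization
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self._entries.append({"record": record, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        """A regulator recomputes the chain to prove no entry was altered."""
        prev = "genesis"
        for entry in self._entries:
            body = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

If an institution quietly rewrites a past record, `verify()` returns `False`; tampering becomes evidence rather than a rumor, and the validator names in the records identify who is liable.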

**Toward Predictive Societies with Responsibility**

The lesson is not to reject predictive systems. They bring speed and efficiency. The lesson is to embed accountability into their structure. Responsibility must be a property of governance, not an afterthought. This means legislation like the EU AI Act must evolve from broad oversight to explicit requirements for accountability injection.

If we do not act, we risk building institutions where no one is responsible. If we do, predictive societies can remain efficient while regaining legitimacy.

**Call to Action**

I invite policymakers, regulators, technologists, and citizens to rethink governance in the age of algorithms. Responsibility must be reintroduced not only for justice but also for trust. The future of institutions depends on restoring the right to appeal, the visibility of decision-makers, and the dignity of accountability.

Read the full research on Zenodo and SSRN for technical detail and case studies.

**Meta Description**
This article explores how predictive systems displace responsibility by producing authority without subjects. It introduces accountability injection, a three-tier model (human, hybrid, syntactic supervised) that structurally reattaches responsibility. Case studies include the AI Act, DAO governance, credit scoring, admissions, and medical audits, offering a blueprint for legislators and regulators to restore appeal and legitimacy in predictive societies.

**TL;DR**
Executable governance produces authority without subjects, leaving decisions unappealable and responsibility displaced. This article introduces accountability injection, a three-tier model (human, hybrid, syntactic supervised) that reattaches responsibility structurally. Applied to the AI Act, smart contracts, credit scoring, admissions, and medical audits, it shows how appeal and legitimacy can be restored in predictive societies.

**Links**

Website: https://www.agustinvstartari.com/

ORCID: https://orcid.org/0000-0002-7438-8370

SSRN Author Page: https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915

Zenodo Profile: https://zenodo.org/me/uploads?q=&f=shared_with_me%3Afalse&l=list&p=1&s=10&sort=newest

Researcher ID: K-5792-2016

**Ethos**
I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.
— Agustin V. Startari

**SEO Hashtags**
#AI #Accountability #Governance #EthicsInAI #PredictiveSystems #ExecutableGovernance #SovereignExecutable #AlgorithmicPower #AIAct #SmartContracts #HealthcareAI #CreditScoring #UniversityAdmissions #DAO #LegalTech
