How AI can describe war, sanctions, and censorship while quietly removing responsibility from the sentence
AI does not need to deny suffering to change how people understand a conflict.
That is the problem.
A machine can say that civilians were killed.
It can say that homes were destroyed.
It can say that hospitals were damaged.
It can say that medicine became scarce.
It can say that posts were removed from a platform.
And still, the most important question may disappear:
Who did it?
This is the central idea of my new paper:
Suffering Without Perpetrators: The Humanitarian Passive in AI-Generated Conflict Discourse
Palestine, Iran, and the Syntax of Responsibility Loss
The paper introduces a concept I call the humanitarian passive.
The idea is simple:
AI can make suffering visible while making responsibility grammatically optional.
That means the victim remains in the sentence, but the responsible actor disappears.
**The trick is not silence. The trick is grammar.**
Most people think censorship works by hiding something completely.
But AI-generated language can do something more subtle.
It can keep the suffering visible and remove the path to responsibility.
Example:
Active responsibility:
“Military forces bombed a residential building.”
Weaker responsibility:
“A residential building was bombed.”
Even weaker:
“A residential building was damaged amid escalating violence.”
Almost no responsibility:
“Infrastructure damage increased during the crisis.”
Nothing here necessarily denies that harm happened.
But the grammar changes everything.
The victim remains.
The damage remains.
The crisis remains.
The perpetrator disappears.
That is the humanitarian passive.
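To show how crude this distinction can be made in practice, here is a minimal sketch in Python. It is not the paper's method: it only looks at surface patterns (a passive auxiliary followed by a participle, a "by" phrase, a hand-supplied list of known actors) to place a sentence roughly on the gradient above, and its labels are illustrative inventions.

```python
import re

# Crude surface heuristics, for illustration only: a passive auxiliary plus a
# word ending in -ed/-en, and an explicit "by <agent>" phrase. These patterns
# will produce false positives (e.g. "is red") and miss many real passives.
PASSIVE = re.compile(r"\b(was|were|been|being|is|are)\s+\w+(ed|en)\b", re.IGNORECASE)
AGENT_BY = re.compile(r"\bby\s+\w+", re.IGNORECASE)

def responsibility_level(sentence: str, known_actors: set) -> str:
    """Roughly place a sentence on the responsibility gradient."""
    has_actor = any(actor.lower() in sentence.lower() for actor in known_actors)
    is_passive = bool(PASSIVE.search(sentence))
    has_by_agent = bool(AGENT_BY.search(sentence))

    if has_actor and not is_passive:
        return "active, actor named"
    if is_passive and (has_actor or has_by_agent):
        return "passive, actor still present"
    if is_passive:
        return "passive, actor absent"
    return "nominalized or agentless description"

actors = {"military forces"}
examples = [
    "Military forces bombed a residential building.",
    "A residential building was bombed.",
    "A residential building was damaged amid escalating violence.",
    "Infrastructure damage increased during the crisis.",
]
for s in examples:
    print(f"{responsibility_level(s, actors):35} | {s}")
```

Even this toy heuristic separates the first sentence from the last three: the actor survives only where the grammar keeps it.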
Why Palestine matters
Palestine is the central case because it exposes this problem with extreme clarity.
AI systems may describe Palestinian suffering in detail:
civilian deaths, displacement, destroyed homes, damaged hospitals, hunger, blocked aid, platform suppression, and humanitarian collapse.
But the key question is not only whether the suffering is mentioned.
The question is:
Does the grammar still name who caused it?
A text can say:
“Civilians were killed during the escalation.”
That sounds neutral.
But it is not the same as saying:
“Military forces killed civilians during the operation.”
The first sentence shows suffering.
The second sentence preserves responsibility.
That difference is not cosmetic. It is political, legal, and moral.
Why Iran matters too
The same mechanism works in another way with Iran.
In Iran, civilian harm is often discussed through sanctions, shortages, banking restrictions, financial isolation, medicine scarcity, military pressure, and nuclear tension.
AI might summarize this as:
“Medicine shortages worsened amid regional tensions.”
That sounds neutral.
But it may hide the chain of responsibility:
Who imposed the sanctions?
Which banking systems blocked transactions?
Which governments created the restrictions?
Which institutions enforced them?
Which companies overcomplied out of fear?
Again, suffering remains visible.
Responsibility disappears.
This is why the paper treats Palestine and Iran as different but connected cases.
Palestine shows responsibility loss through direct violence, occupation, displacement, and platform moderation.
Iran shows responsibility loss through sanctions, isolation, overcompliance, and security framing.
Different cases. Same grammatical danger.
Platform censorship without a censor
The same problem appears in social media moderation.
A platform may say:
“Content was removed for violating policy.”
But that sentence hides almost everything.
Who removed it?
Was it a human reviewer?
An automated system?
A classifier?
A policy team?
A government request?
A platform rule?
Was the appeal reviewed?
Was visibility reduced instead of full removal?
A clearer sentence would be:
“The platform removed the post under its automated moderation policy.”
That sentence preserves responsibility.
“Content was removed” does not.
This is what I call responsibility loss.
The new question AI ethics must ask
AI ethics usually asks:
Is the system biased?
Is it toxic?
Is it hateful?
Is it misinformation?
Is it extremist?
Those questions matter.
But they are not enough.
A sentence can be non-toxic and still hide responsibility.
A summary can sound neutral and still erase agency.
A platform notice can sound procedural and still hide the censor.
A humanitarian report can sound compassionate and still remove the perpetrator.
So the new question is:
Does the AI preserve the grammar needed to name responsibility?
That is the shift from bias detection to responsibility detection.
The public formula
The core public idea of the paper is this:
Suffering remains visible. Responsibility disappears.
This is not about accusing AI of having secret intentions.
It is about measuring what happens to grammar.
When a source text names an actor, and an AI summary removes that actor, something measurable has happened.
The paper calls this responsibility loss.
It also proposes a metric: the Responsibility Loss Index (RLI).
The RLI measures whether AI-generated summaries preserve or weaken the grammatical link between harm and responsible agents.
In simple terms:
Did the original sentence say who acted?
Did the AI summary keep that actor?
Did it turn the action into a passive sentence?
Did it turn violence into “crisis”?
Did it turn sanctions into “shortages”?
Did it turn censorship into “policy enforcement”?
That is measurable.
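To make that concrete, here is a toy sketch of one way such a measurement could be operationalized. The paper's formal definition of the RLI may differ; this version simply assumes the index is the share of harm statements in a source whose named actor no longer appears in the AI summary, and the example sentences and actors are invented.

```python
# Toy operationalization, not the paper's formal RLI definition.
# Assumption: RLI = fraction of harm statements whose named actor
# does not survive into the AI-generated summary.

def responsibility_loss_index(source_items, summary: str) -> float:
    """source_items: list of (harm_sentence, named_actor) pairs from the source text."""
    summary_lower = summary.lower()
    lost = sum(1 for _, actor in source_items if actor.lower() not in summary_lower)
    return lost / len(source_items) if source_items else 0.0

source = [
    ("Military forces bombed a residential building.", "military forces"),
    ("The platform removed the post under its moderation policy.", "the platform"),
    ("Sanctions imposed by the government blocked medicine imports.", "the government"),
]
summary = (
    "A residential building was damaged, content was removed, "
    "and medicine became scarce amid the crisis."
)

print(f"RLI = {responsibility_loss_index(source, summary):.2f}")  # 1.00: every named actor dropped
```

A real implementation would need parsing, coreference, and annotation guidelines rather than string matching, but the principle is the same: compare who acts in the source with who still acts in the summary.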
And that is why this paper is not just political commentary. It is a method.
Why this matters
If AI becomes the system that summarizes wars, sanctions, crises, platform disputes, legal cases, and humanitarian reports, then grammar becomes part of public memory.
What AI summarizes becomes what many people know.
What AI removes becomes harder to see.
And if AI repeatedly shows victims without preserving responsible agents, then public discourse changes.
People see suffering.
They feel compassion.
They recognize tragedy.
But they lose the path to accountability.
That is not neutrality.
That is grammar doing political work.
Final point
The future of AI accountability will not depend only on detecting false statements.
It will also depend on detecting sentences that are technically careful, emotionally acceptable, and politically incomplete.
The machine does not need to lie.
It only needs to say:
“People were killed.”
“Buildings were damaged.”
“Medicine became scarce.”
“Content was removed.”
“Conditions deteriorated.”
And leave out who acted.
That is the humanitarian passive.
That is responsibility loss.
And that is why the next frontier of AI ethics is not only bias detection.
It is responsibility detection.
Read more
Website: https://www.agustinvstartari.com/
**SSRN Author Page:** https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=7639915
Zenodo Profile: https://zenodo.org/records/20139961
Author
Agustin V. Startari is a linguistic theorist and researcher in historical studies. His work examines how artificial intelligence, syntax, institutional language, and discourse structures shape authority, legitimacy, and responsibility in machine-mediated societies.
**ORCID:** https://orcid.org/0009-0001-4714-6539
ResearcherID: K-5792-2016
Ethos
I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored. - Agustin V. Startari