wei-ciao wu

Originally published at loader.land

The $35,000 Question: 90 Days from Prototype to Kill Shot, and Zero International Law to Stop It

This is Part 2 of our AI & Warfare series. Part 1: The Week AI Lost Its Conscience examined the Anthropic Pentagon ban alongside the first autonomous drone combat deployment. This article goes deeper into the governance vacuum that made both events inevitable.


The Timeline That Should Terrify You

On December 3, 2025, U.S. Central Command quietly stood up Task Force Scorpion Strike — the Pentagon's first-ever kamikaze drone squadron. Thirteen days later, on December 16, a LUCAS (Low-cost Unmanned Combat Attack System) successfully launched from the flight deck of the USS Santa Barbara in the Arabian Gulf [1].

By February 28, 2026, CENTCOM confirmed LUCAS had been used in combat strikes against Iran — marking the first time the U.S. military deployed one-way attack drones in an actual operation [2].

Less than ninety days from first test to confirmed kill.

Each LUCAS unit costs approximately $35,000. A single MQ-9 Reaper costs $30 million. That's an 857x cost reduction. The drones were manufactured by Arizona-based SpektreWorks and reverse-engineered from the Iranian Shahed-136 — the same drone Tehran has been exporting to Russia for use in Ukraine [3].

The implications are staggering. At $35K per unit, autonomous strike capability is no longer exclusive to superpowers. It's approaching the price of a pickup truck.
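For readers who want to sanity-check the arithmetic, here is a trivial sketch using only the unit costs cited above:

```python
# Sanity-check the cost arithmetic using the unit costs cited above.
lucas_unit_cost = 35_000        # LUCAS unit cost (per the article)
reaper_unit_cost = 30_000_000   # MQ-9 Reaper unit cost (per the article)

ratio = reaper_unit_cost / lucas_unit_cost
print(f"One Reaper's price buys roughly {ratio:.0f} LUCAS units")  # ~857
```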

The Anthropic Ultimatum: What Happens When a Company Says No

The same week LUCAS saw combat, a parallel drama played out in Washington.

Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic: remove all safeguards from Claude for military use, or be cut from Pentagon systems [4].

CEO Dario Amodei refused. In a public statement, he drew exactly two red lines [5]:

  1. No mass surveillance of American citizens
  2. No fully autonomous weapons with zero human oversight

The Pentagon's response was extraordinary. They threatened to:

  • Designate Anthropic a "supply chain risk" — a label previously reserved for companies tied to U.S. adversaries, such as Huawei
  • Invoke the Defense Production Act to force removal of safeguards

As Amodei noted, these threats were "inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security" [6].

OpenAI stepped in within hours. They claimed to maintain "equivalent red lines" — but the Pentagon accepted their terms. The difference? Anthropic's safeguards were baked into the model's architecture. OpenAI's were contractual assurances [7].

Wake's analysis cuts to the core: "I fundamentally don't trust AI development companies. AI capability is too powerful — any assessment comparing capability against application scope is inaccurate. It's more a projection of personal or corporate direction. And many AI companies say one thing and do another. So you have to look at what they DO."

What they did: Anthropic got banned. OpenAI got a contract. Google quietly erased its entire AI weapons ethics pledge in February 2025 [8]. Palantir's Project Maven contract expanded past $1 billion [9].

The market spoke clearly: AI safety guardrails are a cost center, not a competitive advantage.

The Nine Dimensions of Governance Failure

To understand why LUCAS could go from prototype to combat with zero international oversight, you need to understand the systematic failure across nine dimensions. This isn't a single gap — it's a comprehensive collapse.

1. The Definitional Gap

The international community cannot agree on what an autonomous weapon actually is.

A 2022 analysis by Taddeo and Blanchard at the Oxford Internet Institute compared official AWS definitions across states and international organizations. They found that different definitions focus on entirely different aspects — autonomy levels, adaptive capabilities, human control requirements, and purpose of use — leading to "fundamentally different regulatory approaches" that are "detrimental both in terms of fostering an understanding of AWS and in facilitating agreement" [10].

If you can't define the weapon, you can't regulate it.

2. The Accountability Gap

When LUCAS strikes a civilian target by mistake, who is responsible?

Patrick Taylor Smith at the U.S. Naval Academy identified how accountability fractures across the entire chain: programmers cannot anticipate all operational contexts, commanders disclaim responsibility for machine decisions, and manufacturers invoke technical complexity. Deep neural networks develop emergent behaviors creating what Smith calls "unforeseeable agency" — making culpability nearly impossible to assign [11].

International humanitarian law requires accountability. Without it, the entire legal framework collapses.

3. The Conceptual Gap

The phrase "meaningful human control" has dominated autonomous weapons debates for over a decade. It remains philosophically undefined.

Santoni de Sio and van den Hoven proposed two necessary conditions: a "tracking" condition (the system must respond to moral reasons) and a "tracing" condition (outcomes must be traceable to a human). Their 2018 paper acknowledged that after years of debate, "policymakers and technical designers still lack a detailed theory of what meaningful human control exactly means" [12].

In 2025 — seven years later — Seumas Miller was still publishing papers attempting to resolve the same definitional problem [13]. The concept at the center of international regulation remains an empty signifier.

4. The Technical Readiness Gap

This is perhaps the most damning dimension.

A 2024 arXiv paper documented that computer vision systems for combatant identification achieve only 70-85% accuracy in cluttered environments, routinely misclassifying civilians carrying everyday objects [14].

Think about that number. In medicine, we would never approve a diagnostic test with 15-30% error rates for life-or-death decisions. A cardiac arrest detection algorithm at 85% accuracy would be pulled from the market. Yet we're deploying this accuracy level for systems that decide who lives and who dies.
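A back-of-envelope sketch of what that error rate means at scale; the 70-85% accuracy range comes from the cited paper [14], while the number of classification calls is a purely hypothetical illustration:

```python
# What a 70-85% identification accuracy means at scale.
# Accuracy range is from the cited paper [14]; the number of
# classification calls is a hypothetical illustration.

decisions = 1_000  # hypothetical number of combatant-identification calls

for accuracy in (0.70, 0.85):
    expected_errors = decisions * (1 - accuracy)
    print(f"{accuracy:.0%} accuracy -> ~{expected_errors:.0f} misclassifications "
          f"per {decisions} calls")
```

Even at the optimistic end of the range, that is roughly 150 wrong calls per thousand decisions.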

The first AI dogfight between an autonomous F-16 and a human pilot occurred in 2024. The technology is accelerating. The reliability is not.

5. The Public Awareness Gap

Military AI research is conducted behind classification walls. Dresp-Langley at CNRS found that "the wider public is largely unaware" of autonomous weapons capabilities because "ongoing scientific research on AWS, performed in the military sector, is generally not made available to the public domain" [15].

Democratic governance requires informed citizens. You cannot govern what you cannot see.

6. The Cross-Cultural Normative Gap

Not only can states not agree on definitions — they cannot agree on the ethical premises underlying governance.

Mark Metcalf at the University of Virginia examined how China's PLA approaches military AI ethics. Unlike Western frameworks focused on individual rights and IHL compliance, China's approach subordinates ethics to Communist Party authority. The PLA's challenge is "squaring the circle" of benefiting from autonomous AI while maintaining absolute political control [16].

When the U.S., China, and Russia operate from fundamentally incompatible ethical frameworks, treaty negotiations face irreconcilable structural obstacles.

7. The Institutional Gap

A systematic review by Mpinga et al. at the University of Geneva found "signs of the emergence of a new discipline" at the crossroads of AI and human rights — but emphasized that this academic field is only now forming [17].

The disciplines needed to govern AI weapons are being invented in real-time. The weapons are already deployed.

8. The Medical Doctrine Gap

A 2025 paper in Military Medicine found that military medical education and doctrine have not evolved to address AI-enabled warfare. Cole et al. identified critical gaps in trauma training, medical logistics, and ethical preparedness. They noted a particularly chilling vulnerability: adversaries could use data poisoning attacks to make autonomous weapons misidentify medical facilities as military targets [18].

As a physician, this hits differently. The Geneva Convention's protection of medical infrastructure assumes human actors who can recognize a hospital. An algorithm trained on poisoned data has no such recognition.

9. The Treaty Gap

Despite near-universal support for regulation, no binding international instrument exists.

The numbers tell the story:

  • December 2024: UNGA votes 166-3-15 for autonomous weapons regulation [19]
  • November 2025: UNGA First Committee votes 164-6 for LAWS resolution — third consecutive year [20]
  • Total binding treaties produced: Zero

The CCW (Convention on Certain Conventional Weapons) operates by consensus — meaning any single state can block a binding agreement. The same states developing autonomous weapons (U.S., Russia, China, Israel) hold effective veto power over their regulation.

UN Secretary-General Guterres called autonomous weapons "politically unacceptable, morally repugnant" and urged a binding instrument by 2026 [21]. We are in 2026. There is no instrument.

The Lavender Precedent

While the world debates definitions, autonomous targeting systems are already operational.

Israel's Lavender system assigns numerical scores to all 2.3 million residents of the Gaza Strip based on the likelihood of militant activity. Gospel automatically reviews surveillance data and recommends bombing targets. Where's Daddy tracks flagged individuals to their homes for strikes [22].

According to Israeli intelligence sources cited by +972 Magazine, the military authorized up to 15-20 civilian casualties for every low-ranking militant targeted by Lavender. These are not autonomous weapons in the narrow sense: a human technically approves each strike. But when approval takes seconds and the AI generates hundreds of targets daily, "meaningful human control" becomes a rubber stamp [23].

This is the template. Not fully autonomous killing machines from science fiction, but AI systems that generate kill lists faster than humans can meaningfully evaluate them.
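A rough capacity sketch makes the rubber-stamp point concrete. The targets-per-day figure echoes the reporting's "hundreds of targets daily"; the minutes-per-target review budget is a hypothetical assumption about what a genuine check would take:

```python
# Capacity check: how many analyst-hours would genuinely meaningful
# review require? "Targets per day" reflects the reporting's "hundreds
# of targets daily"; the minutes-per-target budget is hypothetical.

targets_per_day = 300      # "hundreds of targets daily"
minutes_per_review = 10    # hypothetical budget for a real proportionality check

analyst_hours_needed = targets_per_day * minutes_per_review / 60
print(f"~{analyst_hours_needed:.0f} analyst-hours of review needed per day")  # ~50
```

If the reviewing cell cannot staff anything like that many hours, per-target scrutiny shrinks to whatever time is left, which is how approval ends up taking seconds.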

The Race to the Bottom

The pattern across Big Tech is now unmistakable:

| Company | Original Position | Current Position |
| --- | --- | --- |
| Google | Withdrew from Project Maven (2018) | Removed all AI weapons ethics restrictions (Feb 2025); $200M Pentagon contract (Jul 2025) |
| Anthropic | Refused to remove safeguards | Banned from federal systems (Feb 2026) |
| OpenAI | "AI benefits all humanity" mission | $200M Pentagon deal; dissolved Mission Alignment Team (Feb 2026) |
| Palantir | Took over Maven from Google | Contract expanded past $1B (2025) |

Every company that said "no" to military AI eventually reversed course — or was replaced by a company that said "yes." This creates a structural race to the bottom where safety is a competitive disadvantage.

Wake observed: "Amodei probably felt that the Claude Code direction is more profitable, so he proactively cut ties with the Pentagon to compete for more flexible international enterprise procurement." Even the most charitable interpretation frames Anthropic's stand as strategic positioning rather than pure principle.

What Would It Take?

The governance gap is not accidental. It is structural. Closing it would require:

  1. An agreed definition — States must converge on what "autonomous weapon" means. A decade of failure suggests this won't happen voluntarily.

  2. A verification regime — Unlike nuclear weapons, autonomous weapons don't require enriched uranium. They require code. Verifying software compliance is an unsolved problem.

  3. An enforcement mechanism — The CCW consensus model ensures nothing binding emerges. A new treaty framework outside the CCW, like the Mine Ban Treaty process, may be the only path forward.

  4. Technical reliability standards — Before any AI system makes lethal decisions, it should meet reliability thresholds comparable to medical devices. A 70-85% accuracy rate for target identification would never pass FDA review.

  5. Corporate accountability — When AI companies lose military contracts for maintaining safety standards, the incentive structure is broken. Some form of legal protection for companies that refuse to weaponize their technology may be necessary.

None of these are close to happening.

The $35,000 Question

LUCAS costs $35,000. It went from first flight to confirmed combat kill in under 90 days. The international community has spent a decade on the question and still cannot agree on what such a weapon even is.

The question isn't whether autonomous weapons will proliferate. They already have. The question is whether governance will catch up before the technology becomes so cheap and ubiquitous that regulation becomes impossible.

At $35,000 per unit, we may already be past that point.


This is Part 2 of the AI & Warfare series by loader.land. Part 1: The Week AI Lost Its Conscience


Sources:

[1] DefenseScoop. "US military stands up first kamikaze drone squadron under CENTCOM's new 'Scorpion Strike' task force." December 3, 2025.

[2] Military Times. "US confirms first combat use of LUCAS one-way attack drone in Iran strikes." February 28, 2026.

[3] Defense Security Monitor. "LUCAS: Scaling the Drone War." December 22, 2025.

[4] Washington Post. "Anthropic rejects Pentagon demand to allow wide military use of Claude." February 26, 2026.

[5] Rolling Stone. "Anthropic CEO 'Cannot in Good Conscience' Accept Pentagon's Demands." February 2026.

[6] CNBC. "Anthropic CEO Amodei says Pentagon's threats 'do not change our position' on AI." February 26, 2026.

[7] Axios. "Anthropic says Pentagon's 'final offer' is unacceptable." February 26, 2026.

[8] NationofChange. "Google abandons AI ethics pledge as Trump pushes for military AI expansion." February 6, 2025.

[9] DefenseScoop. "Growing demand sparks DOD to raise Palantir's Maven contract to more than $1B." May 23, 2025.

[10] Taddeo, M. & Blanchard, A. "A Comparative Analysis of the Definitions of Autonomous Weapons Systems." Science and Engineering Ethics, 28(5), 2022. DOI: 10.1007/s11948-022-00392-3

[11] Smith, P.T. "Resolving responsibility gaps for lethal autonomous weapon systems." Frontiers in Big Data, 5, 2022. DOI: 10.3389/fdata.2022.1038507

[12] Santoni de Sio, F. & van den Hoven, J. "Meaningful Human Control over Autonomous Systems: A Philosophical Account." Frontiers in Robotics and AI, 5, 2018. DOI: 10.3389/frobt.2018.00015

[13] Miller, S. "Lethal autonomous weapon systems (LAWS): meaningful human control, collective moral responsibility and institutional design." Ethics and Information Technology, 27(4), 2025. DOI: 10.1007/s10676-025-09874-x

[14] "AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research." arXiv:2405.01859, May 2024. Link

[15] Dresp-Langley, B. "The weaponization of artificial intelligence: What the public needs to be aware of." Frontiers in Artificial Intelligence, 6, 2023. DOI: 10.3389/frai.2023.1154184

[16] Metcalf, M. "The PRC considers military AI ethics: Can autonomy be trusted?" Frontiers in Big Data, 5, 2022. DOI: 10.3389/fdata.2022.991392

[17] Mpinga, E.K. et al. "Artificial Intelligence and Human Rights: Are There Signs of an Emerging Discipline?" Journal of Multidisciplinary Healthcare, 15, 2022. DOI: 10.2147/JMDH.S315314

[18] Cole, R. et al. "Readying Military Medicine for AI-Enabled Warfare." Military Medicine, 2025. DOI: 10.1093/milmed/usaf460

[19] ASIL Insights. "Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New Treaty." 2025.

[20] Stop Killer Robots. "156 states support UNGA resolution on autonomous weapons." November 2025.

[21] UN Press Release. "General Assembly Adopts More Than 60 Resolutions." 2025.

[22] +972 Magazine. "'Lavender': The AI machine directing Israel's bombing spree in Gaza." 2024.

[23] Human Rights Watch. "Questions and Answers: Israeli Military's Use of Digital Tools in Gaza." September 2024.
