DEV Community

Kunal

Posted on • Originally published at kunalganglani.com

The AI Kill Chain Is Here: How Algorithms Are Choosing Who Lives and Dies on the Battlefield [2026]

In April 2024, +972 Magazine published an investigation revealing that the Israeli military had used an AI system called Lavender to mark approximately 37,000 Palestinians as suspected militants for potential targeting. A separate system called The Gospel, first reported by The Guardian in December 2023, had already been generating building and infrastructure targets at a pace no human team could match. The AI kill chain isn't theoretical. It's not sci-fi. It's operational, deployed, and accelerating.

I've spent 14+ years building software systems, and the technical architecture behind these programs is disturbingly familiar. The same patterns I've used to build data pipelines and recommendation engines — sensor fusion, classification models, confidence scoring — are being wired into systems that end human lives. And the failure modes I've seen in production? They're orders of magnitude more dangerous when the output isn't a bad product recommendation but a missile strike.

What Is the AI Kill Chain?

The AI kill chain is the application of artificial intelligence to the military's traditional "kill chain" — the sequence of steps from identifying a target to engaging it with force. Traditionally, this loop moves through six phases: find, fix, track, target, engage, and assess. Each phase historically required human analysts, and completing the full loop could take hours or days.

AI compresses that entire sequence into seconds. Computer vision models scan satellite imagery and drone feeds. NLP systems sift through intercepted communications. Sensor fusion algorithms combine data from radar, signals intelligence, and ground sensors into a unified picture. Classification models then score potential targets, and the results get pushed to commanders — or increasingly, directly to weapons platforms.

Paul Scharre, Executive Vice President at the Center for a New American Security and author of Army of None, makes the point that the real revolution isn't AI itself but the speed at which it executes the kill chain. The shift from human-speed decision making to machine-speed warfare creates advantages that are nearly impossible to counter with traditional methods. When your adversary's targeting loop runs in seconds and yours takes hours, you've already lost.

Speed is a military advantage. But speed without accuracy is a catastrophe. That's the tension running through every piece of this technology.

The Pentagon's CJADC2: Connecting Every Sensor to Every Shooter

The U.S. Department of Defense's primary vehicle for the AI kill chain is CJADC2 — Combined Joint All-Domain Command and Control. Championed by Deputy Secretary of Defense Kathleen Hicks, the initiative aims to connect sensors from every military branch — Army, Navy, Air Force, Marines, Space Force — into a single AI-powered network.

The scope is wild. Every satellite, every drone, every ground radar, every submarine sonar array feeding data into a unified system where AI algorithms identify threats, recommend responses, and route targeting data to the nearest available weapon system. Gregory C. Allen, Director of the AI Governance Project at the Center for Strategic and International Studies (CSIS), has outlined how the DoD views this as a strategic necessity to maintain military advantage over China, which is building similar capabilities.

If you've ever worked on a large-scale distributed system, you'll recognize the architecture immediately. It's an event-driven pipeline: ingest from thousands of heterogeneous data sources, normalize into a common schema, run inference models, push results to consumers. I've built systems like this for processing financial transactions and monitoring cloud infrastructure. The engineering patterns are identical. The stakes couldn't be more different.
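The shape of that pipeline, stripped of anything domain-specific, looks like every other event-driven system. A minimal sketch in Python (every name, schema, and scoring rule here is hypothetical, invented purely for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    source: str
    timestamp: float
    payload: dict

def normalize(raw: dict) -> Event:
    """Map heterogeneous source schemas onto one common shape."""
    if "ts" in raw:  # hypothetical source A: {"src", "ts", "data"}
        return Event(raw["src"], raw["ts"], raw["data"])
    # hypothetical source B: {"source", "time", "body"}
    return Event(raw["source"], raw["time"], raw["body"])

def infer(event: Event) -> float:
    """Stand-in for a model call: returns a confidence score, never a certainty."""
    return min(1.0, len(event.payload) / 10)

def run_pipeline(raw_events: list[dict],
                 publish: Callable[[Event, float], None]) -> None:
    """Ingest -> normalize -> infer -> push results to consumers."""
    for raw in raw_events:
        event = normalize(raw)
        publish(event, infer(event))
```

The pattern is the same whether the consumers are dashboards, fraud alerts, or something far worse: every stage is a place where schema mismatches, stale data, or a miscalibrated score can silently propagate downstream.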

Video: Deploying an AI-Enabled Military: The US is on its Way (YouTube: cgzsbD5d5aQ)

The technical challenges are also painfully familiar to anyone who's dealt with distributed systems at scale. Data latency between sensors. Schema mismatches between branches that have used incompatible systems for decades. Model drift as battlefield conditions change faster than retraining cycles. I've lived these problems. They're the same issues that cause outages in cloud infrastructure, except here, an outage means a missile hits the wrong building.

Gospel and Lavender: The AI Kill Chain in Practice

The most concrete public evidence of the AI kill chain in operation comes from Israel's use of two distinct systems during the Gaza conflict.

The Gospel, first reported by The Guardian and +972 Magazine in late 2023, is a target recommendation system focused on buildings and infrastructure. Israeli military sources described it as a "mass assassination factory" that could generate targets far faster than any human intelligence team. The system reportedly cross-references multiple data sources to identify structures it classifies as military assets.

Lavender, revealed in a separate +972 Magazine investigation in April 2024, works differently. It's a person-targeting system — a classification model that assigns every individual in Gaza a score indicating the probability of being affiliated with a militant organization. According to the investigation, the system marked roughly 37,000 people as potential targets, and human operators were given as little as 20 seconds to approve each strike.

Twenty seconds. That's not human-in-the-loop oversight. That's a rubber stamp.

This is where the AI kill chain discussion stops being abstract. I've built classification systems. I know exactly how these models work. They're probabilistic. They output confidence scores, not certainties. Every model has a false positive rate. When you're classifying email spam, a false positive means someone misses a newsletter. When you're classifying human beings as military targets, a false positive means a family dies. The same engineering trade-off — precision versus recall — takes on a meaning that should make every ML engineer deeply uncomfortable.
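The base-rate arithmetic makes this concrete. The numbers below are purely illustrative and do not come from any reported system; they just show that a classifier with a seemingly low false positive rate, run over a large population where true positives are rare, produces far more wrong flags than right ones.

```python
def expected_flags(population: int, base_rate: float,
                   recall: float, fpr: float):
    """Expected true flags, false flags, and precision for a classifier
    run over an entire population. All inputs are illustrative."""
    positives = population * base_rate      # people who actually belong to the class
    negatives = population - positives      # everyone else
    true_flags = positives * recall         # correctly flagged
    false_flags = negatives * fpr           # wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# Hypothetical: 1M people, 0.5% base rate, 90% recall, 1% false positive rate.
tp, fp, prec = expected_flags(1_000_000, 0.005, 0.90, 0.01)
```

With these made-up inputs, roughly 9,950 people are wrongly flagged against 4,500 correct hits, a precision of about 31 percent. The classifier "sounds" accurate, and still gets it wrong more than two times out of three.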

Why AI Military Systems Are Dangerously Brittle

A RAND Corporation report on military AI highlighted a fundamental problem: AI algorithms are "brittle." They perform well within their training distribution and fail catastrophically outside it. This isn't a bug that gets patched. It's a structural limitation of how machine learning works.

Battlefields are precisely the kind of environment where distribution shift is constant. New tactics, different terrain, civilians behaving in unexpected ways, adversarial actors deliberately trying to fool sensors. I've watched ML models in production degrade over weeks as user behavior shifted — and that was with stable, non-adversarial data. In a military context, your adversary is actively trying to make your models fail. That's not distribution drift. That's adversarial attack at scale.
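A toy illustration of that brittleness, with entirely made-up numbers: a threshold "model" that is perfect on the distribution it was fit to collapses to coin-flip accuracy when the inputs drift, with no error message and no warning.

```python
def accuracy(samples: list[float], labels: list[int],
             threshold: float = 1.0) -> float:
    """Score a fixed threshold classifier against ground-truth labels."""
    preds = [1 if x > threshold else 0 for x in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# In-distribution data: class 0 clusters near 0, class 1 near 2.
train_x = [-0.2, 0.1, 0.3, 1.8, 2.1, 2.4]
train_y = [0, 0, 0, 1, 1, 1]

# Shifted data: every input drifted by +1.5, labels unchanged.
# The old threshold now misclassifies every class-0 point.
shifted_x = [1.3, 1.6, 1.8, 3.3, 3.6, 3.9]
shifted_y = [0, 0, 0, 1, 1, 1]

accuracy(train_x, train_y)      # 1.0 in distribution
accuracy(shifted_x, shifted_y)  # 0.5 after the shift
```

The model never "knows" it has left its training distribution; it keeps emitting confident predictions on inputs it has never seen anything like.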

Anthony King, Chair of War Studies at the University of Warwick, describes how AI is fundamentally changing military command and control from a human-centered model to an "algorithmic" one. The danger isn't just that AI makes mistakes. It's that AI makes mistakes at machine speed, across an entire theater of operations, simultaneously. A human commander making a bad call affects one engagement. A flawed algorithm affects thousands.

The question isn't whether AI will make errors in warfare. It will. The question is whether the speed advantage is worth the systematic risk of errors at scale.

This brittleness problem connects directly to what I've written about in AI tech debt in production systems. Hidden feedback loops, undeclared dependencies on training data, untested edge cases — the same patterns that plague enterprise AI are present in military AI. Except the consequences of failure aren't revenue loss. They're civilian casualties.

The Human-in-the-Loop Illusion

Every military deploying AI targeting systems claims to maintain "human-in-the-loop" oversight. The human makes the final call. The AI just recommends.

This is, at best, misleading. At worst, it's a deliberate fiction.

Here's what actually happens: an AI system processes thousands of data points, runs inference, and presents a recommendation with a confidence score to a human operator. That operator has seconds to approve or reject. They don't have access to the underlying data. They can't interrogate the model's reasoning. They're under enormous pressure to act quickly because the entire point of the system is speed.

This is automation bias — one of the most well-documented phenomena in human factors research. When humans supervise automated systems, they overwhelmingly defer to the machine's judgment. It happens in aviation. It happens in medical diagnostics. It happens in financial trading. There is zero reason to believe it won't happen in military targeting. The 20-second approval window reported for the Lavender system isn't oversight. It's theater.

UN Secretary-General António Guterres has called for a new international treaty to ban autonomous weapons systems, describing them as "politically unacceptable and morally repugnant." But the diplomatic process moves at human speed while the technology advances at machine speed. By the time any treaty gets negotiated, the systems it aims to regulate will be two generations ahead.

For those interested in how these dynamics play out in AI safety more broadly, the security risks of giving AI systems autonomous control apply here with far higher stakes. The fundamental challenge is the same: how do you maintain meaningful human oversight over a system designed to operate faster than humans can think?

What Comes Next

The AI kill chain isn't a future threat. It's a current reality that's expanding rapidly. The U.S., China, Russia, Israel, Turkey, and Iran are all developing or deploying autonomous targeting capabilities. The 2015 open letter from AI researchers — signed by Stuart Russell, Stephen Hawking, Elon Musk, and thousands of others through the Future of Life Institute — warned that autonomous weapons would become "the third revolution in warfare, after gunpowder and nuclear arms." A decade later, that prediction is materializing in front of us.

What concerns me most as an engineer is the gap between what these systems are marketed as and what they actually are. They're marketed as precise, intelligent, reliable. What they actually are is probabilistic classification models running on messy, incomplete data in adversarial environments where the cost of a false positive is measured in human lives. I've seen production systems with 99.9% accuracy still generate thousands of errors at scale. In warfare, that math doesn't work.
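That math fits on a napkin. The volume figure below is hypothetical, but the accuracy mirrors the 99.9% just mentioned, and the point holds for any high-volume system:

```python
# Hypothetical daily decision volume; 99.9% accuracy = 0.1% error rate.
accuracy = 0.999
decisions_per_day = 5_000_000
errors_per_day = decisions_per_day * (1 - accuracy)
# roughly 5,000 wrong outputs, every single day, each at machine speed
```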

The engineers building these systems know this. The question is whether the institutions deploying them care, or whether the strategic advantage of speed will always outweigh the moral weight of accuracy. My prediction: within three years, we'll see the first publicly documented case of an autonomous system executing a strike with zero human approval in the loop. Not because anyone planned it that way, but because the system moved faster than the human could intervene.

If you build software, you already understand the AI kill chain. You just never imagined your design patterns being used to decide who lives and who dies.

