by Khaldoun Khelil

AI and Israel’s Dystopian Promise of War without Responsibility

Khaldoun Khelil is an energy and international security scholar with over 20 years of experience in the oil and gas industry and served as the Energy and Security Scholar at the Middle East Institute. He writes on culture, politics, technology, and games.

As Israel has executed its assault on Gaza, it has turned to new technology to facilitate the selection and ostensible legitimization of targets. The net effect is six months of horrors deployed against the people of Gaza. Among these tools facilitating the slaughter of Palestinians is a constellation of Artificial Intelligence programs that seemingly pick targets with little to no human oversight.

In November 2023, a multitude of publications, including the Guardian, +972 Magazine, and Al Jazeera, reported claims from the Israeli military that its ramped-up use of Artificial Intelligence had increased the volume of its attacks and destruction in Gaza. The program reported in November carries the grandiose name “the Gospel”; another program reported in April 2024 carries the innocuous name Lavender. The primary function of these algorithmic tools is reportedly to pick targets for Israel to blast apart with its US-supplied munitions. A former Israeli intelligence officer, speaking to +972 Magazine, described the Gospel AI as a “mass assassination factory.” The results can be seen in the incredibly high death toll in Gaza, with over 33,000 Palestinians killed and at least 75,000 wounded by Israeli fire.

Prior to the use of AI tools, Israel would take up to a year to identify 50 targets in Gaza. Now, with the assistance of the Gospel, Israel claims it produces 100 credible targets a day. Israel’s Lavender AI program reportedly marked an astounding 37,000 Palestinians for death as “suspected militants.”

This exponential leap in targeting is one factor explaining the unprecedented civilian death toll in Gaza inflicted by Israeli forces. Additional automated systems reported in +972, including one perversely called “Where’s Daddy?”, were used specifically to track targeted individuals and carry out bombings once they had entered their family’s residences, basically ensuring mass casualty events. In fact, Israel would purposefully use massive 2,000-pound ‘dumb’ bombs when the targets were believed to be “junior” militants, to cut down on the perceived expense of using a guided munition. The Israelis were more concerned with the cost in bombs than the cost in civilian lives.

Targeting residences means accepting not just families as collateral damage in the strike, but also the destruction of homes, rendering them uninhabitable. Previous reporting also showed that Israeli forces termed high-rise residential buildings and critical infrastructure “power targets” on the assumption that their destruction would demoralize Palestinian civilians. As Yuval Abraham reported regarding Gospel AI, “The bombing of power targets, according to intelligence sources who had first-hand experience with its application in Gaza in the past, is mainly intended to harm Palestinian civil society: to ‘create a shock’ that, among other things, will reverberate powerfully and ‘lead civilians to put pressure on Hamas,’ as one source put it.”

As with many other AI systems, Israel’s Gospel and Lavender are seemingly black boxes that spit out irreproducible results drawn from source material of varying reliability. While the same Israeli sources insist that Gospel’s targets are cleared through human hands, that is little comfort considering that Gospel produces over 100 targets a day and that a human reviewer would have no reliable way to penetrate the system’s black box to ascertain how a target was selected, nor any incentive to do so. In Gaza, Israel is relying on AI systems to decide whom to kill, with humans relegated to “rubber stamps” in the overall process.

The quantity of targets produced by Gospel alone would make any meaningful oversight daunting, but the nature of AI also means that the exact process by which Gospel chooses its targets can never be dissected or reproduced. In the case of Lavender AI, its targeting pronouncements against Palestinians were essentially treated as orders with “no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based.”

One of the few emerging international norms around AI in warfare is the concept of keeping a human at the heart of any decision to take a human life. In short, robots and algorithms should not be making the ultimate decision on whether a living, breathing person is annihilated. Israel’s reckless implementation of AI in Gaza is undermining this norm before it has even had the chance to fully establish itself.

Was a target chosen because it best fit current military necessity? Or was it chosen because of a biased input or an unwillingness to uphold civilian protection norms? These questions potentially become unanswerable when Artificial Intelligence is being used so close to the end of a very violent decision tree. Even chat-based AI tasked with the seemingly straightforward job of parsing Wikipedia information into conversational paragraphs sometimes “hallucinates,” fabricating facts to flesh out its answers. What assurances are there for commanders, soldiers, policy makers, and humanitarian observers that a targeting AI is not hallucinating the data on which it validates targets?

While fully autonomous fighting platforms are likely still many years off, the reality of AI software that can effectively sift through an avalanche of data to identify threats and opportunities is already here. In the US, the Biden administration has released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” even as it allows the US Army to move forward with Palantir’s Tactical Intelligence Targeting Access Node (TITAN). While the declaration is a brief statement that calls upon endorsing nations to have a dialogue about the responsible use of AI, the TITAN project provides over $178 million to Palantir to develop a program that will integrate artificial intelligence with other technology used by American ground forces. In a jargon-rich press release, TITAN promises to “rapidly process sensor data received from Space, High Altitude, Aerial and Terrestrial layers” and reduce “the sensor-to-shooter timeline.” Judging by the experience of Israel’s AI in target selection, reducing the “sensor-to-shooter” timeline can allow for attacking targets faster, but it is absolutely no guarantee that the target is properly selected, or that the human evaluating target selection is anything more than a rubber stamp.

Israel’s Gospel AI places humans on the wrong end of the targeting process and significantly reduces our ability to judge whether a specific bombing or missile strike was justified. We cannot truly peer inside the Gospel’s “brain” because it is a black box, but the datasets used to train such AI are likely drawn from existing targeting data and carry within them biases that machine learning algorithms then reproduce. By giving these AI systems, such as Gospel and Lavender, the power to choose targets, Israel obscures who should be held to account as civilian deaths mount. Given the many credible accusations of war crimes against the Israeli military, this may be AI’s most compelling feature for them. As an IBM presentation slide succinctly stated in 1979, “A computer can never be held accountable, therefore a computer must never make management decisions.” When the decision to take a human life lies functionally with a computer program, systems like Lavender and Gospel shift responsibility, and thus accountability, to a machine that can never be meaningfully questioned, judged, or punished.

US policymakers would be wise to look at Israel’s AI-abetted, indiscriminate onslaught in Gaza as a warning. We may still be a long way off from fully autonomous targeting systems and true Artificial Intelligence making objective choices concerning life or death, but a more insidious and stark reality already confronts us today. The imperfect systems currently labeled as AI cannot be allowed to supplant real, living decision-makers in matters of life and death, especially when it comes to picking where and how to use some of the world’s deadliest weapons.

In Gaza we see an “indiscriminate” and “over the top” bombing campaign being actively rebranded by Israel as a technological step up, when in actuality there is currently no evidence that its so-called Gospel has produced results qualitatively better than those made by minds of flesh and blood. Instead, Israel’s AI has produced an endless list of targets with a decidedly lower threshold for civilian casualties. Human eyes and intelligence are demoted to rubber-stamping a conveyor belt of targets as fast as they can be bombed.

It’s a path that the US military and policy makers should not only be wary of treading, but should reject loudly and clearly. In the future we may develop technology worthy of the name Artificial Intelligence, but we are not there yet. Currently, the only promise a system such as Gospel AI holds is the power to occlude responsibility: to allow blame to fall on the machine picking the victims instead of the mortals providing the data.