20 September 2024

AI warfare ‘fuelling civilian deaths in Gaza’

Destruction inside a house in Gaza after it was targeted by Israeli forces using AI-supported weapons. Copyright: ICRC

Speed read

  • AI-based warfare systems used by Israeli forces ‘lack accuracy’, say analysts
  • Systems such as ‘the Gospel’ and ‘Lavender’ are used to identify targets rapidly
  • Analysts say they fail to distinguish between fighters and civilians


[CAIRO] Israel’s use of weapons powered by artificial intelligence (AI) in Gaza has significantly contributed to the deaths of civilians, according to analysts, after almost a year of bloodshed in the enclave.

Computer scientists say the AI warfare applications used by the Israel Defense Forces (IDF) lack accuracy and likely fail to distinguish between fighters and civilians.

The war on Gaza, which started on 7 October 2023, has provided an unprecedented opportunity for the IDF to use such tools in a vast theatre of operations.


Israeli forces have defended their use of AI to identify Palestinian targets, using a system named “Habsora”, or “the Gospel” in English, and another dubbed “Lavender”.

The Gospel system can automatically extract new intelligence material to identify targets rapidly, according to a statement by the Israeli army, which said the goal was “precise attacks on infrastructure affiliated with [the Palestinian group] Hamas”.

The IDF denied media reports that the two platforms use AI to select targets for attacks autonomously. It said both the Gospel and Lavender systems help intelligence analysts “cross-reference existing intelligence sources” but that “identification of targets for attack is always done by the analyst”.

However, Paul Biggar, a software engineer and founder of Tech for Palestine, was scathing about the AI algorithm in the Lavender system, which he said was used to “guess” which Palestinians to target.

“Lavender is not so much a system that can make errors, and more one that is based on such incredible flaws that there is no reason to believe it ever works,” he told SciDev.Net.

According to Biggar, the system offers no real advantage in reducing civilian deaths, as it cannot accurately distinguish between a combatant and a civilian.

“Lavender has no investigative component – it cannot learn who militants are,” he explained.

“The only conclusion is that it is a system to provide plausible deniability, and to blame civilian bombing…on a machine instead of a human,” he added.

According to a statement issued by the IDF, both the Gospel and Lavender systems are used to target Palestinian militants and minimise civilian casualties. It says the systems do not replace the intelligence analyst, but improve access to relevant information, making intelligence analysis more accurate and less prone to errors.

However, more than 41,000 Palestinians have been killed since the conflict began, most of them civilians and more than half of them women and children, according to the Palestinian Central Bureau of Statistics.

A study published in The Lancet in July estimated the true number of those killed in the conflict to be around 186,000 when accounting for “indirect deaths”, such as those linked to disease and damaged health infrastructure, but said collecting data was increasingly difficult.

Toby Walsh, an AI professor in the Department of Computer Science and Engineering at the University of New South Wales, Australia, told SciDev.Net: “Given the extent of the civilian casualties, the weapons are either not precise or the IDF does not care about collateral harms. The reality it appears … is both.”

Walsh believes the notion that humans have control of these weapons is “disingenuous”, as hundreds of targets are sent to analysts over a short period of time, limiting their ability to assess each one accurately.

Noah Sylvia, a research analyst in the military science team at the Royal United Services Institute, a UK-based defence and security think tank, explained that these are not autonomous weapons, but systems designed to support human decisions.

Automation bias

However, having a human “in the loop” doesn’t necessarily mean lower casualties, he cautioned.

“One factor is called automation bias, when humans defer to the outputs of a machine simply because of their bias to consider machines more accurate… A human might not understand how a model produced a given target, but trust the machine enough to still strike the target.”

Added to this, there appears to be “a complete lack of accountability”, said Sylvia.

Armies usually issue reports after their operations to determine what went right or wrong, including errors that cost civilian lives, he explained, adding: “If the IDF was doing this type of assessment, we would see the number of innocent civilians killed decrease, but there has been no such decrease.”

Walsh believes action is needed at a global level to stem the use of AI in warfare.

Military AI technologies “are making the world a less safe space, encouraging conflicts, and causing harms,” he said, adding: “The world can stop [their development by Israel and other countries] by having the UN ban weapons that lack meaningful human control.”

“We need to decide that such weapons are repugnant, contrary to human dignity, destabilising and unnecessary,” he added.

This article was produced by SciDev.Net’s Middle East and North Africa desk.