Gaza civilian deaths test Israel's AI precision claims

Updated - March 04, 2024 06:55 pm IST

Published - March 04, 2024 08:05 am IST - Paris

A child plays in a makeshift camp for displaced Palestinians in Deir al-Balah in central Gaza on March 3, 2024, amid the ongoing battles between Israel and the militant Hamas group. | Photo Credit: AFP

The Israeli military has said AI helps it more accurately target militants in its five-month war against Hamas, but as Gaza deaths rise, experts are questioning how effective algorithms can really be.

The health ministry in the Hamas-run Gaza Strip says the war has killed upwards of 30,000 people, the majority of them civilians.

"Either the AI is as good as claimed and the IDF (Israeli military) doesn't care about collateral damage, or the AI is not as good as claimed," Toby Walsh, chief scientist at the University of New South Wales AI Institute in Australia, told AFP.

The health ministry does not specify how many militants are included in the Gaza toll.

Israel has said its forces "eliminated 10,000 terrorists" since the war began in early October, triggered by a deadly Hamas attack on southern Israel.

Israel's claimed use of algorithms adds another layer of concern for activists already alarmed by artificial intelligence-powered hardware like drones and gunsights that are being deployed in Gaza.

The Israeli military told AFP it had no comment on its AI targeting systems.

But the army has repeatedly claimed its forces target only militants and take measures to avoid harm to civilians.

Precise attacks

Israel began hyping AI-powered targeting after an 11-day conflict in Gaza during May 2021, which commanders branded the world's "first AI war".

The military chief during the 2021 war, Aviv Kochavi, told Israeli news website Ynet last year that the force had used AI systems to identify "100 new targets every day".

"In the past, we would produce 50 targets in Gaza in a year," he said.

Weeks after the October 7 attack, a blog entry on the Israeli military's website said its AI-enhanced "targeting directorate" had identified more than 12,000 targets in just 27 days.

An unnamed Israeli official was quoted as saying the AI system, called Gospel, produced targets "for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved".

But an anonymous former Israeli intelligence officer, quoted in November by independent Israeli-Palestinian publication +972 Magazine, described Gospel's work as creating a "mass assassination factory".

Citing an intelligence source, the report said Gospel crunches vast amounts of data faster than "tens of thousands of intelligence officers" and identifies, in real time, locations likely to be used by suspected militants.

However, the sources gave no detail of the data put into the system or the criteria used to determine the targets.

Dubious data

Several experts said the military was likely to be feeding the system with drone footage, social media posts, information from agents on the ground, mobile phone locations and other surveillance data.

Once the system identifies a target, it could use population data from official sources to estimate the likelihood of civilian harm.
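How such an estimate might work can be sketched in a few lines of code. The toy Python below is purely illustrative: it computes the expected number of people within a given radius of a point from census-style population density and an assumed time-of-day occupancy rate. Every name and figure in it is invented; nothing is publicly known about the actual system's inputs or formulas.

    import math

    def expected_people_in_radius(density_per_km2, radius_m, occupancy_factor):
        # Expected number of people inside a circle of radius_m metres,
        # given residents per square kilometre (e.g. census data) and the
        # fraction assumed to be present at that time of day.
        area_km2 = math.pi * (radius_m / 1000.0) ** 2
        return density_per_km2 * area_km2 * occupancy_factor

    # Hypothetical example: a dense urban block, most residents at home.
    print(expected_people_in_radius(30000, 50, 0.9))  # about 212 people

Even in this simplified form, the estimate is only as good as the density and occupancy figures fed into it, and the quality of those inputs is precisely where critics focus their doubts.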

But Lucy Suchman, professor of anthropology of science and technology at Britain's Lancaster University, said the idea that more data would produce better targets was untrue.

Algorithms are trained to find patterns in data that match a certain designation - in the Gaza conflict, possibly "Hamas affiliate", she said.

Any pattern in the data matching a previously identified affiliate would generate a new target, but any "questionable assumptions" would be amplified, Ms. Suchman explained.

"In other words, more dubious data equals worse systems."

Humans in control

The Israeli military is not the first fighting force to deploy automated targeting on the battlefield.

As far back as the 1990-91 Gulf War, the US military worked on algorithms to improve targeting.

For the 1999 Kosovo bombing campaign, NATO began using algorithms to calculate potential civilian casualties.

And the US military hired secretive data firm Palantir to provide battlefield analytics in Afghanistan.

Backers of the technology have repeatedly insisted it will reduce civilian deaths.

But some military analysts are sceptical that the technology is advanced enough to be trusted.

In a blog post for the British Royal United Services Institute defence think-tank, analyst Noah Sylvia said last month that humans would still need to cross-check every output.

The Israeli military is "one of the most technologically advanced and integrated militaries in the world", he said.

But "the odds of even the IDF using an AI with such a degree of sophistication and autonomy are low".
