Gaza civilian deaths test Israel's AI precision claims

"Either the AI is as good as claimed and the IDF doesn't care about collateral damage, or the AI is not as good as claimed," says scientist

Israeli military vehicles operate during a ground operation of the Israeli army against Hamas, at a location given as Gaza. Photo: Reuters

By AFP

Published: Sun 3 Mar 2024, 2:04 PM

The Israeli military has said AI helps it more accurately target militants in its five-month war against Hamas, but as Gaza deaths rise, experts are questioning how effective algorithms can really be.

The health ministry in the Hamas-run Gaza Strip says the war has killed upwards of 30,000 people, the majority of them civilians.

"Either the AI is as good as claimed and the IDF (Israeli military) doesn't care about collateral damage, or the AI is not as good as claimed," Toby Walsh, chief scientist at the University of New South Wales AI Institute in Australia, told AFP.


The health ministry does not specify how many militants are included in the Gaza toll.

Israel has said its forces "eliminated 10,000 terrorists" since the war began in early October, triggered by a deadly Hamas attack on southern Israel.

Israel's claimed use of algorithms adds another layer of concern for activists already alarmed by artificial intelligence-powered hardware like drones and gunsights that are being deployed in Gaza.

The Israeli military told AFP it had no comment on its AI targeting systems.

But the army has repeatedly claimed its forces target only militants and take measures to avoid harm to civilians.

Israel began hyping AI-powered targeting after an 11-day conflict in Gaza during May 2021, which commanders branded the world's "first AI war".

The military chief during the 2021 war, Aviv Kochavi, told Israeli news website Ynet last year that the force had used AI systems to identify "100 new targets every day".

"In the past, we would produce 50 targets in Gaza in a year," he said.

The current Gaza offensive began when Hamas launched an attack on October 7 that resulted in the deaths of about 1,160 people in Israel, mostly civilians, according to an AFP tally of official figures.

Weeks later, a blog entry on the Israeli military's website said its AI-enhanced "targeting directorate" had identified more than 12,000 targets in just 27 days.

An unnamed Israeli official was quoted as saying the AI system, called Gospel, produced targets "for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved".

But an anonymous former Israeli intelligence officer, quoted in November by independent Israeli-Palestinian publication +972 Magazine, described Gospel's work as creating a "mass assassination factory".

Citing an intelligence source, the report said Gospel crunches vast amounts of data faster than "tens of thousands of intelligence officers" and identifies, in real time, locations likely to be used by suspected militants.

However, the sources gave no detail of the data put into the system or the criteria used to determine the targets.

Several experts told AFP the military was likely to be feeding the system with drone footage, social media posts, information from agents on the ground, mobile phone locations and other surveillance data.

Once the system identifies a target, it could use population data from official sources to estimate the likelihood of civilian harm.

But Lucy Suchman, professor of anthropology of science and technology at Britain's Lancaster University, said the idea that more data would produce better targets was untrue.

Algorithms are trained to find patterns in data that match a certain designation -- in the Gaza conflict, possibly "Hamas affiliate", she said.

Any pattern in the data matching a previously identified affiliate would generate a new target, but any "questionable assumptions" would be amplified, Suchman explained.

"In other words, more dubious data equals worse systems."

The Israelis are not the first fighting force to deploy automated targeting on the battlefield.

As far back as the 1990-91 Gulf War, the US military worked on algorithms to improve targeting.

For the 1999 Kosovo bombing campaign, Nato began using algorithms to calculate potential civilian casualties.

And the US military hired secretive data firm Palantir to provide battlefield analytics in Afghanistan.

Backers of the technology have repeatedly insisted it will reduce civilian deaths.

But some military analysts are sceptical that the technology is advanced enough to be trusted.

In a blog post for the British Royal United Services Institute defence think-tank, analyst Noah Sylvia said last month that humans would still need to cross-check every output.

The Israeli military is "one of the most technologically advanced and integrated militaries in the world", he said.

But "the odds of even the IDF using an AI with such a degree of sophistication and autonomy are low".

