this post was submitted on 01 Jun 2024
116 points (93.9% liked)

Futurology

[–] Lugh 20 points 5 months ago (2 children)

Palantir's panel at a recent military conference, where they joked and patted themselves on the back about the work their AI tools are doing in Gaza, played like a scene of human ghouls from the darkest of horror movies.

Estimates vary as to how many of the 30,000-40,000 dead in Gaza were military combatants, but they seem to average around 20%. That is a terrible record of failure for an AI tool that touts its precision.

Why does the US government want to reward and endorse this tech? Why aren't people more alarmed? By any measure, Palantir's demonstrated track record is surely one of failure. The Israel-Hamas war is the first time the world has seen AI used at scale in warfare, and it's a grim indication for the future.

[–] jmcs@discuss.tchncs.de 8 points 5 months ago (1 children)

Palantir's "AI" is crap, and it clearly produces tons of false positives. But even if it were 100% accurate, it wouldn't prevent civilian deaths if the military receiving the reports carpet-bombs everything around the identified terrorists without caring whether civilians are nearby. And when the Israeli spokesperson can give a very precise estimate of how many Hamas members they killed, but not even a ballpark figure for civilian deaths, it's clear that's exactly what's happening here.

> but they seem to average about 20%. This seems like a terrible record of failure for an AI tool that touts its precision.

That does seem pretty bad.

To play devil's advocate for a moment, what systems were they using before implementing the AI tool? Were those systems better? Seems like a low bar to beat...