this post was submitted on 17 Oct 2024
20 points (95.5% liked)

Futurology

top 11 comments
[–] Lugh 5 points 1 month ago (1 children)

Even though we often (rightly) focus on our AI worries, this is evidence that AI can also do society great good.

[–] knightly@pawb.social 7 points 1 month ago (2 children)

I disagree.

Given the error rate of so-called "AI", all 5 million of those documents must still be checked by a human for accuracy.

It'd be far more efficient to simply pay people to do the work in the first place than to pay for "AI" to do the work and also pay people to check it.

[–] just_another_person@lemmy.world 6 points 1 month ago (1 children)

Humans will have to verify, of course. "AI" is just a really fast sort, which is fine when paired with human annotators. They could have saved a bunch of money just by grepping the fucking text for keywords, though.
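
For what it's worth, the "grep for keywords" screen could look something like this minimal sketch (Python standing in for grep; the keyword list and the deeds/ directory are purely illustrative assumptions, not the terms a real screen would use):

```python
import re
from pathlib import Path

# Illustrative keyword list only -- a real screen would need the historical
# terms of art actually used in restrictive covenants.
KEYWORDS = [
    "caucasian race",
    "shall not be sold to",
    "restricted to persons of",
]

PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)


def flag_deeds(corpus_dir: str) -> list[str]:
    """Return the paths of deed text files that match any keyword."""
    flagged = []
    for path in Path(corpus_dir).glob("*.txt"):
        if PATTERN.search(path.read_text(errors="ignore")):
            flagged.append(str(path))
    return flagged


if __name__ == "__main__":
    # "deeds/" is a placeholder directory of OCR'd deed text files.
    for hit in flag_deeds("deeds/"):
        print(hit)
```

Anything flagged this way would still go to a human reviewer, same as with the AI pass.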

[–] knightly@pawb.social 1 point 1 month ago
[–] Lugh 3 points 1 month ago (1 children)

Surely highlighting 5 million documents out of 24 million is more efficient than checking them all?

[–] knightly@pawb.social 4 points 1 month ago (2 children)

If you don't care about false negatives, maybe.

[–] psud@aussie.zone 5 points 1 month ago (1 children)

There are only so many historical synonyms for black people; racist language should be searchable with few false negatives.

[–] knightly@pawb.social 1 point 1 month ago

No "AI" required~

[–] Lugh 2 points 1 month ago (2 children)

false negatives

I don't get your logic here either. A false negative would have zero implications for anyone. It would have no legal standing or relevance.

[–] knightly@pawb.social 4 points 1 month ago

A false negative would have zero implications for anyone. It would have no legal standing or relevance.

I don't understand: in what way does allowing a racist deed covenant to stand unchallenged have zero implications or relevance?

If it did, then what would be the point of rooting these covenants out in the first place?

[–] ShellMonkey@lemmy.socdojo.com 4 points 1 month ago

A false negative would, as I'm understanding the goal here, be a case where the AI missed an existing problem.

It wouldn't change the current state, though, so it wouldn't actively hurt anything, and of course it's plenty likely a human checker would have overlooked those misses and more.