this post was submitted on 17 Oct 2024

Futurology

[–] Lugh 2 points 12 hours ago (1 children)

Even though we often (rightly) focus on our AI worries, this is evidence that AI can also do society great good.

[–] knightly@pawb.social 1 points 12 hours ago (2 children)

I disagree.

Given the error rate of so-called "AI", all 5 million of those documents must still be checked by a human for accuracy.

It'd be far more efficient to simply pay people to do the work in the first place than to pay for "AI" to do the work and also pay people to check it.

[–] just_another_person@lemmy.world 5 points 11 hours ago (1 children)

Humans will have to verify, of course. "AI" is just a really fast sort, and that's fine with human annotators checking the output. Could have just saved a bunch of money by grepping the fucking text for keywords, though.
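The grep-style approach is easy to sketch. Below is a minimal Python version: the flagged phrases and deed texts are placeholders invented for illustration, not the actual terms or records from the project.

```python
import re

# Hypothetical phrases to flag; a real list would be compiled by historians.
FLAGGED_TERMS = ["restrictive covenant", "shall not be sold to"]

# One case-insensitive pattern matching any flagged phrase.
PATTERN = re.compile("|".join(re.escape(t) for t in FLAGGED_TERMS), re.IGNORECASE)

def needs_review(text: str) -> bool:
    """Return True if the deed text contains any flagged phrase."""
    return PATTERN.search(text) is not None

# Toy stand-ins for scanned deed texts.
deeds = {
    "deed_001": "This land shall not be sold to any person of ...",
    "deed_002": "Standard easement and utility provisions.",
}
flagged = [doc_id for doc_id, text in deeds.items() if needs_review(text)]
```

Like any keyword filter, this only sorts documents into "needs human review" and "probably fine" piles; spelling variations in old handwritten records transcribed by OCR would still cause misses.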

[–] Lugh 1 points 12 hours ago (1 children)

Surely highlighting 5 million out of 24 million is more efficient than checking them all?

[–] knightly@pawb.social 2 points 11 hours ago (2 children)

If you don't care about false negatives, maybe.

[–] psud@aussie.zone 2 points 2 hours ago

There are only so many historical synonyms for Black people; racist language should be searchable with few false negatives.

[–] Lugh 2 points 11 hours ago (2 children)

false negatives

I don't get your logic here either. A false negative would have zero implications for anyone. It would have no legal standing or relevance.

[–] knightly@pawb.social 2 points 11 hours ago

A false negative would have zero implications for anyone. It would have no legal standing or relevance.

I don't understand; in what way does allowing a racist deed covenant to stand unchallenged have zero implications or relevance?

If it did, then what would be the point of rooting them out in the first place?

[–] ShellMonkey@lemmy.socdojo.com 2 points 11 hours ago

A false negative would, as I'm understanding the goal here, be a case where the AI missed an existing problem.

It wouldn't change the current state, though, so it wouldn't actively hurt anything, and of course it's plenty likely a human checker would have overlooked those misses and more.