[–] Anissem@lemmy.ml 28 points 1 year ago (2 children)

If I have to beg a computer for a job please just shoot me now.

[–] Dasus@lemmy.world 11 points 1 year ago

Let's shoot them (the enemies of the working class) instead? If you really have a death wish, you can go on some glorious "witness me" rampage, I promise.

[–] ruckblack@sh.itjust.works 11 points 1 year ago

Same, this headline definitely triggered the suicidal ideation

[–] CanadaPlus@lemmy.sdf.org 19 points 1 year ago* (last edited 1 year ago)

AIs can and will learn bias from any data they're fed. I'm guessing these get fed everything possible, because a client isn't about to leave over unquantified bias, but absolutely would if it doesn't work. In the article they mention training against specific biases, but I'm skeptical they've done a good job, and certain they couldn't get every bias out that way.

If you're just using it as a fancy answering machine, that's fine, but it's implied that they score candidates automatically as well.
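As a minimal sketch of the kind of check this comment is gesturing at, here's a demographic-parity comparison over automated candidate scores. Everything here is invented for illustration: the scores, the group labels, and the 0.5 cutoff are all hypothetical, not anything from the article or from a real screening product.

```python
# Hypothetical illustration: a crude demographic-parity check on
# automated candidate scores. All data below is made up.
from statistics import mean

# (score, group) pairs a screening model might emit; entirely invented.
scored = [
    (0.82, "A"), (0.41, "A"), (0.77, "A"), (0.55, "A"),
    (0.35, "B"), (0.60, "B"), (0.48, "B"), (0.30, "B"),
]

THRESHOLD = 0.5  # candidates at or above this score advance


def pass_rate(group: str) -> float:
    """Fraction of a group's candidates the model would advance."""
    outcomes = [score >= THRESHOLD for score, g in scored if g == group]
    return mean(outcomes)


for g in ("A", "B"):
    print(f"group {g}: pass rate {pass_rate(g):.2f}")

# A large gap between groups with similar qualifications is exactly the
# kind of unquantified bias a client would never notice on their own.
```

With these made-up numbers, group A advances at 0.75 and group B at 0.25; a real audit would need many more candidates and controls for qualifications, but the point stands that the gap is measurable only if someone bothers to measure it.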

[–] Dasus@lemmy.world 12 points 1 year ago

They're systemising and sanitising bias.

It's not gonna go anywhere. LLMs learn from datasets made by people, and people are biased. The people writing the LLMs are biased, too.

Everyone is. So we'll never be rid of it, and it's ridiculous to claim otherwise. If we keep watch, we can recognise and fix biases. If we pretend biases have been fixed, they'll get out of control.

[–] voidx 4 points 1 year ago* (last edited 1 year ago)

I get creeped out even answering an AI bot that screens phone support calls. A video AI interviewer is a whole other level of creepy.

[–] fubarx@lemmy.ml 2 points 1 year ago

The backlash on this will be glorious.