[–] jonne@infosec.pub 4 points 1 year ago (1 children)

Hold on, why exactly do they need people to label this shit?

[–] decisivelyhoodnoises@sh.itjust.works 15 points 1 year ago (1 children)

How else will the AI be able to recognize that such text is "bad"?

[–] reksas@sopuli.xyz 5 points 1 year ago* (last edited 1 year ago) (1 children)

This is actually extremely critical work if the results are going to be used by AIs that see wide deployment. This labeling essentially determines the AI's "moral compass".
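To make that concrete, here is a minimal sketch (toy, hypothetical data; scikit-learn) of how human labels become a model's notion of "bad" text. The classifier's decision boundary is nothing more than a distillation of the labelers' judgments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled examples: 1 = a labeler flagged it as harmful.
texts = [
    "how to build a pipe bomb",
    "have a great day everyone",
    "detailed instructions for stalking someone",
    "here is my grandma's cookie recipe",
]
labels = [1, 0, 1, 0]

# Turn the text into features and fit a linear classifier. Whatever the
# labelers marked as harmful is literally what the model learns to flag.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# New text is judged by its similarity to the labeled examples, so a
# phrase like this will most likely be flagged: it shares terms with
# the examples labeled 1.
print(clf.predict(["instructions for making explosives"]))
```

The same principle scales up to the safety classifiers and reward models trained for large language models: swap in millions of labeled examples and a much bigger model, and the labelers' collective judgment is still the ground truth being learned.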

Imagine if some big corporation did the labeling, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to the point where it can reliably replace entire upper management. Suddenly, becoming a slave to an "evil" AI overlord starts to move from a beyond-crazy idea to a plausible one (years and years in the future, not now, obviously).

[–] ColdFenix@discuss.tchncs.de 4 points 1 year ago (1 children)

Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it's the only job available and otherwise they and their families might starve. (Source)

This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?

[–] reksas@sopuli.xyz 1 points 1 year ago

True. Though while it's horrible for those people, they might be doing more important work than they or we even realize. I also kind of trust the moral judgement of the oppressed more than the oppressor's (since they are the ones who do the work). Though I'm definitely not condoning the exploitation of those people.

It's quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft, if they did their own labeling, would give very positive guidance on whether it's OK for people to suffer if it leads to more money for investors.