When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a con man preying on widowers. None of it was true: Bernklau had served for years as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted. 

But why did Copilot hallucinate these terrible and false accusations?

futatorius@lemm.ee 0 points 1 month ago

Unless there is a huge disclaimer before every interaction saying "THIS SYSTEM OUTPUTS BOLLOCKS!" then it's not good enough. And any commercial enterprise that represents any AI-generated customer interaction as factual or correct should be held legally accountable for making that claim.

There are probably already cases where AI is being used for life-and-limb decisions, likely with a do-nothing human rubber stamp in the loop to provide plausible deniability. People will be maimed and killed by these decisions.