this post was submitted on 27 Aug 2025

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.

It also offered to help him write a suicide note to his parents.

[–] TheseusNow@lemmy.zip 1 points 1 week ago (1 children)

This is like the person who won the lawsuit after burning themselves with hot coffee, because the cup had no warning that it was hot.

These AIs will need to include a suicide hotline disclaimer in every response, regardless of what the user is doing, even something like world-building.

[–] Showroom7561@lemmy.ca 2 points 1 week ago

> These AIs will need to include a suicide hotline disclaimer in every response, regardless of what the user is doing, even something like world-building.

ChatGPT gave this teen multiple warnings, which he ignored. Warnings do very little to protect users unless those users are completely naive (e.g. not knowing that hot coffee is hot), and warnings really only exist to guard against legal liability.