this post was submitted on 27 Aug 2025
102 points (97.2% liked)

Technology


Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.

It also offered to help him write a suicide note to his parents.

[–] lakemalcom@sh.itjust.works 2 points 1 week ago (1 children)

Agreed that ChatGPT has no motives.

But the thing about these chatbots (as opposed to a search engine or a library) is that the responses come in natural language. They won't just spit out a list of instructions; they'll assemble a natural-language response that affirms your actions or choices, sometimes with words that sound empathetic.

I imagine some of the generated replies say something to the effect of:

"It's terribly sad that you've committed to ending your own life, but given the circumstances, it's an understandable course of action. Here are some of the least painful ways to die:...."

Are people looking for something to blame besides themselves? Absolutely. But I think the insidious thing here is that AI companies really are trying to make chatbots a replacement for human connection.

[–] Showroom7561@lemmy.ca 4 points 1 week ago (1 children)

“It’s terribly sad that you’ve committed to ending your own life, but given the circumstances, it’s an understandable course of action. Here are some of the least painful ways to die:…”

We don't know what kind of replies this teen was getting, but according to reports, he only got this information by framing it as some kind of creative writing or "world-building" exercise, thereby bypassing the guardrails that were in place.

It would be hard to imagine a reply like that when the chatbot's only context is providing creative-writing ideas based on the user's prompts.

[–] TheseusNow@lemmy.zip 1 points 1 week ago (1 children)

This is like the person who won the lawsuit after burning themselves with hot coffee because the cup had no warning that it was hot.

These AIs will need to include a suicide hotline disclaimer in every response, regardless of the task, even world-building.

[–] Showroom7561@lemmy.ca 2 points 1 week ago

These AIs will need to include a suicide hotline disclaimer in every response, regardless of the task, even world-building.

ChatGPT gave this teen multiple warnings, which he ignored. Warnings do very little to protect users unless those users are completely naive (e.g. needing to be told that hot coffee is hot); warnings really only exist to guard against legal liability.