It's not too surprising that ChatGPT flags conversations for people to review and decide whether or not to call the police, especially considering the push to hold companies like OpenAI accountable for things like suicides and other mental health episodes; this seems to be at least partly a response to that. It's worth noting it's less the LLM doing this and more a human review team. It's also worth pointing out that a lot of other online platforms work the same way: if I posted very specific information about harming myself or others somewhere else, it would definitely be reviewed by people and may or may not be reported to the police.
More reasons to use alternatives like DeepSeek or, even better, local LLMs for privacy. Especially since OpenAI can get fucked for lots of reasons.
Last month, the company's CEO Sam Altman admitted during an appearance on a podcast that using ChatGPT as a therapist or attorney doesn't confer the same confidentiality that talking to a flesh-and-blood professional would — and that thanks to the NYT lawsuit, the company may be forced to turn those chats over to courts.
Also, this is funny! Therapists will do the exact same thing: if you come in with specific plans to hurt yourself or others, they will call the police. They have to. I'm speaking from personal experience of having a psychiatrist call the police on me. It would be nice if mental health systems could become detached from the carceral system. As it stands, that link turns mental health care into a form of social control and oppression that punishes someone for being in a mental health crisis.
I do feel sorry for these people, considering that those using it generally don't have anyone else to talk to, or feel like they can't. They shouldn't be punished by cops showing up and potentially killing them because some reviewers of a flagged conversation decided it was worth calling the pigs. Using ChatGPT to talk about something shouldn't mean being punished for it.
I think it's worth looking into why someone would go to ChatGPT to talk about their issues in the first place. If anything, this shows a heightening contradiction in a lot of societies where mental health is either not taken seriously or is outright punished, on top of other factors like alienation. In a way it reminds me of how people say to reach out if you're struggling, but if you start talking about suicidal thoughts, the response tends to be "go away" or "go talk to a therapist" who, much like OpenAI, will call the police depending on how specific you are about those suicidal thoughts, or whether they involve harming others. A lot of people simply don't know how to handle such things, don't take them seriously, or belittle someone going through a mental health crisis. And that's not even getting into whether someone can afford to see a therapist or mental health professional in the first place.