It's not too surprising that ChatGPT will flag conversations for people to review and decide whether to call the police, especially with the push to hold places like OpenAI accountable for things like suicides and other mental health episodes; this seems to be at least partly a response to that. It's worth noting that it's less the LLM doing this and more a human team. It's also worth pointing out that a lot of other online platforms work the same way. If I posted very specific information about harming myself or others somewhere else, it would definitely be reviewed by people and might or might not be reported to the police.
More reasons to use alternatives like DeepSeek or, even better, local LLMs for privacy. Especially since OpenAI can get fucked for lots of reasons.
Last month, the company's CEO Sam Altman admitted during an appearance on a podcast that using ChatGPT as a therapist or attorney doesn't confer the same confidentiality that talking to a flesh-and-blood professional would — and that thanks to the NYT lawsuit, the company may be forced to turn those chats over to courts.
Also this is funny! Therapists will do the exact same thing: if you come in with specific plans to hurt yourself or others, they will call the police. They have to. Speaking from personal experience of having a psychiatrist call the police on me. It would be nice if mental health care could become detached from the carceral system. As it stands, that link turns it into a form of social control and oppression that punishes someone for being in a mental health crisis.