this post was submitted on 19 Sep 2025
96 points (98.0% liked)

Fuck AI


Thanks to Ann Reardon for pointing out that you shouldn’t trust ChatGPT.

https://youtu.be/rZinHm5nBhY

top 17 comments
[–] Kirk@startrek.website 40 points 1 week ago* (last edited 1 week ago) (5 children)

If there were a book or website out there that described something poisonous as not poisonous, and someone believed what was written and got poisoned, I think most reasonable people would put the blame on whoever published the bad information.

Yet when the bad (potentially deadly in this case!) information comes from ChatGPT, OpenAI gets a pass (including by everyone so far in this comment section) and the blame is placed on the person who was poisoned!

[–] artyom@piefed.social 14 points 1 week ago* (last edited 1 week ago)

the blame is placed on the person who was poisoned!

I think a toxic mentality has infected society: the idea that there's only ever one person, entity, group, or "side" to blame. It's OpenAI's fault for feeding him deadly information, and it's also his fault for not fact-checking that information. He has paid dearly for his mistake. Has OpenAI?

That said, if we can put aside blame for a second, can we at least agree that OpenAI is feeding dangerous, unchecked information to the masses? It should be OpenAI's responsibility to either figure out how to fix that or (preferably) just stop doing it entirely. I'm not sure whether it's legal for a company to give out medical advice without a doctor involved, or whether it's liable for the ramifications of such advice, but it probably should be. They can't just put a "you should fact-check this info" note at the bottom and absolve themselves of all responsibility.

[–] LustyArgonianMana@lemmy.world 6 points 1 week ago

Engineers, including software and AI engineers, have a literal moral duty not to make things that will kill people, e.g. the Hyatt Regency walkway collapse in Kansas City. This has long been established.

[–] black_flag@lemmy.dbzer0.com 6 points 1 week ago

Well, thankfully it wasn't deadly in this case, even if it could have been.

[–] bluGill@fedia.io 5 points 1 week ago (1 children)

Home Depot-style books on DIY wiring have been forcibly recalled for containing deadly information. If you can't make something safe, that's a reason not to make it at all.

[–] Kirk@startrek.website 1 points 1 week ago

Glad to hear they were recalled!

This is equivalent to seeing "poison is good for you" scribbled on a bathroom wall. If you believe that, it's on you.

[–] s@piefed.world 16 points 1 week ago

I thought this comment on Ann’s video was interesting:

I recently read a story about a teacher who got so fed up with students using ChatGPT to "write" their essays that they turned the tables and had their students use ChatGPT to write their essays on a particular subject... and then manually research what the AI got wrong. Apparently, almost the entire class stopped using ChatGPT for any of their schoolwork. (And yes, that "almost" is still concerning, but at least ChatGPT got put in its place for a change.)

[–] ekZepp@lemmy.world 13 points 1 week ago
[–] PostaL@lemmy.world 10 points 1 week ago* (last edited 1 week ago) (1 children)

-ChatGPT, should I use salt?

-NaBro ...

Man poisons himself by consuming crystallized internet stupidity.