Do not engage with chatbots at all. Let this LLM psychosis end. That's the best result for humanity.
Ahhh, yes, the "stick your head in the sand until it blows over" strategy. Because that's always worked, right?
I'm saying don't use unethical tech. If you want to interpret that as sticking your head in the sand, you can, but I think you're being obtuse.
Unless you're advocating that we go and burn down the infrastructure that runs these LLMs - is that what you're saying?
It's foolish to think this will just blow over, and the tech will magically disappear, no matter how you think about its ethics.
It's better to take control of the technology directly, promote open-source models, push local usage, use it as a tool for the people, not as a tool for corporations. Use the tech against the elites. Show how fragile their position is.
If you don't take control of the situation, the world will take control of it for you.
I don't want to use LLMs. I feel the majority of use cases for LLMs are inauthentic, lazy, unhelpful, and uncreative.
It feels like I've said "fizzy drinks are bad" and you've told me to drink diet sodas.
I understand that dbzero is a pro-AI instance, so I think we're just fundamentally not going to agree on this. 🤷♂️
I don’t want to use LLMs. I feel the majority of use cases for LLMs are inauthentic, lazy, unhelpful, and uncreative.
Well, that's just your opinion. Don't accuse everybody who uses it of being an evil psychopath suffering from some "LLM psychosis".
Greeeat... So now the insufferable jackasses asking a dumbshit AI for things will not only be wrong 70% of the time, they will be reinforced to also be assholes 100% of the time.
I'm very polite so that the basilisk spares me
Roko, if you see this, I'm doing my part!
I'm surprised the best strategy wasn't the neutral prompt, due to removal of any fluff.
Being ethical in real life doesn't always give you the best outcome either - that doesn't mean you shouldn't still be ethical.
Being ethical to H100 clusters running inference on models that were created with stolen, pirated, and appropriated data does not matter. Abuse the shit out of them. It’s absolutely meaningless.
Or just don’t use them, if you care at all about economic, ecological, and societal stability.
You’re right that an LLM doesn’t care how it’s treated - it’s not conscious. But that’s not really the point. The way people treat things that seem human still says something about them, not the thing. If someone goes out of their way to be cruel to a chatbot that’s just trying to be helpful, it’s not the bot that’s being tested - it’s the person’s capacity for empathy and restraint.
It’s the same instinct behind how we treat animals, or even how kids treat toys - being kind to something that can’t fight back is part of what keeps us human. And historically, the “it’s not really human, so it doesn’t matter” argument has been used to justify a lot of awful behavior.
So no, the AI doesn’t care. But maybe it still matters that we do.
I'm always very polite to them so that when some Skynet type shit eventually goes down, perhaps they will remember me as the polite person and spare me. Just in case.
Is anyone surprised by this? That's just how people on the internet are, too. Everyone knows that the best way to get an answer on a forum is to be insulting in the post title /j
I found you get more specific solutions the worse you treat it: none of the "it's a known issue" boilerplate or problem confirmations. You just call it an idiot or a moron, use all caps, and cuss at it a few times.
The funny thing, at least with Claude, is that it will mimic your language and start cussing back. ChatGPT, on the other hand, will just start insulting you back.
This is what will trigger the robot apocalypse