this post was submitted on 18 Apr 2025
321 points (94.5% liked)

[–] thisbenzingring@lemmy.sdf.org 3 points 2 days ago (3 children)

why are you arguing that at me? I just argued that it's not a human; AI is a tool and should be treated as such. If my tool sucks, I will tell it so and quit using it. If my tool is great, I will use it to the best of my ability and respect its functionality.

everyone else here is making strawman arguments because I just don't think it needs to be anthropomorphized. The link talks about "tens of millions of dollars" wasted on computing "please" and "thank you"

that is fucking stupid behavior

[–] nickwitha_k@lemmy.sdf.org 1 points 22 hours ago

> why are you arguing that at me?

Rationally and in a vacuum, anthropomorphizing tools and animals is kind of silly and sometimes dangerous. But human brains don't do well at context separation and rationality. They are very noisy and prone to conceptual cross-talk.

The reason this is important is that, as useless as LLMs are at nearly everything they are billed as, they are really good at fooling our brains into thinking that they possess consciousness (there are plenty of people, even on Lemmy, who ascribe levels of intelligence to them that are impossible with the technology). Just like knowledge and awareness don't grant immunity to propaganda, our unconscious processes will do their own thing. Humans are social animals and our brains are adapted to act as such, resulting in behaviors that run the gamut from wonderfully bizarre (keeping pets that don't "work") to dangerous (attempting to pet bears or keeping chimps as "family").

Things that are perceived by our brains, consciously or unconsciously, are stored with associations to other similar things. So the danger I was trying to highlight is that being abusive to a tool like an LLM, which can trick our brains into associating it with conscious beings, can indirectly reinforce the acceptability of abusive behavior towards other people.

Basically, like I said before, one can unintentionally train themselves into practicing antisocial behaviors.

You do have a good point, though, that people believing ChatGPT is a being they can confide in is itself very harmful and likely to lead to antisocial behaviors.

> that is fucking stupid behavior

It is human behavior. Humans are irrational as fuck, even the most rational of us. It's best to plan accordingly.

[–] CileTheSane@lemmy.ca 2 points 1 day ago

> If my tool sucks, I will tell it so

So thanking your tools: dangerous on a humanity-level scale.

Telling your tool it sucks: normal behaviour.

Exactly!

I'm a parent, and I set a good example by being incredibly respectful to people, whether it's the cashier at the grocery store, their teacher at school, or a police officer. I show the same respect because I'm talking to a person.

When I'm talking to a machine, I'm direct, without any of that respect, because the goal is to clearly communicate intent: "Alexa, play [song]" or "Hey Google, what's [question]?" They're tools, and there is zero value in being polite to a machine; it just adds more chances for the machine to misinterpret me.

Kids are capable of understanding that you act differently in different situations. They're super respectful to their teachers, they don't bother with that with their peers, and we as parents are somewhere in between. I don't want my kids to associate AI/LLMs more with their teachers than with their pencils. They're tools, and their purpose is to be used efficiently.