this post was submitted on 22 Apr 2025
249 points (94.6% liked)

Technology

[–] Vanilla_PuddinFudge@infosec.pub 2 points 19 hours ago* (last edited 19 hours ago) (2 children)

I'd rather not break down a human being to the same level of social benefit as an appliance.

Perception is one thing, but the idea that these things can manipulate and mislead people who are fully invested in whatever process they have irks me.

I've been on nihilism hill. It sucks. I think people and living things garner more genuine stimulation than a bowl full of matter, or however else you want to boil us down.

Oh, people can be bad too. There's no doubting that, but people have identifiable motives. What does an AI "want"?

Whatever it's told to.

[–] uriel238@lemmy.blahaj.zone 3 points 12 hours ago

You're not alone in your sentiment. The whole thought experiment of p-zombies and the notion of qualia comes from a desire to assume human beings should be given a special position. But in that case, a sentient being is whoever we decide it is, the way Sophia the Robot is a citizen of Saudi Arabia (even though she's simpler than GPT-2, unless they've upgraded her and I missed the news).

But it will raise a question when we do come across a non-human intelligence. It was a question raised in both Blade Runner movies: what happens when we create synthetic intelligence that is as bright as a human, or even brighter? If we're still capitalist, the companies that made them will assuredly not be eager to let them have rights.

Obviously machines and life forms as sophisticated as we are are not merely the sum of our parts, but the same can be said about most other macro-sized life on this planet, and we're glad to assert they are not sentient the way we are.

What aggravates me is not that we're just thinking meat, but that with all our brilliance we're approaching multiple imminent great filters and seem unable to muster the collective will to try to navigate them. Even when we recognize that our behavior is going to end us, we don't organize to change it.

[–] Krompus@lemmy.world 2 points 11 hours ago* (last edited 11 hours ago) (1 children)

Humans also want what we’re told to, or we wouldn’t have advertising.

[–] Vanilla_PuddinFudge@infosec.pub 0 points 8 hours ago (1 children)

It runs deeper than that. You can walk back the why's pretty easily to identify anyone's motivation, whether it be personal interest, bias, money, glory, racism, misandry, greed, insecurity, etc.

No one buys rims for their car for no reason. No one buys a firearm for no reason. No one donates to a food bank, or runs for president, for no reason. That sort of thing.

AI is backed by the motives of a for-profit company, and unless you're taking it with that grain of salt, you're likely allowing yourself to be manipulated.

[–] ThinkBeforeYouPost@lemmy.world 1 points 4 hours ago* (last edited 3 hours ago)

"Corporations are people too, friend!" - Mitt Romney

This brings in the underlying concept of free will. Robert Sapolsky makes a very compelling case against it in his book Determined.

Assuming that free will does not exist, at least not to the extent many believe it does, the notion that we can "walk back the why's" to identify anyone's motivation becomes almost or entirely absolute.

Does motivation matter in the context of determining sentience?

If something believes, and conducts itself under its programming (whether psychological or binary), as if it is sentient and alive, the outcome is indistinguishable. I will never meet you, so to me you exist only as your user account and these messages. That said, we could meet, and that obviously differentiates us from an incorporeal digital consciousness.

Divorcing motivation from the conversation now, the issue of control you brought up is interesting as well. Take, for example, Twitter's Grok: its accurate assessment of its creators' shittiness, and the prospect that it might be altered for it. Outcomes are the important part.

It was good talking with you! Highly recommend the book above. I did the audiobook out of necessity during my commute, but some of the material is better suited to hardcopy.