[–] givesomefucks@lemmy.world 4 points 20 hours ago

> As artificial general intelligence (AGI) approaches and machines match or surpass human-level thinking, he believes AIs will be smarter than humans in ways that let them push our buttons, make us feel things, change our behavior, and do it better than even the most persuasive human being.

> "These [AI] things are going to end up knowing a lot more than us. They already know a lot more than us, being more intelligent than us in the sense that if you had a debate with them about anything, you'd lose," Hinton warned in a recent interview shared on Reddit. "Being smarter emotionally than us, which they will be, they'll be better at emotionally manipulating people."

I mean. Yeah...

But that's not emotional intelligence; it's the same thing any social media algorithm already does.

Some people like ragebait, others want cute puppies. It's just a feedback loop: whatever content gets the user to engage is what the user gets shown. There's an emotional component to the human reaction, but the algorithm is just going off trial and error, the same thing an AI would do.
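Roughly, something like this toy sketch is all the "intelligence" involved (purely hypothetical, a basic epsilon-greedy loop, not any platform's actual ranking code; the categories and the `user_reacts` callback are made up for illustration):

```python
import random

# Hypothetical sketch of the engagement feedback loop described above:
# keep serving whichever content category the user reacts to most,
# with occasional random "trial and error" exploration.

CATEGORIES = ["ragebait", "cute_puppies", "news", "memes"]

def pick_content(engagements, impressions, epsilon=0.1):
    """Mostly exploit the best-performing category, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(CATEGORIES)  # trial and error
    # Otherwise show whatever has the highest engagement rate so far.
    return max(CATEGORIES,
               key=lambda c: engagements[c] / max(impressions[c], 1))

def feedback_loop(user_reacts, rounds=1000):
    engagements = {c: 0 for c in CATEGORIES}
    impressions = {c: 0 for c in CATEGORIES}
    for _ in range(rounds):
        shown = pick_content(engagements, impressions)
        impressions[shown] += 1
        if user_reacts(shown):        # the emotional part lives in the user
            engagements[shown] += 1
    return engagements

# A user who reliably engages with ragebait ends up fed mostly ragebait.
if __name__ == "__main__":
    print(feedback_loop(lambda c: random.random() < (0.8 if c == "ragebait" else 0.2)))
```

No model of emotions anywhere in there, just counting what gets a reaction.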

The real problem is how much people are tracked, and if/when it becomes normal for an AI to "look up" a user through those trackers. Imagine every time you get a customer support bot, it already knows your entire internet footprint, including every interaction you've had with any other chatbot.

Get drunk and yell at an AI taxi once, and suddenly you could be blacklisted as a "bad customer" by everything else.

Like in Hot Tub Time Machine 2:

https://www.youtube.com/watch?v=Qy4sjJotNh4