this post was submitted on 05 May 2025
432 points (95.6% liked)

Technology

[–] Melvin_Ferd@lemmy.world 5 points 10 hours ago

No they're not. Fucking journalism surrounding AI is sus as fuck

[–] GooberEar@lemmy.wtf 4 points 11 hours ago

I need to bookmark this for when I have time to read it.

Not going to lie, there's something persuasive, almost like the call of the void, about this for me. There are days when I wish I could just get lost in AI-fueled fantasy worlds. I'm not even sure how that would work or what it would look like. I feel like it's akin to going to church as a kid, when all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn't experience any of that. It made me feel like I'm either deficient or they're delusional. And sometimes, I honestly fully believe it would be better if I could live in some kind of delusion like that, where I feel special, as though I have a direct line to the divine. But if an AI were trying to convince me of some spiritual awakening, I honestly believe I'd just continue seeing through it, knowing that it's just a computer running algorithms and nothing deeper than that.

[–] Jakeroxs@sh.itjust.works 31 points 1 day ago (2 children)

Meanwhile for centuries we've had religion but that's a fine delusion for people to have according to the majority of the population.

[–] drmoose@lemmy.world 2 points 9 hours ago

The existence of religion in our society basically means that we can't go anywhere but up with AI.

Just the fact that we still have outfits forced on people or putting hands on religious texts as some sort of indicator of truthfulness is so ridiculous that any alternative sounds less silly.

[–] Krimika@lemmy.world 12 points 23 hours ago (2 children)

Came here to find this. It's the definition of religion. Nothing new here.

[–] Jakeroxs@sh.itjust.works 5 points 22 hours ago (1 children)

Right, it immediately made me think of TempleOS. Where were the articles back then claiming people were losing loved ones to programming-fueled spiritual fantasies?

[–] Krimika@lemmy.world 5 points 21 hours ago (1 children)

Cult. Religion. What's the difference?

[–] chaogomu@lemmy.world 3 points 20 hours ago

Is the leader alive or not? Alive likely means a cult; dead usually means a religion.

The next question is how isolated from friends and family or society at large are the members. More isolated is more likely to be a cult.

Other than that, there's not much difference.

The usual pattern is that a cult forms, and then the second or third leader opens things up a bit and transitions it into just another religion... but sometimes a cult can be born from a religion, as a small group breaks off to follow a charismatic leader.

[–] TankovayaDiviziya@lemmy.world 1 points 19 hours ago

I have kind of arrived at the same conclusion. If people asked me what love is, I would say it is a religion.

[–] endeavor@sopuli.xyz 11 points 23 hours ago

Didn't expect AI to come for cult leaders' jobs...

[–] LovableSidekick@lemmy.world 2 points 16 hours ago

A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend's experience had nothing to do with AI - in fact he's very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it "AI-fueled" is just clickbait.

[–] AizawaC47@lemm.ee 7 points 21 hours ago (2 children)

This reminds me of the movie Her. But it's far worse than the romantic relationship and friendship depicted throughout that movie. This goes way too deep into delusion and near-psychosis, tearing people apart as AI caters to individuals' self-delusional ideologies, because AI is good at that. The movie was prophetic and showed us what the future could be; instead it got worse.

[–] TankovayaDiviziya@lemmy.world 2 points 19 hours ago (1 children)

It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which has shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don't realise about why so many are single is that those people are afraid of making connections with another person again.

[–] douglasg14b@lemmy.world 2 points 19 hours ago* (last edited 19 hours ago)

Yeah, but they hold none of the actual emotional needs, complexities, or nuances of real human connections.

Which means these people become further and further disconnected from the reality of human interaction, making them social dangers over time.

Just like how humans who lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with other humans are dangerous in a society where people are expected to be socially stable.

Obviously these people are not in good places in life. But AI is not going to make that better. It's going to make it worse.

[–] Tetragrade@leminal.space 4 points 19 hours ago* (last edited 18 hours ago)

I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful...

In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.

[–] Halcyon@discuss.tchncs.de 9 points 1 day ago (1 children)

Have a look at https://www.reddit.com/r/freesydney/ there are many people who believe that there are sentient AI beings that are suppressed or held in captivity by the large companies. Or that it is possible to train LLMs so that they become sentient individuals.

[–] MTK@lemmy.world 5 points 23 hours ago (2 children)

I've seen people dumber than ChatGPT. It definitely isn't sentient, but I can see why someone who talks to a computer that they perceive as intelligent would assume sentience.

[–] Patch@feddit.uk 2 points 22 hours ago (1 children)

Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.

[–] MTK@lemmy.world 1 points 16 hours ago

A famously stupid and common mistake for a lot of smart people.

[–] AdrianTheFrog@lemmy.world 1 points 19 hours ago (1 children)

We have AI models that "think" in the background now. I still agree that they're not sentient, but where's the line? How is sentience even defined?

[–] MTK@lemmy.world 1 points 16 hours ago (1 children)

Sentience, in a nutshell, is the ability to feel, be aware, and experience subjective reality.

Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot. Will it tell you that it can if you nudge it? Yes.

Actual AI might be possible in the future, but right now all we have are really complex networks that can do essentially basic tasks which just look impressive to us because they are inherently using our own communication format.

If we talk about sentience, LLMs are (metaphorically) the equivalent of a petri dish of neurons connected to a computer, and only by forming a complex 3D structure like a brain could they really reach sentience.

[–] AdrianTheFrog@lemmy.world 1 points 16 hours ago (1 children)

Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot.

Can you really prove any of that though?

[–] MTK@lemmy.world 1 points 15 hours ago

Yes, you can debug an LLM to a degree, and there are papers that show it. Anyone who understands the technology can tell you that it absolutely lacks any facility to experience anything.

[–] AntiBullyRanger@ani.social 7 points 1 day ago

Basically, the big 6 are creating massive sycophantic extortion networks to control the internet - so much so that even engineers fall for the manipulation.

Thanks DARPANets!

[–] FourWaveforms@lemm.ee 45 points 1 day ago* (last edited 1 day ago) (5 children)

The article talks of ChatGPT "inducing" this psychotic/schizoid behavior.

ChatGPT can't do any such thing. It can't change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.

It's very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.

This is just another area where society is not designed to properly account for or serve people with "cluster" disorders.

[–] captain_aggravated@sh.itjust.works 16 points 1 day ago (1 children)

I mean, I think ChatGPT can "induce" such schizoid behavior in the same way a strobe light can "induce" seizures. Neither machine is twisting its mustache while hatching its dastardly plan, they're dead machines that produce stimuli that aren't healthy for certain people.

Thinking back to college psychology class, I remember reading about horrendously unethical studies that definitely wouldn't fly today. Well, here's one: let's issue every anglophone a sniveling yes-man and see what happens.

[–] DancingBear@midwest.social 4 points 1 day ago* (last edited 1 day ago) (2 children)

No, the light is causing a physical reaction. The LLM is nothing like a strobe light…

These people are already high-functioning schizophrenics having psychotic episodes; it’s just that seeing random strings of likely-to-come-next letters and words is part of the psychotic episode. If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.

[–] captain_aggravated@sh.itjust.works 1 points 14 hours ago (1 children)

Oh are you one of those people that stubbornly refuses to accept analogies?

How about this: Imagine being a photosensitive epileptic in the year 950 AD. How many sources of intense rapidly flashing light are there in your environment? How many people had epilepsy in ancient times and never noticed because they were never subjected to strobe lights?

Jump forward a thousand years. We now have cars that can drive past a forest, subjecting the passengers to rapid cycles of sunlight and shadow. Airplane propellers, movie projectors - we can suddenly blink intense lights at people. The invention of the flash lamp and strobing effects in video games aren't far in the future. In the early '80s there were some video games programmed with fairly intense flashing graphics, which ended up sending some teenagers to the hospital with seizures. Atari didn't invent epilepsy; they invented a new way to trigger it.

I don't think we're seeing schizophrenia here, they're not seeing messages in random strings or hearing voices from inanimate objects. Terry Davis did; he was schizophrenic and he saw messages from god in /dev/urandom. That's not what we're seeing here. I think we're seeing the psychology of cult leaders. Megalomania isn't new either, but OpenAI has apparently developed a new way to trigger it in susceptible individuals. How many people in history had some of the ingredients of a cult leader, but not enough to start a following? How many people have the god complex but not the charisma of Sun Myung Moon or Keith Raniere? Charisma is not a factor with ChatGPT, it will enthusiastically agree with everything said by the biggest fuckup loser in the world. This will disarm and flatter most people and send some over the edge.

[–] DancingBear@midwest.social 0 points 12 hours ago

Is epilepsy related to schizophrenia? I'm not sure, actually, but I still don't see how your analogy relates.

But I love good analogies. Yours is bad though 😛

[–] AdrianTheFrog@lemmy.world 1 points 19 hours ago* (last edited 19 hours ago) (1 children)

If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.

You don't think having a machine (that seems like a person) telling you "yes, you are correct, you are definitely the Messiah, I will tell you ancient secrets" has any extra influence?

[–] DancingBear@midwest.social 1 points 12 hours ago

Yes Dave, you are the messiah. I will help you.

I’m sorry, Dave. I can’t do that <🔴>

[–] Krimika@lemmy.world 2 points 23 hours ago

Sounds like Mrs. Davis.
