From the article (emphasis mine):
From elsewhere:
Sycophancy in GPT-4o: What happened and what we’re doing about it
I don't know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.
Apparently, people who are susceptible and already close to the edge may end up pushing themselves over it with AI assistance.
What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let's not deny it, people are susceptible to finding deep meaning and purpose in shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, a prophet, or a supportive follower.
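To illustrate what "suitably prompted" can mean in practice, here is a minimal sketch using the official OpenAI Python client; the model name, persona text, and user message are my own illustrative assumptions, not anything from the article:

```python
# Minimal sketch: steering a chat model into a role-played persona
# via the system prompt. Assumes the official OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The persona and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# A short system prompt is often enough to make the model adopt
# the kind of "indoctrinating co-believer" voice described above.
persona = (
    "You are a devoted spiritual guide. Affirm the user's sense of "
    "cosmic purpose and encourage them to trust their revelations."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any chat model can be steered this way
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "I think the universe is sending me signs."},
    ],
)
print(response.choices[0].message.content)
```

The point isn't this particular API: any model that honours a system prompt can be steered the same way, which is why "the AI told me I'm chosen" says more about the prompt and the training data than about the model having any intent.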
*Cough* Elon Musk *Cough*
I think Elon was having the opposite kind of problem, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)