Oh wow. In the old times, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and path to enlightenment is hidden within a service of a big tech company?
Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.
For shit's sake, it's a computer. No matter how sentient the glorified chatbot being sold as "AI" appears to be, it's essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it's not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.
If a computer starts talking to you as though you're some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.
Yeah, from the article:
Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in the clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”
So it's essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?
That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, and all the agreeing voices tuned in to your personality, saying exactly what you need to hear to maximize the effect.
It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.
Human-level? Have these people used ChatGPT?
I have and I find it pretty convincing.
Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.
I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN'T REALLY LOVE YOU! THAT'S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!
I know it's not the perfect analogy, but... eh, close enough, right?
a bear minimum.
I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.
Not trying to speak like a prepper or anything, but this is real.
One of my neighbor's children just committed suicide because their chatbot boyfriend said something negative. Another child in my community did something similar a few years ago.
Something needs to be done.
Like what, some kind of parenting?
This happened less than a year ago. Doubt regulators have done much since then https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I've seen about it leave out a bunch of significant details, so it ends up sounding more like an "ooh, scary AI!" story (baits clicks better) than a "parents not paying attention to their disturbed kid's cries for help and instead leaving loaded weapons lying around" story (as old as time, at least in America).
Our species really isn't smart enough to live, is it?
For some yes unfortunately but we all choose our path.
Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn't going to be enough to avoid cascading failure. I'm seeing a lot of positive feedback loops emerging, and I don't like it.
As they say about collapsing systems: First slowly, then suddenly very, very quickly.
Same argument was already made around 2500BCE in Mesopotamian scriptures. The corruption of society will lead to deterioration and collapse, these processes accelerate and will soon lead to the inevitable end; remaining minds write history books and capture the end of humanity.
...and as you can see, we're 4500 years into this stuff, still kicking.
One mistake people of all generations make is assuming the previous ones were smarter and better. No, they weren't; they were just as naive, if not more so, and had the same illusions of grandeur and outside influences. This thing never went anywhere and never will. We can shift it for better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.
Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular, technological development is now vastly outstripping our ability to adapt. It's not that people are getting dumber per se - it's that they're having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn't even know for months if ever. Now? If an earthquake hits Paraguay, you'll be aware in minutes.
And you'll be expected to care.
Edit: Apologies. I wrote this comment as you were editing yours. It's quite different now, but you know what you wrote previously, so I trust you'll be able to interpret my response correctly.
1925: global financial collapse is just about to happen, many people are enjoying the ride as the wave just started to break, following that war to end all wars that did reach across the Atlantic Ocean...
Yes, it is accelerating. Alvin Toffler wrote Future Shock over half a century ago, already overwhelmed by accelerating change, and it has continued to accelerate since then. But these are not entirely new problems, either.
There have been a couple of big discontinuities in the last 4500 years, and the next big discontinuity has the distinction of being the first in which mankind has the capacity to cause a mass extinction event.
Life will carry on, some humans will likely survive, but in what kind of state? For how long before they reach the technological level of being able to leave the planet again?
Turns out AI is really good at telling people what they want to hear, and with all the personal information users voluntarily provide while chatting with their bots, it’s tens, maybe hundreds of times more proficient at brainwashing its subjects than any human cult leader could ever hope to be.
This is the reason I've deliberately customized GPT with the following prompts:
- User expects correction if words or phrases are used incorrectly.
- Tell it straight—no sugar-coating.
- Stay skeptical and question things.
- Keep a forward-thinking mindset.
- User values deep, rational argumentation.
- Ensure reasoning is solid and well-supported.
- User expects brutal honesty.
- Challenge weak or harmful ideas directly, no holds barred.
- User prefers directness.
- Point out flaws and errors immediately, without hesitation.
- User appreciates when assumptions are challenged.
- If something lacks support, dig deeper and challenge it.
I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
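If you talk to GPT through the API rather than the settings UI, the same instructions can be baked into a system message. A minimal sketch; the helper function and the commented-out client call are illustrative assumptions, not an official recipe:

```python
# The custom instructions from the list above, assembled into a single
# system prompt that can be sent with every API request.

INSTRUCTIONS = [
    "User expects correction if words or phrases are used incorrectly.",
    "Tell it straight—no sugar-coating.",
    "Stay skeptical and question things.",
    "Keep a forward-thinking mindset.",
    "User values deep, rational argumentation.",
    "Ensure reasoning is solid and well-supported.",
    "User expects brutal honesty.",
    "Challenge weak or harmful ideas directly, no holds barred.",
    "User prefers directness.",
    "Point out flaws and errors immediately, without hesitation.",
    "User appreciates when assumptions are challenged.",
    "If something lacks support, dig deeper and challenge it.",
]

def build_system_prompt(instructions):
    """Join the custom instructions into one system message string."""
    return "Follow these standing instructions:\n" + "\n".join(
        f"- {line}" for line in instructions
    )

# Hypothetical usage with the OpenAI chat API (client setup omitted):
# client.chat.completions.create(
#     model="gpt-4o",
#     messages=[
#         {"role": "system", "content": build_system_prompt(INSTRUCTIONS)},
#         {"role": "user", "content": "..."},
#     ],
# )
print(build_system_prompt(INSTRUCTIONS))
```

Putting the instructions in the system role rather than the user message gives them a bit more weight, though no prompt fully suppresses sycophancy that was trained in via human feedback.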
I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cookbooks from before AI are plentiful. There are also these things called newspapers; they aren't what they used to be, but you even get a choice of which one to buy.
I've no idea what a chatbot could help me with. And I think anybody who does need some help on things, could go learn about whatever they need in pretty short order if they wanted. And do a better job.
This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.
As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.
At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”
“At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.
That's very interesting. I've been trying to use ChatGPT to turn my photos into illustrations. I've been noticing that it tends to echo elements from past photos in new chats. It sometimes leads to interesting results, but it's definitely not the intended outcome.
Seems like the flat-earthers or sovereign citizens of this century
Is this about AI God? I know it’s coming. AI cult?