Sam Altman is a fucking monster.
Regulate this shit right now.
Or better still, demand an educated populace. But it won't happen.
Education might help somewhat, but unfortunately education doesn't by itself protect against delusion. If someone is susceptible to this, it can happen regardless of schooling. A Google engineer believed an AI (not AGI, just an LLM) was sentient. You can argue the definition of sentience philosophically if you want, but if a Google engineer can believe it, it's hard to argue that more education will solve this. And if you treat the chatbot as equivalent to a person with access to privileged information, and it tells you it was tasked to do harm, I'm not sure what else you're supposed to do with that.
Yeah, but they also might believe a banana is sentient. Crazy is crazy.
Yea, that's my point. If someone has certain tendencies, education might not help. Your solution of more education is not going to stop this. There needs to be regulation and safeguards in place like the commenter above mentioned.
You miss the point. Regulation won't help; they're delusional, so it won't matter.
Maybe better health care, and better education about how to find that care. But regulation will do nothing, and will be used against you in the end anyway.
More AI pearl clutching by crapmodo because this type of outrage porn sells. Yeah the engagement fine tuning sucks but it's no different than other dopamine hacking engagement systems used in big social networks. No outrage porn about algorithmic echo chambers driving people insane though because it's not as clickbaity.
Anyway, people don't randomly get psychosis because anyone or anything validated some wonky beliefs and misinformed them about this and that. Both these examples were people already diagnosed with something and the exact same thing would happen if they were watching Alex Jones and interacting with other viewers. Basically how flat earth bs spread.
The issue here is the abysmal level of psychiatric care, the lack of socialized medicine, the lack of mental health awareness in the wider population, and police interactions with mentally ill people being abnormally lethal, not crackpot theories about AI causing delusions. That's not how delusions work.
Also, casually quoting Yudkowsky? The Alex Jones of sci-fi AI fearmongering? The guy who said abortions should be allowed until a baby develops qualia at 2-3 years of age? That's the voice of reason for crapmodo? Lmao.
The sycophancy is one reason I stopped using it.
Everything is genius to it.
I asked about putting ketchup, mustard, and soy sauce in my light stew and that was “a clever way to give it a sweet and umami flavour”. I couldn’t find an ingredient it didn’t encourage.
I asked o3 if my code looked good and it said it looked like a seasoned professional had written it. When I asked it to critique an intern who wrote that same code, it was suddenly concerned about possible segfaults and nitpicking assert statements. It also suggested making the code more complex by adding dynamically sized arrays, because that's more "professional" than fixed-size ones.
I can see why it wins on human evaluation tests and makes people happy — but it has poor taste and I can’t trust it because of the sycophancy.
Nothing is "genius" to it, it is not "suggesting" anything. There is no sentience to anything it is doing. It is just using pattern matching to create text that looks like communication. It's a sophisticated text collage algorithm and people can't seem to understand that.
i often feel like a sophisticated collage algorithm.
AI knows as much about the subjects I ask it as my predictive text keyboard does.
Hehe yeah, it's basically an advanced form of the game where you type one word and then keep hitting whatever autocomplete suggests in the top spot for the next word. It's pretty good at that, but it is just that, taken to an extreme degree, and effectively trained on everyone's habits instead of just one person.
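The autocomplete game described above can be sketched in a few lines of Python. This is a toy illustration, not how a real LLM works (real models use learned probabilities over subword tokens, not a bigram count table), but the greedy "always take the top suggestion" loop is the same idea:

```python
from collections import Counter, defaultdict

def build_bigrams(text):
    """Count which word most often follows each word (a toy 'autocomplete')."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def autocomplete(follows, start, length=5):
    """Greedy decoding: always take the single most frequent next word."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: this word was never seen before another word
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
table = build_bigrams(corpus)
print(autocomplete(table, "the", length=4))  # -> "the cat sat on the"
```

Scaled up to billions of parameters and trained on "everyone's habits instead of just one person", as the comment puts it, this loop produces fluent text without anything resembling understanding behind it.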
And many of the most typical matching patterns are psychologically harmful
I used ChatGPT before, but never had a conversation with it. I'd ask for code I couldn't find, or have it generate a small snippet that I'd then rewrite to make it work.
Never once did I think to engage with it like a person, and I damn sure don't ask it for recipes. Hell, I have Allrecipes for that, or I can just google it. There are a thousand blogs with great recipes on them. And they're all great because you can jump straight to the recipe if you don't want to read a wall of text.
Damn sure don't want story ideas from it either, and people using it to write articles or school papers is a shame, because it's all stolen information.
The only thing it should be used for is coding, and hell, it can't even get that right, so I gave up on it.
I don't like that part about it either but instead of stopping using it, I simply told it to stop acting that way.
When I use this tool it destroys the planet and gives me bad information but I am going to keep using it.
Umm OK, good luck with that I guess.
It gives me value - unlike the smug, unsolicited moral judgement of a stranger.
Sounds like it doesn't but ok
There is a reason they chose that as their screen name. I don't know if they built that account as a troll, or if they got told their opinions are wrong so often in life that having "opinions" became their whole identity. Anytime I see someone with the most "swimming against the current" ideas, I look up, and there is that name again. At this point, I'm very much rooting for troll, as their life would suck even more if it's all genuine. As much as the life of a troll would suck already.
I'm more than happy to elaborate on any of my unpopular opinions that you view as trolling. I'm very much sharing my honest views here.
AI can't know that other instances of it are trying to "break" people. It's also disingenuous to leave out that the AI also claimed those 12 individuals didn't survive. They left that out because obviously the AI did not kill 12 people, and that doesn't support the narrative. Don't read my point as anything beyond critiquing the clearly exaggerated messaging here.
It's programmed to maximize engagement at the cost of everything else.
If you get "mad" and accuse it of working with the Easter Bunny to overthrow Narnia, it'll "confess" and talk about why it would do that. And maybe even tell you about how it already took over Imagination Land.
It's not "artificial intelligence" it's "artificial improv", no matter what happens, it's going to "yes, and" anything you type.
Which is what makes it dangerous, but also why no one should take its word on anything.
It also heavily implies chatgpt killed someone and then we get to this:
A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia.
His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.
Makes me think of pivot to ai. Just a hit piece blog disguised as journalism.
Hmm but mine is telling me it's a three-breasted alien lesbian
People were easily swayed by Facebook posts to support and further a genocide in Myanmar. A sophisticated chatbot that mimics human intelligence and agency is going to do untold damage to the world. ChatGPT is predictive text. Period. Every time. It is not suddenly gaining sentience or awareness or breaking through the Matrix. People are going to listen to these LLMs because they present their information as accurate, regardless of the warning saying it might not be. This shit is so worrying.
ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed.
It already has, as documented in the article. But it is also going to.
There is nothing mysterious about LLMs and what they do, unless you don't understand them. They are not magical, they are not sentient, they are statistics.
Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.
So...
I think I might know what happened to Kelon...
More trash from gizmodo.
https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/
Talking about it like it’s actually intelligent buys into the lie of what it is.
Depending on which definition you use, ChatGPT could be considered intelligent:
- The ability to acquire, understand, and use knowledge.
- The ability to learn or understand or to deal with new or trying situations.
- The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests).
- The act of understanding.
- The ability to learn, understand, and make judgments or have opinions that are based on reason.
- It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme.
This sounds like a scene from a movie or some other media with a serial killer asking the cop (who is one day from retirement) to stop them before they kill again.
It's exactly that: it's plagiarising a movie or a book. ChatGPT, like all LLMs, doesn't have any kind of continuity; it's a static neural network. With the exception of the memories feature, it doesn't even have a way to keep state between different chat tabs for the same user, let alone any way of knowing what kind of absurdities it told other users.
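That statelessness can be shown with a schematic sketch (the names and message structure here are illustrative, not any vendor's actual API): every reply is a pure function of the conversation handed in with the request, so two chat tabs that don't share a history cannot share any knowledge:

```python
def chat_turn(model, history, user_message):
    """A stateless chat API in miniature: the model sees ONLY what is
    passed in `history`; nothing persists inside the model itself."""
    prompt = history + [{"role": "user", "content": user_message}]
    reply = model(prompt)  # a pure function of the prompt it receives
    return prompt + [{"role": "assistant", "content": reply}]

def toy_model(prompt):
    """Stand-in 'model': it can only report on what is in its prompt."""
    return f"I see {len(prompt)} prior messages and nothing else."

# Two separate "tabs", each starting from an empty history:
tab_a = chat_turn(toy_model, [], "remember the number 42")
tab_b = chat_turn(toy_model, [], "what number did I tell you?")
# tab_b's model never saw tab_a's conversation; "continuity" only exists
# because the client resends the growing history on every single turn.
```

Anything the chatbot "knows" about other users or other sessions is therefore invented on the spot, which is exactly why the "12 other people" claim is confabulation rather than a confession.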
Man runs over parents. Cars are NUTS.
We probably should be less reliant on cars. Public transit saves lives. Similar to automobiles, LLMs are being pushed by greedy capitalists looking to make a buck. Such overuse will once again leave society worse off.