Clicked the article to give it a read, saw the slop they're using right next to the text, laughed, closed the damn thing
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
using AI images in an article about AI use leading to cognitive decline gotta be crazy💀
Yeah but what do you expect them to do, actually pay a human to make sure they don't do that?
Or just survive on the merit of the text content?
honestly it's hard to tell if the text is also ai slop before reading it fully. and I usually don't have much time to waste on shitty articles, so i just skip those with ai slop images.
Does context escape your brain? The images are not the focus of this article; the fucking article is, you weirdo
You know I’m getting real tired of stupid people being online and thinking they’re allowed to speak to me
Then why do you put that reply button under your post?
Yawn
Just wanna point out that every time something scares you enough, it also reprograms/rewires your brain. Not trying to discredit the study, but the reprogramming itself isn't the concern; the concern is whether the reprogramming is beneficial, which this isn't.
The study appears to be saying something different from what the headline implies.
Basically, it might be better to say that using an LLM means you don't have to think as hard, you remember less of the essay, and when you go back to rewrite a previous essay without the LLM you have more trouble.
They also noted that for some people, using the LLM made them learn much better. Basically the difference between getting it to write for you and using it as a tool to structure information.
One reduced cognitive load from all sources, and the other reduced load relating to integrating different information sources.
Basically it was a proper study by people who knew what they were doing. They never actually said anything about rewiring.
This comment just reprogrammed my brain after the reprogramming it got from that post 😮
The name and presentation of that site has a veneer of legitimacy, but it really doesn't seem credible.
There's also a lot of general antivax stuff.
Now, sharing a lot of... questionable articles... doesn't make the article in question invalid. It does, however, call into extreme doubt any editorial context the site might be adding.
https://arxiv.org/pdf/2506.08872
This is the actual study being referenced. Its conclusions are significantly less severe than this article presents them as, while still conveying "LLMs are not generally the best tool for facilitating education".
"…trade-off highlights an important educational concern: AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it."

"…from an educational standpoint, these results suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration. The corresponding EEG markers indicate this may be a more neurocognitively optimal sequence than consistent AI tool usage from the outset."
Ultimately, this isn't saying AI tools cause brain damage or make you stupid. It's saying that learning via LLM often causes worse retention of the information being learned. It also says that search engines and LLMs can remove certain types of cognitive load that are not conducive to retention, making learning easier and faster in some cases where engagement can be kept high.
It's important to be clear and honest about what a study is saying, even if it's not as unequivocally negative as the venue might appreciate.
Well, yeah, thanks. The headline is an obvious lie, so that's kind of a red flag.
Amusingly using what appears to be an AI generated image.
They always have a weird overuse of sepia tones.
Tech bros stay losing, it is a good day
I feel like kids are the primary loss-sufferers here :(
(phrasing there is me trying not to call them losers)
Hooray!
We invented cyberpsychosis, for reals!
Isn't it so cool to live in a cyberpunk dystopia!?!
Brb, gonna go OD on some early access/preview alpha braindances!
I don't know why, but I think I just realized what happened to my ex. I thought we were mending our relationship before she started sexting the guy she had an affair with, but it seemed really dumb, even for her.
But I also remember when Chat-gpt came out, that was the time she started using a VPN. Why? Idk didn't bother me. But then I read about the LLMs essentially just being the ultimate sycophant, and studies like this show harm to critical thought, and I'm wondering - is this what happened with her?
Ever since I moved out, she just sort of got dumber. Like it's possible I was blissfully unaware of just how unintelligent she was, but I think I would have even noticed some of this. This might be a bigger problem than we know of.
Why would she need a VPN to access chat gpt though?
I actually just sought out this comment to see if any reason was given about what a VPN had to do with any of it.
I think it was just a strange coincidence. In the past she never took my comments on IT security seriously, so it seemed odd that she started using the VPN at the same time she started using Chat-gpt.
Shortly after that, she wanted me to pay a credit card bill of hers, I said sure no problem, just get me the statement so I know how much. She refused. She could have just given me the total, but she refused because "I wanted to verify her purchases."
That, obviously, made me very upset, because I wasn't suspicious until she accused me of wanting to inspect her statement. That weekend she traveled out of state, and when she came back I discovered the sexting.
Clearly she was on the way out, but the point remains, did chat-gpt accelerate things downward? That's my question.
Ah, that makes some sense. At a guess I'd say it's plausible that she was asking chatgpt how to hide info online and it suggested a VPN. If she never used one or talked about digital privacy before that, it could make sense.
There’s a good body of research on cognitive capacity and creativity in regard to enriching environments. Even down to rats. Give rats playgrounds and toys and they perform better at memory tasks and solving puzzles.
I suppose you could train rats to press a button to get a human to come solve problems for them. Take the human away, then what?
What’s insidious here is that the same over-scheduled kids, having their childhoods choreographed for enrichment, are often the ones coming out of childhood with critical-thinking and socialization deficits plus anxiety; we think it's because they’re reaching for their phones for every problem they need to solve.
There are a million legit reasons to avoid and despise LLMs, their makers, and their pushers. I don't think this is one of them.
Literally every piece of technology introduced in the past thousand years has had this kind of hue and cry built up around it, beginning with the printing press and books in Europe. Every form of communication or information technology has had "studies" (or what passed for them in ages past) claiming that the new technology would ruin the minds and morals of people who used it. Remember when television would "rot kids' minds"? Remember when the Internet was going to end civilization as we know it?
This study is just more of the same. You'll find equivalent studies about television back in the '50s to even as late as the '70s.
There are (a myriad of) good arguments for despising LLMs. This (not yet peer-reviewed) MIT study is not one of them. (And I should point out that the actual paper instead of this summary of it has quite a bit more nuance than is reported in the linked article.)
The study itself is entirely benign, and I'd actually accept it as a reason to eschew AI in an educational context. Their conclusion is basically "if you use an LLM to write an essay you tend to not retain the information as well", which is... downright boring in how reasonable it is. Particularly given the converse observation I wouldn't have expected: if you are already familiar with a subject, then using an LLM to write an essay can strengthen your understanding.
The "journal" this summary of the study was shared in is quackery, so I'm not surprised they distorted the findings.
Nothing bad ever happens.
Oh, are we playing a game of non sequitur? OK. My move is:
炮二平五 (cannon two to file five, the classic central-cannon xiangqi opening)
Your move.
No, like, you're seemingly saying "new inventions never cause health problems"
Y'know, like asbestos. It was a wonder material! And then the health effects were uncovered.
Oh, baby, did you read the thing? Something tells me you didn't read the thing.