MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline
(publichealthpolicyjournal.com)
"We did it, Patrick! We made a technological breakthrough!"
The name and presentation of that site have a veneer of legitimacy, but it really doesn't seem credible.
I warned about this for the past 3 years. The WHO wants universal mental health care and to drug at least a billion of us.
Do Viruses Exist?
There's also a lot of general antivax stuff.
Now, sharing a lot of... questionable articles... doesn't make the article in question invalid. It does, however, cast extreme doubt on any editorial context the site might be adding.
https://arxiv.org/pdf/2506.08872
This is the actual study being referenced. Its conclusions are significantly less severe than this article presents them, while still conveying "LLMs are not generally the best tool for facilitating education".
Ultimately, this isn't saying AI tools cause brain damage or make you stupid. It's saying that learning via LLM often causes worse retention of the information being learned. It also says that search engines and LLMs can remove certain types of cognitive load that are not conducive to retention, making learning easier and faster in some cases where engagement can be kept high.
It's important to be clear and honest about what a study is saying, even if it's not as unequivocally negative as the venue might appreciate.
Well yeah thanks. The headline is an obvious lie so that's kind of a red flag.
Of course. If you're talking about presenting nuance, then I would just briefly mention the generation of studies that showed exposure to television reduced cognitive abilities, and which were full of nuance. All of those studies were ignored, more studies appeared showing television advertising had no effect on people (how did those studies get funded, I wonder; well, anyway), nothing happened, and here we are in Libertarian paradise.
AI is much more affecting, and its adoption isn't being "offered", it's being mandated. I think we can dispense with some of the nuance in headlines and leave that to the researchers looking at the raw data.
Nah, I don't think we can. You may be okay with hyperbolic lies from an antivax quackery website, but I'm not.
I think our use of LLMs is overblown and rife with issues, but I don't think the answer to that is to wrap your concerns in so much obvious bullshit that anyone who does even a cursory glance will see that it's bunk. All you do is convey "people who think LLMs and generative AI are worrisome are full of shit".
Gee, if only there were some way to find information that validates those claims and be confident that people haven't labeled them grossly incorrectly....
Why are you talking about TV, as an aside? People doing research poorly or ignoring research in the past is irrelevant to if we should lie to people now.
First of all, it's as relevant as anything can be. Just say you don't know anything about it. Secondly, who's lying?
The article you linked to. Most people would call "saying inaccurate things" a form of "lying".
Explain why it's relevant. I get that you're saying "they said TV was fine and it caused problems". I don't see how that's relevant to "we should say things that aren't true about AI".