this post was submitted on 12 Aug 2025
19 points (91.3% liked)
[–] Technus@lemmy.zip 2 points 2 weeks ago (1 children)

A neurotypical human mind, acting rationally, is able to remember the chain of thought that led to a decision, understand why they reached that decision, find the mistake in their reasoning, and start over from that point to reach the "correct" decision.

Even if they don't remember everything they were thinking about, they can reason based on their knowledge of themselves and try to reconstruct their mental state at the time.

This is the behavior people expect from LLMs, without understanding that it's something they're fundamentally incapable of.

One major difference (among many others, obviously) is that AI models as currently implemented don't have any kind of persistent working memory. All they have for context is the last N tokens they've generated, the last N tokens of user input, and any external queries they've made. All the intermediate calculations (the "reasoning") that led to them generating that output are lost.

Any instance of an AI appearing to "correct" their mistake is just the model emitting what it thinks a correction would be, given the current context window.
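To make that concrete, here's a minimal sketch of why this is the case. Everything here (`generate`, `MAX_CONTEXT`, the message format) is hypothetical stand-in code, not any real API, but the shape matches how chat-style LLM serving works: the only "memory" is the token history re-sent on every turn, and whatever the model computed internally on previous turns is simply gone.

```python
MAX_CONTEXT = 8  # stand-in for the model's context window (last N messages)

def generate(context):
    # Placeholder for a real model call. A real LLM is conditioned ONLY on
    # `context` -- it retains no internal state between calls, so any
    # "correction" is just the most plausible next output given this window.
    return f"reply to: {context[-1]}"

history = []

def chat_turn(user_input):
    history.append(f"user: {user_input}")
    # The model sees only this truncated window; everything older, and all
    # intermediate computation from earlier turns, no longer exists.
    window = history[-MAX_CONTEXT:]
    reply = generate(window)
    history.append(f"assistant: {reply}")
    return reply
```

The point of the sketch: `generate` is a pure function of the window. Delete `history` and the "mind" is wiped; there is no other place where reasoning could persist between turns.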

Humans also learn from their mistakes and generally make efforts to avoid them in the future, which doesn't happen for LLMs until that data gets incorporated into the training for the next version of the model, which can take months to years. That's why AI companies are trying to capture and store everything from user interactions, which is a privacy nightmare.

It's not a compelling argument to compare AI behavior to that of a dysfunctional human brain and go "see, humans do this too, teehee!" Not when the whole selling point of these things is that they're supposed to be smarter and less fallible than most humans.

I'm deliberately trying not to be ableist in my wording here, but it's like saying, "hey, you know what would do wonders for productivity and shareholder value? If we fired half our workforce, then found someone with no experience, short-term memory loss, ADHD and severe untreated schizophrenia, then put them in charge of writing mission-critical code, drafting laws, and making life-changing medical and business decisions."

I'm not saying LLMs aren't technically fascinating and a breakthrough in AI development, but the way they have largely been marketed and applied is scammy, misleading, and just plain irresponsible.

[–] givesomefucks@lemmy.world 0 points 2 weeks ago (2 children)

A neurotypical human mind, acting rationally, is able to remember the chain of thought that led to a decision, understand why they reached that decision, find the mistake in their reasoning, and start over from that point to reach the "correct" decision.

No.

What we learned from those experiments was that if we don't know the reason why we did something, we'll invent and wholeheartedly believe the first plausible explanation we come up with.

I didn't read any further because you had a fundamental misunderstanding of what those studies actually proved.

[–] Technus@lemmy.zip 4 points 2 weeks ago

I like how you've deliberately ignored the specifically chosen wording of my statement, and completely disregarded the rest of my point, simply because you perceive it as counter-factual in your world-view, thus exhibiting the exact kind of behavior you were talking about. That's really funny.

[–] Mac@mander.xyz 2 points 2 weeks ago

They're not talking about the split-brain experiments.
L2R noob