this post was submitted on 24 Aug 2025
128 points (99.2% liked)
Fuck AI
Internal consistency is also usually considered a good thing. Any individual sentence an LLMbecile generates is usually grammatically correct and internally consistent (though here and there I have caught sentences whose endings contradict their beginnings), but as soon as you reach a second sentence, the odds of finding a direct contradiction mount.
LLMbeciles are just not very good for anything.
Some models are better than others at holding context, but they all wander at some point if you push them. Ironically, the newer versions with a "thinking mode" are worse because of this: the context gets stretched out and they start second-guessing even correct answers.
Indeed. The reasoning models can be incredibly funny to watch. I had one (DeepSeek) spinning for over 850 seconds, only to have it come up with the wrong answer to a simple maths question.