[–] Philosoraptor@hexbear.net 47 points 2 months ago

According to the article, it's a bigger problem for the "reasoning models" than for the older-style LLMs. Since those explicitly break problems down into multiple smaller steps, I wonder if that's creating higher hallucination rates because each step introduces the potential for errors/fabrications. Even a very small amount of "cognitive drift" might have a very large impact on the final answer if it compounds across multiple steps.
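
To put rough numbers on the compounding intuition (purely illustrative, the per-step reliability figure is made up, not something from the article): if each step of a reasoning chain is independently right 98% of the time, the odds of the whole chain coming out clean drop off quickly as the chain gets longer.

```python
# Illustrative sketch of compounding per-step error in a reasoning chain.
# The 0.98 per-step accuracy is an assumed number, not from the article.
per_step_accuracy = 0.98

for steps in (1, 5, 10, 20, 50):
    chain_accuracy = per_step_accuracy ** steps  # independent errors compound multiplicatively
    print(f"{steps:>2} steps -> {chain_accuracy:.1%} chance of an error-free chain")
```

Even a 2% per-step slip rate leaves only about two thirds of 20-step chains error-free, and barely a third at 50 steps, which lines up with the drift-compounding worry above.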

[–] Speaker@hexbear.net 5 points 2 months ago

The few times I've had a local one share the "reasoning", the machine mainly just ties itself in knots over trivial nonsense for thousands of words before ignoring all of that and going with the first answer it came up with. Machine God is a Redditor.
