Advanced OpenAI models hallucinate more than older versions, internal report finds
(www.ynetnews.com)
No shit.
The fact that this is news and not inherently understood just tells you how uninformed people are, and how easy it is to sell idiots another subscription.
Why would somebody intuitively know that a newer, presumably improved, model would hallucinate more? There's no fundamental reason a stronger model should hallucinate more. In that regard, I think the news story is valuable - not everyone uses ChatGPT.
Or are you suggesting that active users should know? I guess that makes more sense.
There is definitely a reason a larger model would have worse hallucinations. Why do you say there isn't? It's a fundamental problem with data scaling in these architectures.
I've never used ChatGPT and really have no interest in it whatsoever.
How about I just do some LSD. Guaranteed my hallucinations will surpass ChatGPT's in spectacular fashion.