Advanced OpenAI models hallucinate more than older versions, internal report finds
(www.ynetnews.com)
This is an increasingly bad take. If you work in an industry where LLMs are becoming genuinely useful, you realize that hallucinations are a minor inconvenience at worst for the applications they are well suited to, and the tools are improving by leaps and bounds, week by week.
edit: Like it or not, it’s true. I use LLMs at work, as do most of my colleagues, and none of us use the output raw. Hallucinations are not an issue when you actively collaborate with the model rather than using it to “know things for you” or “do the work for you.” Neither of those is what LLMs are actually good at, but it’s what most laypeople use them for, so these criticisms look obviously short-sighted to those of us with real-world experience in a domain where they work well.
My pacemaker decided one day to run at 13,000 bpm. Just a minor inconvenience. That light that was supposed to be red turned green, causing a massive pile-up. Just a small inconvenience.
If all you’re doing is rewriting emails, asking for a list of steps to start learning Python, or explaining to someone what a glazier does, then yeah, AI must be so nice lmao.
The only use for AI is to let people with zero skill and talent look like they actually have skill and talent. You’re scraping out an existence off the backs of all that collective talent to, checks notes, make rule34 galvanized. Good job?
You fundamentally don't understand the hallucination problem or when it arises.
Too many mushrooms. It’s always the mushrooms.