this post was submitted on 26 Sep 2025
168 points (98.8% liked)

Chapotraphouse


Source: https://mastodon.social/@Daojoan/115259068665906083

As a reminder, "hallucinations" are inevitable in LLMs

Explanation of hallucinations from 2023

I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.

We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.

It's only when the dreams go into territory deemed factually incorrect that we label it a "hallucination". It looks like a bug, but it's just the LLM doing what it always does.

At the other extreme, consider a search engine. It takes the prompt and just returns one of the most similar "training documents" it has in its database, verbatim. You could say that this search engine has a "creativity problem" - it will never respond with something new. An LLM is 100% dreaming and has the hallucination problem. A search engine is 0% dreaming and has the creativity problem.
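The two extremes above can be made concrete with a toy sketch (not from the post, and deliberately simplistic): a "0% dreaming" search engine can only ever hand back one of its stored documents verbatim, ranked here by naive word overlap.

```python
def search_engine(prompt, documents):
    """Return the stored document sharing the most words with the prompt.

    A toy stand-in for retrieval: the output is always one of `documents`,
    verbatim -- it can never say anything new ("creativity problem").
    """
    prompt_words = set(prompt.lower().split())
    return max(documents, key=lambda d: len(prompt_words & set(d.lower().split())))

docs = [
    "the capital of france is paris",
    "llms generate text one token at a time",
]

result = search_engine("what is the capital of france", docs)
# `result` is guaranteed to be an exact copy of one entry in `docs`,
# whereas an LLM samples new token sequences every time.
```

Real search engines rank with far better signals than word overlap, but the structural point is the same: retrieval selects, generation dreams.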

All that said, I realize that what people actually mean is they don't want an LLM Assistant (a product like ChatGPT etc.) to hallucinate. An LLM Assistant is a much more complex system than just the LLM itself, even if an LLM is at the heart of it. There are many ways to mitigate hallucinations in these systems - using Retrieval Augmented Generation (RAG) to more strongly anchor the dreams in real data through in-context learning is maybe the most common one. Others include disagreement between multiple samples, reflection, verification chains, decoding uncertainty from activations, and tool use. All are active and very interesting areas of research.
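A minimal sketch of the RAG idea mentioned above, under stated assumptions: the retriever is a toy word-overlap ranker, and `call_llm` is a hypothetical stand-in for whatever chat-completion API the assistant uses. The point is only the shape of the technique - retrieve relevant text, then put it in the prompt so the model's "dream" is anchored in real data via in-context learning.

```python
def retrieve(query, documents, k=1):
    """Rank stored documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query, documents):
    """Prepend the retrieved context to the question (in-context learning)."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is 8,849 metres tall.",
]

prompt = build_rag_prompt("When was the Eiffel Tower completed?", docs)
# The prompt now carries the relevant source document, so the model is
# steered toward grounded text rather than a free-floating "dream".
# answer = call_llm(prompt)  # hypothetical model call, API not specified here
```

Production RAG systems use embedding-based similarity search instead of word overlap, but the anchoring mechanism - retrieved text placed in the context window - is the same.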

TLDR I know I'm being super pedantic, but the LLM has no "hallucination problem". Hallucination is not a bug, it is the LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it.

Okay I feel much better now :)

Explanation source: https://xcancel.com/karpathy/status/1733299213503787018

KuroXppi@hexbear.net | 4 points | 3 weeks ago (last edited 3 weeks ago)
alexei_1917@hexbear.net | 3 points | 3 weeks ago

Aww, thanks. I just try to be a person I'd want to hang out with, and a big part of that is not treating young people like crap, even when they're being annoying. An annoying toddler is just a little human with big feelings. Even a screaming baby is crying for a reason, and you shouldn't get mad at the helpless infant or at the parent doing their best to find and solve the problem. (Getting mad at a neglectful parent with headphones on, though...)

Little kids are silly sometimes and require a lot of patience, but being patient with kids and playing along with silly stuff is a good thing to do if you can, and kids can be really fun if you're patient with them and treat them as people. Even though my little brother's not a little kid anymore, I still see him in every little boy who drives me nuts in public, and it reminds me to be patient with kids; all I wanted at that age was for grownups and bigger kids to be patient and take me seriously.

Treat people the way you want to be treated. And that includes treating kids the way you would have wanted grownups to treat you as a kid. Don't just give a kid under your responsibility everything they want, but hear them out and don't be a jerk.

KuroXppi@hexbear.net | 2 points | 3 weeks ago

Thanks for this, I like your explanation