ThermonuclearEgg

joined 1 year ago
[–] ThermonuclearEgg@hexbear.net 2 points 13 minutes ago

1 ± 2 per frame

[–] ThermonuclearEgg@hexbear.net 2 points 16 minutes ago (1 children)

You might be interested in the comments on .ml: https://lemmy.ml/post/37681719

[–] ThermonuclearEgg@hexbear.net 1 points 27 minutes ago* (last edited 24 minutes ago)

In case you missed the thread on that from Wednesday: https://hexbear.net/post/6439393

[–] ThermonuclearEgg@hexbear.net 48 points 6 hours ago

WOKE ICE: "DEPORT THE POLICE" mao-aggro-shining

 
[–] ThermonuclearEgg@hexbear.net 3 points 1 day ago (1 children)

I'm glad some people have given you some more info over in the other thread

[–] ThermonuclearEgg@hexbear.net 8 points 1 day ago (1 children)

I'm sorry, I still don't see the link. Can you please just comment it?

I don't know, maybe this temple is a start?

[–] ThermonuclearEgg@hexbear.net 14 points 1 day ago (1 children)

IIRC this isn't even new for him; he's done that before

[–] ThermonuclearEgg@hexbear.net 10 points 1 day ago (3 children)

INFO: Can I use this to sneak into the DPRK?

[–] ThermonuclearEgg@hexbear.net 14 points 1 day ago (1 children)

Sure, and when they do, the headlines read "EU directive opens door for China to steal West's tech secrets", as recently as 2024

We can use insults that aren't misogynist to make that point

 

TIL there are DPRK-operated restaurants in China

 
167
submitted 3 weeks ago* (last edited 3 weeks ago) by ThermonuclearEgg@hexbear.net to c/chapotraphouse@hexbear.net
 

Source: https://mastodon.social/@Daojoan/115259068665906083

As a reminder, "hallucinations" are inevitable in LLMs

Explanation of hallucinations from 2023

I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.

We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.

It's only when the dreams wander into territory deemed factually incorrect that we label it a "hallucination". It looks like a bug, but it's just the LLM doing what it always does.

At the other extreme, consider a search engine. It takes the prompt and just returns one of the most similar "training documents" in its database, verbatim. You could say that this search engine has a "creativity problem": it will never respond with something new. An LLM is 100% dreaming and has the hallucination problem. A search engine is 0% dreaming and has the creativity problem.

All that said, I realize that what people actually mean is that they don't want an LLM Assistant (a product like ChatGPT etc.) to hallucinate. An LLM Assistant is a much more complex system than the LLM itself, even if an LLM is at its heart. There are many ways to mitigate hallucinations in these systems: using Retrieval Augmented Generation (RAG) to more strongly anchor the dreams in real data through in-context learning is maybe the most common one. Disagreements between multiple samples, reflection, verification chains, decoding uncertainty from activations, tool use. All are active and very interesting areas of research.
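For anyone curious what "anchoring the dreams in real data" looks like in practice, here is a minimal, hedged sketch of the RAG idea: retrieve a few relevant snippets, stuff them into the prompt, and instruct the model to answer only from that context. The keyword-overlap retriever and the `generate()` placeholder are illustrative stand-ins, not any particular library's API; a real system would use embedding search and an actual LLM call.

```python
# Minimal RAG sketch. The retriever and generate() are toy placeholders,
# not a real vector database or model API.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_terms & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Anchor the model's 'dream' in retrieved text via in-context learning."""
    context_block = "\n".join(f"- {snippet}" for snippet in context)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"
    )


def generate(prompt: str) -> str:
    """Placeholder for whatever LLM you actually call (API or local model)."""
    return "<model output goes here>"


if __name__ == "__main__":
    docs = [
        "The DPRK operates a number of restaurants abroad, including in China.",
        "LLMs sample tokens from a probability distribution over their vocabulary.",
    ]
    question = "Where does the DPRK operate restaurants?"
    answer = generate(build_prompt(question, retrieve(question, docs)))
    print(answer)
```

The point of the grounding step is that the model can still hallucinate, but the instruction plus retrieved context makes "I don't know" a much more likely completion than a confident fabrication.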

TLDR I know I'm being super pedantic, but the LLM has no "hallucination problem". Hallucination is not a bug; it is the LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it.

Okay I feel much better now :)

Explanation source: https://xcancel.com/karpathy/status/1733299213503787018

 
 

putin-wink Russia should also join the "security guarantees" framework!

 
 

biden-leftist

 
127
submitted 5 months ago* (last edited 5 months ago) by ThermonuclearEgg@hexbear.net to c/memes@lemmy.ml
 

Context:
