this post was submitted on 07 Sep 2025
47 points (94.3% liked)

Futurology

cross-posted from: https://lemmy.sdf.org/post/41849856

If an LLM can't be trusted with a fast food order, I can't imagine what it is reliable enough for. I really expected this to be the easy use case for these things.

It sounds like most orders still worked, so I guess we'll see if other chains come to the same conclusion.

[–] CanadaPlus@lemmy.sdf.org 0 points 1 day ago* (last edited 1 day ago) (1 children)

Arguably, thinking is extreme pattern matching. And LLMs can make original things that were definitely not in their training data.

The problem seems to be more about alignment. They're rewarded for generating human-looking text, nothing more, and we have no obvious alternative to training them that way. So, of course they're a bit imprecise at any other task.
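
For concreteness, here's a minimal sketch of what that reward amounts to in pretraining (the toy vocabulary size and the random stand-in logits are my own illustration, not anyone's real training code). The loss only measures how well the model predicts the next token of human-written text, and nothing else:

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a language model's output: a score for every
# vocabulary entry at every position in the sequence.
vocab_size = 100
batch, seq_len = 2, 8
logits = torch.randn(batch, seq_len, vocab_size)

# The "correct" answers are just the next tokens of human-written text.
targets = torch.randint(0, vocab_size, (batch, seq_len))

# The entire training signal: cross-entropy against human text.
# Nothing in this objective says "get the order right" or "be precise";
# it only rewards output that looks like what a human would write next.
loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                       targets.reshape(-1))
print(loss.item())
```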

[–] webghost0101@sopuli.xyz 1 points 1 day ago

I think it's one of the systems that thinking emerges from, though not the only one.

I do regularly feel like I have an LLM-analogue component within my consciousness. As someone with AuDHD, I will sometimes say the exact opposite word from the meaning I intend.

I am also known to use certain sentences wrongly because I apparently misunderstood their meaning. It's auto-copying things I heard others say in similar-looking contexts; my brain believes I can say them to stall socially, buying more time to think about what I really want to say.