this post was submitted on 04 Dec 2023
697 points (92.7% liked)


We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

[–] DarkGamer@kbin.social 4 points 11 months ago

People invent false memories and confabulate all the time without even being "aware" of it. I wouldn't be surprised if the vast majority of "lies" that humans tell have no intentionality behind them.

Humans understand how symbols map onto concepts in the real world. If I stole a cookie from the cookie jar and someone asked whether I took one, I would understand that saying "no" misrepresents reality, and is therefore a lie.

LLMs have no idea what a cookie is, what taking one means, or that saying one thing while doing another constitutes a lie. An LLM just sees sequences of words and returns them in whatever order it estimates is statistically likely to be a correct reply. It does not understand what words mean or what lying means, and it has no way to classify anything as either. It simply learns that "did you take a cookie from the cookie jar" should be followed by a series of words like "yes, I took a cookie" or "no, I never took a cookie," depending on which responses fit the patterns in its training data.
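
To make that concrete, here's a deliberately tiny sketch: a bigram (word-pair frequency) model, nothing like a real LLM's neural network, but it shows how a plausible-sounding answer can fall out of pure pattern matching with no grounding in whether a cookie was ever taken. Everything in it (the toy "training data", the `continue_text` helper) is made up for illustration:

```python
import random
from collections import defaultdict, Counter

# Toy "training data": the model will only ever learn which word
# tends to follow which word in these strings.
training_data = [
    "did you take a cookie ? no , I never took a cookie .",
    "did you take a cookie ? yes , I took a cookie .",
    "did you take a cookie ? no , I never took a cookie .",
]

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for line in training_data:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def continue_text(prompt, length=10):
    """Extend the prompt one word at a time by sampling from observed
    successor frequencies -- pure pattern matching, no notion of truth."""
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(continue_text("did you take a cookie ?"))
# e.g. "did you take a cookie ? no , I never took a cookie ."
```

Run it a few times and it answers "yes" or "no" at random, weighted by which continuation was more common in its training text. Nothing in it represents a cookie, a jar, or the truth of the matter; a transformer does something vastly more sophisticated, but it is still predicting likely next tokens rather than checking claims against the world.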

Essentially, it's the Chinese room. There is no understanding or intentionality, and this behavior isn't comparable to a human thoughtlessly blurting out a lie. It's an inability to comprehend symbolic concepts at all (at least thus far).

[–] 0ops@lemm.ee -1 points 11 months ago

> LLMs have no idea what a cookie is

The large language model takes in language, so it only understands things in terms of language. This isn't surprising. Personally, I've tasted a cookie. I've crushed one in my fist and watched it crumble, and I remember the sound. I've seen how they're made, and I've made them myself. Eating one feels good; apparently that's the dopamine. Why can't the LLM understand cookies the way I do? The most glaring difference is that it doesn't have my body. It doesn't have all of my senses constantly feeding data into it, and it doesn't have muscles to manipulate its environment and observe the results. I argue that we shouldn't assume human consciousness has a "special sauce" until our model's inputs and outputs are similar to our own, the model is scaled or modified sufficiently, and it's still not sentient/sapient by our standards, whatever those are.

My problem with the Chinese room is that how it applies depends on scale. Where do you draw the line between understanding and executing a program? An atom bonding with another atom? A lipid snuggling next to a neighboring lipid? A single neuron firing to its neighbor? One section of the nervous system sending signals to another? One Homo sapiens speaking to another? Hell, let's go one further: one culture influencing another? Do we actually have free will and sapience, or are we just complicated enough, through layers and layers of Chinese rooms inside Chinese buildings inside Chinese cities inside China itself, that we assume we do for practical purposes?