That's not what I said at all. What I said, as the paper states, is that the model encodes trueness into its internal weights during training. This was then demonstrated to be more effective when datasets with a more equal distribution of true and false data points were used during training; when one-sided training data was used, the effect was significantly biased. That's all the paper is describing.
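For concreteness, here's a minimal, hypothetical sketch of the kind of linear probing this describes (not the paper's actual code): a logistic-regression probe fit on a balanced set of activations for true and false statements. The activations below are synthetic stand-ins rather than outputs of a real model, and all names and sizes are illustrative.

```python
# Sketch only: a linear probe on hidden activations, trained on a balanced
# (50/50) set of "true" and "false" examples. Real activations would come
# from an LLM's hidden states; random data is used here as a stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 512          # hypothetical hidden size
n_per_class = 500      # equal counts of true and false statements (balanced)

# Stand-ins for activations of true vs. false statements at some layer.
true_acts  = rng.normal(loc=+0.5, scale=1.0, size=(n_per_class, d_model))
false_acts = rng.normal(loc=-0.5, scale=1.0, size=(n_per_class, d_model))

X = np.vstack([true_acts, false_acts])
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

# A logistic-regression probe is just a single linear direction plus a bias;
# if it separates the classes, truth is (approximately) linearly represented.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy:", probe.score(X, y))
print("learned 'truth direction' shape:", probe.coef_.shape)  # (1, d_model)
```

With a heavily one-sided training set, a probe like this tends to learn a biased threshold rather than a clean separating direction, which is the balanced-data point above.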
So how is this not what I originally said, that LLMs are capable of abstracting the concepts of truth vs. falsehood into linear representations? Which, again, is the key point of the paper: