this post was submitted on 02 Oct 2023
165 points (89.5% liked)

Technology


> sampling a fraction of another person's imagery or written work.

So citing is a copyright violation? A scientific discussion of a specific text is a copyright violation? That makes no sense: it would mean your work could never build on anything else, and that's plain stupid.

Also, on your first point about reasoning and the "advanced collage process": you are right and wrong. Yes, an LLM doesn't have access to all the information a human has, nor the same precision, so it can't reason exactly the way a human can. BUT, and that is a huge caveat, the inherent goal of AI, and of neural networks in their simplest form, was to replicate human thinking. If you look at the brain and then at AIs, you will see how close the process is. Training usually works like this: the AI is given an input, it tries to produce the desired output, it is then told what the output should have looked like, and it backpropagates the error to adjust its process. That is already pretty advanced and human-like (even compare how the brain is made up with how AI models are made up; it's basically the same concept).
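The input → output → "told what it should have looked like" → backpropagate loop described above can be sketched in a few lines of Python/NumPy. This is a minimal illustration, not any real LLM: the tiny two-layer network, the XOR task, the layer sizes, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs and the desired outputs (the classic XOR toy problem).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2-4-1 network
# (sizes chosen arbitrarily for illustration).
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    # The network produces its current guess for each input.
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward(X)

    # "Told what it should have looked like": compare guess to target.
    err = out - y

    # Backpropagation: push the error back through each layer
    # and nudge the weights to reduce it.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Each pass through the loop is one round of "guess, get corrected, adjust", which is the same basic shape as the training the comment describes, just at a microscopic scale.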

Now you would be right to say "well, in its simplest form, an LLM like GPT is just predicting which character or word comes next", and you would be partially right. But in that process it incorporates all of the "knowledge" it got from its training sessions, plus a few valuable tricks that improve the predictions. The truth is, the differences between a human brain and an AI are marginal, and it mostly boils down to efficiency and training time.
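To make "just predicting which character comes next" concrete, here is a deliberately dumb toy version: a bigram model that counts which character most often follows each character in a small sample string. Real LLMs learn vastly richer statistics with neural networks, but the prediction objective has the same shape. The sample text and function name are made up for this sketch.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (every 't' here is followed by 'h').
text = "the theory then thereby the thermal theme"

# Count, for each character, what follows it.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def predict_next(ch):
    """Return the character most frequently seen after ch in the corpus."""
    return follows[ch].most_common(1)[0][0]

print(predict_next("t"))  # 'h', since 'h' always follows 't' in this text
```

Scaling this idea up from counting pairs of characters to a neural network conditioning on thousands of preceding tokens is, very roughly, the jump from this toy to GPT-style models.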

And to say that LLMs are just "an advanced collage process" is like saying "a car is just an advanced horse": not technically wrong, but really misleading once you look into the details.

And for detail's sake, this is the paper for Llama 2, the latest big LLM from Meta (Facebook), which is said to be the current standard for LLM development:

https://arxiv.org/pdf/2307.09288.pdf