First of all, the take that LLMs are just parrots that can't think for themselves is dumb. They can, in a limited way! And they are an impressive step compared to what we had before them.

Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than doing the work yourself from the start. That is something I often hear from programmers. That might be true for now!

But the important question is how they will develop! And now my take, which I have not seen anywhere else even though it seems quite obvious imo.

For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have and how they can access and work with this knowledge, and that they can do this with a neural network of only a few billion parameters. The major flaw at the moment is their inability to know what they don't know and what they can't answer. They hallucinate instead of answering a question with "I don't know." or "I am not sure about this." The other flaw is how they learn: it takes a shit ton of data, a lot of time and a lot of computing power, and more importantly they don't learn from interactions, they learn from static data.

This is similar to what the company DeepMind did with their chess and Go engines (also neural networks). They trained those engines on a shit ton of games played by humans, and they became really good with that. But the second generation of their NN game engines did not look at any games played before. They only knew the rules of chess/Go and then started to learn by playing against themselves. It took only a few days and they could beat their predecessors, which had needed a lot of human games to learn from.
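
To make the contrast concrete, here is a minimal toy sketch of the two training regimes. Everything in it (PolicyNet, play_self_play_game, train_on) is a hypothetical placeholder, not DeepMind's actual code; it just shows "learn from a static pile of human games" versus "generate your own games and learn from those":

```python
import random

class PolicyNet:
    """Stand-in for a neural network that maps game states to moves."""
    def __init__(self):
        self.weights = [random.random() for _ in range(8)]

    def train_on(self, examples):
        # Placeholder update step: in a real system this would be
        # gradient descent on (state, outcome) pairs.
        for _ in examples:
            self.weights = [w + random.uniform(-0.01, 0.01) for w in self.weights]

def play_self_play_game(net):
    # Placeholder: would simulate a full game of the net against itself
    # and return the states plus the final outcome.
    return {"states": [], "outcome": random.choice([-1, 1])}

def train_from_human_games(net, human_games):
    """First generation: imitate a fixed, static dataset of human games."""
    net.train_on(human_games)

def train_by_self_play(net, rounds):
    """Second generation: only the rules are given; fresh experience is
    generated by the net playing against itself."""
    for _ in range(rounds):
        game = play_self_play_game(net)
        net.train_on([game])  # learn from its own games, no human data needed

if __name__ == "__main__":
    human_games = [{"states": [], "outcome": 1} for _ in range(100)]
    gen1, gen2 = PolicyNet(), PolicyNet()
    train_from_human_games(gen1, human_games)
    train_by_self_play(gen2, rounds=100)
```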

So that is my take: it will change when LLMs start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds; how to evaluate the winner of this game, for example. But it can be done.
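
Roughly, that "game of asking and answering" could look like the toy sketch below. generate() and judge() are hypothetical placeholders; judge() is exactly the hard, unsolved part of deciding who wins a round:

```python
import random

def generate(model, prompt):
    # Placeholder for "this model instance produces text for this prompt".
    return f"{model}: reply to '{prompt}'"

def judge(question, answer):
    # Placeholder reward signal. In reality this is the open problem:
    # who or what decides which answer "wins"?
    return random.random()

def self_play_round(asker, answerer, history):
    question = generate(asker, "ask a question about: " + history[-1])
    answer = generate(answerer, question)
    reward = judge(question, answer)
    history.append(answer)
    return reward

if __name__ == "__main__":
    history = ["the rules of the language"]
    total = 0.0
    for i in range(5):
        # The two instances swap roles every round.
        asker, answerer = ("model_A", "model_B") if i % 2 == 0 else ("model_B", "model_A")
        total += self_play_round(asker, answerer, history)
    print(f"average reward over 5 rounds: {total / 5:.2f}")
```

The rewards would then have to be fed back into training, which is where today's LLMs fall short: they can't update themselves from this kind of interaction.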

And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also where AI can get dangerous: when they interact with themselves and learn from that without outside control.

The main problem right now is that they are slow, as you can see when you talk to them. And they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.

What do you think? Would love to hear some feedback. Thanks for reading!

WeirdGoesPro@lemmy.dbzer0.com 15 points 8 months ago

Everything I have read about how LLMs work suggests that you're giving them too much credit. Their "thinking" is heavily based on studied examples, to the point that they don't seem capable of original "thought".

For instance, there was a breakdown of the capabilities of some new image models the other day (one of the threads on DB0) that showed that none of the tested models were able to produce a cube balanced on a sphere, because there were simply too few examples of a cubic object balancing on a spherical one in their training data. When asked to show soldiers, the models that could produce more accurate images could not produce accurate diversity, because their improved rendering came from drawing on a more limited, and thus less creative, dataset. The result was that they kept looking like they had a specific soldier "in mind" rather than an understanding of soldiers in general.

These things would be trivial for even a child to do, though they may not be able to produce the “uncanny valley” effect that AI is good at. If a kid knows what a cube is, knows what a sphere is, and understands the request, they can easily draw a cube on a sphere without having seen an example of that specific thing before.

I agree that the parrot analogy isn't correct, but neither is the idea that these things will learn from their own echo chamber in the way you have described. Maybe the idea of dreaming is more accurate: an unusual shuffling of input to make bizarro results that don't have any intrinsic meaning at all beyond their relation to the data that is being used.

niva@discuss.tchncs.de 0 points 8 months ago

Well, getting a concept of how physics works (balancing, in your example) only from being trained on (random?) still images is a lot to ask imo. But these picture-generating NNs can produce "original" pictures. They can draw a spider riding a bike. It might not look very good, but it is no easy task. LLMs aren't very smart compared to a human. But they have a huge amount of knowledge stored in them that they can access and also combine to a degree.

Yes, well, today's LLMs would not produce anything new if they talked to each other. They can't learn persistently from any interaction. But if they become able to in the future, that is where I think it will go in the direction of AGI.