this post was submitted on 14 Apr 2024
273 points (91.7% liked)

Futurology

[–] Imalostmerchant@lemmy.world 1 points 7 months ago (1 children)

I hear you. You make very good points.

I'm tempted to argue that, by your definition (requiring original thought, solving problems they haven't been trained on), many humans aren't generally intelligent either, but we don't have to go there. Lol

Can you expand on your last paragraph? Are you saying the model might improve if it were trained on more theory and fewer examples of solved problems?

[–] itsralC@lemm.ee 1 points 7 months ago

If I'm being completely honest, now that I've woken up with a fresh mind, I have no idea where I was going with that last part. Giving LLMs access to tools, like running code so they can fact-check themselves, is a really good idea (one that's already being tried), but I don't think it has anything to do with the problem at hand.
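To make the "running code to fact check" idea concrete, here's a toy sketch. Nothing here is a real LLM API; `fact_check` and `safe_eval` are hypothetical names, and the "model" is just imagined as something that emits a claim like "17 * 23 = 391". The point is that the tool actually computes the answer instead of trusting the model's text:

```python
import ast
import operator

# Whitelisted arithmetic operators; anything else is rejected,
# so we never exec() arbitrary model-generated code.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a simple arithmetic expression via the AST, safely."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def fact_check(claim_expr: str, claimed_value: float) -> bool:
    """The 'tool call': recompute the claim instead of trusting the model."""
    return abs(safe_eval(claim_expr) - claimed_value) < 1e-9

# A model claims "17 * 23 = 391"; the tool verifies it.
print(fact_check("17 * 23", 391))   # True
print(fact_check("17 * 23", 401))   # False
```

In a real system the loop would be: model proposes an answer, a sandboxed tool checks it, and the result is fed back into the context, but as noted above, that's orthogonal to whether the model can generalize in the first place.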

The real key issue (I think) is getting AI to keep learning and iterating on itself past the training stage, which is actually what many people mean by AGI/the "singularity".