this post was submitted on 26 Nov 2023
Futurology
you are viewing a single comment's thread
I don't think LLMs will lead to AGI, but at some point a system will be implemented that does lead to AGI, and it will be unexpected.
That seems like the worst-case scenario, because an AGI would likely be clever enough to make sure humans don't realise it's AGI. There are network effects and layers of complexity we don't fully understand. The net result is the same: we lose the control problem. Badly.