The original was posted on /r/okbuddyphd by /u/Im_Diggin on 2024-07-23 18:29:36+00:00.
OK, hear me out here. I know this sounds silly at first glance, but I think I might be onto something, or at least something worthy of discussion and contemplation. We are familiar with the narrative about LLMs already surpassing the average Joe at X benchmark, or even reaching expert level, but what if this turns out to be more akin to testing a calculator on its ability to do sums? What if, to create a true intelligence like our own, we discover there are good reasons the "average Joe" isn't a calculator or a database, and that trade-offs must be made in order to, well, create us?
I think about all the recent research that appears to be slowly converging on a consensus that scaling much further will not create new emergent behaviours, and that, at a fundamental level, training on text, video and audio will produce world models that are flawed and inferior to our own, with "real" first-hand sensory data ultimately required to eliminate hallucinations. I also think about an adage I've been seeing around about how we keep re-inventing the train: what if we are ultimately just re-inventing "the brain"?
What if, in the end, an AGI takes decades to "grow" with sensory inputs, and requires about the same amount of energy and data as we do (probably currently underestimated and poorly understood in both regards)? What if there are no shortcuts, whether for energy-consumption or even sanity reasons? What if tasks we consider "mundane" (like driving) or even "menial" (like cleaning) require a human level of intelligence to accomplish effectively?
So after all that, we get "some guy", who might be exceptional in some ways or easier to manipulate (e.g. slavery), but ultimately nothing that nature couldn't produce via good ol' reproduction. We all feel a bit silly that we went through all that effort to replicate something we already have for "free" (though we learn a lot about ourselves in the process), and maybe even feel some despair that we don't have our AI god to help us sort out all our issues for us.
I'm not saying an artificial "some guy" wouldn't be a monumental, potentially world-changing achievement, but I think it would be different enough from the idea many of us have had of a Data-like super-computer or an ASI-level deity.
EDIT: As some people are fairly asking, here is a link to the paper that came to mind when I mentioned the diminishing returns on scaling, and the Computerphile video that made me aware of it. This isn't the only version of this argument I've seen; for example, AI Explained frequently raises this possibility when referring to papers and comments from prominent AI researchers suggesting that it will take more than just scaling what we currently have to reach AGI. That said, I should be clear that my objective here isn't to argue that LLM scaling is over; it was simply a way to support my hypothetical, speculative proposition.