I don't think this author has any idea what she's talking about. The italic paragraph near the top attempts to justify her ignorance, but she's very clearly drinking the AI CEO Kool-Aid. AGI is not a stop down the road that LLMs are taking us on, it's a hypothetical destination that we don't even know if we're driving toward. We can't even define what causes consciousness yet, so there's no way for us to model it digitally. It will happen eventually, but even the non-PR quotes in the article are hedging based on rough estimates and general trends. It's totally possible that we go another century before true AGI, and that's ignoring the real potential for big losses in scientific progress given current political happenings.
this post was submitted on 02 Apr 2025
Futurology
AGI is not a stop down the road that LLMs are taking us on
Oh, she's still on that? Yep, that's soooo 2023. I'll probably just skip this one.
There is absolutely no way an advanced probabilistic algorithm can gain any sort of real, substantial sentience.
Like those hoverboards that don't hover at all: just because it's being marketed as "artificial intelligence" doesn't mean there is any trace of consciousness or intelligence in it.
... depending on which experts' opinions and which definitions: anywhere between 1 and 20 years from now.
I can't even get an LLM to reliably play spot-the-difference between plain text and the same text in a PDF.