this post was submitted on 02 Apr 2025
-3 points (42.9% liked)

Futurology

[–] verdigris@lemmy.ml 11 points 23 hours ago (1 children)

I don't think this author has any idea what she's talking about. The italic paragraph near the top attempts to justify her ignorance, but she's very clearly drinking the AI CEO Kool-Aid. AGI is not a stop down the road that LLMs are taking us on; it's a hypothetical destination that we don't even know if we're driving toward. We can't even define what causes consciousness yet, so there's no way for us to model it digitally. AGI will happen eventually, but even the non-PR quotes in the article are hedged, based on rough estimates and general trends. It's entirely possible that we go another century before true AGI, and that's ignoring the real potential for big setbacks in scientific progress given current political happenings.

[–] CanadaPlus@lemmy.sdf.org 1 points 23 hours ago

AGI is not a stop down the road that LLMs are taking us on

Oh, she's still on that? Yep, that's soooo 2023. I'll probably just skip this one.

[–] MadMadBunny@lemmy.ca 2 points 22 hours ago

There is absolutely no way an advanced probabilistic algorithm can gain any sort of real, substantial sentience.

Like those hoverboards that don't hover at all: just because it's being marketed as "artificial intelligence" doesn't mean there is any trace of consciousness or intelligence in it.
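To make "probabilistic algorithm" concrete: at its core, an LLM samples the next token from a probability distribution over its vocabulary, conditioned on the text so far. A toy sketch in Python; the vocabulary and weights here are invented purely for illustration:

```python
import random

# Toy next-token sampler. A real LLM computes these probabilities with a
# neural network conditioned on context; here they are simply hard-coded.
vocab = ["the", "cat", "sat", "on", "mat", "."]
probs = [0.30, 0.15, 0.20, 0.15, 0.15, 0.05]

def next_token():
    # Weighted random choice -- no understanding involved, just sampling.
    return random.choices(vocab, weights=probs, k=1)[0]

print(" ".join(next_token() for _ in range(8)))
```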

[–] A_A@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago)


... depending on which experts' opinions and which definitions you go by: anywhere between 1 and 20 years from now.

[–] Flamekebab@piefed.social 1 points 23 hours ago

I can't even get an LLM to reliably play spot-the-difference between plain text and the same document as a PDF.
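(For what it's worth, that particular check doesn't need an LLM at all. A minimal sketch using pypdf for extraction and the standard-library difflib; pypdf is just one choice of extraction library, and the file names are hypothetical:)

```python
import difflib
from pypdf import PdfReader  # assumes the pypdf package is installed

# Deterministic "spot the difference": extract the PDF's text and diff it
# against a reference text file, line by line.
pdf_text = "\n".join(page.extract_text() or "" for page in PdfReader("doc.pdf").pages)
ref_text = open("reference.txt", encoding="utf-8").read()

for line in difflib.unified_diff(
    ref_text.splitlines(), pdf_text.splitlines(),
    fromfile="reference.txt", tofile="doc.pdf", lineterm="",
):
    print(line)
```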