this post was submitted on 18 Oct 2025

Futurology

[–] justOnePersistentKbinPlease@fedia.io 40 points 2 weeks ago (2 children)

LLMs are a dead end on the road to AGI. They do not reason or understand in any way; they only mimic it.

It is the same basic technology as the first chatbots 20 years ago; it's just that today's LLMs have models approaching a trillion parameters instead of a few thousand.

[–] Perspectivist@feddit.uk 4 points 2 weeks ago (1 children)

I haven't said a word about LLMs.

[–] justOnePersistentKbinPlease@fedia.io 4 points 2 weeks ago (1 children)

They are the closest things to AI that we have. The so-called LRMs (large reasoning models) fake their reasoning.

They do not think or reason. We are, at the very best, decades away from anything resembling an AI.

The best LLMs can manage is a Mass Effect (1) style VI (virtual intelligence), and even that is still more than a decade away.

[–] Perspectivist@feddit.uk 3 points 2 weeks ago (1 children)

The chess opponent on Atari is AI - we’ve had AI systems for decades.

An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.

[–] Sconrad122@lemmy.world 1 points 2 weeks ago

Decades is plenty of time for society to experience a collapse or major setback that prevents AGI from being discovered within the lifetime of anyone alive today. Whether that comes from war, famine, or natural phenomena induced by man-made climate change, we have plenty of opportunities as a species to take the offramp and never "discover" AGI. This comment is brought to you by optimistic existentialism.

[–] m532@lemmygrad.ml 1 points 2 weeks ago (1 children)

No, the first chatbots didn't have neural networks inside. They didn't have intelligence.

[–] booty@hexbear.net 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

LLMs aren't intelligence. We've had similar technology in more primitive forms for a long time, like Markov chains. LLMs are hyper-specialized at passing a Turing test but are not good at basically anything else.
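For illustration, here's a minimal sketch of the kind of word-level Markov chain text generator I'm referring to (the tiny corpus and function names are just placeholders for the example):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it (a bigram model)."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, start: str, length: int = 20) -> str:
    """Walk the chain, sampling each next word from the followers seen in the corpus."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: this word was never followed by anything
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

It just predicts the next word from what it has seen before, with no model of meaning - LLMs do the same thing at a vastly larger scale.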

[–] m532@lemmygrad.ml 0 points 2 weeks ago (1 children)

A Turing test has nothing to do with intelligence.

[–] booty@hexbear.net 1 points 2 weeks ago (1 children)
[–] m532@lemmygrad.ml 1 points 2 weeks ago (1 children)

You define intelligence wrong.

[–] booty@hexbear.net 2 points 2 weeks ago* (last edited 2 weeks ago)

I didn't say Turing tests had anything to do with intelligence. I didn't define intelligence at all. What are you even talking about?