this post was submitted on 18 Oct 2025
129 points (96.4% liked)

Futurology

[–] Perspectivist@feddit.uk 16 points 2 weeks ago (8 children)

An asteroid impact not being imminent doesn’t really make me feel any better when the asteroid is still hurtling toward us. My concern about AGI has never been about the timescale - it’s the fact that we know it’s coming, and almost no one seems to take the repercussions seriously.

[–] justOnePersistentKbinPlease@fedia.io 40 points 2 weeks ago (2 children)

LLMs are a dead end on the road to AGI. They do not reason or understand in any way - they only mimic it.

It's essentially the same technology as the first chatbots 20 years ago; the difference is that LLMs now have models approaching a trillion parameters instead of a few thousand.

[–] Perspectivist@feddit.uk 4 points 2 weeks ago (1 children)

I haven't said a word about LLMs.

[–] justOnePersistentKbinPlease@fedia.io 4 points 2 weeks ago (1 children)

They're the closest thing to AI that we have. The so-called LRMs fake their reasoning.

They do not think or reason. We are, at the very best, decades away from anything resembling real AI.

The best LLMs can manage is a VI from Mass Effect 1, and even that is still more than a decade away.

[–] Perspectivist@feddit.uk 3 points 2 weeks ago (1 children)

The chess opponent on Atari is AI - we’ve had AI systems for decades.

An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.

[–] Sconrad122@lemmy.world 1 points 2 weeks ago

Decades is plenty of time for society to experience a collapse or major setback that prevents AGI from being discovered in the lifetime of any currently alive human. Whether that comes from war, famine, or natural phenomena induced by man-made climate change, we have plenty of opportunities as a species to take the offramp and never "discover" AGI. This comment is brought to you by optimistic existentialism

[–] m532@lemmygrad.ml 1 points 2 weeks ago (1 children)

No, the first chatbots didn't have neural networks inside. They didn't have intelligence.

[–] booty@hexbear.net 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

LLMs aren't intelligence. We've had similar technology in more primitive forms for a long time, like Markov chains. LLMs are hyper-specialized at passing a Turing test but aren't good at much of anything else.
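For anyone who hasn't run into one: a word-level Markov chain just records which words follow which in a corpus, then generates text by randomly walking that table. A rough toy sketch (illustrative only, not from any particular library):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking each next word at random from the observed followers."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break  # dead end: nothing was ever seen after this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```

The basic "predict the next word from what came before" framing is the same one LLMs work in, just with vastly more parameters and context.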

[–] m532@lemmygrad.ml 0 points 2 weeks ago (1 children)

A Turing test has nothing to do with intelligence.

[–] booty@hexbear.net 1 points 2 weeks ago (1 children)
[–] m532@lemmygrad.ml 1 points 2 weeks ago (1 children)

You're defining intelligence wrong.

[–] booty@hexbear.net 2 points 2 weeks ago* (last edited 2 weeks ago)

I didn't say Turing tests had anything to do with intelligence. I didn't define intelligence at all. What are you even talking about?

[–] gbzm@piefed.social 17 points 2 weeks ago (3 children)

At the risk of sounding like I've been living under a rock, how do we know it's coming, exactly?

[–] Perspectivist@feddit.uk 5 points 2 weeks ago (1 children)

We’ll keep incrementally improving our technology, and unless we - or some outside force - destroy us first, we’ll get there eventually.

We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.

At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.

We're raising a tiger cub. It's still small and cute today, but it's only a matter of time until it gets big and strong.

[–] gbzm@piefed.social 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

What if human-level intelligence requires building something so close in its mechanisms to a human brain that it's indistinguishable from a brain, or from a complete physical and chemical simulation of one? What if the input-output "training" required to make it work in any comprehensible way is so close in fullness and complexity to the human sensory system interacting with the world that it ends up being indistinguishable from a human body, or from a complete physical simulation of a body along with its whole environment?

There's no reason to assume our brains or their mechanisms can't be replicated artificially, but there's also no reason to assume it can be made practical, or that just because we can build it, it can self-replicate at no material cost or refine its own formula. Humans have human-level intelligence, and they've never successfully created anything as intelligent as themselves.

I'm not saying it won't happen, mind you, I'm just saying it's not a certainty. Plenty of things are impossible, or sufficiently impractical that humans - or any species - may never create them.

[–] thevoidzero@lemmy.world 2 points 2 weeks ago

This is what I think might be the more reasonable approach. Even with very strong reasoning capabilities, I think we might have to train an AGI the way we train children. It would take time, because it would learn by interacting with its environment rather than just reading a mass of internet data that comes from all kinds of sources and doesn't point in any coherent direction about how someone should live or act.

Training this way might also produce AGIs that are closer to humans in how varied their behaviour is, compared to rapid training on the same data. Diversity of thought and discussion is what leads to better outcomes in many situations.

[–] m532@lemmygrad.ml 1 points 2 weeks ago

This is like that "only planets that are 100% exactly like Earth can create life, because the only life we know is on Earth" backwards reasoning.

[–] crazycraw@crazypeople.online 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Well, we often equate predictions about AGI with ASI and a singularity event, which has been predicted for decades based on several aspects of computing: advancing hardware, software, throughput, and of course neuroscience.

ASI is more a prediction about capability: that even imitating intelligence convincingly enough will give rise to tangible, real higher intelligence after a few iterations, which then continues on its own and starts making its own improvements. Once those improvements are beyond human capability, we have our singularity.

Back to just AGI: it seems achievable by mimicking the processing power of a human mind, which isn't currently possible, but we are steadily working toward it and have achieved some measure of success. We may decide that certain aspects of artificial intelligence have been reached before that point, but IMO it feels like we're only a few years away.

[–] gbzm@piefed.social 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Alright. I had already seen that stuff and I've never seen really convincing arguments for these predictions beyond pretty sci-fi-esque speculation.
I'm not at all convinced we have anything even remotely resembling "mimicking the processing power of a human mind", either through material simulation of a complete brain and the multisensory interactions with an environment that let it grow into a functioning mind, or through the party tricks we tend to call AI these days (which boil down to Chinese Rooms built with thousands of GPUs' worth of piecewise linear regressions, and which are unable to reason or even generalize beyond their training distributions, according to the source).
I guess embedding cultivated neurons on microchips could maybe make new things possible, but even then I wouldn't be surprised if making a human-level intelligence turned out to require building an actual whole-ass human, or at least most of one. Seeing where we are with that stuff, I would rather surmise a timescale of decades to centuries, if at all - which could very well be longer than the time climate change leaves us with the level of industry required to even attempt it.

[–] Perspectivist@feddit.uk 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Can you think of a reason why we wouldn’t ever get there? We know it’s possible - our brains can do it. Our brains are made of matter, and so are computers.

The timescale isn’t the important part - it’s the apparent inevitability of it.

[–] gbzm@piefed.social 2 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I've given reasons. We can imagine Dyson Spheres, and we know it's possible. It doesn't mean we can actually build them or ever will be able to.

The fact that our brains are able to do stuff that we don't even know how they do doesn't necessarily mean rocks can. If it somehow requires the complexity of biology, then depending on how much of that complexity it needs, it could just mean creating a fully-fledged human - which we can already do, and it hasn't caused a singularity, because creating a human costs resources even when we do it the natural way.

[–] Perspectivist@feddit.uk 3 points 2 weeks ago (1 children)

I don’t see any reason to assume substrate dependence either, since we already have narrowly intelligent, non-biological systems that are superhuman within their specific domains. I’m not saying it’s inconceivable that there’s something uniquely mysterious about the biological brain that’s essential for true general intelligence - it just seems highly unlikely to me.

[–] gbzm@piefed.social 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The thing is, I'm not assuming substrate dependence. I'm not saying there's something uniquely mysterious about the biological brain. I'm saying that what we know about "intelligence" right now is that it's an emergent property observed in brains that have been interacting with a physical, natural environment through complex sensory feedback loops, materialized by the rest of the human body. That's substrate-independent, but the only thing rocks can do for sure is simulate this whole system, and good simulations of complicated systems are not an easy feat at all - it's not at all certain we'll ever be able to do it without it requiring too many resources to be worth the hassle.

The things we've built that most closely resemble human intelligence in computers are drastic oversimplifications of how biological brains work, sprinkled with mathematical translations of actual cognitive processes. And right now they appear very limited, even though a lot of resources - physical and economic - have been poured into them. We don't understand brains well enough to refine this simplification very far, and we don't know much about the formation of the cognitive processes relevant to "intelligence" either. Yet you assert it's a certainty that we will, that we will encode it in computers, and that the result will have a bunch of the properties of current software: easily copyable and editable (which the human-like intelligences we know are not at all), not requiring more power than the Sun puts out (which humans don't, but they're completely different physical systems), and so on.

The same arguments you're making could have been made in 1969, after the Moon landing, to claim that the human race would definitely colonize the whole solar system. "We know it's possible, so it will happen at some point" is not how technology works: it also needs to be profitable enough for enough industry to be thrown at the problem, and the result has to live up to the profitability expectations. Right now no AI firm is even remotely profitable, and the resources in the Kuiper Belt or the real estate on Mars aren't enough of an argument that our rockets can reach them - there's no telling that they ever will, and our economies might well simply lose interest before then.

[–] Perspectivist@feddit.uk 1 points 2 weeks ago (1 children)

I’m not claiming that AGI will necessarily be practical or profitable by human standards - just that, given enough time and uninterrupted progress, it’s hard to see how it wouldn’t happen.

The core of my argument isn’t about funding or feasibility in the short term, it’s about inevitability in the long term. Once you accept that intelligence is a physical process and that we’re capable of improving the systems that simulate it, the only thing that can stop us from reaching AGI eventually is extinction or total collapse.

So, sure - maybe it’s not 10 years away. Maybe not 100. But if humanity keeps inventing, iterating, and surviving, I don’t see a natural stopping point before we get there.

[–] gbzm@piefed.social 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I get it - the core of your argument is "given enough time, it will happen", which isn't saying much: given infinite time, anything will happen. Even extinction and total collapse aren't enough to rule it out, since infinite time means a thinking computer will eventually just emerge fully formed from quantum fluctuations.

But you're voicing it as though it's a certain direction of human technological progress, which is frankly untrue. You've concocted a scenario for technological progress in your head by extrapolating from its current state, and you present it as a certainty. But anyone can do the same for equally credible scenarios without AGI. For instance, if the only way to avoid total collapse is to stabilize energy consumption and demographic growth, and we somehow manage it, then if making rocks think costs 10^20 W and the entire world's labour, it will not ever happen in any meaningful sense of the word "ever".

PS - to elaborate a bit on that "meaningful sense of the word ever": I don't want to nitpick, but some timescales do make asteroid impacts irrelevant. The Sun will engulf the Earth in about 5 billion years, and then there's the heat death of the universe. In computing, millions of years pop up here and there for problems that feel like they should be easy.

[–] Perspectivist@feddit.uk 1 points 2 weeks ago (2 children)

In my view, we’re heavily incentivized to develop AGI because of the enormous potential benefits - economic, scientific, and military. That’s exactly what worries me. We’re sprinting toward it without having solved the serious safety and control problems that would come with it.

I can accept that the LLM approach might be a dead end, or that building AGI could be far harder than we think. But to me, that doesn’t change the core issue. AGI represents a genuine civilization-level existential risk. Even if the odds of it going badly are small, the stakes are too high for that to be comforting.

Given enough time, I think we’ll get there - whether that’s in 2 years or 200. The timescale isn’t the problem; inevitability is. And frankly, I don’t think we’ll ever be ready for it. Some doors just shouldn’t be opened, no matter how curious or capable we become.

[–] silasmariner@programming.dev 1 points 2 weeks ago (1 children)

Honestly I agree with gbzm here. "I can't see why it shouldn't be possible" is a far cry from "it's inevitable"... And I'd hardly say we're sprinting towards it, either. There are, in my view, dozens of absurdly difficult problems, any one of which may be insoluble, that stand between us and AGI. Anyone telling you otherwise is selling something or already bought in ;)

People definitely are selling natural-language interfaces as if they're intelligent. It's convincing to some, I guess. It's an illusion though.

[–] Perspectivist@feddit.uk 1 points 2 weeks ago (1 children)

This discussion isn't about LLMs per se.

However, I hope you're right. Unfortunately, I've yet to meet anyone able to convince me that I'm wrong.

[–] silasmariner@programming.dev 1 points 2 weeks ago (1 children)

We can't know whether I'm wrong or you're wrong, I guess. I am aware of the context of the discussion and mention LLMs as the reason the hype has picked back up. The processing requirements for true intelligence appear, to me, to be far outside the confines of what silicon chips are even theoretically capable of. It seems odd to me that we'd ever have full AGI before, say, cyborgs (y'know, semi-biological hybrids). We shall see how things develop over the next half a century or so, and perhaps more light shall be shed.

[–] Perspectivist@feddit.uk 1 points 2 weeks ago (1 children)

I’ve been worried about this since around 2016 - long before I’d ever heard of LLMs or Sam Altman. The way I see it, intelligence is just information processing done in a certain way. We already have narrowly intelligent AI systems performing tasks we used to consider uniquely human - playing chess, driving cars, generating natural-sounding language. What we don’t yet have is a system that can do all of those things.

And the thing is, the system I’m worried about wouldn’t even need to be vastly more intelligent than us. A “human-level” AGI would already be able to process information so much faster than we can that it would effectively be superintelligent. I think that at the very least, even if someone doubts the feasibility of developing such a system, they should still be able to see how dangerous it would be if we actually did stumble upon it - however unlikely that might seem. That’s what I’m worried about.

[–] silasmariner@programming.dev 1 points 2 weeks ago (1 children)

Yeah see I don't agree with that base premise, that it's as simple as information processing. I think sentience - and, therefore, intelligence - is a more holistic process that requires many more tightly-coupled external feedback loops and an embedding of the processes in a way that makes the processing analogous to the world as modelled. But who can say, eh?

[–] Perspectivist@feddit.uk 1 points 2 weeks ago

It’s not obvious to me that sentience has to come along for the ride. It’s perfectly conceivable that there’s nothing it’s like to be a superintelligent AGI system. What I’ve been talking about this whole time is intelligence — not sentience, or what I’d call consciousness.

[–] gbzm@piefed.social 1 points 2 weeks ago

Right. I don't believe it's inevitable - in fact I believe it's not especially likely, given where we're at and the economic, scientific, and military incentives I'm aware of. I think the people sprinting now are doing so blindly, not knowing where or how far away it is. I think 2 years is a joke, or a lie Sam Altman tells gullible investors, and 200 years means we've survived global warming, so if we're still around by then our incentives will look nothing like they do now - and I don't believe in it in that case either. I think it's at most a maybe on the far, far horizon of a thousand-plus years, in a world that looks nothing like ours, and in the meantime we have far more pressing problems than the snake oil a few salesmen are trying desperately to sell. Like the salesmen themselves, for example.

[–] m532@lemmygrad.ml 1 points 2 weeks ago

What does replicating humans have to do with the singularity?

I'd argue the industrial revolution was the singularity. And if it wasn't that, it would be computers.

[–] Aceticon@lemmy.dbzer0.com 1 points 2 weeks ago* (last edited 2 weeks ago)

Intelligence is possible, as proven by the existence of it in the Biological world.

So it makes sense that, as Technology evolves, we become able to emulate the Biological World in this too, just as we have in so many other things, from flight to artificial hearts.

However, there is no guarantee that Mankind won't go extinct before that point is reached, nor is there any guarantee that our Technological progression won't come to an end (though at the moment we're near a peak in terms of the speed of Technological progression). So it is indeed true that we don't know it's coming: we as a species might not be around long enough to make it happen, or we might hit a ceiling in our Technological development before our technology is capable of creating AGI.

Beyond the "maybe one day" view, I personally think that believing AGI is close is total pie-in-the-sky fantasism: the supposed path to it that LLMs represented has turned out to be a dead end, decorated with a lot of bullshit to make it seem otherwise. What the underlying technology does really well - pattern recognition and reproduction - has turned out not to be enough by itself to add up to intelligence, and we don't actually have any specific technological direction in the pipeline (that I know of) which can crack that problem.

[–] Lugh 5 points 2 weeks ago

Yes, and there is also the possibility that it could be upon us quite suddenly. It may just take one fundamental breakthrough to make the leap from what we have currently to AGI, and once that breakthrough is achieved, AGI could arrive quite quickly. It may not be a linear process of improvement, where we reach the summit in many years.

[–] vrighter@discuss.tchncs.de 4 points 2 weeks ago (1 children)

We don't know it's coming. What leads you to believe that? The countless times they promised they'd fixed their problems, which invariably turned out to be bullshit?

[–] NuraShiny@hexbear.net 3 points 2 weeks ago (1 children)

Do we know it's coming? By what evidence? I don't see it.

As far as I can tell, we're more likely to discover how to genetically uplift other life to intelligence than we are to make computers actually think.

[–] Perspectivist@feddit.uk 2 points 2 weeks ago (1 children)

I wrote a response to this same question to another user.

[–] NuraShiny@hexbear.net 2 points 2 weeks ago (1 children)
[–] Perspectivist@feddit.uk 3 points 2 weeks ago (1 children)

We’ll keep incrementally improving our technology, and unless we - or some outside force - destroy us first, we’ll get there eventually.

We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.

At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.

We're raising a tiger cub. It's still small and cute today, but it's only a matter of time until it gets big and strong.

[–] NuraShiny@hexbear.net 5 points 2 weeks ago (2 children)

There are limits to technology. Why would we assume infinite growth of technology when nothing else we have is infinite? It's not like the wheel gets rounder over time - we make it out of better materials, but it still has limits to its utility. All our computers are crunching 1s and 0s, and adding more of those per second doesn't seem to do anything to make them smarter.

I would worry about ecological collapse a lot more than this, that's for sure. That's something the current shitty non-smart AI can achieve if they keep building data centers and drinking our water.

[–] Perspectivist@feddit.uk 2 points 2 weeks ago (1 children)

I don’t see any reason to assume humans are anywhere near the far end of the intelligence spectrum. We already have narrow-intelligence systems that are superhuman in specific domains. I don’t think comparing intelligence to something like a wheel is fair - there are clear geometric limits to how round a wheel can be, but I’ve yet to hear any comparable explanation for why similar limits should exist for intelligence. It doesn’t need to be infinitely intelligent either - just significantly more so than we are.

Also, as I said earlier - unless some other catastrophe destroys us before we get there. That doesn’t conflict with what I said, nor does it give me any peace of mind. It’s simply my personal view that AGI or ASI is the number one existential risk we face.

[–] NuraShiny@hexbear.net 1 points 2 weeks ago (1 children)

Okay, granted. But if we are on the stupid side of the equation, why would we be able to make something smarter than us? One does not follow from the other.

I also disagree that we have made anything that is actually intelligent. A computer can do math billions of times faster than a human can, but doing math is not smarts. Without human intervention and human input, the computer would just idle and do nothing. That is not intelligence. At no point has code shown the ability to self-improve and grow, and the current brand of shitAI is no different. They call what they do to it training, but it's really just telling it how to weight the reams of data it's eating - and without humans, it wouldn't even do that.

Ravens and octopuses can solve quite complex puzzles. Are they intelligent? What is even the cutoff for intelligence? We don't even have a good definition of intelligence that encompasses everything. People cite IQ, which is obviously bunk. People try to section it into several types of intelligence - social, logical, and so on. If we don't even know what the objective definition of intelligence is, I am not worried about us creating it from whole cloth.

[–] Perspectivist@feddit.uk 1 points 2 weeks ago

AlphaGo became better than humans at playing Go by playing against itself - there was no “dumb human” teaching it. The same applies to every other deep learning system. You can, of course, always move the goalposts by redefining what intelligence even means, but when I use the term, I’m referring to the ability to acquire, understand, and use knowledge.

By that definition, a chess bot is intelligent - it knows the rules of chess, it can observe the pieces on the board, think ahead, and make decisions. It’s not generally intelligent, but within its domain, it’s a genius. The same applies to LLMs. The issue isn’t that they’re bad; it’s that they’re not what people thought they would be. When an average person hears “AI,” they picture HAL 9000, Samantha, or Jarvis - but those are AGI systems. LLMs are not. They’re narrow-intelligence systems designed to produce natural-sounding language, and at that, they’re exceptionally good.
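For illustration only - the "think ahead and make decisions" part of a classic game bot is just a brute-force game-tree search. A toy minimax sketch on tic-tac-toe rather than chess, purely to keep it short (real chess engines add much deeper search plus evaluation heuristics):

```python
# Toy minimax on tic-tac-toe: the same "look ahead and pick the best reply" idea
# that chess engines scale up with deeper search and evaluation heuristics.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if either side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move), scored from X's point of view: X maximizes, O minimizes."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                      # try the move
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                         # undo it
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# A made-up mid-game position with X to move; X can fork and force a win.
board = list("X O  O  X")
print(minimax(board, "X"))  # prints (1, 3): X forks at square 3 and wins either way
```

Nothing in there "understands" the game, yet within its tiny domain it plays perfectly - which is the narrow-intelligence point above.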

The fact that they also often get things right is a byproduct of being trained on a huge amount of correct information - not what they were designed to do. If anything, the fact that a language bot can also give accurate answers this often should make people more worried, not less. That’s like a chess bot also turning out to be kind of good at conversation.

[–] FortifiedAttack@hexbear.net 3 points 2 weeks ago

Except there is no such asteroid and techbros have driven themselves into a frenzy over a phantom.

The real threat to humanity is runaway climate change, which techbros conveniently don't give a single fuck about, since they use gigawatts of power to train bigger and bigger models with further and further diminishing returns.

[–] MotoAsh@piefed.social 2 points 2 weeks ago

Greed blinds all

[–] funkless_eck@sh.itjust.works 1 points 2 weeks ago (1 children)

Equally, AGI could be as baked-in a limitation as exceeding the speed of light or time travel - any model that includes it has to put a lot of weight on fictional solutions that have no bearing on current reality.

[–] Perspectivist@feddit.uk 3 points 2 weeks ago (1 children)

General intelligence isn't a theoretical concept, though. The human brain does it quite efficiently.

[–] funkless_eck@sh.itjust.works 1 points 2 weeks ago

Time and the speed of light also both exist, and their mechanisms operate efficiently too.