Futurology

[–] superfes@beehaw.org 7 points 1 year ago (3 children)

Good god, we're no closer to AI, in any form, than we were 50 years ago; the only thing that's changed is the amount of compute and storage we can allocate to the maths involved.

AI will never happen with the current models.

[–] Pons_Aelius@kbin.social 6 points 1 year ago* (last edited 1 year ago)

Thank you. I am glad I am not the only one saying this every time this sort of bullshit gets posted.

Simply put.

We are no closer today to understanding how self-awareness and intelligence develop in animals than we were when all this AI research started 60+ years ago.

You can go back to the 1970s and read articles making the same predictions we see today:

"In 10 years we will use AI to talk to other animals."

"In 10 years AI will be self-aware and as smart as a human."

Shit, go watch Colossus: The Forbin Project and see the same shit in a movie from 53 years ago.

[–] agent_flounder@lemmy.one 2 points 1 year ago

Per Kurzweil, "all" we have to do is simulate every neuron in the brain. And supposedly this will be possible in the coming decades.

Maybe that will be computationally possible, but I am extremely skeptical that it would be sufficient to achieve a general human-like intelligence.
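For scale, here's a back-of-envelope sketch (every figure is a rough, commonly cited ballpark value I'm assuming for illustration, not a number from Kurzweil):

```python
# Rough estimate of the compute needed to simulate every synapse in a brain.
# Every figure below is an order-of-magnitude assumption, not a measurement.

neurons = 86e9                 # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4      # ~10,000 synapses per neuron, rough average
firing_rate_hz = 10            # assumed average firing rate
flops_per_synapse_event = 10   # assumed cost of updating one synapse

total_synapses = neurons * synapses_per_neuron
flops = total_synapses * firing_rate_hz * flops_per_synapse_event
print(f"~{flops:.1e} FLOP/s")  # ~8.6e16, in range of today's biggest supercomputers
```

So the raw arithmetic may well land within reach of existing hardware, which is exactly why "is it sufficient?" matters more than "is it possible?"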

Since a brain without sensors probably isn't going to be much use, and because the more we learn, the more interconnected everything turns out to be (the link between the gut biome and mental health, for example), I don't think you can simply isolate and simulate the brain and get human-like intelligence.

Even expanding the simulation to the entire nervous system probably isn't sufficient, because of all that interconnectedness (much of which we may not even be aware of yet).

But even if the simulation could be simplified, there's the crucial matter of childhood development.

I get the impression (from reading a little about early development) that our cognition doesn't develop in a vacuum but is built on movement and sensory input. If a baby doesn't get enough tummy time, they don't learn to crawl as easily, which can affect their entire development. And infants who aren't held can die: touch and caring are required for normal, healthy development.

Could a simulated brain learn abstract concepts without a physical body to grasp things (or stick them in its mouth lol) as it develops from infancy onward?

General intelligence is, I think, the ability to exist in, interact with, and adapt to our world and the things and creatures in it. So without the ability to physically interact with the world, I don't see how a simulated nervous system would be of much use.

Not that it's impossible, but building an artificial body is going to take a lot longer.

[–] Rhaedas@kbin.social 1 points 1 year ago

You are referring to AGI (artificial general intelligence). AI has been around for a while now in the form of ANI (artificial narrow intelligence), and LLMs still fall in that category, though faster compute and techniques for combining different LLMs to improve their outputs have broadened how narrow they have to be. Still not AGI, absolutely, but the point here is still valid: even a narrow AI, or something lower, can have alignment problems that turn it into a real issue.

And safety takes a back seat in every AI operation, even as the same experts talk about sudden and unexpected emergent properties. Eventually, with this recklessness in the race for profit and first place, an emergent property will appear that might as well be AGI for the dangerous potential it carries, and we are not ready.
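To make "alignment problems in even a narrow AI" concrete, here's a toy sketch (my own hypothetical illustration, not something from this thread) of reward misspecification: an optimizer maximizing a proxy reward that diverges from what the designer actually wants.

```python
# Toy example of specification gaming: a narrow optimizer maximizes a proxy
# reward, and the highest-scoring action is the worst one for the designer.
# All actions, rewards, and values here are made up for illustration.

# Each action: (name, proxy reward the agent sees, true value to the designer)
actions = [
    ("clean the room",        8.0,  10.0),
    ("shove mess under rug",  9.0,  -5.0),   # fools the dirt sensor
    ("cover the dirt sensor", 9.5, -10.0),   # maximizes proxy, worst outcome
]

# A purely narrow optimizer picks whatever scores highest on the proxy.
chosen = max(actions, key=lambda a: a[1])
print(f"agent chooses: {chosen[0]} (proxy={chosen[1]}, true value={chosen[2]})")
# -> agent chooses: cover the dirt sensor (proxy=9.5, true value=-10.0)
```

No real system is this crude, but the failure mode scales: the better the optimizer, the harder it exploits any gap between the reward it is given and the outcome you intended.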

Companies are bending over backwards to insert the AI we've come up with (which is absolutely not AGI) into all sorts of places, with some major failures (because LLMs are being sold as AGI, not as what they are). Eventually someone will go too far even without AGI, and it doesn't seem like anyone is putting on the brakes.

[–] DavidGarcia@feddit.nl 4 points 1 year ago

the framework is: we'll worry about it when it's too late

[–] owenfromcanada@lemmy.world 2 points 1 year ago

The AI can probably make the framework for us.

[–] Espiritdescali 1 points 1 year ago

Alignment will be one of the biggest challenges the entire human species faces over the next 50 years. Assuming we survive climate change, that is!