this post was submitted on 25 Apr 2024
720 points (95.6% liked)

Programmer Humor


cross-posted from: https://lemmy.ml/post/14869314

"I want to live forever in AI"

you are viewing a single comment's thread
[–] NaibofTabr@infosec.pub 110 points 7 months ago (29 children)

Even if it were possible to scan the contents of your brain and reproduce them in a digital form, there's no reason that scan would be anything more than bits of data on the digital system. You could have a database of your brain... but it wouldn't be conscious.

No one has any idea how to replicate the activity of the brain. As far as I know there aren't any practical proposals in this area. All we have are vague theories about what might be going on, and a limited grasp of neurochemistry. It will be a very long time before reproducing the functions of a conscious mind is anything more than fantasy.

[–] theoretiker@feddit.de 51 points 7 months ago (3 children)

Counterpoint, from a complex systems perspective:

We don't fully know, nor are we able to model, the details of neurochemistry, but we do know some essential features that we can model, for example action potentials in spiking neuron models.

It's likely that the details don't actually matter much. Take traffic jams as an example. There are lots of details involved, driver psychology, the physical mechanics of the cars, etc., but you only need a handful of very rough parameters to reproduce traffic jams in a computer.

That's the thing with "emergent" phenomena, they are less complicated than the sum of their parts, which means you can achieve the same dynamics using other parts.
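
As a concrete illustration of the traffic-jam point, here's a minimal sketch (Python; the specific parameter values are arbitrary and chosen only for illustration) of the Nagel-Schreckenberg cellular automaton. It knows nothing about engines or driver psychology, just a speed limit, a random braking probability, and a car density, yet stop-and-go jams emerge on their own:

```python
import random

def step(road, v_max=5, p_brake=0.3):
    """One update of the Nagel-Schreckenberg traffic cellular automaton.
    road[i] is a car's speed at cell i, or None if the cell is empty."""
    n = len(road)
    new_road = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        gap = 1                              # distance to the car ahead (ring road)
        while road[(i + gap) % n] is None:
            gap += 1
        v = min(v + 1, v_max)                # accelerate toward the speed limit
        v = min(v, gap - 1)                  # brake to avoid the car ahead
        if v > 0 and random.random() < p_brake:
            v -= 1                           # random "human" over-braking
        new_road[(i + v) % n] = v            # move forward v cells
    return new_road

# a 100-cell ring road at ~20% density; jams appear despite the crude rules
road = [0 if random.random() < 0.2 else None for _ in range(100)]
for _ in range(100):
    road = step(road)
print("".join("." if v is None else str(v) for v in road))
```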

[–] tburkhol@lemmy.world 32 points 7 months ago (2 children)

Even if you ignore all the neuromodulatory chemistry, much of the interesting processing happens at sub-threshold depolarizations, depending on millisecond-scale coincidence detection from synapses distributed through an enormous and slowly conducting dendritic network. The simple electrical-signal-transmission model, where an input neuron causes reliable spiking in an output neuron, comes from skeletal muscle, which served as the model for synaptic transmission for decades just because it was a lot easier to study than actual inter-neuronal synapses.

But even that doesn't matter if we can't map the inter-neuronal connections, and so far that's only been done for the roughly 300 neurons of the C. elegans ganglia (i.e., not even a 'real' brain), after a decade of work. We're nowhere close to mapping the neuroscientists' favorite model, Aplysia, which has only 20,000 neurons. Maybe statistics will wash out some of those details by the time you get to humans' 10^11-neuron systems, but considering how poorly current network models do at predicting even simple behaviors, I'm going to say more details matter than we will discover any time soon.
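
For what it's worth, the coincidence-detection part can at least be written down. Here's a toy leaky integrate-and-fire sketch (Python; every number in it is invented purely for illustration, nothing is fitted to real neurons) where two sub-threshold inputs only cause a spike if they arrive within a few milliseconds of each other, before the first one leaks away:

```python
import math

def lif_spikes(input_times_ms, tau_m=10.0, v_thresh=1.0, w=0.6, dt=0.1, t_end=100.0):
    """Toy leaky integrate-and-fire neuron (illustrative parameters only).
    Each input adds a sub-threshold jump w; the membrane potential leaks
    back toward rest with time constant tau_m. Returns spike times in ms."""
    v, t, spikes = 0.0, 0.0, []
    pending = sorted(input_times_ms)
    while t < t_end:
        v *= math.exp(-dt / tau_m)            # passive leak toward rest
        while pending and pending[0] <= t:    # deliver any synaptic inputs due now
            v += w
            pending.pop(0)
        if v >= v_thresh:                     # threshold crossing: fire and reset
            spikes.append(round(t, 1))
            v = 0.0
        t += dt
    return spikes

# two sub-threshold inputs 2 ms apart: their depolarizations overlap -> one spike
print(lif_spikes([20.0, 22.0]))   # a spike fires at ~22 ms
# the same two inputs 30 ms apart: each decays away alone -> no spike
print(lif_spikes([20.0, 50.0]))   # []
```

Which is exactly why a model that only tracks "input spike in, output spike out" misses most of what the dendritic tree is doing.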

[–] DrBob@lemmy.ca 15 points 7 months ago (1 children)

Thanks, fellow traveller, for punching holes in computational stupidity. Everything you said is true, but I also want to point out that the brain is an analog system, so the information in a neuron is infinite relative to a digital system (cf. digitizing analog recordings). As I tell my students, if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

[–] Blue_Morpho@lemmy.world 13 points 7 months ago (7 children)

As I tell my students, if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

So it's not infinite and can be digitized. :)

But to be more serious, digitized analog recordings are a bad analogy, because audio can be digitized and perfectly reconstructed. The Nyquist-Shannon sampling theorem says that a bandlimited signal sampled above twice its highest frequency can be reconstructed exactly from its samples. It's not approximate. It's perfect.

https://en.m.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
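
If you want to see it numerically rather than take the theorem's word for it, here's a small sketch (Python with NumPy; the test signal and rates are arbitrary choices): a signal containing nothing above 3 Hz is sampled at 8 Hz, then the values between the samples are rebuilt with Whittaker-Shannon sinc interpolation. The only error left comes from using a finite window of samples instead of the infinitely many the theorem assumes:

```python
import numpy as np

f_s = 8.0                 # sampling rate in Hz, above twice the 3 Hz bandwidth
T = 1.0 / f_s

def signal(t):
    """A bandlimited test signal: components at 1 Hz and 3 Hz, nothing higher."""
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-400, 401)          # a wide (but finite) window of sample indices
samples = signal(n * T)

def reconstruct(t):
    """Whittaker-Shannon interpolation: a sinc kernel centred on every sample."""
    return np.sum(samples * np.sinc((t - n * T) / T))

t_fine = np.linspace(-1.0, 1.0, 1001)              # points between the samples
recon = np.array([reconstruct(t) for t in t_fine])
print("max reconstruction error:", np.max(np.abs(recon - signal(t_fine))))
```

The printed error is small, it comes purely from the finite window, and it shrinks as the window grows.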

[–] theoretiker@feddit.de 2 points 7 months ago

Yes, the connectome is kind of critical. But other than that, sub-threshold oscillations can be, and are being, modeled. It also doesn't really matter that we are digitizing here. Fluid dynamics is continuous, and we can still study, model, and predict it using finite lattices.

There are some things that are missing, but we very clearly won't need to model individual ions, and there is plenty of other complexity that will not affect the outcome.
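
For a minimal example of that finite-lattice point (Python with NumPy; the numbers are only illustrative): the diffusion equation is continuous in both space and time, but an explicit finite-difference scheme on a grid of 101 cells reproduces its behavior, conserving the total amount of "heat" while an initial spike spreads out:

```python
import numpy as np

# explicit finite differences for the 1D diffusion equation u_t = D * u_xx
D = 0.1                        # diffusion coefficient
nx, dx = 101, 0.01             # 101 lattice cells, each 0.01 wide
dt = 0.4 * dx**2 / D           # respects the stability limit dt <= dx^2 / (2*D)

u = np.zeros(nx)
u[nx // 2] = 1.0 / dx          # a narrow spike of "heat" in the middle cell

for _ in range(200):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # discrete Laplacian
    u = u + dt * D * lap       # forward-Euler step, periodic boundaries

print("total heat (should stay 1.0):", round(float(np.sum(u) * dx), 6))
print("peak height after diffusion:", round(float(np.max(u)), 2))
```

The lattice obviously throws away everything below the cell size, but the dynamics we care about (conservation, spreading) come out right anyway, which is the same bet being made about sub-threshold neural dynamics.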

[–] Yondoza@sh.itjust.works 9 points 7 months ago

I heard a hypothesis that the first human-made consciousness will be an AI algorithm designed to monitor and coordinate other AI algorithms, which makes a lot of sense to me.

Our consciousness is just the monitoring system for all of our body's subsystems. It is most certainly an emergent phenomenon of the interaction and management of different functions competing or coordinating for resources within the body.

To me it seems very likely that the first human-made consciousness will not be designed to be conscious. It also seems likely that we won't be aware of the first consciousnesses because we won't be looking for them. Consciousness won't be the goal of the development that makes it possible.

[–] intensely_human@lemm.ee 2 points 7 months ago (1 children)

I’d say the details matter, based on the PEAR laboratory’s findings that consciousness can affect the outcomes of chaotic systems.

Perhaps the reason evolution selected for enormous brains is that this is the minimum complexity needed to get a system chaotic enough to be sensitive to, and hence swayed by, conscious will.

[–] theoretiker@feddit.de 3 points 7 months ago

PEAR? Where staff participated in trials rather than running double-blind experiments? Whose results could not be reproduced by independent research groups? Who were found to employ p-hacking and cherry-picking of data?

You might as well argue that simulating a human mind is not possible because it wouldn't have a zodiac sign.

[–] Sombyr@lemmy.zip 25 points 7 months ago (2 children)

We don't even know what consciousness is, let alone whether it's technically "real" (as in physical in any way). It's perfectly possible an uploaded brain would be just as conscious as a real brain, because it may be that there is no particular physical thing making us conscious, and consciousness is just a result of our ability to think at all.
Similarly, I've heard people argue a machine couldn't feel emotions because it doesn't have the physical parts of the brain that allow that, so it could only ever simulate them. That argument has the same hole: we don't actually know that we need those parts to feel emotions, or whether the final result is all that matters. If we replaced the whole "this happens, release this hormone to cause these changes in behavior and physical function" with a simple statement that said "this happened, change behavior and function," maybe there isn't really enough of a difference to call one simulated and the other real. Just different ways of achieving the same result.

My point is, we treat all these things, consciousness, emotions, etc, like they're special things that can't be replicated, but we have no evidence to suggest this. It's basically the scientific equivalent of mysticism, like the insistence that free will must exist even though all evidence points to the contrary.

[–] merc@sh.itjust.works 8 points 7 months ago (3 children)

Also, some of what happens in the brain is just storytelling. Like when the doctor hits your patellar tendon, just under your knee, with a reflex hammer: your knee jerks, but the signals telling it to do that don't even make it to the brain. Instead the signal gets to your spinal cord, which "instructs" your knee muscles.

But they've studied similar things and have found that in many cases where the brain isn't involved in making a decision, it makes up a story afterwards that explains why you did something, to make it seem like it was a decision rather than merely a reaction to a stimulus.

[–] arendjr@programming.dev 2 points 7 months ago (3 children)

let alone if it’s technically “real” (as in physical in any way.)

This right here might already be a flaw in your argument. Something doesn’t need to be physical to be real. In fact, there’s scientific evidence that physical reality itself is an illusion created through observation. That implies (although it cannot prove) that consciousness may be a higher construct that exists outside of physical reality itself.

If you’re interested in the philosophical questions this raises, there’s a great summary article that was published in Nature: https://www.nature.com/articles/436029a

[–] Sombyr@lemmy.zip 15 points 7 months ago (17 children)

On the contrary, it's not a flaw in my argument, it is my argument. I'm saying we can't be sure a machine could not be conscious because we don't know that our brain is what makes us conscious. Nor do we know where the threshold is where consciousness arises. It's perfectly possible all we need is to upload an exact copy of our brain into a machine, and it'd be conscious by default.

[–] NaibofTabr@infosec.pub 2 points 7 months ago (2 children)

The problem with this is that even if a machine is conscious, there's no reason it would be conscious like us. I fully agree that consciousness could take many forms, probably infinite forms - and there's no reason to expect that one form would be functionally or technically compatible with another.

What does the idea "exact copy of our brain" mean to you? Would it involve emulating the physical structure of a human brain? Would it attempt to abstract the brain's operations from the physical structure? Would it be a collection of electrical potentials? Simulations of the behavior of specific neurochemicals? What would it be in practice, that would not be hand-wavy fantasy?

[–] Sombyr@lemmy.zip 4 points 7 months ago (1 children)

I suppose I was overly vague about what I meant by "exact copy." I mean all of the knowledge, memories, and an exact map of the state of our neurons at the time of upload being uploaded to a computer, and then the functions being simulated from there. Many people believe that even if we could simulate it so perfectly that it matched a human brain's functions exactly, it still wouldn't be conscious because it's still not a real human brain. That's the point I was arguing against. My argument was that if we could mimic human brain functions closely enough, there's no reason to believe the brain is so special that a simulation could not achieve consciousness too.
And you're right, it may not be conscious in the same way. We have no reason to believe either way that it would or wouldn't be, because the only thing we can actually verify is conscious is ourself. Not humans in general, just you, individually. Therefore, how conscious something is is more of a philosophical debate than a scientific one because we simply cannot test if it's true. We couldn't even test if it was conscious at all, and my point wasn't that it would be, my point is that we have no reason to believe it's possible or impossible.

[–] intensely_human@lemm.ee 2 points 7 months ago (1 children)

Unfortunately the physics underlying brain function is chaotic, meaning infinite (or "maximum") precision is required to ensure two systems evolve to the same later states.

That level of precision cannot be achieved in measuring the state, without altering the state into something unknown after the moment of measurement.

Nothing quantum is necessary for this inability to determine state. Consider the problem of trying to map out where the eight ball is on a pool table, but you can’t see the eight ball. All you can do is throw other balls at it and observe how their velocities change. Now imagine you can’t see those balls either, because the sensing mechanism you’re using is composed of balls of equal or greater size.

Unsolvable problem. Like a box trying to contain itself.
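
Here's how fast that blows up in even the simplest chaotic system (Python; the logistic map is just a stand-in for "any chaotic system", not a model of a brain). Two states that start one part in a trillion apart are completely uncorrelated within a few dozen iterations:

```python
# sensitivity to initial conditions in the logistic map x -> r*x*(1-x), r = 4
r = 4.0
x_a = 0.2
x_b = 0.2 + 1e-12          # a "copy" that is wrong by one part in a trillion

for step in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")

# the gap roughly doubles every iteration, so by step ~40-50 the two
# trajectories are as different as two unrelated states
```

So even a "perfect to twelve decimal places" snapshot stops predicting the original's exact trajectory almost immediately, which is the measurement problem above in miniature.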

[–] Blue_Morpho@lemmy.world 2 points 7 months ago (1 children)

Chaos comes into play as a state evolves. The poster above you talks about copying the state. Once copied, the two states will diverge because of chaos, but that doesn't preclude consciousness. It just means the copy will soon have different thoughts.

[–] Gabu@lemmy.world 3 points 7 months ago (1 children)

That's pseudoscientific bullshit. Quantum physics absolutely does tell us that there is a real physical world. It's incredibly counterintuitive and impossible to fully describe, but does exist.

[–] NaibofTabr@infosec.pub 3 points 7 months ago

Heh, well... I guess that depends on how you define "physical"... if quantum field theory is correct then everything we experience is the product of fluctuations in various fields, including the physical mass of protons, neutrons etc. "Reality" as we experience it might be more of an emergent property, as illusory as the apparent solidity of matter.

[–] nnullzz@lemmy.world 14 points 7 months ago (2 children)

Consciousness might not even be “attached” to the brain. We think with our brains but being conscious could be a separate function or even non-local.

[–] Blue_Morpho@lemmy.world 14 points 7 months ago (5 children)

I read that, and the summary is: "Here are current physical models that don't explain everything. Therefore, because science doesn't have an answer, it could be magic."

We know consciousness is attached to the brain because physical changes in the brain cause changes in consciousness. Physical damage can cause complete personality changes. We also have a whole spectrum of observed consciousness, from the 300-neuron roundworm to the chimpanzee with 28 billion neurons. Chimps have emotions, self-reflection, and everything but full language. We can step backwards from chimps to simpler animals and it's a continuous spectrum of consciousness. There isn't a hard divide; there's just less of it. Humans aren't magical.

[–] nnullzz@lemmy.world 3 points 7 months ago

I understand your point. But science has also shown us over time that things we thought were magic were actually things we can figure out. Consciousness is definitely up there in that category of us not fully understanding it. So what might seem like magic now, might be well-understood science later.

Not able to provide links at the moment, but there are also examples on the other side of the argument that lead us to think that maybe consciousness isn’t fully tied to physical components. Sure, the brain might interface with senses, consciousness, and other parts to give us the whole experience as a human. But does all of that equate to consciousness? Is the UI of a system the same thing as the user?

[–] xhieron@lemmy.world 5 points 7 months ago

Thank you for this. That was a fantastic survey of some non-materialistic perspectives on consciousness. I have no idea what future research might reveal, but it's refreshing to see that there are people who are both very interested in the questions and also committed to the scientific method.

[–] Maggoty@lemmy.world 8 points 7 months ago (1 children)

I think we're going to learn how to mimic a transfer of consciousness before we learn how to actually do one. Basically, we'll figure out how to boot up a new brain with all of your memories intact. But that's not actually a transfer; that's a clone. How many millions of people will we murder before we find out the Zombie Zuckerberg Corp was lying about it being a transfer?

[–] explodicle@sh.itjust.works 3 points 7 months ago (1 children)

What's the difference between the two?

[–] Maggoty@lemmy.world 3 points 7 months ago (1 children)

A. You die and a copy exists

B. You move into a new body

[–] explodicle@sh.itjust.works 3 points 7 months ago (1 children)

Right, how is moving into a new body not dying?

[–] Gabu@lemmy.world 7 points 7 months ago (1 children)

You could have a database of your brain… but it wouldn’t be conscious.

Where is the proof of your statement?

[–] NaibofTabr@infosec.pub 7 points 7 months ago (1 children)

Well, there's no proof; it's all speculative, and even the concept of scanning all the information in a human brain is fantasy, so there isn't going to be a real answer for a while.

But just as a conceptual argument, how do you figure that a one-time brain scan would be able to replicate active processes that occur over time? Or would you expect the brain scan to be done over the course of a year or something like that?

[–] intensely_human@lemm.ee 5 points 7 months ago

You make a functional model of a neuron that can behave over time like real neurons do. Then you get all the synapses and their weights. Those synapses and weights are a starting point, and your neural model is the function that produces subsequent states.

Problem is, brains don't have "clock cycles", at least not as strictly as artificial neural networks do.
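
As a toy sketch of that "weights are the starting point, the model is the update function" idea (Python with NumPy; the network, the weights, and every parameter here are invented for illustration and bear no relation to a real connectome), you end up discretizing time anyway, which is exactly where the "no clock cycles" objection bites:

```python
import numpy as np

rng = np.random.default_rng(0)

# a hypothetical "snapshot": synaptic weights plus the momentary activity
# of a tiny rate-based network (pure toy data, not a measured connectome)
n_neurons = 5
weights = rng.normal(0.0, 0.5, size=(n_neurons, n_neurons))
state = rng.uniform(0.0, 1.0, size=n_neurons)

def step(state, weights, dt=0.001, tau=0.02):
    """Advance the network by dt seconds: leaky dynamics driven by the
    weighted input from all neurons. The underlying equation is continuous
    in time; dt exists only because the computer needs discrete steps."""
    drive = np.tanh(weights @ state)             # total synaptic drive
    return state + dt * (drive - state) / tau    # relax toward the driven value

for _ in range(1000):                            # simulate one second
    state = step(state, weights)
print(np.round(state, 3))
```

A real attempt would replace that fixed dt with an event-driven or adaptive ODE solver, but the shape is the same: snapshot in, trajectory out.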

[–] intensely_human@lemm.ee 3 points 7 months ago

Why would bits not be conscious?
