[–] wagesj45@fedia.io 15 points 1 day ago (3 children)

That's a matter of philosophy and what a person even understands "consciousness" to be. You shouldn't be surprised that others come to different conclusions about the nature of being and what it means to be conscious.

[–] 0x01@lemmy.ml 11 points 1 day ago (2 children)

Consciousness is an emergent property; self-awareness and singularity are generally its key defining features.

There is no secret sauce to LLMs that would make them any more conscious than Wikipedia.

[–] General_Effort@lemmy.world 3 points 1 day ago (1 children)

> secret sauce

What would such a secret sauce look like? Like, what is it in humans, for example?

[–] 0x01@lemmy.ml 2 points 1 day ago (1 children)

Likely a prefrontal cortex, the administrative center of the brain and generally the host of human consciousness, along with a dedicated memory system with learning plasticity.

Humans have systems that mirror LLMs, but LLMs are missing a few key components needed to be precise replicas of human brains, mostly because modeling those components is computationally expensive and the goal is different.

Some specific things the brain has that LLMs don't directly account for are multiple neurochemicals (LLMs favor a single floating-point value per neuron), synaptogenesis, neurogenesis, synapse fire-travel duration and myelination, neural pruning, potassium and sodium channels, downstream effects, etc. We use math and gradient descent to roughly mirror the brain's Hebbian learning, but we do not perform precisely the same operations using the same systems.
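
For illustration, here's a minimal sketch of that last difference in Python. Every name and number below is made up; the point is only that a Hebbian update uses purely local activity, while gradient descent needs a global error signal from a loss:

```python
import numpy as np

# Toy "neuron": a single weight vector. All values here are illustrative,
# not a model of any real biological or artificial system.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = np.array([0.5, -1.0, 2.0])  # input activity
y = w @ x                       # output activity (linear for simplicity)
lr = 0.01

# Hebbian learning: "cells that fire together wire together".
# The update depends only on local pre- and post-synaptic activity.
w_hebbian = w + lr * y * x

# Gradient descent: the update depends on an error signal propagated
# back from a loss, e.g. squared error against a target.
target = 1.0
grad = 2 * (y - target) * x     # d/dw of (y - target)^2
w_sgd = w - lr * grad
```

The Hebbian rule never needs to know what the "right answer" was; gradient descent threads that error back through the same weights.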

In my opinion, having a dedicated module for consciousness would bridge the gap, possibly while accounting for some of the missing characteristics. Consciousness is not an indescribable mystery; we have performed tons of experiments and gathered a whole lot of information on the topic.

As it stands, LLMs are largely reasonable approximations of the language center of the brain, but little more. It honestly may not take much to get what we consider consciousness humming in a system that includes an LLM as a component.

[–] General_Effort@lemmy.world 3 points 1 day ago

> a prefrontal cortex, the administrative center of the brain and generally the host of human consciousness

That's an interesting take. The prefrontal cortex in humans is proportionately larger than in other mammals. Is it implied that animals are not conscious on account of this difference?

If so, what about people who never develop an identifiable prefrontal cortex? I guess we could assume that a sufficient cortex is still there, though not identifiable. But what about people who suffer extensive damage to that part of the brain? Can one lose consciousness without, as it were, losing consciousness (i.e. becoming comatose in some way)?

> a dedicated module for consciousness would bridge the gap

What functions would such a module need to perform? What tests would verify that the module works correctly and actually provides consciousness to the system?

[–] Muaddib@sopuli.xyz -2 points 1 day ago (1 children)

Consciousness comes from the soul, and souls are given to us by the gods. That's why AI isn't conscious.

[–] 0x01@lemmy.ml 4 points 1 day ago (1 children)

How do you think god comes into the equation? What do you think about split-brain syndrome, in which people demonstrate having multiple consciousnesses? If consciousness is based on a metaphysical property, why can it be altered with chemicals and drugs? What do you think happens during a lobotomy?

I get that evidence-based thinking is generally not compatible with religious postulates, but just throwing up your hands and saying consciousness comes from the gods is an incredibly weak position to hold.

[–] Muaddib@sopuli.xyz -2 points 1 day ago

I respect the people who say machines have consciousness, because at least they're consistent. But you're just like me, and won't admit it.

[–] Sixtyforce@sh.itjust.works 5 points 1 day ago (3 children)

If it were actually AI, sure.

This is an unthinking machine algorithm chewing through mounds of stolen data.

What is thought?

[–] wagesj45@fedia.io 9 points 1 day ago

That is certainly one way to view it. One might say the same about human brains, though.

[–] surewhynotlem@lemmy.world 8 points 1 day ago

To be fair, so am I.

[–] Vanilla_PuddinFudge@infosec.pub -2 points 1 day ago* (last edited 1 day ago) (2 children)

Are we really going to play devil's advocate for the idea that avoiding society and asking a language model for life advice is okay?

[–] Aatube@kbin.melroy.org 12 points 1 day ago (1 children)

No, but thinking about whether it's conscious is an independent thing.

[–] thiseggowaffles@lemmy.zip 7 points 1 day ago (1 children)

It's not playing devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or the other. Are you sure that you yourself are not just fancy auto-complete? We're dealing with things like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus than we were before.

And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter and would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even "artificial" to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.

[–] tabular@lemmy.world -2 points 1 day ago (1 children)

The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we'd know we had it 😉

[–] thiseggowaffles@lemmy.zip 6 points 1 day ago (1 children)

What do you mean? I don't follow how the two are related. What does being fancy auto-complete have to do with having an experience?

[–] tabular@lemmy.world 0 points 1 day ago (1 children)

It's an answer to whether one can be sure they are not just a fancy autocomplete.

More directly: we can't be sure we are not some autocomplete program in a fancy computer, but since we're having an experience, we are conscious programs.

[–] thiseggowaffles@lemmy.zip 7 points 1 day ago* (last edited 1 day ago)

When I say "how can you be sure you're not fancy auto-complete", I'm not talking about being an LLM or even simulation hypothesis. I'm saying that the way that LLMs are structured for their neural networks is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is that how do you know that the weights in your own nervous system aren't causing any given stimuli to always produce a specific response based on the most weighted pathways in your own nervous system? That's how auto-complete works. It's just predicting the most statistically probable responses based on the input after being filtered through the neural network. In our case it's sensory data instead of a text prompt, but the mechanics remain the same.

And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness". There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically, what I'm saying is that this is a new field and it's still the wild west. Most of these LLMs are still black boxes that we are only barely starting to understand, just as we are only barely starting to understand our own neurology and consciousness.