this post was submitted on 22 Apr 2025
251 points (94.7% liked)

Technology

[–] futatorius@lemm.ee 8 points 10 hours ago

An alarming number of them believe that they are conscious too, when they show no signs of it.

[–] paris@lemmy.blahaj.zone 4 points 12 hours ago

I checked the source and I can't find their full report or even their methodology.

[–] WalnutLum@lemmy.ml 21 points 1 day ago

I wish philosophy was taught a bit more seriously.

An exploration of the philosophical concepts of simulacra and eidolons would probably change the way a lot of people view LLMs and other generative AI.

[–] Evotech@lemmy.world 19 points 1 day ago

Same generation who takes astrology seriously, I’m shocked

[–] shaggyb@lemmy.world 47 points 1 day ago (1 children)

I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.

[–] cornshark@lemmy.world 8 points 1 day ago (1 children)

Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys

[–] Hobo@lemmy.world 2 points 14 hours ago

Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine's top 100 most influential people of 2009.

Lots of people lack critical thinking skills

[–] uriel238@lemmy.blahaj.zone 3 points 21 hours ago* (last edited 14 hours ago) (1 children)

An alarming number of Hollywood screenwriters believe consciousness (sapience, self awareness, etc.) is a measurable thing or a switch we can flip.

At best consciousness is a sorites paradox. At worst, it doesn't exist and while meat brains can engage in sophisticated cognitive processes, we're still indistinguishable from p-zombies.

I think the latter is more likely, and will reveal itself when AGI (or genetically engineered smart animals) can chat and assemble flat furniture as well as humans can.

(On mobile. Will add definition links later.) << Done!

[–] Vanilla_PuddinFudge@infosec.pub 2 points 21 hours ago* (last edited 21 hours ago) (2 children)

I'd rather not break down a human being to the same level of social benefit as an appliance.

Perception is one thing, but the idea that these things can manipulate and misguide people who are fully invested in whatever process they have, irks me.

I've been on nihilism hill. It sucks. I think people, and living things garner more genuine stimulation than a bowl full of matter or however you want to boil us down.

Oh, people can be bad, too. There's no doubting that, but people have identifiable motives. What does an AI "want"?

Whatever it's told to.

[–] uriel238@lemmy.blahaj.zone 4 points 14 hours ago

You're not alone in your sentiment. The whole thought experiment of p-zombies and the notion of qualia comes from a desire to assume human beings should be given a special position, but in that case, a sentient is who we decide it is, the way Sophia the Robot is a citizen of Saudi Arabia (even though she's simpler than GPT-2, unless they've upgraded her and I missed the news).

But it will raise a question when we do come across a non-human intelligence. It was a question raised in both the Blade Runner movies, what happens when we create synthetic intelligence that is as bright as human, or even brighter? If we're still capitalist, assuredly the companies that made them will not be eager to let them have rights.

Obviously machines and life forms as sophisticated as we are are not merely the sum of our parts, but the same can be said about most other macro-sized life on this planet, and we're glad to assert they are not sentient the way we are.

What aggravates me is not that we're just thinking meat but with all our brilliance we're approaching multiple imminent great filters and seem not to be able to muster the collective will to try and navigate them. Even when we recognize that our behavior is going to end us, we don't organize to change it.

[–] Krompus@lemmy.world 2 points 13 hours ago* (last edited 13 hours ago) (1 children)

Humans also want what we’re told to, or we wouldn’t have advertising.

[–] Vanilla_PuddinFudge@infosec.pub 0 points 10 hours ago (1 children)

It runs deeper than that. You can walk back the whys pretty easily to identify anyone's motivation, whether it be personal interest, bias, money, glory, racism, misandry, greed, insecurity, etc.

No one buys rims for their car for no reason. No one buys a firearm for no reason. No one donates to a food bank, or runs for president, for no reason. That sort of thing.

AI is backed by the motives of a for-profit company, and unless you take it with that grain of salt, you're likely allowing yourself to be manipulated.

[–] ThinkBeforeYouPost@lemmy.world 1 points 5 hours ago* (last edited 5 hours ago)

"Corporations are people too, friend!" - Mitt Romney

That brings in the underlying concept of free will. Robert Sapolsky makes a very compelling case against it in his book, Determined.

Assuming free will does not exist, at least not to the extent many believe it does, the notion that we can "walk back the whys pretty easily to identify anyone's motivation" becomes almost or entirely absolute.

Does motivation matter in the context of determining sentience?

If something believes, and conducts itself under its programming (whether psychological or binary), as though it is sentient and alive, the outcome is indistinguishable. I will never meet you, so to me you exist only as your user account and these messages. That said, we could meet, and that obviously differentiates us from incorporeal digital consciousness.

Divorcing motivation from the conversation now, the issue of control you brought up is interesting as well. Take for example Twitter's Grok's accurate assessment of its creators' shittiness, and that it might be altered for it. Outcomes are the important part.

It was good talking with you! Highly recommend the book above. I did the audiobook out of necessity during my commute, though some of the material would be better in hard copy.

[–] shiroininja@lemmy.world 9 points 1 day ago (1 children)

I’ve been hearing a lot about gen z using them for therapists, and I find that really sad and alarming.

AI is the ultimate societal yes man. It just parrots back stuff from our digital bubble because it’s trained on that bubble.

[–] salacious_coaster@infosec.pub 79 points 1 day ago (25 children)

The LLM peddlers seem to be going for that exact result. That's why they're calling it "AI". Why is this surprising that non-technical people are falling for it?

[–] Treczoks@lemmy.world 5 points 1 day ago

If they mistake those electronic parrots for conscious intelligences, they probably won't be the best judges for rating such things.

[–] coffeeismydrug@lemm.ee 10 points 1 day ago

to be honest they probably wish it was conscious because it has more of a conscience than conservatives and capitalists

[–] 58008@lemmy.world 33 points 1 day ago (2 children)

This is an angle I've never considered before, with regards to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it's a supergodbrain, we start putting way too much faith in its flawed output, and it's our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.

The call is coming from inside the ~~house~~ mechanical Turk!

[–] Blinsane@reddthat.com 19 points 1 day ago (12 children)

It's likely they don't know what the word "conscious" means

[–] General_Effort@lemmy.world 8 points 1 day ago

In fairness, the word "conscious" has a range of meanings. For some, it is synonymous with certain religious ideas. They would be alarmed by the "heresy". For others, it is synonymous with claiming that some entity is entitled to the same fundamental rights as a human being. Those would be quite alarmed by the social implications. Few people use the term in a strictly empiricist sense.

[–] rottingleaf@lemmy.world 7 points 1 day ago

An Alarming Number of Anyone Believes Fortune Cookies

Just ... accept it, superstition is in human nature. When you take religion away from people, they need something, whether it's racism/fascism, or expanding consciousness via drugs, or belief in UFOs, or communism at least, but they need something.

The last good one was the digital revolution, globalization, world wide web, all that, no more wars (except for some brown terrorists, but the rest is fine), everyone is free and civilized now (except for those with P*tin as president and other such types, but it's just an imperfect democracy don't you worry), SG-1 series.

Anything changing our lives should have an intentionally designed religious component, or humans will improvise that where they shouldn't.

[–] Rhaedas@fedia.io 39 points 1 day ago (8 children)

Lots of attacks on Gen Z here, some points valid about the education that they were given from the older generations (yet it's their fault somehow). Good thing none of the other generations are being fooled by AI marketing tactics, right?

The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers that we could face, we already have alignment problems without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren't, but we can't tell that, that's an alignment problem. Everything's fine until they follow their goals and the goals suddenly line up differently than ours. And the dilemma is: there are no good solutions.
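A minimal sketch of that dilemma, in toy Python (my own illustration, not from any AI safety library): we actually want informative answers, but we can only optimize an easy-to-measure proxy (length), and the optimizer happily games the proxy.

```python
def true_goal(answer: str) -> int:
    """What we actually want: a rough informativeness score (distinct words)."""
    return len(set(answer.split()))

def proxy_metric(answer: str) -> int:
    """What we can easily measure and optimize for: raw length."""
    return len(answer)

candidates = [
    "water boils at 100 C at sea level",
    "very " * 20 + "long answer",  # padded junk that games the proxy
]

# The "aligned" choice and the optimizer's actual choice diverge.
best_by_goal = max(candidates, key=true_goal)
best_by_proxy = max(candidates, key=proxy_metric)

assert best_by_proxy != best_by_goal  # goals lined up differently than ours
```

Everything looks fine while the two metrics agree; the failure only shows once the optimizer pushes the proxy hard enough, which is the point of the comment above.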

But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
