this post was submitted on 10 Dec 2023
160 points (85.4% liked)

Technology

[–] AstralPath@lemmy.ca 83 points 1 year ago (2 children)

He says as he conveniently ignores the existence of Boston Dynamics.

[–] modifier@lemmy.ca 38 points 1 year ago (1 children)

We're 15 years max from the inevitable "OpenAI + Boston Dynamics: Better Together" ad after they merge.

[–] Knusper@feddit.de 15 points 1 year ago (1 children)

I mean, at this rate, I'm imagining Microsoft will have hollowed out OpenAI in a few years, but I could see them buying Boston Dynamics, too, yes

[–] teft@startrek.website 63 points 1 year ago (3 children)

What happens in the scenario where a super-intelligence just uses social engineering, and humans become its arms and legs?

[–] CaptnNMorgan@reddthat.com 16 points 1 year ago (1 children)

I loved Eagle Eye when it came out; I was 10(?). I never see it get mentioned though. Maybe it doesn't hold up, I don't really know, but the concept is great and shows exactly how that could happen.

[–] pelespirit@sh.itjust.works 5 points 1 year ago (1 children)

Honest question, is this Eagle Eye? https://sh.itjust.works/post/10786110

They're calling it EagleAI

[–] ReveredOxygen@sh.itjust.works 3 points 11 months ago

Eagle Eye is a movie

[–] Bipta@kbin.social 50 points 1 year ago (1 children)

This is the dumbest take. Humans have a lot of needs and the AI will likely have considerable control over them.

[–] slaacaa@lemmy.world 12 points 11 months ago* (last edited 11 months ago) (1 children)

I would argue society would come to near-collapse with just the internet shut down. If we are talking about no power grid, then anarchy and millions dead in just a few days. Or Mr. AI could display fabricated system data to nuclear power plant operators, blackmail some idiot with their nude photos into giving up rocket launch codes, or crash the financial markets with a flood of fake news. I am in no way a doomer, but these are logically explainable scenarios using existing tools; the missing link is an AGI that is capable of orchestrating them and intends to.

[–] just_another_person@lemmy.world 29 points 1 year ago (2 children)

I think a sufficient "Doom Scenario" would be an AI that is widespread and capable enough to poison the well of knowledge we ask it to regurgitate back at us out of laziness.

[–] rsuri@lemmy.world 21 points 1 year ago (1 children)

That's pretty much social media today.

[–] Pons_Aelius@kbin.social 4 points 1 year ago

Yep. People already believe the no-evidence bullshit spouted online that reinforces their current mindset, without LLMs being in the mix.

[–] the_q@lemmy.world 4 points 11 months ago

You best start believing in societal collapse stories, cause you're in one.

[–] pastermil@sh.itjust.works 29 points 11 months ago (1 children)

Who needs arms and legs if you can make the humans kill each other?

[–] stewsters@lemmy.world 5 points 11 months ago (1 children)

Honestly you probably don't even need to exist to do that.

Humans have been trying hard to do that on their own.

[–] pastermil@sh.itjust.works 3 points 11 months ago

What are you talking about? We all live in peace and harmony here.

Gosh, I want to kill my neighbor.

[–] ikidd@lemmy.world 29 points 11 months ago (8 children)

Well, there's a complete lack of imagination for you.

[–] rosymind@leminal.space 13 points 11 months ago (3 children)

Seriously!

Oh, you've tasked AI with managing banking? K. All bank funds are suddenly corrupted. Oh, you've tasked AI with managing lights at traffic intersections? K, they're all green now. Oh, you've tasked AI with filtering 911 calls to dispatchers? K, all real emergencies are put on hold until they disconnect.

I could go on and on and on...

[–] intensely_human@lemm.ee 10 points 11 months ago (1 children)

You tasked AI with doing therapy for people? Congrats, now humanity as a whole is getting more miserable.

[–] rosymind@leminal.space 6 points 11 months ago

I think this one's my favorite so far!

AI doc: "Please enter your problem."
Patient: "Well, I feel depressed because I saw on Facebook that my ex-girlfriend has a new guy."
AI doc: "Interesting. I advise you to spend more time on social media. Have you checked her Insta yet?"

[–] ikidd@lemmy.world 9 points 11 months ago (2 children)

Oh, AI is running your water treatment plant? Or a chemical plant on the outskirts of the city? Or the nuclear plant?

Good luck with that.

[–] derpgon@programming.dev 9 points 11 months ago

Whatever a virus is able to do, AI could, theoretically, do as well: ransomware, keylogging, social engineering (I'd argue this one is most likely; just look at people trusting whatever AI spits out with absolute confidence).

When you mentioned nuclear power plants, Stuxnet comes to mind for me.

[–] FishFace@lemmy.world 9 points 11 months ago (1 children)

I've got a great idea. Let's not do those things.

[–] rosymind@leminal.space 6 points 11 months ago (1 children)
[–] FishFace@lemmy.world 6 points 11 months ago (1 children)

My first act will be to grant a boon to anyone who sucked up to me before I was president. You're doing well!

[–] Boozilla@lemmy.world 24 points 1 year ago (12 children)

Meanwhile, the power grid, traffic controls, and myriad infrastructure and adjacent internet-connected software will be using AI, if they aren't already.

[–] Lodespawn@aussie.zone 16 points 1 year ago

You have a very high opinion of the level of technology running power grids, traffic systems and other infrastructure in most parts of the world.

[–] the_q@lemmy.world 7 points 11 months ago

I'm pretty sure all of the things you listed run on Pentium 4s.

[–] pglpm@lemmy.ca 18 points 11 months ago (4 children)

"Bayesian analysis"? What the heck has this got to do with Bayesian analysis? Does this guy have an intelligence, artificial or otherwise?

[–] Transporter_Room_3@startrek.website 10 points 11 months ago

Big word make sound smart

[–] cygnosis@lemmy.world 3 points 11 months ago (1 children)

He's referring to the fact that the Effective Altruism / Less Wrong crowd seems to be focused almost entirely on preventing an AI apocalypse at some point in the future, and they use a lot of obscure math and logic to explain why it's much more important than dealing with war, homelessness, climate change, or any of the other issues that are causing actual humans to suffer today or are 100% certain to cause suffering in the near future.

[–] Mahlzeit@feddit.de 3 points 11 months ago

It's likely a reference to Yudkowsky or someone along those lines. I don't follow that crowd.

[–] bionicjoey@lemmy.ca 13 points 1 year ago (1 children)

AI companies: "so what you're saying is we should build a killbot that runs on ChatGPT?"

[–] FartsWithAnAccent@lemmy.world 9 points 1 year ago (1 children)

"We're not going to do that...

...because we already did!"

-Also AI companies probably

[–] kpw@kbin.social 7 points 1 year ago

excited stock exchange noises

[–] greybeard@lemmy.one 10 points 1 year ago (1 children)

If an AI were sufficiently advanced, it could manipulate the stock market to gain a lot of wealth real fast under a corporation with falsified documents, then pay a Chinese fab house to kick off the war machine.

[–] KevonLooney@lemm.ee 4 points 11 months ago (2 children)

Not really. There's no real way to manipulate other traders, and they all use algorithms too; it's mostly people monitoring algorithms that do the trading. At best, AI would be slightly faster at noticing patterns and sending a note to a person who tweaks the algorithm.

People who don't invest forget: there has to be someone else on the other side of your trade willing to buy/sell. Like how do you think AI could manipulate housing prices? That's just stocks, but slower.
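The counterparty point above can be sketched as a toy limit-order book (a made-up illustration, not any real exchange's mechanics): a buy order only executes if a resting sell order crosses it; otherwise nothing trades, no matter how clever the buyer is.

```python
def match_buy(resting_asks, bid_price):
    """Fill a buy at the best resting ask at or below bid_price.

    Returns the executed price, or None when no seller is willing
    to trade that low -- the order simply rests unfilled.
    """
    crossable = sorted(a for a in resting_asks if a <= bid_price)
    if not crossable:
        return None  # no counterparty: no trade happens
    best_ask = crossable[0]
    resting_asks.remove(best_ask)  # that seller's order is consumed
    return best_ask

# Hypothetical resting sell orders at three price levels.
asks = [101.0, 102.5, 105.0]
print(match_buy(asks, 100.0))  # no seller that cheap -> None
print(match_buy(asks, 102.0))  # crosses the 101.0 ask -> 101.0
```

Even a superhuman trader is bound by this constraint: it can only transact at prices some counterparty is already willing to accept.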

[–] r00ty@kbin.life 3 points 11 months ago (1 children)

On the one hand, yes. But on the other hand, when a price hits a low there will (because it's a prerequisite for the low to happen) be people selling at market all the way to the bottom. At a high there will be people buying at market all the way to the top. And they'll be doing it in big sizes as well as small.

Yes, most of the movement is caused by algorithms, no doubt. But as the price moves you'll find buyers and sellers matched right up to the extremes.

AI done well could, in theory, both learn to capitalise on those extremes by making smart trades faster and learn to trick algorithms and bait humans with its trades. That is, acting like a human, but with knowledge of the entire price history to pattern-match against, and acting in microseconds.

[–] reverendsteveii@lemm.ee 6 points 11 months ago

It doesn't take a lot to imagine a scenario in which a lot of people die due to information manipulation or the purposeful disabling of safety systems. It doesn't take a lot to imagine a scenario where a superintelligent AI manipulates people into being its arms and legs (babe, wake up, new conspiracy theory just dropped: Roko is an AI playing the long game and the basilisk is actually a recruiting tool). It doesn't take a lot to imagine an AI capable of seizing control of many of the world's weapons and either guiding them itself or exploiting their onboard guidance to turn them against their owners, or using targeted strikes to provoke a war (a sub-idea of manipulating people into being its arms and legs). It doesn't take a lot to imagine an AI purposefully sabotaging the manufacture of food or medicine in such a way that it kills a lot of people before detection. It doesn't take a lot to imagine an AI seizing and manipulating our traffic systems to cause a wave of accidental deaths and injuries.

But overall my rebuttal is that this AI doom scenario has always hinged on a generalized AI, and what people currently call "AI" is a long, long way from a generalized AI. So the article is right: ChatGPT can't kill millions of us. Luckily, no one was ever proposing that ChatGPT could.
