this post was submitted on 16 May 2025
526 points (94.4% liked)

[–] jsomae@lemmy.ml 1 points 11 hours ago (1 children)

I think you're talking about accelerationism. IMO, the main problem with unrestrained AI growth is that if AI turns out to be as good as the hype says it is, then we'll all be dead before any revolution occurs.

[–] LovableSidekick@lemmy.world 1 points 4 hours ago (1 children)

The trick is to judge things on their own merit and not on the hype around them.

[–] jsomae@lemmy.ml 1 points 3 hours ago

In that case, you should know that Geoff Hinton (the guy whose lab kicked off the whole AI revolution last decade) quit Google in order to warn about the existential risk of AI. He believes there's at least a 10% chance that it will kill us all within 30 years. Ilya Sutskever, his former student and co-founder of OpenAI, shares that concern, which is why he quit OpenAI and founded Safe Superintelligence (yes, that basic HTML document really is their homepage) to help solve the alignment problem.

You can also find popular rationalist AI pundits like gwern, acx, yudkowsky, etc. voicing similar concerns, with P(doom) estimates ranging from low to laughably high.