Statement on Superintelligence taken from https://superintelligence-statement.org/

Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

Statement: We call for a prohibition on the development of superintelligence, not lifted before there is

  1. broad scientific consensus that it will be done safely and controllably, and
  2. strong public buy-in.
[–] Twongo@lemmy.ml 7 points 1 day ago (2 children)

lol, lmao even.

AI developers themselves don't know what their creations are doing, so they just give them "guardrails," and the whole process of advancing the technology comes down to vibes or throwing more computing power at it. The latter has proved to plateau really quickly.

This article looks like it's part of the illusory AGI hype. The only realistic outcome I see here is that researchers hit a brick wall, the technology plateaus or even degrades due to AI cannibalization, and a multi-trillion-dollar industry collapses, leaving the economy in shambles.

[–] Perspectivist@feddit.uk 2 points 1 day ago

You’re completely missing the point. It honestly sounds like you want them to keep pursuing AGI, because I can’t see any other reason why you’d be mocking the people arguing that we shouldn’t.

How close to a nuclear bomb do researchers need to get before it’s time to hit the brakes? Is it really that unreasonable to suggest that maybe we shouldn’t even be trying in the first place? I don’t understand where this cynicism comes from. From my perspective, these people are actually on the same side as the anti-AI sentiment I see here every day, yet they’re still being ridiculed just for having the audacity to even consider that we might actually stumble upon AGI, and that doing so could be the last mistake humanity ever makes.

[–] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

There’s a distinction between Tech Bro transformer-scaling hype and legitimate AGI research, which existed well before the former.

Things are advancing extremely rapidly, and an AGI “cap” would be a good idea if it could somehow materialize. But that honestly has nothing to do with Sam Altman and that tech circle, at least not with the tech they currently have.