this post was submitted on 22 Sep 2024
Futurology
To imagine the threat posed by AI, consider a picture of the Milky Way, and a second picture of the Milky Way labeled as taken 10 years later. The second picture has a hole in it 10 light years in radius, centered on the Earth.
We need to know how to deal with a potentially rogue AI before it exists, because a rogue AI can win on the time scale of seconds, before anyone knows it's a threat.
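To make the scenario above concrete: a hole 10 light years in radius appearing over 10 years implies a destruction front expanding at exactly the speed of light. A quick back-of-the-envelope check (the variable names here are just illustrative):

```python
# Sanity check on the Milky Way scenario: how fast must the
# destruction front expand to carve a 10-light-year hole in 10 years?

C = 299_792_458                      # speed of light, m/s (exact by definition)
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIGHT_YEAR_M = C * SECONDS_PER_YEAR  # one light year, in meters

radius_ly = 10   # radius of the hole, in light years
years = 10       # time elapsed between the two pictures

front_speed = (radius_ly * LIGHT_YEAR_M) / (years * SECONDS_PER_YEAR)
print(front_speed / C)  # fraction of light speed
```

The point of the thought experiment is exactly this ratio of 1.0: the threat is something spreading outward as fast as physics allows.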
The inefficiency of the system isn't relevant to the discussion.
How far away the threat is is irrelevant to the discussion.
The limits of contemporary generative neural networks are irrelevant to the discussion.
The problems of copyright and job displacement are irrelevant to the discussion.
The abuses of capitalism, while important, are not relevant to the discussion. If your response to this news is "We just need to remove capitalism," dunk your head in a bucket of ice water and keep it there until you either realize you're wrong or can explain how capitalism is relevant to a grey goo scenario.
I was worried about the current problems with AI (everyone losing their jobs) a decade ago, and everyone thought I was stupid for worrying about it. Now we're here, and it's possibly too late to stop. Today, I am worried about AI destroying the entire universe. Hint: forbidding AI development, at any level, isn't going to work.
Things to look up: paperclip maximizer, AI safety, Eliezer Yudkowsky, Robert Miles, transhumanism, outcome pump, and several other things that I can't remember and don't have the time to look up.
I'm sure this will get downvoted, oh well. Guess I'll die.