this post was submitted on 09 Dec 2023

Futurology


[–] blindsight@beehaw.org 2 points 10 months ago (1 children)

I feel like calling it a "metric" is borderline clickbait, since P(doom) isn't measurable. It isn't even a proper Bayesian estimate.

Still, it's an interesting article and discussion point.

I'm in the low-P(doom) camp. There are a lot of vested interests in maintaining the status quo, and there's no reason to expect AI to develop a benefit/cost function that leads to the destruction of humanity or civilization.

If anything, I think we'll end up in an AI-driven utopia, where most of the work necessary to live is done by machines.

[–] CanadaPlus 3 points 10 months ago

There are a lot of vested interests in maintaining the status quo, and there’s no reason to expect AI to develop a benefit/cost function that leads to the destruction of humanity or civilization.

I worry about an AI whose cost function favours only a small subset of humanity. There's also the possibility of one that's simply broken, I guess.