this post was submitted on 29 Oct 2023
12 points (73.1% liked)

Futurology

[–] Lugh 6 points 11 months ago (2 children)

This is a fascinating read. It was interesting to hear him say that all of the current problems with factuality and hallucination are solvable, and that he sees the route to solving them.

He was less convincing when discussing how to constrain the power of an AGI that is smarter than us. His solution is to make sure it understands ethics. That idea has plenty of weaknesses, and the interviewer pressed him on it several times; he seemed to fudge his responses.

It's interesting to look at people's past predictions. He said he made his 2028 prediction for AGI back in 2008. Not only has that date not changed, but he was also able to point out that everything he expected over the 2008-2023 timeframe has gone to schedule. That makes the 2028 prediction more credible.

[–] Rhaedas@kbin.social 4 points 11 months ago* (last edited 11 months ago)

We've been predicting, in both science and sci-fi, that we won't be ready for AGI/ASI emergence for decades. That still holds true, even as the potential grows. If we're really lucky, AGI isn't possible, but I think powerful AI tools like LLMs will end up being just as dangerous through their misuse by power seekers and profiteers. We've seen this coming, and even though the actual people working on these systems are talking about the dangers, we're barreling forward without a care.

[–] RIPandTERROR@lemmy.blahaj.zone 1 points 11 months ago

I really wish there was a good !remindme bot