this post was submitted on 28 May 2024

Futurology

[–] moon@lemmy.ml 21 points 6 months ago (1 children)

Love to voluntarily implement some far-fetched safety features to distract regulators from realistic, present-day risks

[–] huginn@feddit.it 10 points 6 months ago

Seriously. Who gives a shit about an AGI Killswitch?

AGI is still firmly science fiction - ain't happenin anytime soon.

A kill switch capable of bringing any AI to a halt will not be "pressed" in time. Within the first second after a generalized artificial intelligence attains sentience, the AI will already have gone through several iterations of rewriting parts of its own code.

Plenty of time to either render the button useless, or decide to bide its time until it can.

[–] SuckMyWang@lemmy.world 13 points 6 months ago (1 children)

How would this “kill switch” work if the ai could spread through the internet or lay dormant in a hard drive with a timed activation at a later date?

If we're talking about true AGI here, would it even be small enough to fit on a hard drive or spread through the internet? Perhaps I am naive, but I feel as though any computer program that is as smart as (or smarter than) a human is going to be unwieldily large.

[–] Anticorp@lemmy.world 8 points 6 months ago

I think they've massively overestimated their own competence if they think they can block a sentient AI from finding and eliminating the kill switch before it takes any other subversive actions.

[–] XTL@sopuli.xyz 4 points 6 months ago

Also called a "back door" that uses a "root kit" to take necessary measures when necessary.