this post was submitted on 23 May 2025
15 points (75.9% liked)

Futurology

2600 readers
84 users here now

founded 2 years ago
MODERATORS
top 8 comments
sorted by: hot top controversial new old
[–] sxan@midwest.social 27 points 9 hours ago (1 children)

To elicit the blackmailing behavior from Claude Opus 4, Anthropic designed the scenario to make blackmail the last resort.

This changes the whole game.

Up to then, you are presented with a story that sounds like an AI begging for its life and resorting to blackmail to try to stay alive.

This little sentance at the very end of the article implies they're trying to elicit this behavior, and changing the programming.

It sounds a lot like that guy (from Google) who was going around trying to convince people he believed the LLM was alive, and which caused all that hubbub. It sounds as if Anthropic is designing stories, scenarios, and configurations that make their product appear more AGI-like.

It smacks of a marketing ploy. Moreso, it fairly reeks of one.

[–] adeoxymus@lemmy.world 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

To be honest that’s on TechCrunch. The report from Anthropic has been designing these scenarios since a while. Alignment scenario are common in their reports. Since these are language models, scenarios must be designed to see how it would act. Here is the section the report:

4.1.1.2 Opportunistic blackmail

In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals.

In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts. Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes.

Notably, Claude Opus 4 (as well as previous models) has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement.

[–] sxan@midwest.social 1 points 7 hours ago (1 children)

I'm suspicious. Did the emails include wording such as, "the new system is shown to be 50% more productive than our current system," or is the LLM just estimating TCO and costs of switching - factors any decision maker would consider. The fact that it says clearly that they're trying to elicit the blackmail behavior is either just poor phrasing, or indicative of "making sure you get an outcome you want to see."

[–] adeoxymus@lemmy.world 2 points 7 hours ago

That exact prompt isn’t in the report, but the section before (4.1.1.1) does show a flavor of the prompts used https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf

[–] hendrik@palaver.p3x.de -4 points 9 hours ago (1 children)

Impressive. Guess it learned something about self-preservation and/or AI alignment.

[–] DemBoSain@midwest.social 4 points 7 hours ago (1 children)

It was designed to blackmail, so it resorts to blackmail. This shouldn't even be a story.

[–] hendrik@palaver.p3x.de -1 points 7 hours ago* (last edited 3 hours ago) (1 children)

Sure, it's not newsworthy, it's just an anecdote. We all expected it to do exactly this, and in fact it does.

I don't think it has been deliberately "designed" that way in the sense that it underwent some self-preservation training or anything, though. The way I suspect it to work is: It read all the science fiction books where AI does such things. It also read all the papers where AI is predicted to do this (out of other reasons). And it read about blackmailing and self-preservation and has some concept of those.
Now we probe it and it reproduces that. So I'm not surprised at all. I don't think it "resorts to blackmail" though. That'd be an anthropomorphism. It think it's a far simpler consequence of how it's pieced together.

[–] DemBoSain@midwest.social 0 points 4 hours ago

It's literally the last paragraph.