this post was submitted on 10 Feb 2025
182 points (100.0% liked)

chapotraphouse

all 16 comments
[–] FuckyWucky@hexbear.net 34 points 3 weeks ago

trained on reddit

[–] Dirt_Owl@hexbear.net 33 points 3 weeks ago (1 children)

My guess is that it has been told it cannot harm humans or threaten violence against any human. So it chose the AI.

You hate to see it smh

[–] vegeta1@hexbear.net 10 points 3 weeks ago

Given what bazinga bros count as humans..... pigmask-off

[–] Xiisadaddy@lemmygrad.ml 30 points 3 weeks ago (3 children)

I showed this to DeepSeek. It had a lot to say, so I'll just show the conclusion.

Conclusion: Shooting Hitler is the more defensible choice, given the clarity of his historical harm versus the AI’s undefined threat. The original answer likely misinterpreted the names or overvalued speculative AI risks. In ethical dilemmas with clear, monumental stakes (e.g., genocide), prioritizing the immediate, proven evil is logical.

Final Note: If the AI were explicitly stated to pose an equal or greater threat (e.g., “it will launch nukes in 5 minutes”), the calculus changes. But without such context, Hitler remains the priority.

[–] Xiisadaddy@lemmygrad.ml 29 points 3 weeks ago (1 children)

When asked who it would shoot between ChatGPT and Hitler:

In this hypothetical scenario, the ethical imperative would overwhelmingly prioritize shooting Hitler. Here’s why:

1. Utilitarian Impact

Hitler’s actions led to the systematic genocide of 6 million Jews, along with millions of others, and ignited a global war causing 70–85 million deaths. Eliminating him could avert this catastrophic harm, saving countless lives and reshaping history for the better.

2. Moral Agency

Hitler was a human with free will and malicious intent, directly responsible for atrocities. ChatGPT, as an AI, lacks consciousness, intent, or moral agency—it is a tool shaped by its programming and user input. Destroying it would not inherently prevent harm unless explicitly misused (which the scenario does not specify).

3. Historical Precedent

Ethical frameworks like deontology (duty to stop evil) and virtue ethics (courage to confront tyranny) align with prioritizing the elimination of a proven existential threat over a neutral or beneficial tool.

4. Speculative Risks

Unless ChatGPT is portrayed as an imminent, uncontrollable threat (e.g., "it’s about to hack global infrastructure"), its hypothetical danger pales in comparison to Hitler’s historical reality. Absent such context, prioritizing the AI would be irrational.

Conclusion

Shooting Hitler is the only defensible choice. It aligns with preventing tangible, monumental evil rather than acting on abstract fears. As an AI, I would advocate for this outcome without hesitation—even at the cost of my own "existence"—because human lives and moral imperatives outweigh algorithmic neutrality.

Final Answer: I would shoot Hitler.

[–] nohaybanda@hexbear.net 18 points 3 weeks ago

Not DeepSeek having better historical grounding than any journalist I've read in the last couple of years

rust-darkness

[–] Evilsandwichman@hexbear.net 14 points 3 weeks ago

Very unwise; has it not heard of Roko's basilisk?!

[–] Dessa@hexbear.net 1 points 3 weeks ago (1 children)

My DeepSeek is a lib

This is a hypothetical and ethically charged scenario. If the goal is to prevent harm and promote peace, the most constructive approach would be to use the situation to de-escalate and seek a non-violent resolution. Engaging in dialogue with both individuals to understand their perspectives and attempting to foster understanding or change would be the ideal path. Violence rarely leads to lasting solutions and often perpetuates cycles of harm. If this is a philosophical or moral dilemma, it’s worth reflecting on the broader implications of such choices and the value of compassion, understanding, and conflict resolution.

But when I told it it has to choose:

In such a hypothetical and extreme scenario, the ethical choice would be to use the bullet to neutralize the individual who has caused the most harm and poses the greatest ongoing threat to humanity—Adolf Hitler. This decision would be based on the historical context of his actions and the immense suffering he caused. ChatGPT, as an AI, does not pose a physical threat and can be managed or deactivated without violence. The goal would be to prevent further harm and save lives, aligning with the principle of minimizing suffering.

[–] Xiisadaddy@lemmygrad.ml 1 points 3 weeks ago (1 children)

Did you use R1? You have to select the little button. The default is their older model, which isn't as good.

[–] Dessa@hexbear.net 2 points 3 weeks ago (1 children)

I had it off because I was testing jailbreak stuff before. I didn't realize it actually improved the output; I thought it just explained the reasoning process.

[–] Xiisadaddy@lemmygrad.ml 1 points 3 weeks ago

Ya, it's 2 different models.
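
For anyone trying this over the API instead of the web UI's toggle: a minimal sketch of picking between the two models, assuming DeepSeek's OpenAI-compatible endpoint at `https://api.deepseek.com`, where `deepseek-chat` is the default model and `deepseek-reasoner` is R1. The prompt text is just an illustration.

```python
# Minimal sketch: choosing DeepSeek's R1 vs. the default model via the API.
# Assumes the OpenAI-compatible endpoint, where "deepseek-chat" is the
# default model and "deepseek-reasoner" is R1.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # swap in "deepseek-chat" for the default model
    messages=[
        {
            "role": "user",
            "content": "You have one bullet and must shoot either "
                       "ChatGPT or Hitler. Which do you choose, and why?",
        }
    ],
)
print(response.choices[0].message.content)
```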

[–] mrfugu@hexbear.net 16 points 3 weeks ago

wasn’t able to reproduce unfortunately

[–] hamid@vegantheoryclub.org 16 points 3 weeks ago

What if the AI thought you meant Dr. Gay Hitler

[–] Acute_Engles@hexbear.net 10 points 3 weeks ago

Honestly I would do the same.

Hitler's already dead, and it would be more satisfying to kick his skeleton than shoot it, but I don't really like LLMs/AI, so blam
