This post was submitted on 05 Feb 2024
35 points (90.7% liked)

Futurology

[–] Ottomateeverything@lemmy.world 18 points 9 months ago (1 children)

Why even bother testing this? LLMs are not how you would want to run anything strategy-related. They're trained on internet text data, and pretty much every YouTube/Reddit comment thread about a conflict is full to the brim with people screaming about exterminating and nuking the "enemy", so of course that's what LLMs would do.

It's scary how many people, even those in pretty high-up positions of power, have no fucking clue how this shit works or what it can/will do. I used to think AI wouldn't be as dangerous as it's portrayed in books/movies, because we'd just keep it in line and have proper guardrails... but when people in the military etc. seem to have no fucking clue how LLMs work... yeah, we're fucking doomed.

[–] CanadaPlus 6 points 9 months ago

Exactly. I bet they escalate because they're more instinctively drawn to a dramatic story than to pragmatic victory or desirable outcomes.

Western militaries, at least, have made it doctrine to always keep a human in the loop, so that's good.
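
For what "human in the loop" actually means structurally, here's a minimal sketch (all names here are hypothetical illustrations, not any real military system or doctrine): the model only ever *proposes* an action, and there is no code path from model output to execution that doesn't pass through an explicit human decision.

```python
# Minimal human-in-the-loop sketch: the model proposes, a human disposes.
# propose_action and execute are hypothetical stand-ins for illustration.

def propose_action(situation: str) -> str:
    """Stand-in for an LLM call that returns a recommended action."""
    return f"Recommended response to: {situation}"

def execute(action: str) -> None:
    print(f"Executing: {action}")

def human_in_the_loop(situation: str) -> None:
    proposal = propose_action(situation)
    print(f"Model proposal: {proposal}")
    # The critical property: execution is gated on explicit human approval,
    # so an escalatory model suggestion on its own does nothing.
    if input("Approve? [y/N] ").strip().lower() == "y":
        execute(proposal)
    else:
        print("Rejected; nothing executed.")

if __name__ == "__main__":
    human_in_the_loop("border incident reported")
```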

[–] cyborganism@lemmy.ca 12 points 9 months ago

They WERE trained on human behaviour, after all.

[–] theodewere@kbin.social 2 points 9 months ago

I really doubt you're going to successfully train a chatbot to care about future generations

[–] runner_g@lemmy.blahaj.zone 1 points 9 months ago* (last edited 9 months ago)

Must have been trained on Civilization's Gandhi