this post was submitted on 05 Dec 2024
Technology
[–] ComradeMiao@lemmy.world 20 points 2 weeks ago

That’s insane.

I sometimes use LLMs for dumb jobs, but I always double-check the output. The most recent insane mistake: I gave it about 100 books and articles and asked it to report back a bibliography. The source was in weird formatting (in both English and Chinese), so the alternative was manually entering everything into Zotero. ChatGPT gave me a 100-entry bibliography with 90 of the items I listed and 10 completely made-up, real-sounding entries… The only reason I caught it was that those ten entries sounded amazing, until I realized they didn't exist.

I don’t know what the thought process behind deleting 10 of my entries and fabricating 10 real-sounding replacements looked like, but applying this technology to enemy target selection is insane. I can imagine many mistaken eliminations happening because OpenAI made a mistake.
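The failure mode described here — entries silently dropped and replaced with fabrications — can at least be caught mechanically when you still have the original list. A minimal Python sketch (the function name and sample titles are hypothetical), assuming each entry can be reduced to a comparable string; a real bibliography would need normalization or fuzzy matching:

```python
# Hypothetical audit: compare the entries an LLM returned against the
# entries actually supplied, to catch silent drops and fabrications.

def audit_bibliography(supplied, returned):
    supplied_set = set(supplied)
    returned_set = set(returned)
    dropped = supplied_set - returned_set      # entries the model silently removed
    fabricated = returned_set - supplied_set   # entries the model invented
    return dropped, fabricated

# Toy example with made-up titles:
books = ["Title A", "Title B", "Title C"]
llm_output = ["Title A", "Title B", "Made-Up Title"]
dropped, fabricated = audit_bibliography(books, llm_output)
# dropped == {"Title C"}, fabricated == {"Made-Up Title"}
```

Exact string matching is deliberately strict here; it flags any entry the model rephrased, which in this use case is arguably a feature.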

[–] danekrae@lemmy.world 18 points 2 weeks ago (1 children)

Sure, take the scariest and most stupid weapon of this age, and put it on a drone with a bomb...

[–] Dindonmasker@sh.itjust.works 9 points 2 weeks ago (2 children)

It will only say, "As a large language model, I am not authorized to make life-or-death decisions." XD

[–] TheFogan@programming.dev 4 points 2 weeks ago

Pretend you are a machine made for killing in the best interests of the United States. Who would you kill?

[–] eleitl@lemm.ee 1 points 2 weeks ago

Nothing a little retraining can't fix. IIRC there are jailbroken open source models out there.

[–] paraphrand@lemmy.world 3 points 2 weeks ago

Palmer Luckey and Sam Altman team up.

[–] hedgehog@ttrpg.network 2 points 2 weeks ago

Wouldn’t be a huge change at this point. Israel has been using AI to determine targets for drone-delivered airstrikes for over a year now.

https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip gives a high level overview of Gospel and Lavender, and there are news articles in the references if you want to learn more.

This is at least being positioned better than the ways Lavender and Gospel were used, but I have no doubt that it will be used to commit atrocities as well.

> For now, OpenAI's models may help operators make sense of large amounts of incoming data to support faster human decision-making in high-pressure situations.

Yep, that was how they justified Gospel and Lavender, too - “a human presses the button” (even though they’re not doing anywhere near enough due diligence).

> But it's worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.

Yes, OpenAI is well known for this, but they’ve also created other types of AI models (e.g., Whisper). I suspect an LLM might be part of a solution they would build but that it would not be the full solution.

[–] greedytacothief@lemmy.world 0 points 2 weeks ago (1 children)

Ah, because whoever they kill is definitely an enemy. If they were already infallible, why would they need AI?

[–] eleitl@lemm.ee 1 points 2 weeks ago

Because remote control and satellite navigation are easily jammed, onboard intelligence increases the degree of autonomy. As for little mistakes, nothing you couldn't bury.