this post was submitted on 30 Oct 2023
193 points (96.6% liked)

Technology


The executive order comes after a series of non-binding agreements with AI companies.

The order has eight goals: create new standards for AI safety and security; protect privacy; advance equity and civil rights; stand up for consumers, patients, and students; support workers; promote innovation and competition; advance US leadership in AI technologies; and ensure responsible and effective government use of the technology.

all 42 comments
[–] jsdz@lemmy.ml 26 points 1 year ago* (last edited 1 year ago)

AI that is used to monitor cameras and identify our faces to track everywhere everyone goes: Why would that concern you? Do you have something to hide, citizen?

AI that might be used to generate agitprop, competing with conventional advertising: HOLY SHIT we need a new international treaty right away!

[–] mojo@lemm.ee 15 points 1 year ago

More and more of these "safety" proposals just serve to kill open source AI, letting a few megacorps decide what we can do with these models while staying advertiser-friendly, of course. Freedom dies in the name of "safety", especially when the technology is governed by people who have zero concept of how it works beyond a scary, ambiguous buzzword.

[–] DarkThoughts@kbin.social 11 points 1 year ago (1 children)

What's the point if they're non-binding? AI should be privacy friendly and open, otherwise we end up with some serious problems down the line.

[–] donuts@kbin.social 16 points 1 year ago (1 children)

The President has limited authority and cannot make laws unilaterally. For sensible AI regulations and laws we will certainly need Congress to do its job, and clearly they're pretty damn bad at that.

[–] Rouxibeau@lemmy.world 7 points 1 year ago (1 children)

Dress codes are more important.

[–] halcyoncmdr@lemmy.world 1 points 1 year ago

We can joke about it all we want, but the reason is that things like that (naming post offices, etc.) are basically apolitical and easy to pass quickly. Real legislation takes time.

[–] Fafner@yiffit.net 9 points 1 year ago (1 children)

Plenty of unemployed AI ethics folks around to ask.

[–] fubo@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

Unfortunately this doesn't seem to address the "takeoff" problem: the use of AI to build more-capable AI, the creation of autonomous AI systems that can develop self-protection drives (see Omohundro 2008), etc.

AI systems should not be allowed to control economic resources until alignment is solved. As it stands, if a major company were to turn over its management to an autonomous AI system, there's a good chance that's game over for humans -- including the humans who made that decision.

The safety problems of autonomous AI systems that can (for instance) obtain their own resources or optimize their own code have been known since long before GPTs or deepfakes were a thing.

Unfortunately "AI safety" has largely been coopted to mean "stop humans from using deepfakes to bully or deceive other humans" rather than "stop fully-automated corporations from taking over the economy and running the planet with even less humane ethics even than human-run corporations do."

(Think selfishness or greed are a problem today? Consider a megacorp run by an entity that literally has no other drives but to protect and expand itself, thinks billions of times faster than any human board of directors, and cannot die. Say what you like about Bill Gates, he at least seems to enjoy curing diseases.)

[–] autotldr@lemmings.world 2 points 1 year ago

This is the best summary I could come up with:


President Joe Biden signed an executive order providing rules around generative AI, ahead of any legislation coming from lawmakers.

Several government agencies are tasked with creating standards to protect against the use of AI to engineer biological materials, establish best practices around content authentication, and build advanced cybersecurity programs.

The National Institute of Standards and Technology (NIST) will be responsible for developing standards to “red team” AI models before public release, while the Department of Energy and Department of Homeland Security are directed to address the potential threat of AI to infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

Developers of large AI models like OpenAI’s GPT and Meta’s Llama 2 are required to share safety test results.

It also orders government agencies to provide guidelines for landlords, federal benefits programs, and federal contractors on how to prevent AI from exacerbating discrimination.

The earlier non-binding commitments were later turned into a series of agreements between the White House and several AI players, including Meta, Google, OpenAI, Nvidia, and Adobe.


The original article contains 555 words, the summary contains 168 words. Saved 70%. I'm a bot and I'm open source!

[–] foggy@lemmy.world -3 points 1 year ago* (last edited 1 year ago) (6 children)

Good thing we have a guy in office who grew up without a fucking computer.

👴🇺🇸

[–] BrianTheeBiscuiteer@lemmy.world 38 points 1 year ago (1 children)

And all of us grew up without AI. 🤨

[–] nurple@lemmy.world 29 points 1 year ago* (last edited 1 year ago) (1 children)

You do know that Biden didn't personally draft this himself, right?

It delegates the specifics to agencies with relevant expertise. That's how the executive branch works.

[–] BassTurd@lemmy.world 10 points 1 year ago

That would be relevant if he were the one writing the policy, which he's not.

[–] BearOfaTime@lemm.ee 6 points 1 year ago

Yawn.

There's plenty to criticise, but this particular take is fucking moronic.

Everyone grew up before insert current new tech.

I grew up before cell phones, PCs, the internet, streaming, torrenting, etc., etc., and I'd bet a year's salary I could explain all of those things in pretty good detail, and many, many, many more than you could, extemporaneously (i.e. at the drop of a hat), while drunk and high.

Would you say the same about LBJ, who is THE reason we have NASA, moon landings, and all the derivative tech (including the internet you're currently using)?

Pick a dozen other valid reasons to criticise him, this take isn't.

[–] teft@startrek.website 2 points 1 year ago (1 children)

Grew up without one? He’s 80. He probably didn’t even use a computer before he was 50.

[–] Zorque@kbin.social -2 points 1 year ago (1 children)

Pretty sure there were computers in the eighties.

[–] tcely@fosstodon.org 2 points 1 year ago (1 children)

I can confirm. I have touched several of the computers sold before 1990 in my lifetime.

https://youtube.com/watch?v=ErwS24cBZPc

To be fair, many people didn't use computers much back then.

@Zorque
@teft

[–] teft@startrek.website 2 points 1 year ago

I'm basing my comment on my life experience of growing up in the 80s and 90s. Most people in that time period did not use computers, and a lot of people thought they were nerdy. So someone like Joe Biden, a "cool guy" politician, almost certainly never touched one until he needed one for work. Total supposition on my part, but I would put money on it.

[–] Sabata11792@kbin.social 2 points 1 year ago

Do you know how little that narrows it down?