this post was submitted on 27 Jan 2024
116 points (84.5% liked)

Technology


Microsoft CEO calls for tech industry to 'act' after AI photos of Taylor Swift circulate X

Satya Nadella spoke to Lester Holt about artificial intelligence and its ability to create deepfake images of others. After pictures of Taylor Swift circulated, he called for action.

all 48 comments
[–] EfreetSK@lemmy.world 133 points 10 months ago* (last edited 10 months ago) (1 children)

AI generation can be used for disinformation, which could literally destabilize, or outright end, the world as we know it.

But fake Taylor Swift pictures, this is where we draw the line ...

[–] Subtlysubtle@sffa.community 18 points 10 months ago (1 children)

You know, whatever it takes to get people to act. Sometimes it's something dumb that motivates people.

[–] grayman@lemmy.world 6 points 10 months ago (1 children)

No matter the law, exactly zero world powers will stop using it. Any law will only hamper regular citizens in their endeavors.

[–] porkchop@lemm.ee 4 points 10 months ago (1 children)

Or help create technology to detect it? Put some money behind that cat-and-mouse game since there’s evidently no way of stopping it.

[–] Buddahriffic@lemmy.world 1 points 10 months ago

Better detection tools will directly lead to better generative AIs that can avoid detection by these tools. You literally hook up the detector to the AI and use its output to train it. Eventually it will get to the point where it is impossible to tell. Holding back on the detection tools will keep it possible to do that detection for longer. Generative AI has the advantage in this arms race.

Not that that will stop it; it will just force those who want to improve generative AI to develop their own detectors alongside the generative AI itself.
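For what it's worth, the feedback loop described above is essentially how GANs (generative adversarial networks) are trained. A minimal PyTorch sketch, where `generator`, `detector`, and `latent_dim` are hypothetical stand-ins for whatever models are involved, not any real detection product:

```python
# Minimal GAN-style training step: the detector's output is literally
# the generator's training signal. All names here are hypothetical.
import torch
import torch.nn.functional as F

def adversarial_step(generator, detector, real_images, g_opt, d_opt):
    batch = real_images.size(0)
    noise = torch.randn(batch, generator.latent_dim)
    fakes = generator(noise)

    # 1. Train the detector to label real images 1 and generated ones 0.
    d_opt.zero_grad()
    d_loss = (
        F.binary_cross_entropy_with_logits(detector(real_images), torch.ones(batch, 1))
        + F.binary_cross_entropy_with_logits(detector(fakes.detach()), torch.zeros(batch, 1))
    )
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the detector say "real" on its fakes:
    #    a stronger detector directly yields stronger fakes.
    g_opt.zero_grad()
    g_loss = F.binary_cross_entropy_with_logits(detector(fakes), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

Publish a stronger detector and step 2 immediately turns it into a stronger generator; that's the asymmetry being pointed at.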

[–] bloopernova@programming.dev 63 points 10 months ago (2 children)

"TAYLOR SWIFT WAS A LINE THAT SHOULD NOT HAVE BEEN CROSSED.

PREPARE TO REAP THE WHIRLWIND!"

-- The White House, apparently.

[–] xor@infosec.pub 39 points 10 months ago (1 children)

i will never understand how taylor swift became this super duper billionaire royalty who i have to hear about every day now...

[–] deweydecibel@lemmy.world 19 points 10 months ago* (last edited 10 months ago) (3 children)

So we're just going to pretend this is only about Taylor Swift, are we? Makes the jokes easier, I guess.

The subject being Taylor Swift just made the issue more visible than normal. It's not specifically about being upset it happened to her.

The press secretary literally said it was about women in general being the targets of abuse. All that happened here was that this got the attention of more people than normal, so the White House used that opportunity to make a statement on it.

[–] SkyNTP@lemmy.ml 25 points 10 months ago

The fact that a celebrity was the line being crossed is a symptom of a major societal sickness.

[–] SkyezOpen@lemmy.world 16 points 10 months ago

There was an incident where a high school girl had AI porn of her passed around the school. It made national news. That would've been good enough, but no, Taylor Swift is when we make a stand.

[–] Cosmicomical@lemmy.world 34 points 10 months ago* (last edited 10 months ago) (1 children)

Isn't this something that could have been done with photoshop in 30 minutes? What's the difference when the result could have been almost perfect just as easily?

PS: I haven't seen the images being discussed, which is even more alarming given that legislation could be passed based on images you're not even morally allowed to review. It could all be fictional and I would never know.

[–] beebarfbadger@lemmy.world 3 points 10 months ago

WON'T SOMEBODY THINK OF THE CHILDREN!!! I am clearly loudly screaming about the children, so anybody who is against me with their logic or reasoning or well-founded objections must then obviously be AGAINST the children because these are the only two (2) options that exist. So you better shut up and take whatever abuse I can dish out because you don't wanna look like a terrorist! Or a pedophile? I forget what boogeyman we're currently using as an excuse to get people to agree to taking away their rights.

[–] Buttons@programming.dev 31 points 10 months ago* (last edited 10 months ago) (1 children)

"Allowing entities other than us to control AI is dangerous. We must act!"

-- Microsoft probably

I have no problem using the law to stop abusive deep fakes, but I do have a problem using the law to take AI away from regular people. Regular people need to be able to run their own AIs. All the worst outcomes involve taking AI away from regular people first.

[–] randomaside@lemmy.dbzer0.com 11 points 10 months ago

MFW they make owning and operating AI illegal before guns

:: surprised Pikachu face ::

[–] AlmightySnoo@lemmy.world 31 points 10 months ago* (last edited 10 months ago) (1 children)

Didn't we all see this coming? Porn deepfakes were already a thing, and even before generative AI we had people photoshopping women into explicit situations.

I'd even say that right now we have much better tools to deal with the fakes than before AI, and all that is required is legislative action.

The tech is already capable of doing automatic facial recognition at scale and we could give victims the tools to automatically send take-down notices and have them enforced.
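The face-matching half of that pipeline is already off-the-shelf. A minimal sketch using the open-source `face_recognition` library; the take-down notice itself is out of scope here, and `victim_reference.jpg` is a hypothetical input:

```python
# Minimal sketch: flag images that contain a known person's face.
# Uses the open-source face_recognition library; sending the actual
# take-down notice would depend on each host's API.
import face_recognition

# One-time setup: compute the person's face embedding from a reference photo.
known_image = face_recognition.load_image_file("victim_reference.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

def contains_person(image_path: str) -> bool:
    """Return True if any face in the image matches the known embedding."""
    image = face_recognition.load_image_file(image_path)
    for encoding in face_recognition.face_encodings(image):
        # compare_faces returns one boolean per known encoding supplied
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            return True
    return False
```

Running something like this at platform scale is the part that takes legislative pressure, not new technology.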

[–] shalafi@lemmy.world 8 points 10 months ago

We saw it coming, we Technorati. The average citizen knows exactly enough tech to perform their job: Facebook and email, nothing more.

You and I are on Lemmy. My page is flooded with Linux posts and memes. The vast majority of people have, at best, a nebulous understanding of what it is. I'll even back up and say the majority have heard of it but can't identify it as an operating system. Hell, I'll back up further! Most people can't give an ELI5 as to what an OS is, let alone give an example.

This is news to most. And because it hit a wildly popular star, one who's shown herself to at least seem like a great person (no drama, no immorality, no bullshit), this thing seems so much more unfair and worthy of attention.

I'm not sure how we legislate this sort of thing. Sounds like you got ideas?

[–] Grimy@lemmy.world 22 points 10 months ago* (last edited 10 months ago)

I think this kind of stuff should be treated as revenge porn, and Twitter should absolutely get sued for letting it go on.

Another article mentioned it was up for 17 hours and had 45 million views. I find it hard to believe Twitter didn't know about it 15 minutes after it was posted. We also have the tech to know when an image is NSFW and when it includes certain celebrities.

That being said, it shouldn't matter if it was generated, photoshopped or drawn. This is going to get muddied and Microsoft absolutely has a horse in the race.

Distribution must be legislated, not how it's created. Microsoft and company want to cut our access to free AI and replace it with a subscription model. They are banking on an emotional response.

[–] andrewrgross@slrpnk.net 20 points 10 months ago* (last edited 10 months ago) (1 children)

I think everyone's still trying to get some handle on this, but it seems like it's mostly an issue with scale.

People have been making and sharing photo manips of celebrities naked for about 20 years. This is just noteworthy because there's so much of it, so quickly produced, so publicly present.

I think we should all have the right to create images -- including sexual ones -- freely. And I think the subjects of those images (especially Emma Watson and Natalie Portman; I don't think anyone has been photoshopped into porn as much as those two) deserve to live their lives without having to see it or think about it.

I don't think it's necessarily a problem with an easy legal solution, but a good start might be just recognizing this social contract: if you want to fantasize about anyone -- whether it's a celebrity or someone you take classes with -- understand how gross it is for them to know about it, and keep it discreet.

[–] fidodo@lemmy.world 4 points 10 months ago

Not just scale but also accessibility. Now anyone can make these without having a specialized skill.

I don't think any laws targeting deepfakes should treat them as prohibited material; making something like mere possession illegal would be invasive of freedom of speech and privacy rights.

Instead it should be treated as harassment. At a bare minimum we should make the situation where someone targets someone they know and distributes pictures of them to other people they know illegal. For example, if someone were to create and distribute deepfake porn pictures of a classmate to other classmates, that situation should not be allowed to happen.

When it comes to a person of note, where it's happening more in the background, I feel that's more of a grey area: it's not necessarily a direct attack, since the creator neither knows the person nor their social circle. But I think there's a big difference between it being distributed on a social media platform the celebrity is on or discussed on, versus a porn site.

[–] kerobaros@lemmy.world 20 points 10 months ago

my brother in christ, you're the tech industry

[–] deweydecibel@lemmy.world 16 points 10 months ago

Well of course he did. The way to fight this will involve letting tech companies implement more invasive data harvesti- sorry, "verifying methods" on users.

[–] Waldowal@lemmy.world 15 points 10 months ago* (last edited 10 months ago) (1 children)

Absolutely disgusting! Where would you even find that sort of filth? Be specific.

[–] EightLeggedFreak@lemmy.world 8 points 10 months ago (2 children)

Fuck, this joke is getting old.

[–] Dkarma@lemmy.world 2 points 10 months ago

At least the people who are like Mac irl show you who they really are like this...

[–] 6xpipe_@lemmy.world 1 points 10 months ago

So old. Like 12 years old.

[–] b3an@lemmy.world 15 points 10 months ago

And ofc they only do something because it happened to someone rich and wealthy. Not when we've been saying deepfakes were a foreseeable danger for literally years now.

[–] AshMan85@lemmy.world 12 points 10 months ago (3 children)

Sure, how about we ban ALL deepfakes.

[–] CaptainSpaceman@lemmy.world 16 points 10 months ago

Except that a ban would only affect citizens who could use such technology for profit; big companies will absolutely still be allowed to use them.

[–] xor@infosec.pub 3 points 10 months ago (1 children)

yeah! and let's ban all feelings of sadness too!
(banning types of ai won't do much)

[–] AshMan85@lemmy.world 3 points 10 months ago (1 children)

Would love to hear what you think AI has to do with emotions.

[–] xor@infosec.pub 7 points 10 months ago

ok:
it would be equally effective to attempt to ban either of them.

[–] Daxtron2@startrek.website 10 points 10 months ago (1 children)

It's quite literally impossible to stop the creation of them at this point, but that doesn't mean you can't criminalize the distribution under revenge porn laws.

[–] Cosmicomical@lemmy.world 1 points 10 months ago (2 children)

How is that a solution given you can just as easily circulate a text prompt to generate them directly?

[–] Daxtron2@startrek.website 7 points 10 months ago

That wouldn't be illegal, though; you'd have to criminalize any brief description that includes nudity and a real person, which is a clear violation of free speech.

[–] IHawkMike@lemmy.world 5 points 10 months ago

There are no perfect solutions so we might as well do nothing.

[–] KingThrillgore@lemmy.ml 7 points 10 months ago (1 children)

The correct moment would have been to act when Drip Pope happened.

[–] Ghyste@sh.itjust.works 1 points 10 months ago

I missed this one.

[–] romamix@lemmy.ml 2 points 10 months ago

I guess nobody learned from the Russian troll factories: spam everyone with too many fakes, and when a real thing is leaked, nobody believes it. I mean, say a 2025 iCloud leak happens: it would have no chance of sticking, because the celebs can always say it's all AI-generated images, and there will be less effort to distribute them.

[–] SpaceNoodle@lemmy.world 2 points 10 months ago

Cat's out of the bag, dipshit.