this post was submitted on 08 Jun 2025
52 points (91.9% liked)

AI

4992 readers

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 4 years ago
 

Title, or at the very least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic good AI-generated videos are becoming, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. Big tech would immediately comply (at least on the surface, for consumer-facing products), and then we would probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better? I don't see many downsides.
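To illustrate how fragile metadata-based tagging is, here's a minimal sketch using Pillow: embed a provenance tag in a PNG text chunk, then watch a plain re-save silently drop it. The `ai_generated` key is hypothetical, not part of any real standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a dummy image and tag it with a hypothetical provenance key.
img = Image.new("RGB", (64, 64), "white")
info = PngInfo()
info.add_text("ai_generated", "true")
img.save("tagged.png", pnginfo=info)

# The tag survives a normal load...
print(Image.open("tagged.png").text)  # {'ai_generated': 'true'}

# ...but re-saving without passing the metadata along strips it.
Image.open("tagged.png").save("laundered.png")
print(Image.open("laundered.png").text)  # {}
```

Which is the OP's point: the tag isn't tamper-proof, it's a speed bump, and the value is deterrence rather than enforcement.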

top 32 comments
[–] Tangentism@lemmy.ml 4 points 3 hours ago

No, just legislate that all AI companies have to publish every single source they used for their training models, along with proof that they have permissions/licenses to do so. If it's later shown that they used a source and didn't list it, they can be fined and sued for a percentage of the company's revenue.

All the copyright holders of those sources could then sue the AI companies for infringement or retroactive licenses.

[–] Sandbar_Trekker@lemmy.today 4 points 7 hours ago (1 children)

Legally mandating watermarks on any AI-generated content is a bad idea.

It's good practice for these companies to add a watermark, but when you add a "legal" requirement, you're opening up regular artists/authors to getting dragged through the legal system simply because someone (or some corporation) suspects that an AI tool was used at some point in the work's creation.

[–] Aradia@lemmy.ml 1 points 4 hours ago

A watermark would help me stop reading and go find a real, actual expert on the matter.

[–] valium_aggelein@hexbear.net 0 points 3 hours ago (1 children)

There should be a law that imprisons anyone who uses AI. Seriously why do you people need this shit.

[–] Sandbar_Trekker@lemmy.today 1 points 2 hours ago (1 children)

Do you even realize how broadly AI is used today?

If you pass that law, you're throwing all gamers in jail simply because they enabled NVIDIA's DLSS.

Might as well throw researchers in jail because they're using AI to find treatments for rare diseases or to find planets in other solar systems.

Used AI to detect what kind of bird/animal/insect you came across? Believe it or not, straight to jail.

[–] valium_aggelein@hexbear.net 1 points 1 hour ago

Yes you should absolutely go to jail for all of these things. Gamers should go to jail simply for being gamers. Researchers should go to jail because finding new planets means we might find aliens and I don’t fuck with that. Identifying birds and insects isn’t something we should be able to do. Keep nature mysterious.

[–] zkfcfbzr@lemmy.world 11 points 13 hours ago* (last edited 11 hours ago) (1 children)

No, mostly because I'm against laws which are literally impossible to enforce. And it'll only become exponentially harder to enforce as the years pass.

I think a lot of people will get annoyed at this comparison, but I see a lot of similarity between the attitudes of the "AI slop" people and the "We can always tell" anti-trans people, in the sense that I've seen so many people from the first group accuse legitimate human works of being AI-created (and obviously we've all seen how often people from the second group have accused AFAB women of being trans). And just as those anti-trans people actually can't tell for a huge number of well-passing trans people, there's a lot of AI-created work out there that is absolutely passing for human-created work en masse, without giving off any obvious "slop" signs. Real people will get (and are getting) swept up and hurt in this anti-AI reactionary phase.

I think AI has a lot of legitimately decent uses, and I think it has a lot of stupid-as-shit uses. And the stupid-as-shit uses may be in the lead for the moment. But mandating tagging AI-generated content would just be ineffective and reactionary. I do think it should be regulated in other, more useful ways.

[–] umbrella@lemmy.ml 2 points 10 hours ago

what other, more useful ways?

[–] Zarxrax@lemmy.world 9 points 13 hours ago (2 children)

I'm not against such a law in theory, but I have many questions about how it would be implemented and enforced. First off, what exactly counts as AI generated? We are seeing more and more AI features being added in lots of areas, and I could certainly envision a future, in a few years' time, where nearly all photos taken with high-end phones are altered by AI in some way. After that, who exactly is responsible for ensuring that things are tagged properly? The individual who created the image? The software that may have done the AI processing? The social media site the image was posted on? If the penalties are harsh for not attributing AI to an image, what's to stop sites from just having a blanket disclaimer saying that ALL images on the page were generated by AI?

[–] zkfcfbzr@lemmy.world 3 points 12 hours ago

If the penalties are harsh for not attributing ai to an image, what’s to stop sites from just having a blanket disclaimer saying that ALL images on the page were generated by AI?

Just like what happens with companies slapping Prop. 65 warnings on products that don't actually need them, out of caution and/or ignorance.

[–] howrar@lemmy.ca 1 points 11 hours ago

Regarding your last point, you could in theory also penalize for marking non AI generated images as AI generated.

[–] jjmoldy@lemmy.world 4 points 11 hours ago (1 children)

How would such a law be enforced? What agency would enforce it? What penalty would one face for breaking this law?

[–] queermunist@lemmy.ml 1 points 11 hours ago* (last edited 11 hours ago) (2 children)

Force the AI models to embed some kind of metadata in all their material. Training AI models is a massive undertaking; it's not like they can hide what they're doing. We know who is training these models and where their data centers are, so a regulatory agency would certainly be able to force them to comply.

In the US this could be done through the FCC; in other countries the power could be vested in the regulatory bodies that control communications, broadcasting, etc.

The penalty? Break them on the fucking wheel.

[–] vrighter@discuss.tchncs.de 3 points 9 hours ago (1 children)

Yes, the ones you pay for and that are publicly scrutinized might. Privately trained models just, you know, won't.

[–] queermunist@lemmy.ml 1 points 3 hours ago

If they can find cannabis grow ops from power usage, they certainly can find people using massive amounts of data and processing power and public water and investor cash to train AI. You expect me to believe this could be done in secret?

[–] jjmoldy@lemmy.world 2 points 11 hours ago (1 children)

Medieval torture in response to what is essentially copyright infringement. Very sane!

[–] queermunist@lemmy.ml 0 points 11 hours ago (1 children)

Or just nationalize their companies I guess.

[–] jjmoldy@lemmy.world 1 points 11 hours ago (1 children)

Well that's certainly less extreme than breaking on the wheel, I'll give you that, but it doesn't seem very realistic in most countries, where nationalization is rare and done mainly for strategic purposes.

[–] queermunist@lemmy.ml 2 points 11 hours ago* (last edited 11 hours ago) (1 children)

Well, the most realistic outcome is that there will be no regulations, or if there are, they'll be toothless fines or something.

I didn't realize we were limiting ourselves to our backwards political system where the rich and powerful write their own regulations.

Nothing will be done, realistically.

Nothing is ever done about anything.

[–] jjmoldy@lemmy.world -2 points 10 hours ago (1 children)

I gather from your username that you consider yourself a communist? How do you suppose your ambitions could be put into reality when the movement is so devastatingly weak and disorganized?

[–] queermunist@lemmy.ml 1 points 10 hours ago (1 children)

Things only look that way when you're a Western Marxist and reject actually existing socialism around the world. China is hardly weak or disorganized.

Or do you mean AI regulation? I think it's probably best to just focus on AI being used for war and struggle against that (No Tech For Apartheid comes to mind), rather than try and tackle all AI everywhere all at once.

[–] jjmoldy@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago) (1 children)

To be clear I am not a socialist or communist, Western or otherwise. Yes, China is ascendent on the world stage and likely will continue to be, but they have shown no willingness to aid communist movements abroad. That can change of course but it would seem to me that the CPC is more concerned with maintaining international relationships and economic agreements than fomenting the global revolution. I also somewhat doubt the party's commitment to eventually 'withering away' as Marx put it. To be fair, I know that couldn't happen unless the whole world was on board or they'd get promptly steamrolled by one adversary or another.

[–] queermunist@lemmy.ml 1 points 3 hours ago* (last edited 3 hours ago)

China's commitment to peaceful internal development certainly means they won't help revolutionary communism abroad directly, at least for now, but they still raise the contradictions. Normal people will look at their growing economy, compare it to our stagnant economy, and become agitated.

And then there's BRICS destroying the reserve currency status of the US dollar and bringing about a multipolar world.

Closer to home, there's the internal decline of the US that will soon make it impossible for it to meddle in countries with socialist and anticolonial movements. Imagine a world where the US couldn't kill leaders like Sukarno or Lumumba or Allende, nor could it invade Korea or Vietnam. It will try, of course, but the age of hegemony is over.

That's the future I see, and it makes me optimistic.

Quite off topic for an AI thread, though!

[–] hexthismess@hexbear.net 4 points 11 hours ago

Yes. I've seen YouTube channels even tag CGI effects, which I appreciate for space content.

[–] SubArcticTundra@lemmy.ml 9 points 14 hours ago* (last edited 14 hours ago)

I definitely agree with this. If this does not happen, then I can at the very least see the journalism industry developing its own opt-in standard for image signing.
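The core of such an opt-in signing standard would be: hash the image bytes, sign the digest, and publish the signature alongside the photo. Here's a minimal stdlib sketch; it uses HMAC with a hypothetical shared newsroom key as a stand-in, whereas real provenance standards (e.g. C2PA) use asymmetric signatures so anyone can verify without holding the secret.

```python
import hashlib
import hmac

# Hypothetical newsroom signing key (illustration only).
NEWSROOM_KEY = b"example-secret-key"

def sign_image(image_bytes: bytes) -> str:
    """Return a hex signature over the image's SHA-256 digest."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(NEWSROOM_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check a claimed signature in constant time."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

photo = b"\x89PNG...raw camera bytes..."
sig = sign_image(photo)
print(verify_image(photo, sig))            # True
print(verify_image(photo + b"edit", sig))  # False: any alteration breaks it
```

Unlike a removable watermark, a signature published by the newsroom can't be forged onto altered content, which is why an opt-in "this is authentic" scheme may be more workable than a mandatory "this is AI" tag.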

[–] fishos@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago)

Hell no. There's ZERO reason. Any case you can put forth for why it would be needed is already covered by current slander, libel, defamation of character, copyright, etc laws. The only remaining ones are puritan "it's not real art" reasons, and frankly those are just gatekeeping assholes.

[–] tane6@lemm.ee 6 points 14 hours ago
[–] venusaur@lemmy.world 6 points 14 hours ago (1 children)

Yup. There should also be a law requiring all photographs, specifically of people, that have been altered/photoshopped to be tagged, to remind us that the beauty standards being shoved down our throats are unrealistic.

[–] Goten@piefed.social 0 points 13 hours ago

Nah, we should strive to be perfect.

[–] yogthos@lemmy.ml 1 points 12 hours ago (1 children)

I wonder how long people are going to keep perseverating over AI generated content...

[–] PolandIsAStateOfMind@lemmy.ml 3 points 10 hours ago* (last edited 10 hours ago) (1 children)

Until they can no longer tell, they'll slide into completely baseless, vibes-based identification; then most people will just get bored and move on, and a small but vocally online group of tinfoil-hat equivalents will base their entire personality on "tracking" the AI.

[–] yogthos@lemmy.ml 2 points 4 hours ago

that sounds about right, so in a year or two most people will finally be able to move on then :)