this post was submitted on 23 Aug 2024
415 points (90.9% liked)

Technology

[–] conciselyverbose@sh.itjust.works 11 points 3 months ago (2 children)

I think this is a good thing.

Pictures/video without verified provenance haven't constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have already been possible for serious, well-resourced actors.

Putting the capability in everyone's hands builds awareness that pictures aren't evidence, lowering their impact over time. It would be great if it weren't possible for anyone, but that isn't, and hasn't been, reality for a while.

[–] reksas@sopuli.xyz 9 points 3 months ago* (last edited 3 months ago) (1 children)

While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they'll be able to do years from now. And at that point it will be too late to demand anything be done about it.

AI-generated content should have some kind of tag or mark that is inherently tied to it and can be used to identify it as AI-generated, even if only part of it is reused. No idea how that would work, though, or whether it's even possible.

[–] conciselyverbose@sh.itjust.works 11 points 3 months ago (1 children)

You already can't. You can't close Pandora's box.

Adding labels just creates a false sense of security.

[–] reksas@sopuli.xyz 1 points 3 months ago (1 children)

It wouldn't be a label; that wouldn't do anything, since it could just be erased. It should be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
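One naive version of the "invisible pixels" idea is least-significant-bit (LSB) watermarking: hide a repeating bit pattern in the lowest bit of each pixel value, where the change is imperceptible to the eye. This is only a hypothetical sketch (pixels modeled as a flat list of 0-255 ints, no image library), and it also illustrates the counterargument in this thread: the mark survives only as long as nobody re-encodes the image.

```python
# Naive LSB watermark sketch. Pixels are modeled as a flat list of
# 0-255 ints; a real implementation would work on decoded image data.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit mark

def embed(pixels, mark=WATERMARK):
    """Overwrite the least significant bit of each pixel with the mark."""
    out = []
    for i, p in enumerate(pixels):
        bit = mark[i % len(mark)]
        out.append((p & ~1) | bit)  # clear the LSB, then set it to the mark bit
    return out

def detect(pixels, mark=WATERMARK):
    """Check whether the pixels' LSBs match the repeating mark."""
    return all((p & 1) == mark[i % len(mark)] for i, p in enumerate(pixels))

image = [200, 13, 97, 54, 121, 33, 250, 78]
marked = embed(image)
assert detect(marked)
# Each value changes by at most 1, so the mark is invisible to the eye.
```

The catch: a single lossy re-encode (e.g. saving as JPEG), resize, or screenshot rewrites the low bits and erases the mark, which is why robust, hard-to-strip watermarking remains an open problem.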

[–] conciselyverbose@sh.itjust.works 2 points 3 months ago (1 children)

But it's irrelevant. You can watermark all you want in the algorithms you control, but it doesn't change the underlying fact that pictures have been capable of lying for years.

People just recognizing that a picture is not evidence of anything is better.

[–] reksas@sopuli.xyz 1 points 3 months ago (1 children)

Yes, but the reason people don't already treat pictures as irrelevant is that manipulating a picture takes time and effort. With AI, not only is it fast, it can be automated. Of course you shouldn't accept something so unreliable as legal evidence, but this will spill over into everything else too.

[–] conciselyverbose@sh.itjust.works 1 points 3 months ago (1 children)

It doesn't matter. Any time there are any stakes at all (and plenty of times there aren't), there's someone who will do the work.

[–] reksas@sopuli.xyz 1 points 3 months ago (1 children)

It doesn't matter if you can't trust anything you see? What if you couldn't be sure whether you were talking to a bot right now?

[–] conciselyverbose@sh.itjust.works 2 points 3 months ago* (last edited 3 months ago) (1 children)

Photos/video from unknown sources have already been completely worthless as evidence for a solid decade. If you used a random picture online to prove a point 5 years ago, you were wrong. This does not change that reality in any way.

The only thing changing is your awareness that they're not credible.

[–] reksas@sopuli.xyz 1 points 3 months ago (1 children)

What about reliable sources becoming less reliable? Knowing something is not credible doesn't help if I can't know what is credible.

[–] conciselyverbose@sh.itjust.works 1 points 3 months ago (1 children)

They are not reliable sources. You cannot become less reliable than "not at all", and that has been the state of pictures and videos for many years already. There is absolutely no change to the evidentiary value of pictures/video.

Making the information more readily available does not change the reality that pictures aren't evidence.

[–] reksas@sopuli.xyz 1 points 3 months ago (1 children)

I'm not talking about evidence; I'm talking about being able to trust anything digital at all, in any context. What if you couldn't be sure whether a phone call from your friend was actually from your friend, or whether any picture shown to you actually depicts something real?

Things you need to be able to trust in daily life don't have to be court-level evidence. That is what abuse of AI will take from us.

[–] conciselyverbose@sh.itjust.works 1 points 3 months ago (1 children)

It's the exact same thing. You're drawing a distinction between two identical things.

Pictures have not been credible for a long time. You shouldn't have "trusted" a picture for anything 5 years ago.

The only thing that's in any way different is that now you know you can't trust it.

[–] reksas@sopuli.xyz 1 points 3 months ago (1 children)

I suppose the conclusion is that we need better ways to verify things.
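One existing direction is signing content at the point of capture, so any later tampering is detectable; provenance standards such as C2PA do roughly this with per-device public-key signatures. Below is only a minimal stand-in using Python's stdlib HMAC (the shared key and image bytes are hypothetical, and a real scheme would use asymmetric keys so the verifier never holds the signing secret):

```python
import hashlib
import hmac

# Hypothetical device key. A real provenance scheme (e.g. C2PA) would use
# per-device public-key signatures, not a shared secret like HMAC.
DEVICE_KEY = b"secret-camera-key"

def sign_capture(image_bytes: bytes) -> bytes:
    """Tag image bytes with an HMAC at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, tag: bytes) -> bool:
    """Any later edit to the bytes invalidates the tag."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

photo = b"raw image bytes"
tag = sign_capture(photo)
assert verify_capture(photo, tag)
assert not verify_capture(photo + b"edited", tag)
```

Note the limit: this only proves the bytes are unchanged since signing. It says nothing about whether the scene in front of the camera was real, so it narrows the trust problem rather than solving it.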

[–] conciselyverbose@sh.itjust.works 1 points 3 months ago

The conclusion is to learn to be comfortable with uncertainty.

The world is inherently uncertain, all the way down to the measurement of subatomic particles.

[–] Hacksaw@lemmy.ca 4 points 3 months ago (1 children)

I completely agree. This is going to free kids from someone taking a picture of them doing something relatively harmless and extorting them. "That was AI, I wasn't even at that party 🤷"

I can't wait for childhood and teenage life to be a bit more free and a bit less constantly recorded.

Yeah, every time you go to a party and fun happens, somebody pulls out their smartphone and starts filming. It's really bad. People can only relax when there's privacy, and smartphones have stolen privacy from society for over 10 years now. We need to either ban filming in general (which is not doable) or discredit photographs, which is what we're doing right now.