this post was submitted on 01 Oct 2023
1124 points (97.5% liked)

Technology

59666 readers
2624 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
[–] Omega_Haxors@lemmy.ml 0 points 1 year ago* (last edited 1 year ago) (2 children)

Because it literally is. If you knew the exact terms to get the AI to recreate something in its training data, it could, 1:1. And if you ask it to create something new, no matter what parameters you use, it will look like a mess of garbage data. Generative AI is literally just art laundering, just like how language models are writing laundering. We tend to use humanizing language, but ultimately it's a machine that uses a bunch of dials and levers to determine what percentage of the output should resemble one piece in its training data at one point and another piece at another. There's a reason a lot of modern image bots have literal fucking watermarks all over their outputs: because the images were flat-out stolen.
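The "dials and levers" picture described above can be written out as a toy numeric sketch. Everything here is a hypothetical illustration of that mental model (the works, vectors, and weights are invented), not how an actual generative model is implemented:

```python
# Toy sketch of the "dials and levers" mental model: an output built as a
# weighted blend of training examples. Hypothetical illustration only.

training_works = {
    "work_a": [1.0, 0.0, 0.0],  # each "work" reduced to a tiny feature vector
    "work_b": [0.0, 1.0, 0.0],
    "work_c": [0.0, 0.0, 1.0],
}

def blend(weights):
    """Mix training works according to per-work 'dials' (weights sum to 1)."""
    out = [0.0, 0.0, 0.0]
    for name, w in weights.items():
        for i, v in enumerate(training_works[name]):
            out[i] += w * v
    return out

# Turn one dial up to 100%: the "output" is a 1:1 copy of that work.
print(blend({"work_a": 1.0}))                 # [1.0, 0.0, 0.0]
# Mix several works: the output is an average of them, nothing new.
print(blend({"work_a": 0.5, "work_b": 0.5}))  # [0.5, 0.5, 0.0]
```

Under this model, any output is exactly recoverable as a combination of training pieces, which is the claim being made.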

The tech itself is pretty neat: you're essentially making a virtual brain and having it do useful work. But ultimately all the capitalists running these tools see it as just another method to bring the public under their exclusive and totalitarian control. We could have had a cool robo-artist putting out new and unique works, but instead people are losing their jobs because an inept system hyped up by Silicon Valley fart huffers claimed it could do their work for free, and it only gets worse as these AIs use their own garbage outputs as training data.

[–] regbin_@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (1 children)

> If you knew the exact terms to get the AI to recreate something in its training data, it could, 1:1.

That's because you told it to. Don't make it recreate existing art then.

> And if you ask it to create something new, no matter what parameters you use, it will look like a mess of garbage data.

This is not always true. You can train it on a certain style and on photos of a random object, then have it generate an image of that object in that style. It will "understand" the concepts of a style and an object as separate things.
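That separation of "style" and "object" can be sketched as arithmetic on embedding vectors, roughly in the spirit of textual-inversion-style techniques. The vectors and numbers below are invented for illustration; this is not a real model:

```python
# Toy sketch of composing a learned "style" with a learned "object" in a
# shared embedding space. All vectors here are hypothetical.
import math

style_watercolor = [0.9, 0.1, 0.0]   # embedding learned from style examples
object_teapot    = [0.0, 0.2, 0.8]   # embedding learned from object photos

def compose(style, obj):
    """Combine style and object embeddings into one conditioning vector."""
    return [s + o for s, o in zip(style, obj)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

combined = compose(style_watercolor, object_teapot)
# The composite stays close to BOTH source concepts, so a generator
# conditioned on it can render the object in the style, rather than
# reproducing either training source verbatim.
print(cosine(combined, style_watercolor) > 0.5)  # True
print(cosine(combined, object_teapot) > 0.5)     # True
```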

> Ultimately all the capitalists running these tools see it as just another method to bring the public under their exclusive and totalitarian control.

Exactly why I'm not supporting the closed-source paid services (Midjourney, ChatGPT, Bing Chat, DALL-E, etc.) and instead advocate for open-source projects like Stable Diffusion and LLaMA.

[–] Omega_Haxors@lemmy.ml -2 points 1 year ago* (last edited 1 year ago) (1 children)

> That's because you told it to. Don't make it recreate existing art then.

If you took a random concept and explained it to a person, they could, using their existing knowledge, draw it somewhat competently. That's because people are able to apply knowledge to make something new. But if you told someone to recreate something that already exists, even a professional would never reproduce it exactly, no matter how much time and effort they put into it. AI can do the latter because it's basically copying, and it can't do the former because there's nothing to copy from.
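The distinction being argued here is essentially memorization versus generalization, which can be shown with a deliberately tiny contrast. The data and "models" below are invented for illustration:

```python
# Toy contrast between a pure memorizer and a generalizer.
# Underlying rule generating the training data: y = 2 * x.
train = {0: 0, 1: 2, 2: 4}

def lookup_model(x):
    """Pure memorizer: perfect 1:1 recall of training pairs, nothing else."""
    return train.get(x)  # None for anything it has never seen

def fitted_model(x):
    """Generalizer: a rule fitted to the same data (here, slope 2)."""
    return 2 * x

print(lookup_model(1))   # 2    -> exact recreation of a training example
print(lookup_model(10))  # None -> no knowledge to apply to a new input
print(fitted_model(10))  # 20   -> applies the learned rule to something new
```

Which of these two behaviors generative models are closer to is exactly the disagreement in this thread.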

[–] regbin_@lemmy.world 5 points 1 year ago (1 children)

> If you took a random concept and explained it to a person, they could, using their existing knowledge, draw it somewhat competently. That's because people are able to apply knowledge to make something new.

Theoretically it can, but it would involve meticulous and proper labeling of each training sample. Currently, most training data is labeled automatically, and the labels aren't descriptive or verbose enough. I believe the improvements in the latest version of DALL-E are due to OpenAI's use of a more advanced image captioner.
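Why verbose captions matter can be sketched with a crude word-overlap matcher standing in for the text conditioning a model learns from its labels. The captions, filenames, and scoring here are all hypothetical:

```python
# Toy sketch: sparse vs. verbose training captions. With sparse labels a
# detailed prompt cannot be tied to the right image; verbose labels can.
sparse_captions = {
    "img_001": "dog",
    "img_002": "dog",
}
verbose_captions = {
    "img_001": "a golden retriever running on a sunny beach",
    "img_002": "a black poodle sleeping on a red velvet sofa",
}

def best_match(prompt, captions):
    """Score each caption by word overlap with the prompt (a crude stand-in
    for learned text conditioning), returning the best-scoring image."""
    words = set(prompt.lower().split())
    return max(captions, key=lambda k: len(words & set(captions[k].split())))

prompt = "a poodle sleeping on a sofa"
# Sparse labels score both images identically; verbose labels disambiguate.
print(best_match(prompt, verbose_captions))  # img_002
```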

[–] Omega_Haxors@lemmy.ml 0 points 1 year ago (1 children)

> Theoretically it can, but it would involve meticulous and proper labeling of each training sample.

OK so throw more Kenyans at it. Got it!

[–] regbin_@lemmy.world 1 points 1 year ago (1 children)
[–] Omega_Haxors@lemmy.ml 0 points 1 year ago

Well how do you think tagging was done? Because that's what they did.

[–] not_gsa@lemm.ee 1 points 1 year ago (1 children)

Not reading all that, I'll assume you are wrong.

[–] Omega_Haxors@lemmy.ml 0 points 1 year ago

This is the way.