AI is based on humans, so yes.
If there’s a bias in the training data, you’ll find the same bias in the generated output.
The image-generation portion of this is not the biggest long-term problem, because image models are genuinely very dumb. Good training data can mitigate this a lot, but more importantly:
Image generation does not reason the way LLMs can.
Once the tech has properly matured to the point where fine-tuning of details is possible, I expect a true LLM reasoning component to be built in, one that always specifies to the image-generation module exactly how the intended image is supposed to look, including gender and age if those were not user-specified.
This does not solve the problem of bias in LLMs, but I want to highlight that the LLM reasoning module is the single most important part of the system that needs to be bias-aware; image generation will smooth itself out.
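The pipeline I mean could be sketched roughly like this. Everything here is hypothetical (the function names, the attribute list, and the uniform-sampling policy are all illustrative); the point is only that a reasoning step expands the user's prompt into an explicit specification before the image model ever sees it, instead of letting the image model fall back on its training-data distribution:

```python
import random

# Illustrative attribute space; a real system would be far richer.
ATTRIBUTES = {
    "gender": ["woman", "man", "nonbinary person"],
    "age": ["young", "middle-aged", "elderly"],
}

def reasoning_step(user_prompt: str, user_specified: dict) -> str:
    """Fill in any attribute the user left unspecified.

    Here we sample uniformly as a stand-in debiasing policy, so the
    choice is explicit rather than inherited from training-data bias.
    """
    spec = dict(user_specified)
    for attr, options in ATTRIBUTES.items():
        if attr not in spec:
            spec[attr] = random.choice(options)
    detail = ", ".join(spec[attr] for attr in ATTRIBUTES)
    # The image-generation module receives a fully specified prompt.
    return f"{user_prompt} ({detail})"

# Usage: the user specified age but not gender, so gender gets filled in.
full_prompt = reasoning_step("a portrait of a doctor", {"age": "young"})
```

A prompt like “a portrait of a doctor” would then always reach the image model with the details pinned down, which is exactly why the bias-awareness has to live in the reasoning step.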
This is somewhat of a reactionary rant about “researchers” and people who judge image generation and text generation under the same rules, and who, as the worst offense, judge GPT models based on DALL-E's output. However faulty and hallucination-prone they all are, they are not the same thing.
Thanks for reading.