this post was submitted on 17 Mar 2025
13 points (78.3% liked)

Technology


AI is increasingly a feature of everyday life. But with its models trained on often outdated data and the field still dominated by male researchers, AI is also perpetuating sexist stereotypes as its influence on society grows.

top 3 comments
[–] Alexstarfire@lemmy.world 8 points 2 weeks ago

AI is based on humans, so yes.

[–] chaosCruiser 8 points 2 weeks ago

If there’s a bias in the training data, you’ll find the same bias in the generated output.
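The point above can be shown with a toy sketch (the corpus and the trivial count-based "model" are made up for illustration): a generator that simply picks the most frequent continuation in its training data will reproduce whatever skew that data contains.

```python
from collections import Counter

# Toy corpus with a deliberate gender skew:
# "the doctor said" is followed by "he" three times out of four.
corpus = [
    "the doctor said he",
    "the doctor said he",
    "the doctor said he",
    "the doctor said she",
]

# "Train" a trivial model: count which word follows the shared prefix.
counts = Counter(line.split()[-1] for line in corpus)

# "Generate" by picking the most frequent continuation.
prediction = counts.most_common(1)[0][0]
print(prediction)  # prints "he": the skew in the data becomes the output
```

Real models are vastly more complex, but the mechanism is the same: frequency patterns in the training data become the defaults of the generated output.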

[–] webghost0101@sopuli.xyz 1 points 2 weeks ago* (last edited 2 weeks ago)

The image generation portion of this is not the biggest long-term problem, because image generators are genuinely very dumb. Good training data can mitigate this a lot, but more importantly:

Image generation does not reason the way LLMs can.

Once the tech has properly matured to the point where fine-tuning of details is possible, I expect a true LLM reasoning component to be built in that always specifies to the image generation module exactly how the intended image is supposed to look, including gender and age if those were not user-specified.
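The interface being imagined here can be sketched as follows. This is a hypothetical design (the `ImageSpec` structure and `build_spec` function are invented for illustration, not any real API): the reasoning step makes unspecified attributes explicit, here by sampling uniformly, instead of leaving them to the image model's skewed defaults.

```python
import random
from dataclasses import dataclass

@dataclass
class ImageSpec:
    """Hypothetical structured spec handed from a reasoning module
    to an image generation module."""
    subject: str
    gender: str

def build_spec(prompt: str) -> ImageSpec:
    # Keep a gender the user specified; otherwise choose uniformly
    # rather than inheriting the training-data bias.
    # (Check "woman" first, since "man" is a substring of "woman".)
    if "woman" in prompt:
        gender = "woman"
    elif "man" in prompt:
        gender = "man"
    else:
        gender = random.choice(["woman", "man"])
    return ImageSpec(subject=prompt, gender=gender)

spec = build_spec("a doctor in a hospital")
print(spec)  # gender is now explicit in the spec sent to the image model
```

The design choice is that the decision point moves out of the image model's opaque statistics and into an explicit, auditable field.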

This does not solve the problem of bias in LLMs, but I want to highlight that the LLM reasoning module is the single part of the AI that most needs to be bias-aware; image generation will smooth itself out.

This is somewhat of a reactionary rant about "researchers" and people who address image gen and text gen under the same rules, and who, as the worst offenders, judge GPT models based on DALL-E outputs. However faulty and hallucinatory they all are, they are not the same thing.

Thanks for reading.