That pregnant belly on that wolf is crazy fetishistic. I'm not sure if it's because the people behind this stuff have a fetish for it, or because so many people with that fetish use AI to indulge it that the AI can't make a pregnant wolf without giving it a human belly.
They're probably using some unmodified corporate model, so it's just a weird failing in the way the AI works. They don't really know or model things; they're sort of making an inference based on image or video tags and captions, which can yield weirdly flexible, broad concepts that synthesize neatly with other things, or can yield absolute nightmare nonsense (quick sketch of the stock-model setup below).
Ironically, a fetish-based image model would probably make it furry or get it sort of close to correct (probably not, though), because of all the furry art and the wide range of pregnancy fetish art those sorts of models have pulled from Danbooru or wherever. But I'd guess the video model is trained on something like AI-captioned YouTube videos and commercials that are only going to show "pregnant" in a way that basically makes the concept in the model visually equate to "big pink blob in middle of thing?"
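For context on "unmodified corporate model": that mostly just means prompting a stock checkpoint as-is. A minimal sketch, assuming the Hugging Face diffusers library (the model ID is only an example):

```python
# Minimal sketch: prompting a stock text-to-image checkpoint as-is.
# Assumes the Hugging Face `diffusers` library; the model ID below is
# illustrative, not a claim about which model was actually used.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# The model is conditioned only on the prompt text. It has no internal
# model of anatomy, just statistical associations between captions and
# pixels learned from its training pairs.
image = pipe("a pregnant wolf standing in a forest").images[0]
image.save("wolf.png")
```

The model only ever sees the prompt string, which is why a concept its training captions never illustrated properly comes out mangled.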
I was assuming these models train themselves on their previously produced data, so a bunch of dudes with a pregnancy fetish would've used the model and in turn "trained" it into something like this, getting further and further away from what it "should" look like because it's sampling AI-generated stuff instead of actual photo and art references. Though I think I might be misunderstanding a problem these AI models have.
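Worth noting: what's described above is basically the "model collapse" worry, where a model retrained on its own outputs drifts and degrades over generations. A toy sketch of the effect, using a 1-D Gaussian repeatedly refit to its own samples (numbers arbitrary):

```python
# Toy illustration of "model collapse": repeatedly refitting a model
# (here, a 1-D Gaussian) to samples drawn from its previous version.
# The variance decays and the mean random-walks away from the truth.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: fit to the real data

for generation in range(1, 31):
    samples = rng.normal(mu, sigma, size=50)   # "AI-generated" data
    mu, sigma = samples.mean(), samples.std()  # retrain on it alone
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
# sigma trends toward 0: each refit loses a little spread, so knowledge
# of the original distribution degrades generation by generation.
```

As the reply below points out, though, deployed checkpoints don't actually retrain themselves, so this loop never runs on its own.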
No, once a given checkpoint is made it's completely static. Even the problem of AI-generated material being used as training data for subsequent checkpoints is overblown: uncurated and incorrectly tagged, it's the same as mixing in bad data from any other source, but you've also got a bunch of hobbyists making LoRAs or finetuning checkpoints using hand-curated AI images that meet whatever criteria they set. (Remember what I mentioned about models effectively synthesizing new things by combining concepts, with mixed success? Some people, particularly fetishists, basically make a LoRA that reinforces the cases where the model got the concept right, because they don't have a lot of separate art to train on.)

I've also seen an example of someone deliberately making a LoRA as messed up and full of common AI defects as possible, meant to be used at a negative weight to drive the output away from those concepts.
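For the curious, "negative weight" looks roughly like this in practice. A sketch assuming a recent Hugging Face diffusers with its PEFT-backed adapter API; the checkpoint ID and LoRA repo name are made up for illustration:

```python
# Sketch: loading a LoRA and applying it at a *negative* weight so the
# model is steered away from, rather than toward, what the LoRA encodes.
# Assumes a recent `diffusers` with the PEFT adapter integration; the
# checkpoint ID and LoRA repo name below are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA deliberately trained on artifact-heavy, defective
# outputs, as described in the comment above.
pipe.load_lora_weights("someuser/common-ai-defects-lora",
                       adapter_name="defects")

# Positive weights push generations toward the LoRA's concepts;
# a negative weight pushes away from them.
pipe.set_adapters(["defects"], adapter_weights=[-1.0])

image = pipe("portrait photo of a person waving").images[0]
image.save("out.png")
```

Prompt-based UIs express the same thing inline; in AUTOMATIC1111's webui, for instance, it's written as `<lora:defects:-1>` in the prompt.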