Stable Diffusion

It's been a while since I've updated my Stable Diffusion kit, and the technology moves so fast that I should probably figure out what new tech is out there.

Is most everyone still using AUTOMATIC1111's interface? Any cool plugins people are playing with? Good models?

What's the latest in video generation? I've seen a lot of animated images that maintain frame-to-frame coherence very well. Kling 1.6 is out there, but it doesn't appear to be free or local.

[–] higgsboson@dubvee.org 5 points 2 weeks ago (2 children)

I'm still stuck in the past with SD1.5 on A1111, because my GPU is dogshit and all the other UIs I've tried are either too complicated or too dumbed down.

[–] 474D@lemmy.world 2 points 1 week ago (1 children)

SDXL can be very accommodating with TeaCache, and the current most popular checkpoints (Illustrious) are based on SDXL. There are photo-realistic branches of it now, worth checking out.
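If you want to poke at an Illustrious-style SDXL checkpoint outside of a UI, here's a minimal sketch using the diffusers library. The checkpoint filename is a placeholder for whatever you download, and TeaCache itself is a separate optimization not shown here:

```python
# Minimal sketch: loading a local SDXL-based checkpoint (e.g. an Illustrious
# derivative) with the diffusers library.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "illustrious_checkpoint.safetensors",  # hypothetical filename; use your own
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(
    prompt="photo-realistic portrait, detailed lighting",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("output.png")
```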

[–] Oberyn@lemmy.world 3 points 1 week ago

Don't forget to check out Illustrious's derivative model nꝏbai; it can also do furries (if you care for that sorta thing)!

[–] swelter_spark@reddthat.com 2 points 1 week ago (1 children)

There are some really good SD1.5-based models even by current standards, though. Nothing wrong with that.

[–] higgsboson@dubvee.org 2 points 1 week ago* (last edited 1 week ago) (2 children)

It's all about getting a good workflow set up. That's why I wish I could make sense of ComfyUI, but alas, it still eludes me.

[–] swelter_spark@reddthat.com 2 points 1 week ago

InvokeAI lets you use either an A1111-style interface or a nodes-based workflow. Unfortunately, it isn't compatible with ComfyUI workflows. I haven't really done much with nodes, but I want to experiment more and figure it out.

[–] 474D@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

The basic design is to generate a small image of what you want, upscale it, then run it through the model again to fill in the details (all in one workflow). You can also just copy someone else's workflow and change the prompt lol. If an image was created with ComfyUI, you can usually just drag and drop it into the UI and it retains the whole workflow. I can't stress enough how important the upscaling is; it's pretty amazing the details it creates.
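For reference, that generate-small, upscale, refine loop looks roughly like this outside ComfyUI, sketched with the diffusers library. The model name, resolutions, and denoise strength below are illustrative assumptions, and a plain resize stands in for a dedicated upscaler model:

```python
# Rough sketch of the "hires fix" workflow described above:
# 1) generate a small image, 2) upscale it, 3) run it back through the model
# in img2img mode at low denoising strength so it fills in fine details.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

model = "stabilityai/stable-diffusion-xl-base-1.0"  # any SDXL checkpoint works
prompt = "a cozy cabin in a snowy forest, golden hour"

# Step 1: base generation at a modest resolution.
txt2img = StableDiffusionXLPipeline.from_pretrained(model, torch_dtype=torch.float16)
txt2img.to("cuda")
small = txt2img(prompt, width=832, height=832, num_inference_steps=25).images[0]

# Step 2: naive 1.5x upscale (a proper upscaler model would do better).
big = small.resize((1248, 1248))

# Step 3: img2img pass; low strength keeps the composition, adds detail.
# (Loading a second pipeline is wasteful on VRAM but keeps the sketch simple.)
img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(model, torch_dtype=torch.float16)
img2img.to("cuda")
final = img2img(prompt, image=big, strength=0.35, num_inference_steps=25).images[0]
final.save("upscaled_detailed.png")
```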