
Well dang. My base image is 2:3. I guess I can add empty space to the sides, fill it with white to match the background so it's 1216x1216, and scale it down to 1024x1024 in GIMP. Does the 1:1 ratio have something to do with generating the image, or is it a CLIP Vision/tiling thing? Or is it just for the lower Buzz cost, or because the model's training prefers square images over tall ones and it starts guessing otherwise?
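The pad-to-white-square-then-downscale step described above can also be scripted instead of done by hand in GIMP. A minimal sketch with Pillow, assuming a portrait input; the function name and defaults are illustrative, not part of any tool mentioned here:

```python
from PIL import Image

def pad_to_square(img, size=1024, fill=(255, 255, 255)):
    """Pad an image to a square canvas (white by default),
    then downscale it to size x size."""
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)
    # center the original on the square canvas
    canvas.paste(img.convert("RGB"),
                 ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((size, size), Image.LANCZOS)
```

For a 2:3 base image (e.g. 811x1216), this pads it to 1216x1216 and resizes to 1024x1024, matching the manual GIMP steps.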

I've been keeping the LoRA weight for Smooth 2 at 0.7-0.8, CFG at 6.5, and Steps at 30, and leaving CLIP Skip at 2, but I've been having trouble getting the denoise to affect the image much, so I've been playing with it. I didn't realize you were running it through multiple times, so this might be the answer I was looking for.


I've been using the DPM++ 2M Karras sampler, since I can't find 3M SDE anymore; am I generating with the wrong function? And just to be sure: you're using Image Variation and not Image to Image, right? I'm not sure what the difference is, but the latter seems more suited to photos.


Thanks again for the advice.

You need to use Stable Diffusion WebUI Forge locally, or on e.g. RunPod. Inpainting is done in multiple 1:1 sections (head, chest, waist, legs), each scaled to Pony's native 1024x1024 resolution. Again, I might write a guide on this in the next update.
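The sectional inpainting idea above (cutting a tall image into square crops at the model's native resolution) can be sketched in code. This is a hedged illustration, not the actual Forge workflow: it assumes square crops as wide as the image, evenly spaced down its height, and the function name and parameters are made up for the example:

```python
from PIL import Image

def crop_sections(img, n_sections=4, size=1024):
    """Split a tall image into evenly spaced square crops
    (e.g. head, chest, waist, legs), each resized to the
    model's native resolution for inpainting."""
    w, h = img.size
    side = w  # square crops as wide as the image
    # space the crop tops evenly so the sections span the full height
    step = (h - side) / max(n_sections - 1, 1)
    crops = []
    for i in range(n_sections):
        top = round(i * step)
        crop = img.crop((0, top, w, top + side))
        crops.append(crop.resize((size, size), Image.LANCZOS))
    return crops
```

Each crop would then be inpainted at 1024x1024 and pasted back at its original position and scale.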