Which AI engine was used to generate the pictures used for the NPCs and the playable characters?
All portraits use this style. The base images were created using various other Stable Diffusion models. Once the custom NPCs are done, I might write a guide on how to use it to stylize your base images without mangling them.
Thanks! Looks like there's a version for NoobAI too, which is what modern checkpoints like YiffyMix are based on, so I'll have to mess around with it.
You only need Smooth Style 2 and the default Pony model. It's important that you pass your base image through Img2Img multiple times instead of using a high denoise value. Use the following settings: DPM++ 3M SDE sampler, CFG 6.5-7, denoise 0.35-0.5, 25-30 steps.
Crop your base image square so you can use a resolution of exactly 1024x1024. Pass it through Img2Img with a random seed, then take the output and run it through Img2Img again. Do that about four times and it should match the style pretty closely.
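The multi-pass loop described above can be sketched in a few lines of Python. This is only an illustration of the feedback loop, not the actual generator being discussed: `pipe` is a placeholder for any img2img call (e.g. a wrapper around a diffusers `StableDiffusionImg2ImgPipeline`), and the names and defaults are assumptions.

```python
import random

def multipass_img2img(pipe, image, passes=4, denoise=0.4):
    """Run the same image through img2img several times at a moderate
    denoise instead of one pass at a high denoise.

    `pipe` is any callable(image, strength, seed) -> image; with a real
    backend each pass nudges the picture toward the LoRA's style while
    keeping the original composition intact.
    """
    for _ in range(passes):
        seed = random.randrange(2**32)  # fresh random seed each pass
        image = pipe(image, strength=denoise, seed=seed)
    return image
```

The point of the loop is that four passes at denoise 0.35-0.5 restyle the image gradually, whereas a single pass at a high denoise would let the model redraw (and mangle) the composition.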
Well dang. My base image is 2:3. I guess I can add empty space to the sides, fill it with white to match the background so it's 1216x1216, and scale it down to 1024x1024 in GIMP. Does the 1:1 ratio have something to do with generating the image, or is it a CLIP Vision/tiling thing? Or is it just for the lower Buzz cost, or because the model's training prefers square images over tall ones and it starts guessing?
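The padding math for that GIMP step can be sketched with a hypothetical helper (pure arithmetic here; in practice you'd paste onto a white canvas with GIMP or Pillow's `Image.new`/`paste` and then resize to 1024x1024):

```python
def pad_to_square(width, height):
    """Return (side, x_offset, y_offset): the side length of the square
    white canvas and where to paste the original so it is centered."""
    side = max(width, height)
    return side, (side - width) // 2, (side - height) // 2

# e.g. an 832x1216 (2:3) base image:
side, x, y = pad_to_square(832, 1216)  # -> 1216, 192, 0
# paste at (x, y) on a white 1216x1216 canvas, then scale to 1024x1024
```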
I've been keeping the LoRA weight for Smooth 2 at 0.7-0.8, CFG at 6.5, and steps at 30, and leaving CLIP Skip at 2, but I've been having trouble getting the denoise value to affect the image much, so I've been playing with it. I didn't realize you were running it through multiple times; that might be the answer I was looking for.
I've been using the DPM++ 2M Karras sampler, since I can't find 3M SDE anymore, unless I'm generating with the wrong function? And just to be sure: you're using Image Variation and not Image to Image, right? I'm not sure what the difference is, but the latter seems geared more toward photos.
Thanks again for the advice.