
Xanthius

A member registered Jul 31, 2022

Recent community posts

GuideImg just means that it uses the Guide Image for that frame, which is the same image passed to ControlNet.

What are you using for the ControlNet settings?

I haven't tested with any 2.X model since ControlNet was released for it, but it works great on every 1.5 model I have tested with, although I haven't tested pix2pix at all.

I appreciate the desire to tip, but unfortunately PayPal is the only thing I have set up that can accept tips like that (unless you deal with crypto or something, in which case I could make a keypair quickly). No need to concern yourself with tipping me, though. People have been more generous than I expected already (especially the people who came here after the PromptMuse video dropped).

I have only seen that error when guide frames aren't uploaded from the interface

I haven't tested it with pix2pix and I don't know how inpainting works for that model. Have you tried it with any other models?

What prompt and model are you using? I haven't seen an issue like this, and I wonder if it's maybe a model that particularly doesn't work well with this. The only other issues I could imagine would be prompt-based, or some settings in your UI that aren't at their defaults.

I do have the inpainting model for RV1.4 and I used that version a couple of times, but I generally just used the normal version for the tests, and I haven't compared the inpainting vs. non-inpainting models yet. As for the inpainting mask, I think I misunderstood what you meant. The script does use an image mask to do the processing, but there isn't currently a way to upload custom masks for the frames themselves, and I thought you were asking about that.

An inpainting-mask layer isn't implemented in the script yet, but it's planned for an upcoming update so you can do things like modifying only certain areas of the video. As for how ControlNet works on various models, it seems to work just fine on dreamboothed models, so all the ones people download should work. I ran my tests on RealisticVision1.4 just because I like that model in general, but I haven't tested whether inpainting-specific models do better or worse.
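
To sketch what that planned mask layer could look like (purely illustrative, not the script's actual implementation; the helper name and paths are hypothetical):

```python
from PIL import Image

def apply_region_mask(original, generated, mask):
    # White areas of the mask take the generated pixels; black areas
    # keep the original frame untouched. Assumes generated is already
    # the same size as original.
    mask = mask.convert("L").resize(original.size)
    return Image.composite(generated, original, mask)

frame_out = apply_region_mask(
    Image.open("frames/Frame-0001.png"),
    Image.open("outputs/Frame-0001.png"),
    Image.open("masks/Frame-0001.png"),  # white = regenerate, black = keep
)
```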

Try either denoising strength at 1 or LoopBackSource set to previousFrame; you don't want denoising below 1 with any other value for LoopBackSource. I also find that Euler (not Euler a) seems to perform best among the samplers, and trying both with and without color correction may help.
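
To make that constraint explicit, here's a tiny hypothetical helper (not part of the script) that encodes the rule above:

```python
def check_settings(denoising_strength: float, loopback_source: str) -> None:
    # Hypothetical sanity check for the combination described above.
    if denoising_strength < 1.0 and loopback_source != "previousFrame":
        raise ValueError("use denoising below 1 only with "
                         "LoopBackSource = previousFrame")
```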

What are all the settings and the prompt you used?

I usually use a denoise strength of 1.0, but for most images I'd suggest going with 0.75-1.0, so keep it high. With the black background, it could also be more prone to darkening from either the color correction or the loopback (i.e. denoising below 1).
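
As a rough illustration of why a small per-frame darkening compounds over a loopback run (the 2% figure is invented purely for the arithmetic):

```python
drift = 0.98   # hypothetical 2% darkening per loopback pass
frames = 60
print(f"relative brightness after {frames} frames: {drift ** frames:.2f}")
# -> about 0.30, i.e. the clip ends up far darker than it started
```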

What are the settings that you're using? I suspect the denoising strength might be too low, but it would be helpful to see the settings as a whole.

Probably nothing as complex or computationally expensive as that, but I'm looking at perhaps using some of the ControlNet preprocessors to keep details like stray hairs from manifesting or disappearing randomly, as well as some color checking to keep color flickering to a minimum.
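
For anyone who wants to experiment in the meantime, one possible form of that color checking is histogram matching against a reference frame. This is a sketch of the idea, not what the script currently does (the channel_axis argument assumes a recent scikit-image):

```python
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

def stabilize_colors(frame_path, reference_path, out_path):
    # Match a frame's color histogram to a reference frame to damp
    # frame-to-frame color flicker.
    frame = np.asarray(Image.open(frame_path).convert("RGB"))
    reference = np.asarray(Image.open(reference_path).convert("RGB"))
    matched = match_histograms(frame, reference, channel_axis=-1)
    Image.fromarray(matched.astype(np.uint8)).save(out_path)
```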

Looks fantastic!

As for the batches, I have been debating how they should work with this. I'm working on a system where it generates multiple versions of each new frame and then tries to pick the best one at each iteration to get better consistency. I believe that would be the most useful way to use batch processing here, but it's not quite ready yet.
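
For a flavor of what "pick the best one" could mean, here's a minimal sketch that scores candidates by mean squared error against the previous frame; it's a stand-in for whatever scoring the finished feature actually uses:

```python
import numpy as np

def pick_most_consistent(candidates, previous_frame):
    # Choose the candidate closest to the previous frame by mean
    # squared pixel error (hypothetical scoring, for illustration).
    prev = np.asarray(previous_frame, dtype=np.float32)
    scores = [
        np.mean((np.asarray(c, dtype=np.float32) - prev) ** 2)
        for c in candidates
    ]
    return candidates[int(np.argmin(scores))]
```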

What happens when you run it?

Unfortunately that's the API for standalone applications, not for scripts. There is a somewhat hacky way discussed on GitHub to let multiple ControlNet layers be controlled independently from a script, but I haven't tested it, and I expect native support for it will be officially added soon enough anyway.
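
For the curious, the workaround is roughly along these lines; I haven't verified it, the attribute layout varies between ControlNet versions, and everything here should be treated as an untested sketch:

```python
def set_unit_images(p, images):
    # UNTESTED sketch of the workaround discussed on GitHub: ControlNet's
    # per-unit objects live somewhere inside p.script_args, so a script can
    # reach in and overwrite each unit's image. Where exactly they live
    # varies between ControlNet versions; inspect p.script_args yourself.
    unit_idx = 0
    for arg in p.script_args:
        if hasattr(arg, "image") and unit_idx < len(images):
            arg.image = images[unit_idx]
            unit_idx += 1
```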

Looks great! I think the best method would be to use something like Blender and have it render perfect depth maps instead of using the preprocessor, but I haven't actually tested that yet. I have heard a lot about https://toyxyz.gumroad.com/l/ciojz for that kind of thing, but I have never tried it myself and I don't know how it would work for animations. I would like to implement the ability to add multiple ControlNet inputs rather than just the guiding frames, but the issue is that, as far as I'm aware, I can only change the "control_net_input_image" property, which impacts all the layers at once; I cannot set them individually from a script.
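
For reference, the one global knob available to a script looks roughly like this (the field name is the one mentioned above; whether it accepts a single image or a list may vary by ControlNet version):

```python
def set_guide_images(p, guide_images):
    # Every ControlNet layer picks up the same image(s); there is
    # no per-layer equivalent available to scripts as far as I know.
    p.control_net_input_image = guide_images
```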

Thanks! That's high praise, especially for a project that has only taken three and a half days to develop so far. There's still a long way to go with this.

Even I struggle with speed on it. I plan to upgrade my GPU, but I don't see why someone couldn't bring this to Google Colab, RunPod, or anything else like that to run it remotely on faster hardware.

Now that's looking great!

I look forward to seeing your results!

As someone else stated:

"The problem was in the settings tab for ControlNet. There is a checkbox:

Allow other script to control this extension"

If you don't have this enabled on the A1111 settings page under the ControlNet tab, the script won't be able to update the ControlNet image.

Oh yeah, without that my script can't change the ControlNet image. Glad you got that fixed, and if someone else has this issue I'll know what to suggest.

The animation frames should just be in alphabetical order; no specific naming scheme is needed, though the ordering could be causing your problem. I never put anything into the ControlNet input manually. I just leave those image sections blank and let them get auto-filled.
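
One common gotcha with alphabetical ordering: unpadded numbers sort as frame10 before frame2. A quick hypothetical way to check and normalize the names (paths are examples; make sure the new names don't collide with existing files):

```python
import os
from glob import glob

frames = sorted(glob("guide_frames/*.png"))  # the order the script will see
print(frames[:3])

# Zero-padded names keep alphabetical order equal to numeric order.
for i, path in enumerate(frames):
    os.rename(path, os.path.join("guide_frames", f"frame_{i:04d}.png"))
```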

I use HED, openpose, depth, or canny depending on the image, but I nearly always test on RealisticVision1.4.

How many frames did you upload for it, and what did they look like?

There are a few YouTubers who have said they are working on tutorials for it. It has only been published for around two days, so many things aren't fully worked out or understood yet, even by me, so they may need a bit of time to experiment and figure it out for the models and embeddings they use before publishing their videos. Each model seems to need different settings.

I see. I haven't really been testing much with multi-ControlNet for the sake of speed, but I'm guessing it would come down to the weights of the two ControlNet layers. This tool is still extremely early, so a lot of testing needs to be done to figure out the best way to get the right results.

That looks like far too low a ControlNet strength. You can see the comparison in the post using Snape.

Are you talking about the upload of the guide frames? At the moment the UI doesn't give visual feedback on that, but I would like to add something to show that the files were successfully uploaded (perhaps a number indicating how many frames are currently uploaded)

As for the image outputs, the individual frames should appear in the "stable-diffusion-webui\outputs\img2img-images" folder, named along the lines of "Frame-0001", and the spritesheets should be saved to "stable-diffusion-webui\outputs\img2img-grids" with names such as "grid-0001", although that filename is subject to change in future updates.
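
If you want to reassemble those frames into a clip yourself, something like this works; the recursive glob is an assumption to cover A1111's dated subfolders, and the paths are the defaults mentioned above:

```python
from glob import glob
from PIL import Image

paths = sorted(glob(
    "stable-diffusion-webui/outputs/img2img-images/**/Frame-*.png",
    recursive=True))
frames = [Image.open(p) for p in paths]
# Write an animated GIF at 10 fps (duration is per-frame, in ms).
frames[0].save("animation.gif", save_all=True,
               append_images=frames[1:], duration=100, loop=0)
```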

Both the spritesheet and the individual frames should output properly to the GUI, and they do on my end. Does nothing get returned for you, or what exactly are you getting?

What does the original video look like? It's hard to keep a consistent background unless the original background has enough detail to be picked up by ControlNet. For that reason I expect many people will just generate with a greenscreen or something, then superimpose the result onto a background.
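
If you go the greenscreen route, the superimposing step can be as simple as a rough chroma key; this sketch uses invented thresholds that you'd tune per clip:

```python
import numpy as np
from PIL import Image

def composite_over_background(frame_path, background_path, out_path):
    # Treat strongly green pixels as transparent and paste the frame
    # over a fixed background. The thresholds are guesses; tune them.
    frame = np.asarray(Image.open(frame_path).convert("RGB")).astype(np.int16)
    bg = np.asarray(Image.open(background_path).convert("RGB").resize(
        (frame.shape[1], frame.shape[0])))
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    is_green = (g > 120) & (g > r + 40) & (g > b + 40)
    out = np.where(is_green[..., None], bg, frame).astype(np.uint8)
    Image.fromarray(out).save(out_path)
```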

New Version (0.72) should fix the 3-panel issue.

The individual files should be named along the lines of Frame-XXXX and will be located in your default folder for images. The spritesheet will be in the default folder for grids

Here's a gif I generated while writing and testing the fix:

[gif: Shelon Musk]

New Version (0.72) should fix it

It should directly output the spritesheet and the frames. You're the second person to have it output the 3-panel images instead, and I'm not sure why that happens for some people.

The image you linked requires permission to view, but that kind of issue largely comes down to settings such as denoise strength, ColorCorrection, and Third Frame Image.

If you have the frames from a guiding video, you can put those in. It's just the input for ControlNet, so you could also put in a pre-processed set of images for your choice of model (openpose, HED, depth, etc.) with the preprocessor disabled.
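
As an example of producing such a pre-processed set outside the UI, here's a hypothetical Canny pass with OpenCV (paths and thresholds are just starting points):

```python
import os
from glob import glob
import cv2

os.makedirs("canny_frames", exist_ok=True)
for path in sorted(glob("guide_frames/*.png")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)  # thresholds are a starting point
    cv2.imwrite(os.path.join("canny_frames", os.path.basename(path)), edges)
```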

the "object of type 'NoneType' has no len()" error usually occurs if you forgot to give it the animation guide frames.

To provide the guide frames for the script, press the "Upload Guide Frames" button located just above the slider and select the frames of your animation