
finefin

A member registered Sep 22, 2015

Recent community posts

When you upload something to Gradio, it creates a temporary copy of the file and works with that (see https://www.gradio.app/docs/file#behavior), so maybe it's just a lack of disk space on your system partition?
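Since those temporary copies land in the system temp directory, a quick sanity check is to see how much room is left there. A minimal sketch (the helper names and the ~1 MB-per-frame estimate are mine, not part of the script):

```python
import shutil
import tempfile

def free_temp_space_mb() -> int:
    """Free space, in megabytes, on the partition holding the temp
    directory where Gradio writes its temporary upload copies."""
    usage = shutil.disk_usage(tempfile.gettempdir())
    return usage.free // (1024 * 1024)

def enough_room_for_frames(num_frames: int, avg_frame_mb: float = 1.0) -> bool:
    """Rough check, assuming roughly 1 MB per 512x512 PNG guide frame."""
    return free_temp_space_mb() > num_frames * avg_frame_mb
```

If this comes back tight for a long sequence, freeing space on the system partition (or pointing the temp dir elsewhere) is worth trying before blaming the script.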

However, I've never tried the script with that many frames. And I also wouldn't recommend it, because of the color degradation that WILL happen over time.

I did not change the log output at all. Maybe you need to update your SD install? I use SD.next by Vladmandic and don't have any problems.

You could try to comment out that function call on line 69: put a "#" in front of the line (Python uses "#" for comments, not "//"), like this:

# shared.log.info (....

If you do that, you won't see any progress in the UI, tho.

You downloaded the wrong thing. You need to get the raw file: https://raw.githubusercontent.com/finefin/SD-scripts/main/multi_frame_render-bet...

find my modified script here:
https://github.com/finefin/SD-scripts

  • Use every Nth frame: skip guide frames (for preview or ebsynth)
  • Render grid: enable to render the grid
  • Rows in grid: how many horizontal rows the grid should have
  • Fixed file upload
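The first two options above can be sketched in a few lines. A minimal illustration of the frame-skipping and grid-layout logic (the helper names are mine; the actual script may do this differently):

```python
import math

def select_guide_frames(frames: list, nth: int) -> list:
    """Keep every Nth frame (nth=1 keeps all), e.g. to thin a sequence
    down to guide frames for a preview or for ebsynth."""
    if nth < 1:
        raise ValueError("nth must be >= 1")
    return frames[::nth]

def grid_shape(num_images: int, rows: int) -> tuple:
    """Given a requested row count, how many columns the rendered grid needs."""
    return rows, math.ceil(num_images / rows)
```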

Hey Xanthius! I did some changes to the script that I want to share with the community - are you okay with that?

Looks like you're running out of memory. You can try lowering the resolution, using fewer ControlNets, and enabling low VRAM mode.

there are many reasons why this can happen - take a look at the console output to narrow them down ;)

oh yes, I had a "move to left by half the width" issue once or twice and I don't really know why that happens. I think it was caused by the checkpoint or a LoRA that I used. And did you take a look at the console output? It sometimes gives you a hint about what might be missing or not working correctly.

First of all make sure you select "pixel perfect" in your ControlNet(s).

I use the A1111-fork by Vlad and in some cases I have to de- and reactivate ControlNets and LoRAs after sending my first frame from txt2img. Sometimes only a complete re-start of the GUI helps. I usually do a few test renders before I activate the video script in order to see if it's working.

You should set the initial denoise to 0 if you want to keep your 1st frame, otherwise it will be re-rendered.

The normal 'denoise' setting should be somewhere between 0.9 and 1, tho. If you set it below that you will generate garbage, and if you set it to 0 you get the same frame over and over again.

oh, you're trying to run it on Colab? sorry, I can't help you with that, I only run this script locally.

you can fix this error by editing the python script as described here: https://itch.io/post/7576730


here's another short animation that I made :)

Warning: loud noise! turn down audio volume!

Thanks for this quick fix!
In case you want to upload single files instead of a whole folder change the line to this:

reference_imgs = gr.File(file_count="multiple", file_types=['.png', '.jpg', '.jpeg'], label="Upload Guide Frames", show_label=True, live=True)
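With `file_count="multiple"` the component hands the script a list of temp-file objects rather than a folder, so the frames should be sorted by filename before use. A sketch of how that list could be normalized (`guide_frame_paths` is my name, and the exact object Gradio passes varies by version — `gr.File` items usually expose the temp path as `.name`):

```python
import os

def guide_frame_paths(files) -> list:
    """Extract the temp-file path from each uploaded item and sort by
    basename so image001.png, image002.png, ... stay in sequence order,
    regardless of where Gradio scattered the temp copies."""
    paths = [getattr(f, "name", f) for f in files]
    return sorted(paths, key=os.path.basename)
```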

Take a look at the console output; it will give you a hint. For me it mostly fails because of memory issues. In that case, use fewer ControlNets and/or enable the low VRAM mode.

This works pretty well in the beginning. But inside the tunnel it is too dark to give the ControlNet enough info for a consistent animation. I will try again with another sequence from inside the tunnel, after the camera has adapted to the darkness.

Here's what the source material looks like: http://twitter.com/finefingames/status/1388535835985948681?cxt=HHwWksC93Y3_iMUmA... 

And here you can download a 614 frame sequence of this clip as 512x512px PNGs and try for yourself: http://finefin.com/tmp/TunnelSequence-614frames-512x512.zip

understood. we'll wait for native support, then ;)

thank you, I will have a look at the Character bones thing.

and sd-webui-controlnet has an API that allows you to target individual ControlNet models:

https://github.com/Mikubill/sd-webui-controlnet/wiki/API

example is here: https://github.com/Mikubill/sd-webui-controlnet/blob/main/scripts/api.py
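For reference, a request body for that API generally looks like the sketch below: ControlNet units ride along in `alwayson_scripts` on a normal `/sdapi/v1/txt2img` call. The field names follow the wiki page above, but the exact schema can shift between extension versions, so treat this as an assumption to verify against your install:

```python
import base64

def controlnet_txt2img_payload(prompt: str, image_bytes: bytes,
                               model: str, module: str = "depth") -> dict:
    """Build a txt2img request body that targets a single ControlNet unit
    via the extension's alwayson_scripts mechanism."""
    return {
        "prompt": prompt,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": base64.b64encode(image_bytes).decode("ascii"),
                    "module": module,   # preprocessor, e.g. "depth"
                    "model": model,     # the ControlNet checkpoint name
                }]
            }
        },
    }

# then POST it, e.g.: requests.post(base_url + "/sdapi/v1/txt2img", json=payload)
```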

combining 'depth' with 'hed' is the best approach so far...

In this experiment I used the Dreamshaper model, which generates pretty nice illustrations. The ControlNet 'depth' alone, however, is not so great. As you can see, it struggles with the details when the fingers are near the mouth, and so the AI starts to create a horrible act of chewing fingernails.


here's another one 

Did you allow the script to control the extension in Settings/ControlNet? You need to enable the option and do a complete restart of the A1111 UI, as it seems to set the permissions on startup. I had the same problem, and doing that made it work for me.

Here's another test with a more complex scene. I used 50 frames of myself, smoking on the balcony. I mixed SDv1.5 with this LoRA: https://civitai.com/models/13998/valerian-and-laureline-comic-style

Had the same issue. It was working after I restarted A1111 completely (not just 'reload ui') and checked 'Allow other script to control this extension' again - it was unchecked after the restart. So maybe your script just does not have the permission (yet) to drop the frames into ControlNet.

Loopback Source=Previous Frame

I edited my post above for completeness ;)


Hi!

I sent my img2img render of the first frame to img2img.
- img2img Denoising 0.95
- ControlNet 'depth' (+preprocessor depth) with default settings
- script settings: init denoise=1, Third Frame Img=FirstGen, Color Correction On, Loopback Source=Previous Frame
- prompt: a chimpanzee walks through the desert
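Putting those settings together, the frame loop behaves roughly like this sketch (my own simplification, with `render` standing in for the img2img + ControlNet call): frame 0 keeps the initial denoise, where 0 preserves it verbatim, and every later frame is re-rendered at the normal strength from the previous result.

```python
def render_sequence(guide_frames, render, init_denoise=0.0, denoise=0.95):
    """Sketch of the multi-frame loop: with Loopback Source = Previous Frame,
    each pass feeds on the last generated image, guided by the next frame."""
    results = []
    previous = None
    for i, guide in enumerate(guide_frames):
        strength = init_denoise if i == 0 else denoise
        source = guide if previous is None else previous
        previous = render(source, guide, strength)
        results.append(previous)
    return results
```

This also shows why init denoise = 0 keeps the first frame untouched while the normal denoise has to stay high (0.9-1) for the later frames.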

I upload a sequence of 512x512 images - there's no visual feedback, yet. So it's just a 'trust me bro' thing for now ;)

Don't put anything in the ControlNet image upload - this is where the script puts your frames automatically.

Make sure that the mentioned ControlNet setting is enabled that allows the script to control the extension. 

Here's my first working test. I used ~50 frames of my little brother walking on all fours towards the camera.

Thank you so much for this :)


For me it worked after shutting A1111 off completely and I also rebooted my machine.


ah ok, no visual feedback for the upload, yet. noted.

the single images+grid are saved, but they all are based on the first frame. I checked 'allow other scripts to control' in the settings as well.

EDIT: hah! it works! I just rebooted my machine and now it just works. Will upload a sample gif later ;)

The images are not uploading and not showing up in the GUI. I end up with 'animations' of one single image. The images are 512x512 and named image001.png, image002.png etc. What am I missing?

oh great! I remember playing this, totally hooked, at AMAZE festival while surrounded by crazy party people :D