Rife-App 3.35

Successor of Dain-App. Interpolate videos using AI · By GRisk

Referenced QT chapter track not found

A topic by VR360 created Sep 03, 2021 Views: 765 Replies: 4

Hi,
How can I fix this error?
Thanks!

Developer

Hi there, it must be something with the input file. Did it generate a crash_log.txt in the app folder?

Changed the video to an mp4 and it started working, but now I get this:

['E:/testfps/Take_30fps.mp4']
FPS: 30000/1001
FPS Eval: 29.97002997002997
G:/60fps/Take_30fps
Using Benchmark: True
Batch Size: -1
Input FPS: 29.97002997002997
Use all GPUS: False
Scale: 1.0
Render Mode: 0
Interpolations: 2X
Use Smooth: 0
Use Alpha: 0
Use YUV: 0
Encode: libx264
Using Half-Precision: True
Loading Data
Using Model: 3_1
Selected auto batch size, testing a good batch size.
Resolution: 7680x3840
Setting new batch size to 1
Resolution: 7680x3840
RunTime: 95.317333
Total Frames: 2857

  0%|                             | 4/2857 [00:02<26:36,  1.79it/s, file=File 2]Exception ignored in thread started by: <function queue_model at 0x000002302F3E9940>
Traceback (most recent call last):
  File "my_DAIN_class.py", line 407, in queue_model
  File "my_DAIN_class.py", line 135, in make_inference
  File "model\RIFE_HDv3.py", line 391, in inference
  File "model\RIFE_HDv3.py", line 298, in predict
  File "torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/model/RIFE_HDv3.py", line 20, in forward
    flow0 = torch.mul(_1, 0.5)
    f1 = __torch__.model.warplayer.warp(x1, flow0, )
    x2 = (self.conv2).forward(x1, )
          ~~~~~~~~~~~~~~~~~~~ <--- HERE
    _2 = _0(flow0, None, 0.5, "bilinear", False, True, )
    flow1 = torch.mul(_2, 0.5)
  File "code/__torch__/model/RIFE_HDv3/___torch_mangle_27.py", line 11, in forward
    x: Tensor) -> Tensor:
    x0 = (self.conv1).forward(x, )
    return (self.conv2).forward(x0, )
            ~~~~~~~~~~~~~~~~~~~ <--- HERE
  File "code/__torch__/torch/nn/modules/container/___torch_mangle_26.py", line 12, in forward
    _0 = getattr(self, "0")
    _1 = getattr(self, "1")
    input0 = (_0).forward(input, )
              ~~~~~~~~~~~ <--- HERE
    return (_1).forward(input0, )
  def __len__(self: __torch__.torch.nn.modules.container.___torch_mangle_26.Sequential) -> int:
  File "code/__torch__/torch/nn/modules/conv/___torch_mangle_25.py", line 21, in forward
  def forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_25.Conv2d,
    input: Tensor) -> Tensor:
    _0 = (self)._conv_forward(input, self.weight, self.bias, )
          ~~~~~~~~~~~~~~~~~~~ <--- HERE
    return _0
  def _conv_forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_25.Conv2d,
  File "code/__torch__/torch/nn/modules/conv/___torch_mangle_25.py", line 27, in _conv_forward
    weight: Tensor,
    bias: Optional[Tensor]) -> Tensor:
    _1 = torch.conv2d(input, weight, bias, [1, 1], [1, 1], [1, 1])
         ~~~~~~~~~~~~ <--- HERE
    return _1
Traceback of TorchScript, original code (most recent call last):
  File "C:\Users\Gabriel\Downloads\torch19\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "C:\Users\Gabriel\Downloads\torch19\lib\site-packages\torch\nn\modules\conv.py", line 443, in forward
    def forward(self, input: Tensor) -> Tensor:
        return self._conv_forward(input, self.weight, self.bias)
               ~~~~~~~~~~~~~~~~~~ <--- HERE
  File "C:\Users\Gabriel\Downloads\torch19\lib\site-packages\torch\nn\modules\conv.py", line 439, in _conv_forward
                            weight, bias, self.stride,
                            _pair(0), self.dilation, self.groups)
        return F.conv2d(input, weight, bias, self.stride,
               ~~~~~~~~ <--- HERE
                        self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 24.00 GiB total capacity; 2.66 GiB already allocated; 18.15 GiB free; 3.82 GiB reserved in total by PyTorch)

I really need urgent help.

Developer

RuntimeError: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 24.00 GiB total capacity; 2.66 GiB already allocated; 18.15 GiB free; 3.82 GiB reserved in total by PyTorch)


This is the real problem. Your GPU is running out of memory: a 7680x3840 input is too big a resolution for your card. You can try setting Inner Scale to 0.25 (it will generate worse results) and turning on "Try to save memory" at the bottom to reduce memory use a little, but at such a high resolution it might still run into this problem.
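To give a sense of scale (a back-of-the-envelope sketch, not Rife-App's actual allocator; the 64-channel feature-map width is a hypothetical figure), activation memory in a conv layer grows linearly with width × height, so rendering at Inner Scale 0.25 shrinks each intermediate feature map to 1/16 of its full-resolution size:

```python
def conv_activation_bytes(width, height, channels=64, bytes_per_element=2):
    """Rough size of one conv feature map, assuming half precision (2 bytes/element)."""
    return width * height * channels * bytes_per_element

# Native 8K frame vs. the same frame at Inner Scale 0.25.
full = conv_activation_bytes(7680, 3840)
scaled = conv_activation_bytes(7680 // 4, 3840 // 4)

print(f"full res:   {full / 2**30:.2f} GiB per feature map")
print(f"0.25 scale: {scaled / 2**30:.2f} GiB per feature map")
print(f"ratio:      {full // scaled}x")
```

Even at the reduced scale, the network keeps many such buffers alive at once (plus the 7.91 GiB allocation the log shows for a single op), which is why even a 24 GiB card can still fail on 8K input.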