
It worked really well until I got the following error. Now it occurs every time, regardless of my chosen settings. I don't think it's a simple lack of memory, as I'm running 32 GB of RAM and my 3070 has 8 GB of VRAM. Any known workarounds/fixes?

Error message:

Traceback (most recent call last):
  File "start.py", line 363, in OnRender
  File "torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 141, in __call__
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\unet_2d_condition.py", line 150, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\unet_blocks.py", line 505, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\attention.py", line 168, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\attention.py", line 196, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "diffusers\models\attention.py", line 254, in forward
RuntimeError: CUDA out of memory. Tried to allocate 2.25 GiB (GPU 0; 8.00 GiB total capacity; 4.40 GiB already allocated; 0 bytes free; 6.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
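For what it's worth, the error text itself points at one possible workaround: since reserved memory (6.60 GiB) is well above allocated memory (4.40 GiB), setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable may reduce allocator fragmentation. A minimal sketch below; the `128` value is just a guess to tune, and the variable has to be set before PyTorch initializes CUDA (ideally before `import torch`):

```python
import os

# Assumption: a smaller max split size reduces fragmentation for this
# workload; 128 MB is a commonly tried starting point, not a known fix.
# Must be set before torch initializes the CUDA allocator.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

def alloc_conf() -> str:
    """Return the allocator config PyTorch will read at CUDA init time."""
    return os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "")

print(alloc_conf())  # max_split_size_mb:128
```

Alternatively, setting it in the shell before launching the tool (`set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` on Windows) has the same effect without touching the code.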

Oddly, as soon as I posted here it began working again, with no settings changed. Neat feature!

It seems like this thing is fairly unstable for the most part.