It shouldn't take more than a few seconds with your specs. Which model are you using? Is your GPU AMD or Nvidia? Are you using the new version 1.6.6b with optimized memory usage?
Doing this breaks the AI in many ways because the entire pipeline outside of the LLM requires everything to be in English to work properly. For example, the AI won't be able to properly retrieve memories anymore, and the NPC action system stops accurately translating RP actions to game actions. The LLM will also be dumber in general because it went through English RL training. I specifically added a language check to the first user input to deter people from doing what you describe.
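For what it's worth, the language check mentioned above could look something like this naive sketch (this is an illustrative heuristic, not the game's actual implementation; a real check would more likely use a proper language-detection library):

```python
# Naive "English only" gate for the first user input.
# Heuristic stand-in: just checks that most letters are ASCII,
# which catches e.g. Cyrillic or CJK input but not other
# Latin-script languages. A real check would use language detection.
def looks_english(text: str, threshold: float = 0.9) -> bool:
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return True  # nothing to judge, let it through
    ascii_letters = sum(c.isascii() for c in letters)
    return ascii_letters / len(letters) >= threshold

print(looks_english("Hello, how are you?"))  # mostly ASCII letters
print(looks_english("Привет, как дела?"))    # Cyrillic, rejected
```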
Try recording a video of the command prompt and getting a screenshot of the error that way. Sadly the game can't capture the error by itself because of how Windows handles capturing process outputs.
One reason this might happen is that, at least on Linux, or perhaps specifically on the GPU I use for testing, the backend sometimes crashes when loading a large model multiple times. That is fixed by restarting the PC. Other than that, I can only imagine that the model file itself somehow got corrupted during download. You can redownload it by deleting the model file in Silverpine 1.6.5d\Silverpine_Data\StreamingAssets.
Basically, if I understand right, the model uses 16.7 GB (gigabytes? gibibytes?), and all of it has to fit into your VRAM + RAM. The backend uses an algorithm that isn't transparent to me to fit some portion of that into your VRAM. 6 GB VRAM + 16 GB RAM = 22 GB, minus whatever the OS and other software use, minus whatever your laptop's iGPU reserves, which makes it a really tight fit, if it fits at all.
Like I said, you need 8 GB of VRAM for this model. It might fit if you update to the "c" version I just uploaded, since it uses an updated backend that does a better job of squeezing the model's layers into VRAM. Try closing your web browser too, and make sure the iGPU has as little VRAM as possible assigned to it in the BIOS, since it draws that from your normal system RAM, which is where most of the model spills into.
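The back-of-the-envelope math here looks roughly like this (every overhead figure below is a guess for illustration, not a measured value):

```python
# Rough fit check: does the model fit into VRAM + usable system RAM?
# All overhead numbers are assumptions, not measurements.
model_gb      = 16.7   # size of the model that must fit somewhere
vram_gb       = 6.0    # dedicated GPU memory
ram_gb        = 16.0   # total system RAM
os_overhead   = 4.0    # guess: OS + background software
igpu_reserved = 2.0    # guess: RAM carved out for the iGPU in the BIOS

available = vram_gb + ram_gb - os_overhead - igpu_reserved
print(f"available: {available:.1f} GB, needed: {model_gb:.1f} GB")
print("fits" if available >= model_gb else "too tight")
```

With guesses like these you land right around, or just under, what the model needs, which is why shrinking the iGPU reservation and closing other software can make the difference.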
https://itch.io/t/6024201/release-162-bug-reports#post-15693568
Soon there will be an update with a new model that's much better than Nemo, yet also runs fairly fast on systems with 8 GB of VRAM. I think anyone can reasonably acquire that kind of hardware.
You only need Smooth Style 2 and default Pony. It's important that you pass your base image through Img2Img multiple times instead of using a high denoising value. Use the following settings: DPM++ 3M SDE, CFG 6.5-7, Denoise 0.35-0.5, Steps 25-30.
The crop of your base image has to be square so you can use a resolution of exactly 1024x1024. Pass it through Img2Img with a random seed, then take the output and put it through Img2Img again. Do that about 4 times and it should match the style pretty closely.
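The workflow above can be sketched as a loop. `img2img` below is just a placeholder for whatever frontend or API you actually use, not a real function; the settings mirror the ones listed:

```python
# Sketch of the repeated low-denoise Img2Img loop described above.
# `img2img` is a stand-in for the real diffusion backend call.
import random

def img2img(image, *, denoise, steps, cfg, seed):
    # Placeholder: a real setup would invoke the diffusion backend here
    # (DPM++ 3M SDE sampler, per the settings in the post).
    return f"{image}->pass(d={denoise},s={steps},cfg={cfg},seed={seed})"

image = "base_1024x1024"  # square crop, exactly 1024x1024
for _ in range(4):        # ~4 passes gets close to the target style
    seed = random.randrange(2**32)  # fresh random seed each pass
    image = img2img(image, denoise=0.4, steps=28, cfg=6.5, seed=seed)
```

The point of the loop is that four gentle 0.35–0.5 denoise passes restyle the image gradually, where one high-denoise pass would mangle it.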
All portraits use this style. The base images were created using various other Stable Diffusion models. Once custom NPCs are done, I might write a guide on how to use it to stylize your base images without mangling them.
There are two ways of doing this.
The first one is having the model output various languages directly. This works decently, but the traditional NLP pipeline this game uses for various critical things requires everything to be in English. Rewriting it would not only be a monumental effort for each supported language, but it would also require me to have a native-speaker-level understanding of each language's grammar.
The second one is simply translating the player's input to English, proceeding as normal, then translating the model's output back to the player's language using the model itself. This works decently, but it means the translation step only sees the single message it's translating, with no conversation context, which leads to a very inconsistent experience; it also requires two additional API calls, making it very slow. It's feasible, but I wasn't happy with it the last time I tried it.
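As a rough sketch of that second approach (every function name here is a placeholder for illustration, not the game's actual code):

```python
# Sketch of "translate in, run the English pipeline, translate out".
# `llm` and `run_english_pipeline` are placeholders.
def llm(prompt: str) -> str:
    # Placeholder for the real model API call.
    return f"[model: {prompt}]"

def run_english_pipeline(text: str) -> str:
    # Placeholder for the normal English-only path
    # (memory retrieval, NPC actions, the main reply, ...).
    return llm(text)

def handle_turn(user_text: str, user_lang: str) -> str:
    # Extra call #1: the translation has no conversation context,
    # it only ever sees this one message.
    english_in = llm(f"Translate to English: {user_text}")
    english_out = run_english_pipeline(english_in)
    # Extra call #2: same problem on the way back out.
    return llm(f"Translate to {user_lang}: {english_out}")
```

The two wrapper calls are what make this path slow, and the missing context across them is what makes the experience inconsistent.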


