
Three Eyes Software

403 Posts · 6 Topics · 519 Followers
A member registered Sep 07, 2024

Recent community posts

It shouldn't take more than a few seconds with your specs. Which model are you using? Is your GPU AMD or Nvidia? Are you using the new version 1.6.6b with optimized memory usage?

Are you on Linux?

If you mean the RP scene transitions, it sounds like the game ran into an error during the processing it does during the black screen. What NPCs were involved, and where was the location supposed to change to?

It's a WIP.

Does this also happen with the example custom character? Are you on Windows or on Linux?

There's a smidge of a chance that it can load on your system with the new version now, but I have no way of testing this.

Doing this breaks the AI in many ways because the entire pipeline outside of the LLM requires everything to be in English to work properly. For example, the AI won't be able to properly retrieve memories anymore, and the NPC action system stops accurately translating RP actions to game actions. The LLM will also be dumber in general because it went through English RL training. I specifically added a language check to the first user input to deter people from doing what you describe.
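The first-input language check could look something like this naive heuristic. This is only an illustrative sketch, not the game's actual implementation; the threshold and the ASCII-ratio approach are my assumptions.

```python
def looks_like_english(text: str, threshold: float = 0.9) -> bool:
    """Naive heuristic: treat input as English if most letters are ASCII."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return True  # nothing to judge, let it through
    ascii_letters = sum(ch.isascii() for ch in letters)
    return ascii_letters / len(letters) >= threshold

print(looks_like_english("Hello there, traveler!"))  # True
print(looks_like_english("Привет, путник!"))         # False
```

A real check would need to handle loanwords and mixed input more gracefully, but the idea is the same: refuse or warn before a non-English turn poisons the rest of the pipeline.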

You can repair leg armor at the metalworking bench, as long as it isn't made from linen.


You need 16 GB or more of this.

You need 16 GB of normal system RAM in addition to the 8 GB of VRAM to run Gemma-4-Sparse. You can probably just add another stick if you're currently using one 8 GB stick in single-channel mode.

What model are you trying to run and what are your system specs?

I believe you get around 7,000 dialog turns for $10 using DeepSeek.

Glad that solved it.


Try recording a video of the command prompt and getting a screenshot of the error that way. Sadly the game can't capture the error by itself because of how Windows handles capturing process outputs.

One reason I think this might be happening is because, at least on Linux, or perhaps specifically the GPU I use for testing, the backend sometimes crashes when loading a large model multiple times. This is fixed by restarting the PC. Other than that, I can only imagine that the model file itself got corrupted while downloading somehow. You can redownload it by deleting the model file in Silverpine 1.6.5d\Silverpine_Data\StreamingAssets.

They just have different flavors. I personally prefer the 8 GB Gemma model. It's much faster too.

Basically, if I understand right, the model uses 16.7 GB (gigabyte? gibibyte?), and all of it has to fit into your VRAM + RAM. The backend uses an algorithm that's not transparent to me to fit a certain amount of that into your VRAM. 6 GB + 16 GB = 22 GB, minus whatever the OS and other software are using, minus whatever your laptop's iGPU reserves, makes it a really tight fit, if it's even possible.
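To put rough numbers on it (the OS and iGPU overheads below are my guesses; yours will vary):

```python
# Rough memory-budget estimate for a 16.7 GB model on a 6 GB VRAM laptop.
model_gb = 16.7          # size reported by the backend
vram_gb = 6.0            # dedicated VRAM
ram_gb = 16.0            # system RAM
os_overhead_gb = 3.0     # assumed: OS + background software (varies widely)
igpu_reserved_gb = 0.5   # assumed: RAM carved out for the iGPU

available = vram_gb + ram_gb - os_overhead_gb - igpu_reserved_gb
headroom = available - model_gb
print(f"~{available:.1f} GB available, ~{headroom:.1f} GB of headroom on paper")
```

With only a couple of GB of headroom on paper, one extra background app can push it over the edge.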

Like I said, you need 8 GB of VRAM for this model. It might fit if you update to the "c" version I just uploaded, since it uses an updated backend that better squeezes the layers of the models into VRAM. Try closing your web browser too, and make sure the iGPU has as little VRAM as possible assigned to it in the BIOS, since it draws from your normal system RAM which most of the model spills into.

It seems that the link the game uses to download the backend got removed just after uploading the patch. Replace v1.111.1 with v1.111.2 in AppData\LocalLow\Three Eyes Software\Silverpine\settings.json, or wait for me to rebuild the game and upload another patch.
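If you'd rather script the edit than do it by hand, a plain text replacement is enough; I'm treating the version tag as an opaque string here since I don't know which JSON key the game stores it under:

```python
from pathlib import Path

def bump_backend_version(text: str) -> str:
    """Swap the dead backend download tag for the working one."""
    return text.replace("v1.111.1", "v1.111.2")

# Assumed Windows location; run this with the game closed.
settings_path = (Path.home() / "AppData/LocalLow/Three Eyes Software"
                 / "Silverpine/settings.json")
if settings_path.exists():
    patched = bump_backend_version(settings_path.read_text(encoding="utf-8"))
    settings_path.write_text(patched, encoding="utf-8")
```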

I've uploaded a patch that might solve it, but it's a bit of a shot in the dark.

Specs?

Report bugs here. Ideally, your report should include the steps necessary to reproduce the bug.

Post general game-related discussions and feedback here.

Both. But there will be some limitations when replacing existing ones.

https://itch.io/t/6024201/release-162-bug-reports#post-15693568

Soon there will be an update with a new model that's much better than Nemo, yet also runs fairly fast on systems with 8 GB of VRAM. I think anyone can reasonably acquire that kind of hardware.

It works flawlessly for me on Qwen 3.5. Are you trying to talk to the NPCs in a language other than English? Some players seem to be doing that, and it completely breaks the AI because it's not a supported feature.

I assume you're using Nemo, which has some trouble translating RP actions into game actions because it's ancient. Nemo will be replaced by something much better that works on 8+ GB VRAM cards soon.

There are currently 10 NPCs. You'll also be able to create custom ones in 1.7.

Simply tell Gareth that you would like to buy the shed, and a dialog like this will pop up.

I've written a post about this here. While the LLM can technically output other languages, it breaks the AI because all the other parts of it require everything to be in English. I might add that translation feature next update, if I can make it work reasonably well.

You need to use Stable Diffusion WebUI Forge locally or on e.g. Runpod. Inpainting is done in multiple 1:1 sections (head, chest, waist, legs) that have been scaled to Pony's native 1024x1024 resolution. Again, I might write a guide on this next update.

You only need Smooth Style 2 and default Pony. It's important that you pass your base image through Img2Img multiple times instead of using a high denoising value. Use the following settings: DPM++ 3M SDE, CFG 6.5-7, Denoise 0.35-0.5, Steps 25-30.

The crop of your base image has to be square so you can use a resolution of exactly 1024x1024. Pass it through Img2Img with a random seed, then take the output and put it through Img2Img again. Do that about 4 times and it should match the style pretty closely.
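The repeated-pass settings above can be sketched as request payloads for a WebUI-style `/sdapi/v1/img2img` endpoint (assuming the API is enabled with the `--api` flag; field names follow the AUTOMATIC1111/Forge API, and the middle-of-range values are my picks):

```python
def img2img_payload(image_b64: str, seed: int = -1) -> dict:
    """Build one Img2Img pass with the settings from the post above."""
    return {
        "init_images": [image_b64],
        "sampler_name": "DPM++ 3M SDE",
        "cfg_scale": 7,
        "denoising_strength": 0.4,  # stay in the 0.35-0.5 range
        "steps": 28,
        "width": 1024,              # square crop at Pony's native resolution
        "height": 1024,
        "seed": seed,               # -1 = random seed each pass
    }

# Four passes: feed each pass's output image back in as the next input.
passes = [img2img_payload("<base image>") for _ in range(4)]
```

The low denoise per pass is the point: each pass nudges the image toward the style without mangling the composition the way one high-denoise pass would.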

What's your GPU? The difference is that having the backend automatically determine the correct number of layers to offload to the GPU doesn't always work, which can then result in very slow processing.
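When the automatic choice misbehaves, a manual estimate is straightforward: divide the model size by its layer count and see how many layers fit in free VRAM. The numbers here are purely illustrative, and I don't know the exact heuristic the backend uses:

```python
def gpu_layers(total_layers: int, model_gb: float, free_vram_gb: float) -> int:
    """Estimate how many model layers fit in the given free VRAM."""
    per_layer_gb = model_gb / total_layers
    return min(total_layers, int(free_vram_gb / per_layer_gb))

# E.g. a hypothetical 48-layer, 16.7 GB model with ~7 GB of free VRAM:
print(gpu_layers(total_layers=48, model_gb=16.7, free_vram_gb=7.0))  # 20
```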

You can change it by checking "Show Local AI Model Selection on Startup" in the settings. It's hidden unless a local model is currently loaded.

There are no plans to support the free APIs on OpenRouter, because they're too unreliable.

Qwen 3.5 27B is simply superior in every way.

All portraits use this style. The base images were created using various other Stable Diffusion models. Once custom NPCs are done, I might write a guide on how to use it to stylize your base images without mangling them.

There are two ways of doing this.

The first one is having the model output various languages directly. This works decently, but the traditional NLP pipeline this game uses for various critical things requires everything to be in English. Rewriting it would not only be a monumental effort for each supported language, but it would also require me to have a native-speaker-level understanding of each language's grammar.

The second one is simply translating the player's input to English, proceeding as normal, then translating the model's output to the player's language using the model itself. This works decently too, but the translation step only sees the single message it's translating, with no surrounding context, which leads to a very inconsistent experience. It also requires two additional API calls, making it very slow. It's feasible, but I wasn't happy with it the last time I tried it.
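The second approach boils down to wrapping every turn in two extra calls. A minimal sketch, where `llm()` and `run_english_pipeline()` are stand-ins for the real model call and the game's normal English-only turn handling:

```python
def llm(prompt: str) -> str:
    """Placeholder for the real chat-completion call."""
    return prompt

def run_english_pipeline(text: str) -> str:
    """Placeholder: memory retrieval, NPC actions, dialog generation."""
    return text

def handle_turn(player_input: str, player_lang: str) -> str:
    english_in = llm(f"Translate to English: {player_input}")     # extra call 1
    english_out = run_english_pipeline(english_in)                # normal turn
    return llm(f"Translate to {player_lang}: {english_out}")      # extra call 2
```

Each translation call sees only that one message, which is where the inconsistency comes from, and the per-turn cost triples.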

For some reason it works completely fine in the editor (Mono), but is broken in the final build (IL2CPP). I'll upload a patch in a minute. Sorry for this.
