
Three Eyes Software

421 Posts · 8 Topics · 550 Followers
A member registered Sep 07, 2024

Recent community posts

Report bugs here. Ideally, your report should include the steps necessary to reproduce the bug.

Post general game-related discussions and feedback here.

4096 is the bare minimum the game requires to work. Any 8 GB Nvidia card should give you 400 t/s prompt processing speed with Sparse.

You'll be able to do this very soon.

There are currently 43 different actions that the game can take based on what happens in RP, but not everything is accounted for.

Currently a sort of game master reads the conversation and translates RP actions into game actions.

You can use (parentheses) after your message for OOC comments, but the NPC will probably be confused because, barring a few exceptions, the NPC itself isn't informed of what game actions it can take and what game actions have been taken.

NPCs can't place status effects on you as of now.

This feature will be added for player-made NPCs as part of a minor update after 1.7.0, but not for the default NPCs.

Both servers are up for me. It's a problem with your internet connection.

Either Hugging Face is down, or GitHub is down, or your connection isn't working.

Maybe, but there will be a focus on modding support for now.

The game communicates with the locally hosted KoboldCPP server in the same way it would with the demo server or OpenRouter.

To me it seems like the backend is failing in some way without the game noticing, and then the request errors out.
KoboldCPP outputs are currently sent to /dev/null on Linux because I had some worries about compatibility with different distros, so there's no way to see why this is happening either. Since it's model specific, maybe the download got corrupted somehow. Try deleting the Gemma-4-Sparse .gguf and .meta files in Silverpine 1.6.6b\Silverpine_Data\StreamingAssets to redownload it.
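If it helps, the delete-and-redownload step above can be scripted. This is a minimal sketch, assuming the model files follow a `Gemma-4-Sparse.gguf` / `Gemma-4-Sparse.meta` naming pattern in the StreamingAssets folder (the exact filenames on disk may differ):

```python
from pathlib import Path

# Folder from the post above; adjust to wherever your copy of the game lives.
ASSETS = Path(r"Silverpine 1.6.6b/Silverpine_Data/StreamingAssets")

def delete_model_files(assets_dir: Path, model_name: str = "Gemma-4-Sparse") -> list[str]:
    """Remove the model's .gguf and .meta files so the game redownloads them.

    Returns the names of the files that were removed.
    """
    removed = []
    for f in assets_dir.glob(f"{model_name}*"):
        # Only touch the model weights and their metadata sidecar file.
        if f.suffix in (".gguf", ".meta"):
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```

Run it with the game closed, then launch the game to trigger the redownload.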

Another thing it could be is that Gemma 4 uses SWA, which might be an issue with AMD somehow?
If anything, I will look into adding better logging for the outputs on Linux in 1.7.

The game uses classic internet RP formatting: *asterisks for actions* and plain text for dialog.

I've already asked you about your specs in the other thread you created. If you don't have at least a 24 GB Nvidia RTX 3090, it's normal for it to take that long with Gemma-4-Dense-Large. Use OpenRouter if you don't have a GPU, or use Gemma-4-Sparse if you have an 8 GB GPU.

After some research, it seems that A770 is simply not well-supported by llama.cpp. You could try running Gemma-4-Sparse to see if a sparse model is faster, if you aren't already doing that.

Which model are you running? Did you update to 1.6.6b yet?

Are you trying to load Gemma-4-Sparse or Gemma-4-Dense-Large? What are your system specs? If the command prompt just opens and doesn't close/crash by itself, you might just have to wait a minute or two.

Previously, Unity was loading all the gigantic portrait and expression textures when you opened the game. Now there's manual memory management that only loads the textures of characters that are part of the current conversation.

I imagine Gemma-4-Sparse should be very fast on any Vulkan GPU. If you're using Qwen 3.5 with a version before 1.6.6b, it might be slow because of that.

It shouldn't take more than a few seconds with your specs. Which model are you using? Is your GPU AMD or Nvidia? Are you using the new version 1.6.6b with optimized memory usage?

Are you on Linux?

If you mean the RP scene transitions, it sounds like the game ran into an error during the processing it does during the black screen. What NPCs were involved, and where was the location supposed to change to?

It's a WIP.

Does this also happen with the example custom character? Are you on Windows or on Linux?

There's a smidge of a chance that it can load on your system with the new version now, but I have no way of testing this.

Doing this breaks the AI in many ways because the entire pipeline outside of the LLM requires everything to be in English to work properly. For example, the AI won't be able to properly retrieve memories anymore, and the NPC action system stops accurately translating RP actions to game actions. The LLM will also be dumber in general because it went through English RL training. I specifically added a language check to the first user input to deter people from doing what you describe.

You can repair leg armor at the metalworking bench, as long as it isn't made from linen.


You need 16 or more GB of this.

You need 16 GB of normal system RAM in addition to the 8 GB of VRAM to run Gemma-4-Sparse. You can probably just add another stick if you're currently using one 8 GB stick in single-channel mode.

What model are you trying to run and what are your system specs?

I believe you get around 7,000 dialog turns for $10 using DeepSeek.

Glad that solved it.


Try recording a video of the command prompt and getting a screenshot of the error that way. Sadly the game can't capture the error by itself because of how Windows handles capturing process outputs.

One reason I think this might be happening is because, at least on Linux, or perhaps specifically the GPU I use for testing, the backend sometimes crashes when loading a large model multiple times. This is fixed by restarting the PC. Other than that, I can only imagine that the model file itself got corrupted while downloading somehow. You can redownload it by deleting the model file in Silverpine 1.6.5d\Silverpine_Data\StreamingAssets.

They just have different flavors. I personally prefer the 8 GB Gemma model. It's much faster too.

Basically, if I understand right, the model uses 16.7 GB (gigabyte? gibibyte?), and it all has to fit into your VRAM + RAM. The backend uses an algorithm that's not transparent to me to fit a certain amount of that into your VRAM. 6 GB of VRAM + 16 GB of RAM = 22 GB, minus whatever the OS and other software are using, minus whatever your laptop's iGPU reserves, makes it a really tight fit, if it's even possible.
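The arithmetic works out roughly like this. The 4 GB overhead figure below is a guess, not a measurement — the OS, browser, and iGPU reservation vary per machine:

```python
# Rough fit check for the 16.7 GB model (figures from the post above).
model_gb = 16.7   # reported model size
vram_gb = 6.0     # dedicated GPU VRAM
ram_gb = 16.0     # system RAM

total_gb = vram_gb + ram_gb   # 22.0 GB raw budget
overhead_gb = 4.0             # OS + other software + iGPU reservation (a guess)
headroom_gb = total_gb - overhead_gb - model_gb
print(f"headroom: {headroom_gb:.1f} GB")  # ~1.3 GB to spare: a very tight fit
```

With a few browser tabs open, that remaining headroom can easily disappear, which is why the fit may or may not work on any given day.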

Like I said, you need 8 GB of VRAM for this model. It might fit if you update to the "c" version I just uploaded, since it uses an updated backend that better squeezes the layers of the models into VRAM. Try closing your web browser too, and make sure the iGPU has as little VRAM as possible assigned to it in the BIOS, since it draws from your normal system RAM which most of the model spills into.

It seems that the link the game uses to download the backend got removed just after uploading the patch. Replace v1.111.1 with v1.111.2 in AppData\LocalLow\Three Eyes Software\Silverpine\settings.json, or wait for me to rebuild the game and upload another patch.
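The edit described above can also be done with a short script. This sketch does a plain text replacement on the file, so it makes no assumption about which JSON key holds the version string; the path is the one from the post:

```python
from pathlib import Path

# Settings file location from the post above (Windows user profile layout).
SETTINGS = (Path.home() / "AppData" / "LocalLow"
            / "Three Eyes Software" / "Silverpine" / "settings.json")

def bump_backend_version(settings_path: Path,
                         old: str = "v1.111.1",
                         new: str = "v1.111.2") -> bool:
    """Swap the removed backend version tag for the working one.

    Returns True if a replacement was made, False if the old tag wasn't found.
    """
    text = settings_path.read_text(encoding="utf-8")
    if old not in text:
        return False
    settings_path.write_text(text.replace(old, new), encoding="utf-8")
    return True
```

Probably best to run this with the game closed so it doesn't overwrite the settings file afterwards.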

I've uploaded a patch that might solve it, but it's a bit of a shot in the dark.

Specs?
