Three Eyes Software

372 Posts · 4 Topics · 501 Followers
A member registered Sep 07, 2024

Recent community posts

What's your GPU? The difference is that having the backend automatically determine the correct number of layers to offload to the GPU doesn't always work, which can then result in very slow processing.
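As a rough illustration of why automatic offload can misfire, here is a minimal sketch of estimating how many layers fit on a GPU. The numbers and the function itself are hypothetical, not the backend's actual logic:

```python
def estimate_gpu_layers(free_vram_mb, layer_size_mb, total_layers, overhead_mb=1024):
    """Estimate how many transformer layers fit on the GPU.

    If free VRAM is over-reported (e.g. another app grabs memory after
    the check) or the overhead estimate is too low, too many layers get
    offloaded and inference falls back to very slow processing.
    """
    usable = free_vram_mb - overhead_mb
    if usable <= 0:
        return 0
    return min(total_layers, usable // layer_size_mb)

# e.g. a 10 GB card with a model whose layers are ~300 MB each
print(estimate_gpu_layers(10240, 300, 40))
```

Setting the layer count manually sidesteps the estimation entirely, which is why it can be faster than the automatic path.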

You can change it by checking "Show Local AI Model Selection on Startup" in the settings. It's hidden unless a local model is currently loaded.

There are no plans to support the free APIs on OpenRouter, because they're too unreliable.

Qwen 3.5 27B is simply superior in every way.

All portraits use this style. The base images were created using various other Stable Diffusion models. Once custom NPCs are done, I might write a guide on how to use it to stylize your base images without mangling them.

There are two ways of doing this.

The first one is having the model output various languages directly. This works decently, but the traditional NLP pipeline this game uses for various critical things requires everything to be in English. Rewriting it would not only be a monumental effort for each supported language, but it would also require me to have a native speaker level understanding of each language's grammar.

The second one is simply translating the player's input to English, proceeding as normal, then translating the model's output back to the player's language using the model itself. This works decently, but the model's context during each translation is just the one message it's translating, which leads to a very inconsistent experience, and it also requires two additional API calls, making it very slow. It's feasible, but I wasn't happy with it the last time I tried it.
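The second approach can be sketched like this. The `translate()` and `run_pipeline()` stubs are hypothetical stand-ins for the real model/API calls, not the game's code:

```python
def translate(text, src, dst):
    # Stand-in for a model/API translation call.
    return f"[{src}->{dst}] {text}"

def run_pipeline(english_input):
    # Stand-in for the game's English-only NLP pipeline.
    return f"reply to: {english_input}"

def handle_turn(player_input, player_lang):
    english_input = translate(player_input, src=player_lang, dst="en")
    english_reply = run_pipeline(english_input)
    # Each translate() call only sees the single message it is given,
    # which is why tone drifts between turns, and the two extra calls
    # are why this approach is slow.
    return translate(english_reply, src="en", dst=player_lang)

print(handle_turn("hola", "es"))
```

The two wrapping `translate()` calls are the "two additional API calls" mentioned above.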

For some reason it works completely fine in the editor (Mono), but is broken in the final build (IL2CPP). I'll upload a patch in a minute. Sorry for this.

Post general game related discussions and feedback here.

Report bugs here. Ideally, your report should include the steps necessary to reproduce the bug.

This is either the model itself, or a problem on OpenRouter/SiliconFlow's side. If it's only Mirel, reload to before it happened.

I was forced to phase out the extended server access on short notice. As mentioned in the original post, this was going to happen eventually. Please use your own GPU or OpenRouter from this point on.

10 GB. But there's currently a bit of an issue with 10 GB cards specifically, which will be patched soon.

Gemma 3 is exceptionally good at roleplay for its size(s), but it's also extremely prudish/censored at the pre-training level, so it's not viable for this project.

NPCs have fixed relationship levels with other NPCs. E.g. Mirel is a friend of Oriana.

The game has been out for over a year now, and AI models have more data and have changed how their algorithms process that data.

Sadly, this isn't true for small creative-writing-focused LLMs. When I previously said something along the lines of more modern models handling Gareth differently, I was talking about Mistral Small 3.2 (June 2025) and DeepSeek V3 0324 (March 2025). If you're using Nemo, which I assume you are if you're complaining about personalities bleeding together, it is still the same old model the game has been using since day one. There has been an experiment with a finetune of Nemo, which is probably what made you complain about Gareth's personality being different, since it characterizes him similarly to how Small and DeepSeek do.

Nemo is a model from July 2024, and nothing has beaten it at creative writing at the 10 GB VRAM footprint since then. A large amount of things related to the harness the model has to work inside of have been improved, so it might feel like it's not the same model as back then, but it is.

For reference, the bottom line is the chance of Nemo correctly retrieving information from a 6,000-word corpus, and the top line is the same for Mistral Small. This affects everything, from recalling an NPC's personality to remembering that it already pestered you with the same question twice.

You really should not expect brilliant memory or output variety from it. I highly recommend you try a fresh save file with DeepSeek. I'm confident you will be blown away by how varied the characters can be.

This is due to the save file being from a fairly old version.

For some clarity, indoor zones in this game are designated like this:

Rosalyn's Shop

But if the indoor zone has subzones:

Rosalyn's Shop - Rosalyn's Bedroom

Rosalyn's Shop being the base zone here.

When Mirel first moves in, she will pick a random base zone on the shed plot to move into, that usually being the shed's base zone itself. You can create your own base zones and subzones by following the same formatting. E.g. "XXX's House - Bedroom" or "XXX's House - Storage Room". Really, there should be a custom UI that exposes this when starting a new zone, but currently there isn't. 
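The "Base Zone - Subzone" naming convention above can be parsed with a simple split. This helper is a hypothetical illustration of the convention, not the game's actual parsing code:

```python
def parse_zone(name):
    """Split a zone designation into (base_zone, subzone).

    "Rosalyn's Shop"                     -> base zone only
    "Rosalyn's Shop - Rosalyn's Bedroom" -> base zone + subzone
    """
    base, sep, sub = name.partition(" - ")
    return (base, sub if sep else None)

print(parse_zone("Rosalyn's Shop"))
print(parse_zone("Rosalyn's Shop - Rosalyn's Bedroom"))
```

Note that the separator is a hyphen with a space on each side, so custom zone names need to follow that formatting exactly.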

What probably happened is that you deconstructed all the tiles of the base zone she was previously assigned to, and there isn't really a great fallback behavior for that right now. You can ask Mirel to move to any base zone you've created, which is what you should do to fix her.

It's difficult to see the whole picture by just looking at the API calls, because you're not seeing the decision trees that trigger them in the first place. There are many hours of trial and error, and sometimes performance implications, behind why these things are worded in the strange way they are.

As for your suggestion, it opens up other creative ways of misinterpretation. "Sure, I'll take the road to the capital with you later tonight." -> Clearly she agreed! -> Following you! You could add more layers of various checks to account for this kind of misinterpretation here, but at some point you have to, again, start thinking about performance. I still believe it's a decent suggestion, and I will try replacing the already existing double checks with it.
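To make the misinterpretation concrete, here is a toy illustration of why a single agreement check needs a second layer, and why each layer costs another pass. Both functions are hypothetical keyword checks, not the game's actual decision tree:

```python
def naive_agreed(reply):
    # First layer: did the NPC agree at all?
    return any(w in reply.lower() for w in ("sure", "i'll", "okay"))

def deferred(reply):
    # Second layer: did they agree to do it *later*? Each added layer
    # is another pass (and sometimes another API call), which is where
    # the performance concern comes in.
    return any(w in reply.lower() for w in ("later", "tonight", "tomorrow"))

reply = "Sure, I'll take the road to the capital with you later tonight."
print(naive_agreed(reply))                       # agreement detected...
print(naive_agreed(reply) and not deferred(reply))  # ...but the action is deferred
```

The first check alone would flag the NPC as following you right now; the second catches the deferral, but a sufficiently creative reply can always slip past a fixed number of layers.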

Starting at 1.6.2d, this also no longer forcibly ends the dialog, and works the same as it does when talking to multiple NPCs.

The default OpenRouter provider for this game has a ~90% uptime, so it might spit mystery errors at you. Update to 1.6.2d to use the server.

You need to update to 1.6.2d.

What GPU are you using? It shouldn't take more than a few seconds to generate dialog.

I'm not sure what you mean by wrapper script, but you can change the binary the game downloads by replacing koboldUrl in AppData\LocalLow\Three Eyes Software\Silverpine\settings.json. In your case, the CUDA 11 + AVX1 version should work.

"koboldUrl": "https://github.com/LostRuins/koboldcpp/releases/download/v1.103/koboldcpp-oldpc.exe",
"linuxKoboldUrl": "https://github.com/LostRuins/koboldcpp/releases/download/v1.103/koboldcpp-linux-x64-oldpc",

If you're on Windows, the game should also detect any already running Kobold process before showing the setup dialog, and start communicating with it on port 5001. In that case, the game assumes the detected Kobold process is running the last selected local model, so confirm that modelName in settings.json is correctly set to "Nemo" or "Mistral-Small-3.2".
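The detection described above boils down to checking whether something is already listening on Kobold's default port. This is a rough approximation in Python; the game's real check (and what it does with the result) may differ:

```python
import socket

def kobold_running(host="127.0.0.1", port=5001, timeout=0.5):
    """Return True if something accepts connections on Kobold's
    default port. This only proves a listener exists; it cannot tell
    which model that process has loaded, which is why modelName in
    settings.json still has to match."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Because a port check can't see which model is loaded, a mismatch between the running Kobold instance and modelName in settings.json will silently produce wrong behavior.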