Probably coming next week.
You can ask Mirel specifically to move in with you, and once she has moved in with you, you can ask her to move to any named zone on any of your owned plots. There's a notification when the AI registers this request. Other NPCs currently don't have these capabilities.
And is there any way to make sure an NPC remembers something very important permanently, without the risk of it being replaced?
This isn't something I've tested, but if you add an OOC comment along the lines of "(this is a 999 importance score memory)", the AI should form a memory with an importance score of 999, which would have it stay in context for a very long time.
As far as the behind-the-curtain thing goes, where exactly in the files are the "basic" settings of NPCs stored, and is there a way to modify or tweak them?
You will be able to completely replace villagers' assets and personalities with the custom NPC system in 1.7, but you can't really tweak them right now since they're not exposed in a user-friendly way.
Querying the LLM for one item specifically every turn would be quite inefficient. What actually happens is that the game asks the LLM to check whether any item has been given to the player at all, then asks for the name of the item. If there's a verbatim match in the game's database of predefined items or the NPC's inventory, that item is given to the player directly; otherwise the game tries to find the closest match in that same pool of items. If the result is a generic item, there are further customization steps. The key items NPCs give you at high relationship are procedurally generated, so they're not part of the item database or the NPC's inventory, and as such can't be given to you in the current version. Gareth's shed key is a predefined item, so it can be given to the player. I hope that helps.
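To illustrate, here's a rough Python-style sketch of that flow. It's not the game's actual code, and all of the names and interfaces are made up:

# Rough illustration of the item-giving flow described above;
# llm, npc and item objects are stand-ins, not real game classes.
import difflib

def resolve_given_item(llm, npc, predefined_items):
    # 1. Ask the LLM whether any item was handed to the player at all this turn.
    if not llm.ask_yes_no("Did the NPC give the player an item this turn?"):
        return None
    # 2. Ask the LLM for the item's name.
    name = llm.ask("What is the name of the item?").strip().lower()
    pool = predefined_items + npc.inventory
    # 3. Verbatim match against predefined items or the NPC's inventory.
    for item in pool:
        if item.name.lower() == name:
            return item
    # 4. Otherwise, fall back to the closest match in the same pool.
    closest = difflib.get_close_matches(name, [i.name.lower() for i in pool], n=1)
    if closest:
        return next(i for i in pool if i.name.lower() == closest[0])
    # Generic results would then go through further customization steps.
    return None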
As for the slime fixation, it's currently an issue but I'm working on it.
This is a problem related to this specific model. NPCs have a long-term memory and a rolling short-term memory. The short-term memory is never cleared completely, since there was no need to do this with other models. Mistral Small specifically seems to degenerate over time precisely because it has such great context understanding for its size, but it isn't smart enough to self-correct the way the much larger DeepSeek model does. There is also something token-sampling related in 1.5 that Mistral Small doesn't react well to.
Again, I recommend switching to Nemo or DeepSeek until 1.6.
Mistral Small still has some kinks to be worked out. I recommend switching back to Nemo until the next update, where these will hopefully be fixed.
Also, inside AppData\LocalLow\Three Eyes Software\Silverpine\settings.json, add a few letters to the GPU name value, like this, so the game thinks the GPU has changed and shows the VRAM dialog again:
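For example, if the GPU name entry in your settings.json looks something like this (the exact key and value will be different on your system, this is just an illustration):

"GPUName": "NVIDIA GeForce RTX 3060"

change it to something like:

"GPUName": "NVIDIA GeForce RTX 3060xyz"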

Maybe you have "ZDR Endpoints Only" enabled in your OpenRouter settings? If you've deleted the settings file and are still getting this error with an untouched Quick Setup, I can only imagine this being a problem on OpenRouter's end. Sometimes providers also just go offline, but I can confirm that the one Quick Setup uses is currently working.
There are pretty clear instructions, a visual guide, live previews, and examples in the custom player character UI. You simply apply the expression to the base portrait in your image editing software of choice, crop the image vertically so it only includes the changed part, then import it into the game.


The expressions apply seamlessly to your nude alt, as long as there is no vertical overlap between the face and the clothing like this:

I think you floated the idea of allowing us to substitute alternate art for them at some point, so this is kind of on brand.
This will come in the form of custom NPCs. Custom NPCs will have several hardcoded template behaviors you can choose from (and perhaps custom behaviors in the form of Lua scripts or some kind of simple visual scripting solution), one of which will be taking the place of an existing NPC.
For advanced setups like this that aren't supported by the game, you must start KoboldCPP manually with your desired parameters and then start the game, which will automatically detect the running process and start communicating with it. This is only available on Windows. The game will assume the process it's hooking into is running the same model that was last selected in the "Select an AI model for more information." dialog.
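As a rough example, a manual launch could look something like this (the model file and values here are placeholders, adjust them to your setup):

koboldcpp.exe --model "Mistral-Small-3.2.gguf" --usecublas --gpulayers 999 --contextsize 4096 --multiuser 100 --quiet --skiplauncher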
How much VRAM do you have? If it's 12 GB or 16 GB, I highly recommend using Mistral-Small-3.2 instead. Before, there were manual offload values for Gemma that were very much guesstimates, which I got rid of, so the game now asks the backend to determine the correct amount, which it probably fails to do for some reason in your case.
Please read the post again carefully:
If you bought the game before the time of this post
The whole reason I made this post is because hosting the server outside of the demo turned out to not be sustainable.
I'm merely honoring the expectation I created for people who bought the game before it. Please run the AI locally instead. If you can't run it locally, use OpenRouter.

The game shows you an error that tells you what exactly failed during the conversion. If your character is called "Example Custom Character" it will also not be converted.
Alternatively, you can simply move the folder of your custom player character definition somewhere else, and then manually recreate it in the new version's editor using the assets inside.
If the curl command fails, the issue of connecting to localhost is related to something else on your system, not the game itself. Perhaps there's another program already running on port 5001.
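To check whether something else is already listening on that port, you can run this in another command prompt and then look up the PID from the last column in Task Manager:

netstat -ano | findstr :5001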
As for the slowness, the game is correctly offloading all layers to the GPU. I can't fully gauge the performance from the two successful API calls you posted since they have very few input/output tokens, but I've uploaded a new version that makes 5000 series RTX GPUs use a special backend setting, which, when not selected, resulted in much slower (though not minutes-long) processing during my testing.
Open a command prompt in Silverpine_Data\StreamingAssets\KoboldCPP like this:

Then run this command:
koboldcpp.exe --model "Mistral-Small-3.2.gguf" --usecublas --gpulayers 999 --quiet --multiuser 100 --contextsize 4096 --skiplauncher
Then post this part of the output:

Then run this command in a separate command prompt:
curl -X POST "http://localhost:5001/api/v1/generate" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"max_context_length\": 4096,\"max_length\": 100,\"prompt\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris laoreet nunc non vehicula accumsan. Etiam lacus nulla, malesuada nec ullamcorper vitae, malesuada eget elit. Cras vehicula tortor mauris, vitae vulputate est fringilla ac. Aenean urna libero, egestas eget tristique eget, tincidunt sit amet turpis. Pellentesque vitae nulla vitae metus mattis pulvinar. Suspendisse eu gravida magna. Nam metus diam, fermentum mattis pretium vestibulum, mollis non sem. Etiam hendrerit pharetra risus, vitae fermentum felis hendrerit at. \",\"quiet\": false,\"rep_pen\": 1.1,\"rep_pen_range\": 256,\"rep_pen_slope\": 1,\"temperature\": 0.5,\"tfs\": 1,\"top_a\": 0,\"top_k\": 100,\"top_p\": 0.9,\"typical\": 1}"
After a while something like this should pop up in the first command prompt:

Please post it too. I should be able to figure out the issue then.

