The game has been out for over a year now, and AI models have more data and have changed how their algorithms process that data.

Sadly, this isn't true for small creative-writing-focused LLMs. When I previously said something along the lines of more modern models handling Gareth differently, I was talking about Mistral Small 3.2 (June 2025) and DeepSeek V3 0324 (March 2025). If you're using Nemo, which I assume you are if you're complaining about personalities bleeding together, it is still the same old model the game has been using since day one. There has been an experiment with a finetune of Nemo, which is probably what prompted your complaint about Gareth's personality being different, since it characterizes him similarly to how Small and DeepSeek do.

Nemo is a model from July 2024, and nothing has beaten it at creative writing within a 10 GB VRAM footprint since then. Many parts of the harness the model works inside have been improved, so it might not feel like the same model as back then, but it is.

For reference, the bottom line is the chance of Nemo correctly retrieving information from a 6,000-word corpus, and the top line is the same for Mistral Small. This affects everything, from recalling an NPC's personality to remembering that it has already pestered you with the same question twice.
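For anyone who wants to reproduce that kind of measurement locally, a needle-in-a-haystack test is easy to rig up. Below is a minimal sketch, not the game's actual benchmark: the filler sentence, word count, and scoring rule are my own assumptions. The idea is to bury one fact in roughly 6,000 words of padding, ask the model for it, and count how often the answer contains the fact.

```python
import random

# Neutral 7-word padding sentence, repeated to build the haystack.
FILLER = "The market square was busy that morning."

def build_needle_prompt(needle: str, question: str,
                        corpus_words: int = 6000, seed: int = 0) -> str:
    """Bury a single 'needle' fact at a random position inside roughly
    corpus_words words of filler, then ask a question only it answers."""
    rng = random.Random(seed)
    n_sentences = corpus_words // len(FILLER.split())
    sentences = [FILLER] * n_sentences
    sentences.insert(rng.randrange(n_sentences), needle)
    return " ".join(sentences) + f"\n\nQuestion: {question}\nAnswer:"

def scored_correct(model_answer: str, expected: str) -> bool:
    """Loose containment check: did the model surface the buried fact?"""
    return expected.lower() in model_answer.lower()
```

Send the prompt to whichever local endpoint you use, repeat over many seeds, and the fraction of correct answers per model gives you the kind of curve described above.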

You really should not expect brilliant memory or output variety from it. I highly recommend you try a fresh save file with DeepSeek. I'm confident you will be blown away by how varied the characters can be.

I'm not using Nemo, I'm using Mistral. That said, I'm not asking for brilliant memory or perfect output or anything like that, just a pass over the prompts to refine them a little. I don't know whether the system you're using lets you blacklist terms, but that might be a good place to start.

I mess with local LLMs all the time because I compiled my own private server for World of Warcraft and filled it with AI-controlled bots that mimic players, which use LLMs to decide their chat and actions. I've managed to get them fairly diverse with their prompts, and that's with a much smaller LLM, since I usually run ~1,000 bots on the server, and my computer has to simultaneously handle all of them questing, PvP'ing, grinding, etc. on top of the LLM handling their chat and decision making, all on the same PC. I think I use Gemma? I can't remember because I haven't used it in a while (I found out it was destroying my SSD with how many reads/writes were happening due to my reliance on SDK databases). That said, I have no idea what prompts each NPC is running, so I can't offer any advice other than my experiences and the request that they get a prompt pass.
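On the blacklist idea: if the game talks to the model through an OpenAI-compatible endpoint (llama.cpp's server and most local backends expose one), the standard mechanism is the `logit_bias` field, which maps token IDs to additive biases, where -100 effectively bans a token. A minimal sketch, assuming you've already looked up the banned phrase's token IDs with your model's tokenizer; the IDs and model name below are placeholders, not real values:

```python
def blacklist_bias(token_ids, strength=-100):
    """Map each banned token id to a strong negative logit bias.
    At -100 the token is effectively never sampled."""
    return {str(tid): strength for tid in token_ids}

# Hypothetical request payload for an OpenAI-compatible local server.
payload = {
    "model": "mistral-small",  # placeholder model name
    "messages": [{"role": "user", "content": "Describe the tavern."}],
    # 1234 and 5678 stand in for the token ids of a banned phrase
    "logit_bias": blacklist_bias([1234, 5678]),
}
```

The catch is that bans are per-token, so one overused phrase can tokenize several different ways and each variant needs banning, which is why a prompt-level pass is usually tried first.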


If you're willing, you could DM me each character's prompt, and I can DM you back my feedback on each character? That's kind of a lot of work, though, so I feel ya if you're not up to it. Having different prompts for different models could probably improve things a lot. I generally prefer local models because I kind of hate the commercialization and spread of data centers, and also because I don't want personal data transmitted.

Gemma 3 is exceptionally good at roleplay for its size(s), but it's also extremely prudish/censored at the pre-training level, so it's not viable for this project.