I'm not using Nemo, I'm using Mistral. That said, I'm not asking for brilliant memory or perfect output or anything like that, just a pass over the prompts to refine them a little. I don't know if whatever system you're using lets you blacklist terms, but that might be a good place to start.

I mess with local LLMs all the time because I compiled my own private server for World of Warcraft and filled it with AI-controlled bots that mimic players, using LLMs to decide their chat and actions. I've managed to get them fairly diverse with their prompts, and that's with a much smaller LLM, since I usually run ~1,000 bots on the server, and my computer has to simultaneously handle all 1,000 of them questing, PvPing, grinding, etc., on top of the LLM that handles their chat and decision-making, all on the same PC. I think I use Gemma? I can't remember cuz I haven't used it in a while (I found out it was DESTROYING my SSD with how many reads/writes were happening due to my reliance on SDK databases). That said, I have no idea what prompts each NPC is running, so I can't offer any advice other than my experiences and the request that the prompts get a pass.
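To be clear on what I mean by blacklisting, since I have no idea what your system looks like internally: it could be as simple as checking the model's raw output against a list of banned terms before it ever reaches the player, and dropping or re-rolling anything that trips it. Everything below is a made-up sketch (the term list, function names, and the fallback behavior are all just examples), not how any particular framework actually does it:

```python
# Hypothetical sketch of a term blacklist, assuming your bot framework
# hands you the LLM's output as a plain string before it's sent to chat.

# Example terms only -- you'd fill this with whatever keeps leaking
# into your characters' replies.
BLACKLIST = {"as an ai", "language model", "i cannot assist"}

def violates_blacklist(text: str) -> bool:
    """Return True if the output contains any blacklisted term."""
    lowered = text.lower()
    return any(term in lowered for term in BLACKLIST)

def filter_output(text: str, fallback: str = "*says nothing*") -> str:
    """Replace outputs that trip the blacklist with a fallback line.

    A real system might instead re-roll the generation with a tweaked
    prompt, or trim just the offending sentence.
    """
    return fallback if violates_blacklist(text) else text
```

The nice part of doing it as a post-filter like this is that it works the same no matter which model is behind it, so swapping Gemma for Mistral doesn't change the safety net.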


If you're willing, you could DM me each character's prompt, and I can DM you back my feedback on each one? That's kind of a lot of work, though, so I feel ya if you're not up to it. Having different prompts for different models could probably improve things a lot. I generally prefer local models because I kind of hate the commercialization and spread of data centers, and also because I don't want personal data transmitted.

Gemma 3 is exceptionally good at roleplay for its size(s), but it's also extremely prudish/censored at the pre-training level, so it's not viable for this project.