I am running my models locally, so I am very much constrained by my system's specs, but the models still do well.
Gultch9585
If it's online, sorry, you are limited by the free model being used, and you should probably switch to a locally run model.
If it is locally run, you need to set the AI context length to be longer on your AI model, then in-game go to Settings > Endpoint and change your "Max Memory" to 2,000-3,000 *less* than your AI context length, preferably, but at least 1,000 below. You will still get the "Fatal Memory Error," but as long as you have that 1,000-token buffer, it won't crash or destroy your game; it will allow the AI to start "forgetting" older passages without damaging your game. The AI will try to remember the important things, but how well it does that, and what it deems "important," depends on the AI.
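The buffer rule above can be sketched in a few lines; this is just a minimal illustration of the arithmetic, and the function name is my own, not anything from the game's actual settings:

```python
# Sketch of the "Max Memory" rule: leave a gap below the model's context
# length so the AI can drop older passages instead of crashing the game.
def recommended_max_memory(context_length, preferred_gap=2500, minimum_gap=1000):
    max_memory = context_length - preferred_gap
    # Never let the gap shrink below the 1,000-token safety buffer.
    return min(max_memory, context_length - minimum_gap)

print(recommended_max_memory(8192))  # an 8K-context model -> 5692
```

So for an 8K-context model you would set Max Memory somewhere around 5,000-6,000.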
Top one I have found, though it is a bit slow, is "Nemomix Unleashed 12B" by Bartowski. It does good work catching any specific prompts you use and acting on them within the limits of the game world. Again, the main issue is that it is slower.
A close second would be either "Starcannon-Unleashed" or "Magnum V4 12B".
Not wanting to speak for the dev, who I know already replied, but it does help to run your own AI locally if you have the computer to do it; there is a thread on how to do it pinned in the community here. It saves a lot of headache if you can. For reference, I can get about 13K context tokens (i.e. roughly 52,000 characters, since 1 token is roughly 4 letters) on 32 GB of RAM and an RTX 3070 laptop. I am aware that's a bit higher spec than what most people run, but the 13K is all run on the 3070, which has about 12 GB of VRAM. Would I suggest flooding your RAM for it? No, but the game, at least when going through the story, seems to only log and send roughly 2K tokens of context memory (so about 8K characters), and the rest is used for things like System Prompt Additions, stat entries, entity entries, locations, and traits. So the story itself (and again, this may be wrong, it's just what I have noticed when I created my own scenarios) only keeps about 2K in context memory. If it does, dev man, you are amazing, I love you. All I want for Christmas is a way to adjust how many game context tokens you can tell the game to send to the AI. Within reason, I know you're already down a kidney.
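The token math I'm using above is just the common rule of thumb that 1 token is roughly 4 characters of English text; real tokenizers vary, so treat this as an estimate:

```python
# Rough token <-> character conversion using the "1 token ~ 4 chars" rule
# of thumb. Actual tokenizer counts will differ from model to model.
CHARS_PER_TOKEN = 4

def chars_for_tokens(tokens):
    return tokens * CHARS_PER_TOKEN

print(chars_for_tokens(13_000))  # -> 52000, my full 13K-context budget
print(chars_for_tokens(2_000))   # -> 8000, the ~2K story window
```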
You kinda have to realize the majority of "good" AIs nowadays are those that receive constant updates, which tends to mean large corporate AIs with a lot of restrictions, because said corporations really don't want their AIs doing anything that might get them sued, even more so if kids can easily get around the blocks put in place. So you just kinda have to accept that large AI models like Llama are going to constantly have blocks being put up, and it takes time for people to get around those blocks. So genuinely, don't be surprised if your experience is very intermittent due to said updates; it's just kinda part of the ecosystem.
That's... very much a limit of the AI. I might be speaking for the creator a bit, but this is just what I have noticed after several hours of tinkering with the worlds, the AIs, prompts, and how AI context works. Genuinely, there are a few things that can happen. 1 is just that the AI ignores the descriptions of the stats. 2 might be the AI using "Rolling Window" or "Truncated Middle" for context overflow (i.e. more data present than the AI can handle): the AI just starts throwing data out and inadvertently throws out the fact that you had the trait. This is especially likely with a rolling window, where unless you specifically tell the AI you have X trait every other passage (most rolling windows only have a 2K context limit), the AI will completely forget you have X trait. Or it could just be that the AI doesn't see the trait as important or as having an effect on the story, and again, it just throws it out or ignores it.
Genuinely, at least what I do is type out a description, role, goal, and general capabilities of a creature, take it to the AI I plan on using, and tell it to try and minimize the amount of context tokens used while keeping the theme, goals, and capabilities of the creature intact for a storytelling AI to use. It tends to work fairly well, though you do occasionally get an AI going off the rails with it.
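As a concrete example of that workflow, here's the kind of compression prompt I mean; the wording and the sample creature are made up by me, not from the game or any particular model:

```python
# Hypothetical prompt for having an AI shrink a creature entry while
# preserving its theme, goals, and capabilities for a storytelling AI.
creature = (
    "Role: market-square pickpocket. Goal: steal from distracted shoppers "
    "and flee the guards. Capabilities: stealth, fast hands, knows the back alleys."
)

compression_prompt = (
    "Minimize the number of context tokens in the following creature "
    "description while keeping its theme, goals, and capabilities intact "
    "for a storytelling AI to use:\n\n" + creature
)
print(compression_prompt)
```

You then paste the AI's shortened answer into your entity entry instead of the long original.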
Tried it, but I keep getting the "The AI model was unable to produce the correct JSON format. Try a different model." error with the model. Any guidance? I did paste the instruction from Step 6 into the System Prompt under the Interface tab of the Developer, and I did change the in-game system prompt in Step 8. I can also confirm I am using the Instruct model.