Pretty sure their opinion changes are evaluated every cleared floor based on who you assigned to take care of the cuddle lounge.
FieryLion summed it up pretty well. I've played with models under 20b parameters mostly, and they repeat themselves *frequently*. They'll basically blueprint or template part of a message and just keep pasting it back in like it's boilerplate. I had pretty good success with a 30b parameter Gemma fork running locally (I'm on a 2070 Super with 8GB VRAM), although I only get 3-5 tokens per second (super, super slow). But the dialog quality was absolutely stunning compared to the 8b and 14b models. Qwen 2.5 is listed in several places as a "potato-friendly" model, but it's very unsophisticated: it can't really handle lewd/suggestive content in any way that would titillate or suspend disbelief, it's very repetitive and prone to "GPT-isms", and even if you turn up the temperature and sampling settings (to add a lot of randomness and creativity), it's still heavily constrained by how small it is.
I _personally_ don't like running AI through a hosted endpoint, even though I could be using some really good hardware and getting really fancy models out of it. My two main reasons are privacy and cost. But if you want better quality responses, you're going to have to either pay for much better hardware or hosting solutions (such as OpenRouter).
I looked at the source code on Lion's GitHub. The stats, their thresholds, and the description for the relevant stat threshold get sent when your prompt includes the <STATS DESCRIPTION> text. The game basically serializes each stat and includes it in the prompt along with the description of its current threshold ("current" meaning the first threshold where current value <= threshold value). For example, if you define "Stamina" as a stat with thresholds at 33 (you are tired), 66 (you are winded), and 90 (you feel great), then when the current game state thinks the player's "Stamina" is 70, the prompt will include the stat and the description from the "90" threshold as "Stamina: You feel great".
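If it helps, here's how I'd sketch that lookup in Python. To be clear, this is just my reading of the logic, not the game's actual code, and the stat, thresholds, and descriptions are the example values from above:

```python
# A minimal sketch (not the game's actual code) of the threshold lookup as I
# understand it: use the first threshold whose value is >= the current value.

def stat_description(name, current_value, thresholds):
    """Return the prompt line for a stat, e.g. 'Stamina: You feel great'."""
    ordered = sorted(thresholds)                  # ascending by threshold value
    for threshold_value, description in ordered:
        if current_value <= threshold_value:      # "current value <= threshold value"
            return f"{name}: {description}"
    return f"{name}: {ordered[-1][1]}"            # above the top threshold entirely

stamina = [(33, "You are tired"), (66, "You are winded"), (90, "You feel great")]
print(stat_description("Stamina", 70, stamina))   # -> Stamina: You feel great
```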
I don't think "rules" work in the stat descriptions or thresholds. If you want them to do something, you should probably put that as instructions in the System Prompt.
If you're running a local AI now in LM Studio, there is a gear icon in the "Chats" tab of LM Studio. If you go into the "Developer" tab in the left panel and select your model in the center panel, the right panel has a "Load" tab where you can adjust the token length the local server will allow. Changing this value does require you to reload the model in LM Studio (it gives you a button prompt for this), and bigger values require more RAM or VRAM.
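If you want to double-check that the server came back up with the model after the reload, LM Studio's local server speaks the OpenAI-compatible API (port 1234 by default on my install; yours may differ), so something like this works as a quick sanity check:

```python
# Quick sanity check that LM Studio's local server is up after reloading the
# model, and which model it has loaded. Assumes the default port (1234);
# adjust the URL if you've changed it.
import requests

resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model.get("id"))    # identifier(s) of the loaded model(s)
```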
As far as resolving your entity behavior, I haven't looked _that_ deeply into the source code. I think the game only sends entities based on the player's location, and this goes out with the <LOCATION JSON DATA> placeholder in the various prompts. So if all 67 of your entities are in the same area, it might be part of why you're hitting some token limits.
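For a sense of scale, here's a rough back-of-the-envelope estimate (the per-entity size and the characters-per-token ratio are guesses on my part, not anything from the source code):

```python
# Back-of-the-envelope estimate of how much of the context window the entity
# payload alone might eat. The ~4 characters-per-token rule and the 400-char
# average entity size are rough guesses, not how the game actually counts.

ENTITY_COUNT = 67
AVG_ENTITY_JSON_CHARS = 400          # guess at one entity's serialized size
CHARS_PER_TOKEN = 4                  # common rough heuristic

payload_tokens = ENTITY_COUNT * AVG_ENTITY_JSON_CHARS // CHARS_PER_TOKEN
print(f"~{payload_tokens} tokens just for entity data")   # ~6700 with these guesses
```

With guesses like those, the entities alone could eat most of a 4k or 8k context before any history or system prompt gets added.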
I've had the "best" success by trying to define behaviors and states in the System prompt (I think I pasted an example for the Slime Core and Player Excretion above) and then mentioning those behaviors in the entity's AI instructions (second box on the Entity setup), but I've never had more than 4-5 "weird" behaviors in a scenario that I needed to explain as part of the instructions.
I've used Qwen2.5 7B Instruct Uncensored. Many others have recommended this in various community posts. It's a fast, capable all-rounder that does an excellent job and can be very creative with the right prompts.
I've also used:
- Llama 3.1 8B Lexi Uncensored V2: Close contender, but it's a lot slower, especially as history tokens increase. It also has a tendency to convert huge parts of the game text into choice lists because of its formatting habits.
- DarkIdol Llama 3.1 8B Instruct 1.2 Uncensored: Performance was a lot slower, and it had a heavy bias towards suspense/horror genres even when instructed otherwise by the system prompt.
- L3.2-rogue-creative-instruct-uncensored-7b: Performance was just bad, even standalone in LM Studio (64GB RAM, 16-core processor, 8GB VRAM, loaded from SSD, and this thing runs at typewriter speed). But it was hands down the best at long-form story...so good, in fact, that it regularly ignored my attempts to be concise (my text prompt asked for 2-5 paragraphs and it routinely shot for 30).
- llama-3.2-8x3b-moe-dark-champion-instruct-uncensored-abliterated-18.4b: This one ran like the DarkIdol Llama model (decent performance), but was even more biased towards suspense/horror.
And a handful of others whose performance was just not good, or that were recommended draft models.
This is why running a local AI is so good. The downside is that you're limited by your hardware and by how well the model follows instructions. AIs behind web endpoints are generally more "up to date", but they may have "guard rails" on what they're allowed to generate depending on laws in your locale, even if you pick one that is labeled "uncensored".
In the System Prompt section. It's not the description box, but the box lower down on the first tab when you edit a world.
You should *also* either define your terms there, or use the Dictionary tab. You are using terms that I'm not familiar with (and I don't really need a definition); you should define them in some way that makes your intent clear to the AI in its instructions.
Short definitional phrases are _usually_ enough to inform the AI, but when in doubt, also provide examples in your prompt so the LLM can decode how you want it to conduct your game. I had to do this with a corruption system because the AI was adding ludicrous amounts of corruption every turn, instead of only in the specific scenarios where it was supposed to be added.
For the "Digestion" mechanic, you can *probably* just keep the instruction simple. Something along the lines of "Threshold: if > 100%, the player starts losing health" might work.
And yeah, if you don't want certain creatures to gain certain traits, you should be able to just clearly state that in their description on the Entities tab. "This creature can never have wings" or something to that effect.
I'll rephrase.
I've personally found the game runs more reliably when I set up my scenarios in a way where the AI can just resolve effects as part of the game text. This means disabling the Stats prompt entirely. I've had way too many situations where a stat ends up doubling, then doubling again, then getting multiplied by 10, before doubling again, until the stat values are totally meaningless anyway. I had one scenario where the "Belly" stat was over 5000 and the VRML (model viewer Lion is using to draw the avatar in scenarios that use it) loses its mind.
So I just let the System Prompt tell the AI when it's generating game text that it needs to track player stats and I generally don't worry about numbers. The AI does a pretty good job of just advancing the story and incorporating player actions.
I'd like to fiddle more and give you some examples, but Windows has suddenly decided to treat this game as a virus and refuses to let me run it.
This is my "best guess" for trying to adjust the System Prompt for your scenario after poking at my local Qwen 2.5 AI on this. This assumes the "Digestion" stat is being tracked on the prey (in this case, the player) and not the predator (the NPC that has swallowed the prey).
Text to add to your prompt:
### Detailed Instructions
#### Digestion
- **Initial Value:** The digestion timer starts at zero.
- **Increment:** Each time a creature swallows another, digestion begins.
- **Decay:** Digestion decreases by 10 every action that does not involve swallowing.
- **Threshold and Effect:** Once the digestion reaches or exceeds 100, damage Health by 25 with each subsequent action.
You can probably track this with a stat and let the Stat Changes prompt handle it like a counter, but my experience with Stat Changes has been almost universally bad, so I write scenarios to be able to let the AI narrator handle it.
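In case the intent isn't obvious from the prompt text, this is the bookkeeping I'm hoping the AI approximates. The growth rate per action is a placeholder I made up, since the prompt above leaves the rate to the AI:

```python
# A tiny sketch of the "Digestion" bookkeeping described above, written as
# plain code so the intent is unambiguous. The +20-per-action growth rate is
# my own placeholder; the decay (-10) and threshold effect (-25 health at
# 100+) are straight from the prompt text.

def advance_action(state, swallowed_this_action):
    if swallowed_this_action:
        state["digestion"] += 20                               # assumed growth rate
    else:
        state["digestion"] = max(0, state["digestion"] - 10)   # decay per the prompt
    if state["digestion"] >= 100:
        state["health"] -= 25                                  # threshold effect
    return state

state = {"digestion": 0, "health": 100}
for _ in range(6):                      # six consecutive swallow actions
    state = advance_action(state, swallowed_this_action=True)
print(state)                            # {'digestion': 120, 'health': 50}
```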
Maybe try and offload this to the AI as part of the System prompt. I have a modified version of the slime city scenario where I added instructions by defining terms in the actual System Prompt on the "World" tab. This has worked out pretty well, but I'm still refining it. I added instructions that a slime invading the player implants a slime core, and that core takes a certain amount of "time" to mature. I have instructions with my Game Text prompt that tell the AI that when it generates Game Text, it should assume 2-3 minutes of "time" passes.
Example:
### Slime Core
**Description**:
- A concentrated, gelatinous protoplasm that is implanted in various body parts (belly, womb, breasts).
**Behavior**:
1. **Implantation**: The Slime Core attaches to a chosen orifice and fertilizes immediately.
2. **Growth**: Over 10-15 minutes, the affected region swells significantly as the core grows inside.
3. **Excretion**: The player secretes colored fluid continuously from the implanted area.
4. **Birth**: After 10-15 minutes of growth, the Slime Core is expelled as a mature slime, which wanders off.
### Player Excretion
**Description**:
- When the player is impregnated by a Slime Core, she experiences intense physical discomfort and pain as the core grows and tries to push through her body.
- The fluid from the affected region (belly, womb, or breasts) seeps out uncontrollably.
First thing to check is if your scenario still works when you remove the new stat. If you continue to get the error, that means you have instructions somewhere that are causing problems.
I would check the "name" for the actual stat, as I didn't see any kind of key sanitization in the source code. If you've added a double-quote in the name for your stat, it might be interfering with some behind-the-scenes code.
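To illustrate why that matters (the payload shape here is made up; I don't know exactly how the game serializes stats), an unescaped quote makes the JSON flat-out invalid:

```python
# Why an unescaped double-quote in a stat name can break things: the JSON the
# game builds behind the scenes simply stops being valid. The stat names and
# payload shape here are made up for illustration.
import json

good = '{"Stamina": 70}'
bad = '{"Stam"ina": 70}'      # stray quote inside the key

json.loads(good)              # parses fine
try:
    json.loads(bad)
except json.JSONDecodeError as exc:
    print("broken payload:", exc)
```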
Next place to check would be the description for the stat, as well as the ranges. If those are clear, check whether you're using the custom stat progression box (the one that lets you add JavaScript).
And if all of *that* is good, then you should export your scenario and try it on the desktop version. You would do this by exporting it somewhere like Google Docs so that you can import it from the desktop client.
Beyond that, FieryLion may have more ideas. Those are my best guesses as a web developer; I see this kind of error frequently in my day job when someone doesn't pair quotes or tags correctly.
Yeah, it's definitely a pain point for me. I've observed that when running a local AI on qwen2.5-7b-instruct-uncensored, the performance is *slightly* more reliable, but it's definitely not perfect. Qwen is about the most stable I've used (out of over a dozen _instruct_ tagged models). I mostly just disable stat prompts now and try to construct scenarios in a way that they aren't required. The AI is actually surprisingly good at conjuring skill checks and resolving them without including the details in its response, in a way that keeps the game immersive.
I have found that Telekinesis is such an indispensable spell school that it is always a first pick for me. Sure, the "zero accuracy" line seems like a handicap, but the actual perk of TK is that you are able to use knives, scissors, and tools while fully bound. If you have a magic knife or sword, this dramatically expands your escape options. Metal restraints end up being the only significant handicap until the heavy magic and curses start showing up.
It seems to work like a tile bonus, similar to the "corner" effect if you are adjacent to a chest, door, or goddess statue. I believe if you have a knife equipped, but your hands are bound, you get a bonus to the "cut restraint" action that isn't quite as big as if your hands were free, similar to how the corner gives you a struggle bonus when your hands or arms are bound.
I generally like the characters, but there's significantly less development of the characters than there is of the actual puzzles. I'm still trying to figure out how to get out of the lab, and now that I've been turned into a squidbunny I fear I'm going to have to load an earlier save, because it soft-locks you out of the scanners.
I feel like you get the most face time out of Yuri since she's doing the class rep thing to get your registration all squared away, but honestly I can't say more until I figure out how to get one of the girls out of the dream world.