Report any bugs here. NOTE: connection issues go in the other thread.
I don't believe resetting the AI settings resets the system prompt, but I feel like it should. If you've modified it, you don't really have any way of getting back to the default. It's easy enough to wipe out the APPDATA entry and let it reconfigure, but I feel like a lot of people won't know to do that.
Major bug discovered:
- Not entirely sure whether or not this one's intentional, but traits that raise a stat's maximum value ALSO raise the starting value. Picking Iron Stomach on Veilwood increases max Stomach to 120 as normal, but also increases the starting value to 40/120 instead of 0/120. It's also different from other worlds: picking Iron Stomach on any of the other worlds raises the starting Stomach value to 20/120.
- Traits that reduce maximum values don't seem to work. Picking Busty says it reduces max Stamina by 5 but you still start with 100.
- Zen Master can't reduce the starting value of Stress below 0, so it does nothing in that regard.
The new AI soft-resets the prompt every 2 or 3 responses. It's like it doesn't know how to stay on topic if the user is a bit creative.
For example, I was starting to fight a chocolate dragon that could breathe caramel, and when I input throwing my spear at it, the prompt reset to me fighting a cookie brawler on Strawberry Shores.
Ah, that's because the new setup allows the AI to write a lot more, and the default memory limit is very small at 2k, which means after 2-3 pages it fills up and forgets about earlier pages faster. You should increase this memory limit by using your own AI (either with OpenRouter or by setting up a local LLM).
You can also limit the AI output by reducing the max output from 1024 to maybe 256 to avoid the AI writing too much.
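To give a rough idea of why those two settings interact, here's a sketch with made-up numbers (the world-prompt size is an assumption; actual token counts depend on the model and the world):

```ts
// Illustrative token budget only; real numbers vary by model and world.
const memoryLimit = 2048;   // default memory limit ("2k")
const maxOutput = 1024;     // default max output tokens
const worldPrompt = 700;    // assumed: system prompt + world rules + stats + entities

// Whatever is left over is all the past-page history the AI can keep.
const historyBudget = memoryLimit - maxOutput - worldPrompt; // ~324 tokens, barely a page
const historyWith256 = memoryLimit - 256 - worldPrompt;      // ~1092 tokens, several pages

console.log({ historyBudget, historyWith256 });
```

So shrinking the max output frees up most of that space for the AI to remember earlier pages.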
I think it was a bad idea to improve the memory, because now the AI will leave sentences unfinished, use incorrect grammar in its responses, and even include a jumble of letters and numbers, no matter what AI you use or how many tokens you limit it to. Can you fix this? It wasn't a problem before.
It's not a bug, more like a suggestion: I think you should change the rules in the basic worlds to accommodate sexual/deviant scenarios, since some AIs won't allow it otherwise. I know I can edit worlds manually, but still, not everyone would want to do that.
I'm mainly using Qwen2.5-7B-Instruct with LM Studio.
Sometimes I have to jump through several hoops to explain to the AI that I want the scenario to have explicit content. It's kinda funny, actually: it can describe a birth scene and still think that's not explicit content.
So, there's actually a lot I could say right now; I don't even know where to start.
First of all, it would be cool if you could add a toggle to "force-feed" the AI every system prompt and world setting before it outputs an answer.
And also: maybe you could create a way to input the player's character name before the game starts, because I wrote a prompt for it myself and the AI ignores it half of the time. I tested multiple prompts with multiple AIs and the result is different every time.
Here's the prompt:
#Always READ FIRST:
{
-WHEN"START GAME" You MUST ASK Player TO INPUT"[PCname]" before you output anything else, -[PCname]definition{Player's character name}, -Always refer to the player's character as [PCname]
}
Also, the AI can't work with the "choice list" properly, outputting story content there half of the time. Stats, and how the AI manages them, are basically a random number generator, even when I use stat descriptions to tell the AI when and how to manage them.
I don't think that's exactly a "bug" here so much as a missing feature. But I would also appreciate a name system.
Personally, I use the same name most of the time, so when I'm editing the world definitions for whatever reason, I also edit them to include my character name as one of the "rules" in the system prompt (though I replaced it with my username for this example). For example, here's my customization of the slime world:
The player must navigate the slime-infested city. With every step, she fights to maintain control, avoiding the green slimes' attempts to invade her body and turn her into a breeding ground.
World Rules:
- Player is named BestUserNameEver1.
- Player is always naked
- The city is overrun with green slimes that try to force their way into the mouths, vaginas, and anuses of their victims to reproduce
- The player's stomach will grow over time if she gets invaded by slime
- Once her slime invaders grow to term, she will **give birth**!
- Giving birth will reset stomach size to 0!!
I did that too, but I think it would be better if the AI asked you for the player's character name at the start of the game; it's more immersive that way. It can also be creative at times. For example, in my custom world there is a school and the player is a new student there. Last time the AI gave me a questionnaire in the Headmaster's office where I entered my character's name, gender, and age. That was cool and nicely integrated into the narrative.
And yeah, my post isn't really about bugs, it's more like feedback on the project.
For "stop", you can sort of accomplish this right now using the edit button that was added in 1.1.3. If the AI produced "too much" text, you can edit the whole passage to stop at where you would have preferred it to stop.
As for "continue", have you tried making your prompt be "continue" or "please continue"? Since the default system prompts are about the AI being a narrator for the game, interacting with a player, it would be perfectly reasonable for a player in that context to ask the narrator to continue on with describing the scene / actions without it being off the rails. That sort of thing would be common in some sort of "play-by-post" choose-your-own-adventure or DnD campaign. Or if that doesn't work, perhaps with extra notation like "(Out-of-character: Please continue)" or "(Out-of-character: I have no particular response, please continue)" since things like that are also common.
The edit button does not stop output generation. When you press edit, you still need to wait until the output is fully done before you can edit anything. And as I said, it's too much work to edit and ask the AI to continue every time until I see a passable result. That's why I said it would be a great QoL addition.
Based on some Google searches, it is most likely a false positive. HEUR detections are a heuristic feature of your antivirus: it scans a program's code and, if the code doesn't meet certain criteria, flags it as malware to be on the safe side. This is mostly used to find adware or PUPs (Potentially Unwanted Programs). However, it can produce false positives, since not all code that gets flagged this way is necessarily malicious. I also think the dev wouldn't suddenly turn it malicious, especially since the source code is now made available by them.
This detection is also more likely to appear if you're using Avira as your antivirus software; it seems to be more common with them.
Sources that I found:
https://www.reddit.com/r/antivirus/comments/1dde6hm/got_a_heur_virus_on_my_lapto...
https://support.avira.com/hc/en-us/articles/360000819265-What-is-a-HEUR-virus-wa...
I'm a different user (made an itch account to reply), and you probably already know this, but I think that save file is MUCH larger than 100 MB. I've noticed a new bug where save files balloon in size massively for no reason. In the current version, after playing for 13 in-game "hours" (13 prompts), the save files are somehow over 50 megabytes, and a brief glance inside shows they seem to be constantly repeating data. I made an example save in Valentine Survival here for you to download, just in case you have issues reproducing the problem. But yeah, if I had to guess, that save file is crashing their browser because it's grown into the gigabytes for some reason. Also, I'm using Firefox Focus (Android), in case that helps.
Umm, hello. (I'm sorry for disturbing you; I've read that you're having a hard time right now, so take your time with my issue, I don't mind.) I discovered your update that lets us edit and create our own worlds to mess around in, so I made an edited version of your Candyland to test it out. You should know that I'm not a programmer whatsoever, so I just used the settings that were given to me and told the AI what I wanted to see. But I've noticed that Candyland got replaced entirely, and I didn't intend to do that. At first I thought it was fine, since you'd mentioned that the data could reset; I know now that was meant as a warning, but I took it as permission to edit one of the other worlds, assuming I could download a copy of my world and Candyland would reset.
And now I can't give any commands to the AI or save my world because it has a hard time processing my requests. I read about another commenter who had a similar issue, and I think it could be because the servers shut down again, and I don't know whether I was involved in that or not.
Can you tell me what I did wrong?
First off, thank you for this game FieryLion. Huge fan! I'm reaching out as I'm unable to get responses since the latest update yesterday.
I can reset my settings to the default server and the game works, but I had been following the instructions on this page (https://fierylion.itch.io/formamorph/devlog/885513/quick-setup-guide-free-openro...) and found an even better experience. However, now that's not working; it stopped mid-gameplay and hasn't come back. Does this setup no longer work in browser mode, and if not, are there new instructions?
this one is good https://openrouter.ai/sophosympatheia/rogue-rose-103b-v0.2:free
The model for the Shadow Raptor is missing in Veilwood. It lacks a 3D render like the Dread Crawler or the various fungi have, which leads the AI to be rather ambiguous about what exactly a Shadow Raptor is compared to the Crawler. The renderless Shadow Raptor seems to randomly develop fur, feathers, bat-like wings, and other bizarre features as the AI tries to make sense of it without a visual representation, and at one point while I was playing, a Shadow Raptor got so…'excited' that it randomly turned into an honest-to-goodness dragon.
This is probably just a me problem, but with 1.1.6 I cannot load older save files. It freezes, then eventually crashes to a white screen. I tried letting it wait 10+ minutes for "conversion", but yeah, pretty sure it just RIPed. Idk if this has anything to do with it, but the save(s) are 300-400 'turns' and were only played on a mildly edited Veilwood.
Ok, I think I figured out the cause..? It doesn't lag in the normal worlds, only in my modified version of Braduhsley's Colossus world, and it seems to happen most often when I try to go into one of the 2 new locations I added.
Unsure how I broke it or how to fix it, but it seems to be the cause of the lag and error messages. (I'm on mobile, if that helps.)
Before it kept crashing, I had just fixed a problem with my personal entities not spawning by checking the boxes in the location tab on the main locations. I then added 2 new areas so that the location menu could scroll, allowing me to choose the location. I added full descriptions using Braduhsley's as guides and made sure to spellcheck everything.
(A summary of what led to this problem of mine, in case it helps.)
I mean, during very high demand the server may drop some requests because it's overwhelmed. Again, the best solution is to use your own AI; you can use free options from OpenRouter:
https://fierylion.itch.io/formamorph/devlog/885513/quick-setup-guide-free-openrouter-setup
"Request failed (400). Either model name is wrong or memory limit exceeded model limit."
That's the error message, and it happens around page 16 to 20 no matter the world
Edit: (Tried to pull an "American education system" in the chocolate default and ate a nuke as a giantess in the other default world; both times the game wouldn't let me go past 17ish pages.)
Side note for this: Why the heck is the AI more descriptive about gore than the chocolate !!? 🤣
When I tested the page memory glitch in the Valentines world, the AI went into full gory detail when I jokingly brought pew-pews into the game. Like, it was listing stuff like brains splattering on the wall, how it landed, how the eyes popped, etc.
Honestly I'm used to gory movies and games but even I almost threw up my breakfast from the extremely detailed descriptions. 😅
It doesn't work at all. I tried from 3500 down to 2000, and it just acts like I'm starting a new game every time...
I don't understand why, because I've been using the mobile web version for about a week or so without this big of an issue (other than the usual server bugs).
I like this game because I can basically bounce my own thoughts back at me with more flair, so I tend to play upwards of an hour out of boredom.
Here, but the problem happens in all worlds, even the default ones... like I mentioned before.
Oh err... WARNING VORNY CONTENT AHEAD!!!
https://drive.google.com/file/d/1G5scqsrBbV7MHudYPfJyBTx-MvDoGkDI/view?usp=shari...
Update: after 40 to 50-ish pages it starts to struggle a lot more, sometimes acting like it's starting a new game. The location menu is still broken, but on the bright side, I can finally download my saves!
Edit: It broke again... now it acts as if every page is the start of the game, even if I make a new game. 😓
It should work. If you get error 400, reduce the memory limit; you can also reduce max output tokens, as those also take memory. The memory usage estimate isn't perfect and can underestimate the actual memory used (this is different for each AI model).
Also, as you can see, only 8 out of 86 pages are kept in memory; if you want the AI to remember more, you need to use a better AI with a higher memory limit.
Dunno if it happened in 1.1.10 or 1.1.11, but now the rollback feature seems to break utterly if you roll back more than 2 responses.
It looks like responses remain in memory as stale data even after a rollback, and they get loaded if you try rolling back again after having rolled back more than 2 responses. (Sorry if that's hard to understand; it's hard to explain well.)
I just HAD to say something about a bug... I tried to start a new game and I keep getting this error (I have the default settings on and it had been working just fine until now):
"This model's maximum context length is 4096 tokens. However, you requested 4171 tokens (3147 in the messages, 1024 in the completion). Please reduce the length of the messages or completion."
Tried that and it's still broken, but it acts as if I had set it to 800 when it's really at 1024 (aka every page acts like it's the start of the game).
For reference, I had a save at page 160-ish earlier and there was no problem at all. The classic games seem to be sort of working, so I honestly don't know why mine is acting out. 🤷♂️
Yeah, it's definitely a pain point for me. I've observed when running a local AI on qwen2.5-7b-instruct-uncensored, the performance is *slightly* more reliable, but it's definitely not perfect. Qwen is about the most stable I've used (out of over a dozen _instruct_ tagged models). I mostly just disable stat prompts now and try to construct scenarios in a way that they aren't required. The AI is actually surprisingly good at conjuring skill checks and resolving them without including the details in its response, in a way that keeps the game immersive.
Small "bug" on mobile, it appears we cannot save the game past a certain amount of pages... i had a save at 90ish, and one at 60ish but i tried to save my latest which is at 171 and well ... it doesn't work.
Not sure if this can be classified as a glitch though cause everything else works fine.
(Side note: I found out that removing the default worlds and only using one helps with memory and processing)
First thing to check is if your scenario still works when you remove the new stat. If you continue to get the error, that means you have instructions somewhere that are causing problems.
I would check the "name" for the actual stat, as I didn't see any kind of key sanitization in the source code. If you've added a double-quote in the name for your stat, it might be interfering with some behind-the-scenes code.
Next place to check would be the description for the stat, as well as the ranges. If those are clear, check if you're using the custom stat progression box (the one that lets you add Javascript).
And if all of *that* is good, then you should export your scenario and try it on the desktop version. You would do this by trying to export somewhere like Google Docs so that you can import it from the desktop client.
Beyond that, FieryLion may have more ideas. Those are mine as a web developer. I see this error frequently with my day job when someone doesn't pair quotes or tags correctly.
WARNING!!! VORNY TALK AHEAD!!!
The stat was "Digestion" the description: "Represents how long it'll take before the prey is digested after being Vored. If the player is the prey, damage Health by 25 every page once Digestion reaches 100."
Or something like that... As for whether it works: the world works perfectly (sort of) without said stat.
I removed the stat after a few tries; I'm currently unsure how to make it work as intended, since it wouldn't increase unless I said "I watch as the Digestion meter increases" or something like that.
I'm not the best at this type of stuff... I mean heck, my world is an altered version of Braduhsley's named-player Colossus world... (with a lot of new junk, fewer furries, and more monster girls) 😅
I'd like to fiddle more and give you some examples, but Windows has suddenly decided to treat this game as a virus and refuses to let me run it.
This is my "best guess" for trying to adjust the System Prompt for your scenario after poking at my local QWEN-2.5 AI on this. This assumes the "Digestion" stat is being tracked on the prey (in this case, the player) and not the predator (the NPC that has swallowed the prey).
Text to add to your prompt:
### Detailed Instructions
#### Digestion
- **Initial Value:** The digestion timer starts at zero.
- **Increment:** Each time a creature swallows another, digestion begins.
- **Decay:** Digestion decreases by 10 every action that does not involve swallowing.
- **Threshold and Effect:** Once the digestion reaches or exceeds 100, damage Health by 25 with each subsequent action.
This is why running a local AI is so good. The downside is that you're hardware- and instruction-limited: AIs behind web endpoints are generally more "up to date" and may have "guard rails" on what they're allowed to generate depending on the laws in your locale, even if you pick one that is labeled "uncensored".
Well, the good thing is it worked; the bad thing is it also didn't, because now there's too much text... I'm using the basic/default AI because I prefer its role-playing capabilities, but that also means a 3900 cap... and apparently describing the different types of Vore is too much for that in some locations that have a lot of entities... (even without the Vore description, the digestion stuff by itself is a bit too much for some areas) 😅
Lemme go count my entities rq
Edit: 67 ... I have 67 entities if you don't count the 3 actions, minus 2 if you don't count the location specific ones. 😅
Edit The Sequel: Wait would it be better if I added the vore variants into the description of the entity Vore (action)? Since it's an action and not an entity present in every area, would it smooth out the kinks? (Pun intended)
If you're running a local AI in LM Studio, there is a gear icon in the "Chats" tab of LM Studio. If you go into the "Developer" tab in the left panel and select your model in the center panel, the right panel has a "Load" tab where you can adjust the token length the local server will allow. Changing this value does require you to reload the model in LM Studio (it gives you a button prompt for this), and bigger values require more CPU or GPU.
As far as resolving your entity behavior, I haven't looked _that_ deeply into the source code. I think the game only sends entities based on the player's location, and this goes out with the <LOCATION JSON DATA> placeholder in the various prompts. So if all 67 of your entities are in the same area, it might be part of why you're hitting some token limits.
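For a rough sense of scale, here's the kind of back-of-the-envelope math I'd do. The ~4 characters per token rule of thumb and the average entity size are both assumptions; the real count depends on the model's tokenizer and the game's prompt formatting:

```ts
// Very rough token estimate using the common ~4 characters per token heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const entityCount = 67;       // entities available in one location
const avgEntityChars = 300;   // assumed: name + description + AI instructions per entity

const perEntity = estimateTokens("x".repeat(avgEntityChars)); // ~75 tokens
console.log(entityCount * perEntity); // ~5,000 tokens before system prompt, history, or output
```

If numbers like that are anywhere near reality, a 3900-token cap gets eaten before the AI even sees your action.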
I've had the "best" success by trying to define behaviors and states in the System prompt (I think I pasted an example for the Slime Core and Player Excretion above) and then mentioning those behaviors in the entity's AI instructions (second box on the Entity setup), but I've never had more than 4-5 "weird" behaviors in a scenario that I needed to explain as part of the instructions.
I tried it today and it's fine for now... not sure what it was but it made bug testing my world very annoying lol.
Edit: It appears I accidentally found the root cause: it's the Xenomorph world in the shared worlds. Downloading it causes the worlds to get Thanos-snapped when you exit world editing or the world you're playing in. I'd assume it also causes the character bug, though I'm not sure, due to that world's lack of a model. 🤷♂️
My memory usage keeps drastically rising more and more, and the dictionary isn't helping anymore. What is going on? My notes are small, I'm using the dictionary to save memory, and the world is a little big, but I don't think it's big enough to suddenly, spontaneously go 1000 tokens over the max. Is there something I'm doing wrong? I feel like this isn't intended. I'm using the default AI for the game.
This is funny, mostly because I accidentally posted a response to this game on a different game you also make, haha. I'm not sure what happened, but I played a little earlier this morning and it worked fine. I came back this afternoon, played a little, and realized the AI is making all kinds of errors that weren't present before, and I'm a bit confused. It's losing cohesion and losing track of interactions between characters: for example, my companion is being attacked by a slime while I'm fighting another slime that just appeared, and the AI completely forgets about the companion and their slime, focusing purely on the player and the slime in front of them and disregarding the other two entirely. When the battle is over, it sometimes forgets I even had a companion with me. I'm pretty sure I broke something by mistake, but I'm not sure what, so I'm going to reinstall and try again. But I am enjoying it, at least what little I've tested. Oh, and it's been crashing quite frequently recently for god knows what reason, at least on my end.
The default AI has very little memory to work with; you should use your own models or make a free OpenRouter account to use a better AI with more memory: https://fierylion.itch.io/formamorph/devlog/885513/quick-setup-guide-free-openrouter-setup
Here are the VirusTotal scan results for the game file; 64/64 scanners passed it: https://www.virustotal.com/gui/file/5b8a6e3f61dc6e64ec00fb24e05da14ad4d9d2068ab054454e3f35efc383a906
Not running in Wine (Linux). I get a black window and the crashdump, with error messages in the console:
wine: Unhandled page fault on read access to FFFFFFFFFFFFFFFF at address 00007078D5E87AAD (thread 0170), starting debugger...
020c:fixme:dbghelp:elf_search_auxv can't find symbol in module
020c:fixme:dbghelp:elf_search_auxv can't find symbol in module
020c:fixme:dbghelp_dwarf:dwarf2_get_cie wrong CIE pointer at 0 from FDE c098
Wine setup: newest staging with vcrun2022 and DXVK
Just a note that 90% of Steam games run in Proton (wine + newest vcrun + DXVK).
When I press rollback, for example from page 8 to page 5, and then continue my story to, say, page 7, pressing rollback again transfers me to page 8 with the original story, which in theory should have been erased. I don't know how else to describe it; English is not my first language, sorry. You can see how it looks in the GIF at the link below.
https://drive.google.com/file/d/1WrdbeZ0qK4VzT3kRvgghoR0Ea11cu48E/view?usp=shari...
so yeah, this error is because the new AI is too fast (I improved AI writing speed by 5-10 times). I discovered a bug in the game code that caused it to fail when parsing AI responses that are too fast. I have patched the bug, but I cannot patch old versions that you have downloaded. You’ll need to use the latest version unfortunately. I can’t just slow down the AI response for older unsupported game versions.
The default AI is currently producing malformed words and sentences such as: "In the heart of the sprawling, bustling Seisyō High School nestled between the chattering crowds and the echoing halls, lies the sanctuary of knowledge and tranqu, the library You, a Senior student, find yourself ensced in this bast of academia, engrossed in your, or so. eyes flit over pages of your textbook, but your mind wanders, lost in theyrinth of teenage dreams and curiosities."
I've played around with Max Memory and Max Tokens, but the results remain unchanged. Apologies if this doesn't qualify as a bug report.
Not sure if it's a bug, but it certainly didn't happen for me in the last version I had. I keep getting failed endpoints even at the initiation of the game. It's saying invalid URL and whatnot, but I'm not certain what that means or how to remedy this.
Edit: "Failed to process AI request" and "possible wrong model name" are two more parts of the errors. I can't even start a playthrough.
Unsure whether to count this as a bug or not, but the AI likes to repeat itself... like, a lot. Here are the 3 main examples of what I mean:
- The AI will often start with "As you, *insert player name here*, stand in *insert location here*,..." and will do so every page. If you're with an NPC, it'll also add their name and rank or species in said intro.
- Oftentimes the AI will start the next page with a partially summarized version of the last page, or just add the entirety of the last page in the new one.
- The AI will sometimes explain what is happening twice or more. Example: "As you stab NPC in the hand, you stab her hand... *4 lines later* ... and you stab her right in the hand, piercing her flesh..." (This is not a new one, but I still figured I should mention it.)
This is a common issue with smaller AI models like Mistral Nemo; you should use a bigger model, such as one of the free models on OpenRouter. A quick fix is to edit the AI message to remove the redundancies (if you leave the repeating stuff in, the AI will look at your message history and likely write more repeating stuff later on…)
FieryLion summed it up pretty well. I've played with models under 20b parameters mostly, and they repeat themselves *frequently*. They'll basically blueprint or template part of a message and just keep pasting it back in like it's boilerplate. I had pretty good success with a 30b parameter Gemma fork running locally (I'm on a 2070 Super with 8GB VRAM), although I only get 3-5 tokens per second (super, super slow). But the dialog quality was absolutely stunning compared to the 8b and 14b models. Qwen 2.5 is listed in several places as a "potato-friendly" model, but it's very unsophisticated: it can't really handle lewd/suggestive content in anything that would titillate or suspend disbelief, it is very repetitive and subject to "GPT-isms", and even if you set it for high temperature and sampling (to add a lot of randomness and creativity), it's still heavily constrained by how small it is.
I _personally_ don't like running AI through a hosted endpoint, even though I could be using some really good hardware and getting really fancy models out of it. My two main reasons are privacy and cost. But if you want better quality responses, you're going to have to either pay for much better hardware or hosting solutions (such as OpenRouter).
Will the rollback/abort issues ever get fixed, or will I be stuck using version 1.0.9? As I'm heavily relying on rollbacks to get certain prompts to show properly, the fact that it's constantly resetting basically everything makes using newer versions impossible for me, unless I make hard saves for basically every response, which is super cumbersome.
EDIT: So I think I found the cause of the issues, and how to trigger it relatively consistently. It seems to happen if you roll back a second time when the current page is less than the furthest you've been in that save. Basically, if your furthest page is page 5, roll back to 4, then roll back to 3, and then sometimes (but not always) it resets to page 5 due to a negative underflow. It throws an error as described below:
Detected mismatch in old save: totalPages (3) != gameStates.length (4), using offset: -2
Basically, it seems to me that because memory isn't truly cleared when rolling back, it's causing direct issues with the gameStates.length the game thinks exists. A solution could be to actually delete the rolled-back pages from memory, since technically the current behavior is also quite a severe memory leak.
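To make that suggestion concrete, here's a minimal sketch of what I mean. gameStates and totalPages are just the names from the error message; everything else is assumed, since I haven't read that part of the source:

```ts
interface GameState { pageText: string }

// Truncate the history instead of only moving a page pointer, so stale pages
// can't resurface and totalPages always matches gameStates.length.
function rollbackTo(gameStates: GameState[], targetPage: number): GameState[] {
  return gameStates.slice(0, targetPage);
}

// e.g. currently at page 8, rolling back to page 5:
// gameStates = rollbackTo(gameStates, 5);
// totalPages = gameStates.length; // 5, so no mismatch and no offset correction needed
```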
I've been using Formamorph again; it's been a few weeks and I've come across a few bugs and issues.
I've found that LM Studio doesn't always report an error properly when the AI's context length just barely isn't enough, and it goes off trying to generate a response anyway. This is what's caused spontaneous scenario resets for me, mainly on very big/detailed worlds.
Try increasing the context length a little bit and see if that changes it.
Yeah, basically what FieryLion said. Even the web version's AI is dependent on a Context Length set on the ai host, though in the web version you'll obviously not have access to it. (For reference, Context Length is simply how much data the AI is configured to be able to handle at a time in a single request, action prompt + system prompts + world rules/stats/entities/location data etc)
The fact it happens on all worlds for you seems unusual though, so my hypothesis is uncertain there as I've only experienced it on heavy worlds.
Basically, my hypothesis was that if, say, the AI is configured to a 4000-token context length and a world requires 3950, the buildup of memory containing history, or the Notes, could push it past 4000 in an edge case the AI isn't configured to react to, since (just maybe) it only checks the token limit BEFORE taking history into account, causing it to discard all past history and reset the scenario to fit within the configured 4000-token limit. But eh, I don't really know the inner workings well enough to say anything for certain, which is why I'm just hypothesizing something that could logically explain what's happening, since increasing the context length HAS worked for me.
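In code terms, the kind of check I'm imagining looks something like this. All names here are made up; I haven't looked at how the game or LM Studio actually does it:

```ts
// Hypothetical pre-flight check: a request only "fits" if world data, history,
// notes, AND the reserved output all fit inside the configured context length.
function fitsContext(opts: {
  contextLength: number;  // e.g. 4000 as configured in LM Studio
  worldTokens: number;    // system prompt + rules + stats + entities + location data
  historyTokens: number;  // past pages kept in memory, plus notes
  maxOutput: number;      // tokens reserved for the AI's reply
}): boolean {
  const { contextLength, worldTokens, historyTokens, maxOutput } = opts;
  // If historyTokens were left out of this sum, a 3950-token world would "fit" on paper,
  // then overflow once history is appended and the oldest context gets dropped,
  // which would look exactly like a spontaneous scenario reset.
  return worldTokens + historyTokens + maxOutput <= contextLength;
}
```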