I've been using formamorph again; it's been a few weeks, and I've come across a few bugs and issues:

  • The default AI is degraded from what it used to be: it keeps restarting the scenario, ignores my prompts half the time, and seemingly forgets previous prompts. Reducing memory does not work for me.
  • The Windows build is out of date and lacks custom VRM support.
  • When running the web build on itch.io on Windows, the 3D models don't show up; there's only a big black square (Firefox).

I've found that it doesn't always report an error properly in LM Studio when the AI's context length just barely isn't enough, and it goes off trying to generate a response anyway. This is what's caused spontaneous scenario resets for me, mainly on very big/detailed worlds.

Try increasing the context length a little and see if that changes anything.
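
If you want to sanity-check this yourself, here's a rough sketch of how you could estimate whether a world's prompt fits a given context length. The 4-characters-per-token rule of thumb and the function names are just illustrative; this isn't anything from formamorph's actual code:

```python
# Rough illustration only: estimates tokens with the common
# ~4-characters-per-token rule of thumb, not a real tokenizer.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def fits_context(prompt: str, context_length: int, reserve_for_reply: int = 512) -> bool:
    # Leave headroom for the generated reply; a prompt that "just barely"
    # fits can still blow past the limit once generation starts.
    return estimate_tokens(prompt) + reserve_for_reply <= context_length

world_prompt = "system prompts + world rules/stats/entities + history..."  # stand-in for a real world export
print(fits_context(world_prompt, context_length=4096))
```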

I'm talking about the default AI formamorph uses, not LM Studio or OpenRouter. Apologies if I didn't state that clearly. Also, this happens on all worlds.

Hmm, it could be the exact same issue. Sadly the game can't estimate the exact amount of memory used, so sometimes it may exceed the limit… I'm looking into it.

Yeah, basically what FieryLion said. Even the web version's AI depends on a context length set on the AI host, though in the web version you obviously won't have access to it. (For reference, context length is simply how much data the AI is configured to handle at a time in a single request: action prompt + system prompts + world rules/stats/entities/location data, etc.)
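
To make that concrete, here's a toy token budget showing how the pieces of a single request add up toward the configured limit. The numbers are made up, not measured from formamorph:

```python
# Toy figures only, to show how one request's parts add up toward the limit.
context_length = 4096

budget = {
    "system prompts": 600,
    "world rules/stats/entities/location data": 2800,
    "history + Notes": 500,
    "action prompt": 80,
}
used = sum(budget.values())
print(f"{used} of {context_length} tokens used; {context_length - used} left for the reply")
```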

The fact that it happens on all worlds for you seems unusual, though, so my hypothesis is uncertain there, as I've only experienced it on heavy worlds.

Basically, my hypothesis was that if, say, the AI was configured for a 4000-token context length and a world required 3950, the buildup of memory containing history, or the Notes, could push it past 4000 in an edge case the AI isn't configured to react to. Just maybe, it only checks the token limit BEFORE taking history into account, causing it to discard all past history and reset the scenario to fit within the configured 4000-token limit. But eh, I don't really know the inner workings enough to say anything certain, which is why I'm just hypothesizing something that could logically explain what's happening, since increasing the context length HAS worked for me.
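
To spell out the edge case I'm imagining, here's a toy sketch of the two orderings. The function names are invented and I have no idea what formamorph actually does internally; this is just the shape of the bug I'm hypothesizing:

```python
def tokens(text: str) -> int:
    return len(text) // 4  # crude stand-in for a real tokenizer

def build_prompt_suspected(world_data: str, history: list[str], limit: int) -> str:
    # Hypothesized buggy ordering: only the world data is checked against
    # the limit, then history is bolted on unchecked. If world data alone
    # sits near the limit, the final request silently exceeds it, and the
    # failure mode looks like a spontaneous scenario reset.
    prompt = world_data
    if tokens(prompt) <= limit:
        prompt += "\n" + "\n".join(history)
    return prompt

def build_prompt_safer(world_data: str, history: list[str], limit: int) -> str:
    # Safer ordering: drop the OLDEST history entries until the combined
    # prompt fits, instead of ever discarding everything at once.
    while history and tokens(world_data + "\n".join(history)) > limit:
        history.pop(0)
    return world_data + "\n" + "\n".join(history)
```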