Yeah, basically what FieryLion said. Even the web version's AI depends on a Context Length set on the AI host, though in the web version you obviously won't have access to that setting. (For reference, Context Length is simply how much data the AI is configured to handle in a single request: action prompt + system prompts + world rules/stats/entities/location data, etc.)
The fact that it happens on all worlds for you seems unusual though, so my hypothesis is shaky there, as I've only experienced it on heavy worlds.
Basically, my hypothesis was this: say the AI is configured with a 4000-token context length, and a world requires 3950 of it. The buildup of memory containing history, or the Notes, could push it past 4000 in an edge case the AI isn't built to handle, if it only checks the token limit BEFORE taking history into account. In that case, the overflow could force it to discard all past history and reset the scenario to fit within the configured 4000-token limit. But I don't know the inner workings well enough to say anything for certain, so this is just a hypothesis that would logically explain what's happening, since increasing context length HAS worked for me.
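Just to make that hypothesis concrete, here's a toy sketch in Python of what I mean. To be clear, this is entirely made up by me (the function names, the fake tokenizer, the numbers), not the actual engine; it just shows how a budget check done before counting history could end in "drop all history and reset":

```python
# Hypothetical sketch of the edge case described above: a context builder
# that checks the token budget BEFORE adding history, so accumulated
# history/Notes can push the final prompt past the limit, and the only
# fallback left is to wipe history and reset to the base scenario.

CONTEXT_LIMIT = 4000  # the configured Context Length in my example

def tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def build_prompt(world_data: str, history: list[str], action: str) -> str:
    base = world_data + "\n" + action
    # The budget check happens here, before history is counted...
    if tokens(base) > CONTEXT_LIMIT:
        raise ValueError("world data alone exceeds context length")
    full = world_data + "\n" + "\n".join(history) + "\n" + action
    # ...so if history pushes the full prompt over the limit, the code
    # has no smarter option than discarding ALL history: the "reset".
    if tokens(full) > CONTEXT_LIMIT:
        return base  # history wiped, scenario effectively resets
    return full
```

So a "heavy" world sitting at ~3950 tokens leaves almost no headroom: a short history fits fine, but the moment it tips past 4000, this logic silently returns the bare scenario again, which matches the reset behavior.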