Hi, thanks for playing and for the feedback! You're right: the dialogue is generated by an LLM. Due to cost constraints, there was a daily cap on LLM usage; once that cap was hit, the system fell back to cached responses, which could feel less dynamic. I've raised the daily limit and also optimized the system to reduce dialogue repetition, so the experience should feel more consistently dynamic now.
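For the curious, the quota-and-fallback behavior described above boils down to a pattern like this. This is just an illustrative sketch (all names are hypothetical, not the game's actual code): each LLM call counts against a per-day budget, and once the budget is spent, a cached line is served instead.

```python
from datetime import date

class DialogueService:
    """Hypothetical sketch: daily-quota LLM gate with a cached fallback."""

    def __init__(self, daily_limit, llm_call, cache):
        self.daily_limit = daily_limit
        self.llm_call = llm_call   # function(prompt) -> str
        self.cache = cache         # dict: prompt -> last LLM reply
        self._day = date.today()
        self._used = 0

    def get_dialogue(self, prompt):
        # Reset the usage counter when the day rolls over.
        today = date.today()
        if today != self._day:
            self._day, self._used = today, 0

        if self._used < self.daily_limit:
            # Budget remains: call the LLM and remember the reply.
            self._used += 1
            reply = self.llm_call(prompt)
            self.cache[prompt] = reply
            return reply

        # Budget spent: fall back to a previously cached response.
        return self.cache.get(prompt, "(no cached line available)")
```

Raising `daily_limit` directly reduces how often players see the cached fallback, which is the change described above.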