Hi Ayush, I've fixed the issue you found. It turned out to be a finicky bug in the tokenizer's encoding and decoding. This update also includes an improved 15M-parameter LLM for a more consistent storytelling experience.
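
If it helps when verifying the fix, here's a minimal round-trip sanity check, just a sketch: the `ByteTokenizer`, `check_round_trip`, and the sample strings below are hypothetical stand-ins, not the project's actual tokenizer or tests.

```python
# Hypothetical sketch: ByteTokenizer is a stand-in, not the project's tokenizer.

class ByteTokenizer:
    """Toy byte-level tokenizer used only to illustrate the round-trip property."""

    def encode(self, text: str) -> list[int]:
        # Map the UTF-8 bytes of the text to integer token ids.
        return list(text.encode("utf-8"))

    def decode(self, ids: list[int]) -> str:
        # Rebuild the UTF-8 byte string and decode it back to text.
        return bytes(ids).decode("utf-8")


def check_round_trip(tokenizer, samples):
    """Assert that decode(encode(text)) reproduces every sample exactly."""
    for text in samples:
        restored = tokenizer.decode(tokenizer.encode(text))
        assert restored == text, f"round-trip failed for: {text!r}"


if __name__ == "__main__":
    check_round_trip(ByteTokenizer(), ["Once upon a time...", "  leading spaces", "émoji 🙂"])
    print("all round-trip checks passed")
```

The same check can be pointed at the real tokenizer by swapping in its encode/decode methods.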