Hi XenoCow, it's good to hear from you again, thanks so much for playing!
I'm glad you like the minigame and understand its purpose; other players have been confused about why it's there. The idea is to simulate a state of nervousness: the player says something silly and then has to wait to see how she reacts.
It loads faster because the LLM no longer competes with Unreal Engine for resources, haha. The videos are random, although when she gets really upset, you start seeing videos of her being upset. Associating videos with states isn't a technical challenge; the problem is that the game's file size gets too large. About speech synthesis: since the videos are pre-rendered, there's no way to adapt the lip movements, so I decided to remove it.
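The state-to-video association really is simple in code. A minimal sketch of what I mean, with made-up clip names and states purely for illustration:

```python
import random

# Hypothetical mapping of emotional states to pre-rendered clips.
# The file names and state labels here are invented for illustration.
VIDEOS_BY_STATE = {
    "neutral": ["idle_01.bin", "idle_02.bin", "smile_01.bin"],
    "upset":   ["upset_01.bin", "upset_02.bin"],
}

def pick_video(state: str) -> str:
    """Pick a random pre-rendered clip for the current emotional state,
    falling back to neutral clips for unknown states."""
    clips = VIDEOS_BY_STATE.get(state, VIDEOS_BY_STATE["neutral"])
    return random.choice(clips)
```

The logic is trivial; the cost is purely in shipping enough clips per state, which is exactly the file-size problem.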
I've seen the video; it's exactly the same concept, nice work! Just a side note: he's loading MP4 videos. If you wanted to combine that with a local LLM, you'd run into problems... In my case, video decoding is done offline, and the JPEG frames are packaged into a binary file (let's say a custom video format like .bin). Playback then only involves loading the video into memory, not decoding it. The downside is that .bin videos are larger than MP4s :(.
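To give an idea of what I mean by a custom .bin format: conceptually it's just the pre-extracted JPEG frames concatenated with a small header per frame, so playback is a straight read into memory with no video codec involved. This is only a rough sketch of the idea, not my actual packer; the length-prefixed layout here is an assumption for illustration:

```python
import struct
from pathlib import Path

def pack_frames(jpeg_paths, out_path):
    """Pack pre-extracted JPEG frames into one .bin file.
    Layout (illustrative): for each frame, a little-endian u32
    byte length followed by the raw JPEG bytes."""
    with open(out_path, "wb") as out:
        for p in jpeg_paths:
            data = Path(p).read_bytes()
            out.write(struct.pack("<I", len(data)))
            out.write(data)

def load_frames(bin_path):
    """Load every frame back as raw JPEG bytes.
    No video decoding happens here -- just memory reads."""
    frames = []
    blob = Path(bin_path).read_bytes()
    offset = 0
    while offset < len(blob):
        (size,) = struct.unpack_from("<I", blob, offset)
        offset += 4
        frames.append(blob[offset:offset + size])
        offset += size
    return frames
```

Since each stored frame is a complete JPEG rather than a codec-compressed delta, the file is much larger than an MP4, which is exactly the trade-off I mentioned.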