This is very promising, and I myself have been testing and messing with connecting LLMs to a game world. One thing I'm wondering about, though: when using a local LLM, does the game tell the LLM about the robot's stats, its body, and what actions it can perform? I read that there is currently a limitation with function calling, but I'd love to hear more about how this works :)
Do you parse the actions from the generated text, or do you have the LLM generate speech and actions separately, or something like that?
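To make the question concrete, here's a minimal sketch of the kind of parsing I mean (the action names and JSON schema are made up for illustration): prompt the model to emit a JSON object with separate "speech" and "actions" fields, then extract and validate the actions from the raw reply.

```python
import json
import re

# Hypothetical action vocabulary -- these names are invented for the sketch.
KNOWN_ACTIONS = {"move_to", "wave", "pick_up"}

def parse_reply(raw: str):
    """Extract speech text and a list of valid actions from model output."""
    # Look for a JSON object anywhere in the reply; models often wrap
    # the JSON in extra prose, so don't assume the whole string parses.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            data = json.loads(match.group(0))
            # Keep only actions whose names the game actually knows.
            actions = [a for a in data.get("actions", [])
                       if a.get("name") in KNOWN_ACTIONS]
            return data.get("speech", ""), actions
        except json.JSONDecodeError:
            pass
    # Fallback: treat the whole reply as pure speech with no actions.
    return raw, []
```

For example, a reply like `Sure! {"speech": "On my way", "actions": [{"name": "move_to", "target": "door"}]}` would yield the speech "On my way" plus one `move_to` action, while a plain-text reply falls back to speech only.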