The easiest way is through Ollama. Download and install it, then open a command prompt and run: ollama run mistral:7b-instruct
Once the download is done and the model is running, open the game and, in the configuration, set Endpoint URL to http://localhost:11434/v1/chat/completions and Model name to mistral:7b-instruct
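If you want to check the endpoint outside the game first, here is a small sketch of the request the game would be sending. The payload shape follows the OpenAI-style chat completions format that Ollama exposes at /v1/chat/completions; the helper function name and the test prompt are just made up for illustration.

```python
import json

def build_chat_request(model, user_message):
    # Minimal payload for an OpenAI-compatible /v1/chat/completions endpoint
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }

payload = build_chat_request("mistral:7b-instruct", "Hello!")
print(json.dumps(payload))

# To actually send it (requires Ollama to be running):
# curl http://localhost:11434/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$(python this_script.py)"
```

If the curl call comes back with a JSON response containing a "choices" list, the endpoint is up and the game should be able to talk to it.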
After that you should be ready. I must say though, it does not work that well, mostly because the model is dumb. If you can run a better model, you can do that with Ollama too; you just need to start it with a different run command and set the Model name in the game to the name of the model you chose.
(Edit: if you run into issues like the game running out of VRAM, try lowering Max Memory in the settings.)