
Sorry, I'm pretty silly, but how do I connect Oobabooga to the game? I'm able to run the model (Gemma 2 2B) just fine in the web UI, but when I try connecting it with the game, it says <Not connected> in the AI Chat menu. Do I have to enable something in Oobabooga?

https://imgur.com/a/NfFD8MV


Yes, you should enable the API in your ooba.
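For reference, in text-generation-webui (Oobabooga) the API is usually enabled with a launch flag. A minimal sketch assuming a default install — the flag and port are from the upstream docs, so double-check them against your version:

```shell
# Start the web UI with the OpenAI-compatible API enabled.
# The API typically listens on port 5000; the web UI stays on 7860.
python server.py --api

# If you use the one-click installer, add the flag to CMD_FLAGS.txt instead:
# --api
```

After that, the game should be able to reach the model at the local API endpoint rather than the web UI.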


Thanks! That helped, but after that I was still getting the "Failed to connect to LLM" error, for a different reason, and I've found the issue. Self-hosted LLMs take a long time to generate a reply, and judging by the logs, the game's timeout is around 10-15 seconds, so you need a model light enough to generate the answer faster than that. In my case (GTX 1050 Ti), the heaviest model that could make it in time had 360M parameters (SmolLM), which is way too stupid. Would it be possible to make the in-game "subtitles" update in real time, word by word (for local LLMs at least)? That would avoid the timeout, and since text-to-speech isn't implemented for self-hosted models anyway, it shouldn't be an issue.
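For what it's worth, word-by-word subtitles would mostly be a client-side change: Oobabooga's OpenAI-compatible endpoint can stream the reply as Server-Sent Events, and the game could redraw the subtitle after each chunk instead of waiting for the whole reply. A minimal sketch of the accumulation step (the SSE lines below are canned examples of the chunk format, not real server output):

```python
import json

def subtitle_stream(sse_lines):
    """Accumulate streamed completion chunks into a running subtitle string.

    `sse_lines` is an iterable of Server-Sent-Events lines like those an
    OpenAI-compatible endpoint sends when stream=True is requested. Yields
    the subtitle text after each new chunk, so the UI can redraw long
    before the full reply is finished -- no timeout needed.
    """
    text = ""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        text += chunk["choices"][0]["text"]
        yield text

# Canned example of what a streaming server would send:
lines = [
    'data: {"choices": [{"text": "Hello"}]}',
    'data: {"choices": [{"text": ", world"}]}',
    "data: [DONE]",
]
print(list(subtitle_stream(lines)))  # ['Hello', 'Hello, world']
```

Each yielded string is the subtitle so far, so the game loop only has to swap the displayed text whenever a new value arrives.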

Also, is support for free online LLMs coming anytime soon? Something like Google Gemini or, preferably, Mistral, as they both have free APIs.
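In case it helps, those hosted providers expose OpenAI-style chat endpoints, so supporting them would mostly be a matter of building the right request. A hedged sketch for Mistral — the endpoint URL and default model name are assumptions based on their public docs, so verify them before relying on this:

```python
def build_chat_request(api_key: str, user_message: str,
                       model: str = "mistral-small-latest") -> dict:
    """Build the pieces of a Mistral chat-completions request.

    Returns a plain dict (url/headers/body) that can be handed to any HTTP
    client; nothing here touches the network. The endpoint and default
    model name are assumptions -- check Mistral's API docs for the
    current free-tier models.
    """
    return {
        "url": "https://api.mistral.ai/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

req = build_chat_request("YOUR_API_KEY", "Hello there!")
print(req["body"]["model"])  # mistral-small-latest
```

Because the request shape is OpenAI-compatible, the same structure (with a different URL and model name) would cover most hosted providers.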

And another thing: are settings going to be implemented? The mouse sensitivity is currently way too low for me, and the game seems to have no frame cap or V-Sync, because my GPU is constantly loaded at 100%. Since the game still runs at 60+ FPS when it only uses 40-60% of the GPU, it must be running at a much higher frame rate than necessary.

Thank you and good luck with the development!


Sure, I'll look into it! The October release has a V-Sync fix and caps the framerate to 60, and yes, a graphics settings menu is planned!
