Hello!
Currently, locally hosting an LLM via llama.cpp is not working.
Requests do come in and start processing, but then they are cancelled client-side.
Is there a way to open a console/debug mode to see exactly why?

Yes, take a look at the logs in the game's files under data/logs/

Okay, so there are a lot of errors:
1. Player2 is not responding
2. A health request is sent to an unknown port on localhost (:4315/v1/health); I suppose it tries to check the health of the AI inference endpoint (see the probe sketch below)
3. ChatManager gives "Could not fetch token usage! 401"
4. And when the game determined that "Local LLM: OFFLINE", it gave an "operation timed out" error (it did send the request despite that determination)
The fourth error appears 10 seconds after the request, while my endpoint is still generating (I tested with CPU inference; the GPU is unavailable at the moment).
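For reference, here is a quick, minimal sketch I'd use to see which health route actually answers. The :4315/v1/health path is just what shows up in the log above; llama.cpp's llama-server exposes its own GET /health route on whatever port it was started with (8080 by default), so the port and paths below are assumptions to adjust for your setup:

```python
# Probe both the route the game polls and llama.cpp's own health route.
# Assumptions: GAME_PORT comes from the log line above; LLAMA_PORT is
# whatever you passed to llama-server via --port (8080 by default).
import urllib.request
import urllib.error

LLAMA_PORT = 8080   # port llama-server is actually listening on
GAME_PORT = 4315    # port the game's health check is hitting

def probe(url: str) -> None:
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            print(f"{url} -> {resp.status} {resp.read(200)!r}")
    except urllib.error.HTTPError as e:
        print(f"{url} -> HTTP {e.code}")
    except Exception as e:
        print(f"{url} -> no response ({e})")

for url in (
    f"http://127.0.0.1:{LLAMA_PORT}/health",    # llama.cpp's built-in health route
    f"http://127.0.0.1:{GAME_PORT}/v1/health",  # route the game appears to poll
):
    probe(url)
```

In my case only the llama.cpp route responds, which is why I think the game is checking a port/path that nothing is registered on.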

And after that, a repeated error of this kind, along with a few missing-animation errors.

Ah.
Also, half of the file is errors about objects being destroyed without calling SafeDestroy.

Make sure your AI model is not a reasoning type, as that's not supported right now. Also double-check with the in-game guide. You can ignore the SafeDestroy calls.

Double-checked; it still makes calls to an unregistered port that has no backend to check the inference server's heartbeat, and it still times out. Do you not use streaming responses?
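To illustrate what I mean by streaming: here is a minimal sketch against llama.cpp's OpenAI-compatible endpoint, assuming llama-server is running on localhost:8080 (the port and prompt are just placeholders for my setup). With "stream": true the first tokens arrive within a few seconds even on CPU, so a client reading the stream wouldn't sit behind a fixed 10-second timeout waiting for the full completion:

```python
# Streaming chat completion against llama-server's OpenAI-compatible API.
# Tokens are printed as they arrive instead of waiting for the whole reply.
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local",  # llama-server accepts any model name here
        "messages": [{"role": "user", "content": "Say hello."}],
        "stream": True,
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=30) as resp:
    for raw in resp:                     # server-sent events, one per line
        line = raw.decode().strip()
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)
```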

And there are a lot of errors...


Okay, I'll look at it sometime this week. Thanks for the heads up! BTW, most of those errors are benign (mostly debug calls accidentally left in).