
There’s an AI platform project that can be fed custom information and make decisions based on predefined contexts. It can run entirely locally, complete with TTS, STT, and everything.

To make this work with hotscreen, we’d need an API or WebSocket connection. As a starting point, it would be useful to access the current body-part configuration and the latest detection data. Ideally, we’d also be able to change the configuration via the API.
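To make the request concrete, here’s a minimal sketch of what the plugin side of that connection could look like. Everything here is hypothetical: the local endpoint, the port, and the `get_config`/`subscribe` message types are assumptions about an API that doesn’t exist in hotscreen yet.

```python
# Hypothetical sketch only: the endpoint and message types below do not
# exist in hotscreen; they illustrate the shape of API this proposal asks for.
import asyncio
import json

import websockets  # pip install websockets


async def main() -> None:
    # Assumed local WebSocket endpoint exposed by hotscreen (hypothetical).
    async with websockets.connect("ws://localhost:8765") as ws:
        # Read the current body-part configuration.
        await ws.send(json.dumps({"type": "get_config"}))
        config = json.loads(await ws.recv())
        print("current config:", config)

        # Subscribe to detection events and react as they arrive.
        await ws.send(json.dumps({"type": "subscribe", "topic": "detections"}))
        async for message in ws:
            detection = json.loads(message)
            print("latest detection:", detection)


asyncio.run(main())
```

A plain HTTP API would work just as well for reading and writing the configuration; the WebSocket only really matters for streaming detection events as they happen.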

Here are a couple of possibilities this could unlock:

  1. Dynamic AI speech: Let the AI speak (via TTS) directly to the current on-screen context, without relying on pre-recorded voice lines. This would make responses fresh and unpredictable, eliminating the need to prepare new samples. The dialogue would always be context-aware, and the tone or style could be shaped by configuring the AI’s character profile (e.g., dominant, teasing, sarcastic, etc.).
  2. Content control via AI “mood”: If body-part activation could be controlled through the API, the AI could decide when certain content is shown, adding an element of randomness. The viewer could even have to “request” specific content, and the AI could allow or deny access based on its mood or personality, again heavily influenced by the configured character traits (a rough sketch of both ideas follows this list).
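Here’s a rough sketch of the decision loop behind both points. The `LocalLLM` and `LocalTTS` classes are stand-ins for whatever the AI platform actually provides, and the `set_config` message is the same hypothetical hotscreen API assumed above.

```python
# Hypothetical sketch of the loop from points 1 and 2. LocalLLM/LocalTTS
# stand in for the AI platform; "set_config" is an assumed hotscreen message.
import json
import random


class LocalLLM:
    """Stand-in for the AI platform's text generation."""
    def generate(self, prompt: str) -> str:
        return f"(generated reply to: {prompt})"


class LocalTTS:
    """Stand-in for the AI platform's local text-to-speech."""
    def speak(self, line: str) -> None:
        print("TTS:", line)


def handle_detection(event: dict, llm: LocalLLM, tts: LocalTTS, send) -> None:
    # 1. Dynamic speech: turn the detection into a prompt shaped by the
    #    configured persona, then speak the freshly generated line.
    prompt = (
        "You are a teasing character. The current on-screen context is "
        f"'{event['label']}'. React in one short sentence."
    )
    tts.speak(llm.generate(prompt))

    # 2. Mood-based content control: let the AI (here, plain chance) decide
    #    whether the requested body part gets shown.
    allowed = random.random() < 0.5
    send(json.dumps({
        "type": "set_config",
        "body_part": event["label"],
        "enabled": allowed,
    }))


# Minimal usage with a fake detection event and a print-based sender.
handle_detection({"label": "example_part"}, LocalLLM(), LocalTTS(), print)
```

In a real plugin, the persona string would come from the AI’s configured character profile, and the allow/deny decision would come from the model’s state rather than a coin flip.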

I could develop a plugin to act as the interface between the AI platform and hotscreen, but I’d need a way to both read and set parameters as described above. If this sounds feasible, we could move forward with building a proof of concept.


That'd be huge. Just thinking about it, the possibilities... oof! I want it now :p