Posted June 08, 2025 by Aleksandr Unconditional
#bugfix #build #dina #local model #log #recognize #openrouter
✨ log: entries can now be edited and deleted; added a button for quick scrolling to the bottom; the latest entries are always visible when the log is opened
✨ added support for OpenRouter. free models: qwen/qwen3-30b-a3b:free, tngtech/deepseek-r1t-chimera:free, meta-llama/llama-4-scout:free, meta-llama/llama-4-maverick:free, deepseek/deepseek-chat:free, deepseek/deepseek-chat-v3-0324:free
✨ Dina: new images, changed start locations; you can now give her cunnilingus and squeeze her breasts; added blowjob images
✨ added models: claude-sonnet-4, claude-opus-4, claude-3-5-haiku-20241022, command-r7b-12-2024, gpt-4.1-mini; deprecated gpt-4-0613, gpt-4o-mini and gpt-3.5-turbo (16k version remains)
✨ local speech recognition model (Windows, Linux)
✨ speech recognition settings are now saved; at game start, the chosen solution (browser/local) is launched
✨ internal RU proxy: added an LV proxy server (Latvia)
✨ the default local model is now deepcogito_cogito-v1-preview-qwen-14B-Q4_K_M; the list of recommended models has been updated
🛠 fixed location behavior: now a random one from the saved set is chosen instead of the first
🛠 fixed UI/UX animations and minor interface bugs
🛠 koboldcpp updated to version 1.92.1
🛠 on Windows, if you have a CUDA 12 GPU, you can put koboldcpp_cu12.exe into resources/koboldcpp (it will be slightly faster); the game will use it instead of the standard koboldcpp.exe
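as a rough sketch of the koboldcpp_cu12.exe check described above (the selection logic is my assumption, not the game's actual code; the temp directory only stands in for the game folder so the snippet runs anywhere):

```shell
# Emulate the game folder layout from the changelog in a temp dir.
dir=$(mktemp -d)
mkdir -p "$dir/resources/koboldcpp"
: > "$dir/resources/koboldcpp/koboldcpp_cu12.exe"   # placeholder for the real CUDA 12 binary

# Assumed behavior: prefer the cu12 build when present, else fall back to koboldcpp.exe.
if [ -f "$dir/resources/koboldcpp/koboldcpp_cu12.exe" ]; then
  binary="koboldcpp_cu12.exe"
else
  binary="koboldcpp.exe"
fi
echo "using $binary"
```

in the real install you would simply drop the downloaded koboldcpp_cu12.exe into resources/koboldcpp next to koboldcpp.exe.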
🛠 default context trimming for local models increased to 9900 characters
🗝️ changed the passwords for base, person, visionary
log: you can now edit the dialogue history (both the character's entries and your own), which can help fight API censorship in large models and also lets you keep the history exactly as you want
OpenRouter currently works without streaming output, since not all of their models support it and the syntax differs slightly; I'll add streaming later
internal LV proxy: unfortunately it didn't help for Gemini, since it's also blocked in Latvia, but it works fine for ChatGPT and Claude. so for Gemini, the internal proxy works only through server 2 (SF); if that is blocked for you, the only option currently is your own VPN
local speech recognition: uses the ggml-large-v3-turbo-q5_0 model. it works a bit differently from the browser one: it can't stream yet (this is solvable; I'll improve it) and waits until recording ends, so without a GPU it may be slow, with high delay. to speed it up, you can try replacing the local model file in resources/whisper with a simpler model (e.g. ggml-tiny-q8_0.bin); just rename it to ggml-large-v3-turbo-q5_0.bin so the game picks it up
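the model-swap steps above can be sketched like this (a temp dir with placeholder files stands in for resources/whisper so the commands are safe to try anywhere; filenames are the ones from the changelog):

```shell
# Emulate resources/whisper with empty placeholder files.
dir=$(mktemp -d)
: > "$dir/ggml-large-v3-turbo-q5_0.bin"   # the shipped large model
: > "$dir/ggml-tiny-q8_0.bin"             # the lighter model you downloaded

# Keep a backup of the original, then give the tiny model the filename the game expects.
mv "$dir/ggml-large-v3-turbo-q5_0.bin" "$dir/ggml-large-v3-turbo-q5_0.bin.bak"
mv "$dir/ggml-tiny-q8_0.bin" "$dir/ggml-large-v3-turbo-q5_0.bin"
ls "$dir"
```

to revert, delete the renamed file and restore the .bak copy to its original name.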
hug 🖤 •ᴗ•