Thanks for implementing this. I can confirm the basic functionality works. I get valid translations back from the API and they appear in-game. Unfortunately, there's a problem.

After about 10 captures, the capture functionality softlocks. GameTranslate doesn't crash (no crash report prompt), but no more text is captured. I can select a new capture region, but no text is actually captured when I do. This happens in both Internal and Attached modes. I tried it three times just to make sure it's reproducible, and it happened every time. Switching back to the internal translation model fixes the issue.

I did try reproducing the issue after turning on debug mode for the tool, but I don't know how to get the information you need to investigate. I expected debug mode to write a log somewhere in GameTranslate's folder, but I don't see anything obvious.

If you need me to do anything on my side to get a fix going, let me know.


Thank you for being so quick to test!

That's unfortunate, but no worries, we will get that sorted. Did you use an online API or a self-hosted one? If self-hosted, can you confirm that you can still post to it and get the expected results back?
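For example, if it's an ollama server on the default port, a quick sketch like this (the model name and prompt are just placeholders) should print a translation on its own, independent of GameTranslate:

    import json
    import urllib.request

    # Quick sanity check against a local ollama instance on the default port.
    payload = {
        "model": "gemma2:2b",  # placeholder - use whatever model you have pulled
        "prompt": "Translate to English: こんにちは。",
        "stream": False,       # return a single JSON object instead of a stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(json.loads(resp.read())["response"])

If that hangs or errors while GameTranslate is in its 'softlocked' state, the server itself has stopped responding.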

Yes, about the logs - they are written to the appdata/Roaming/GameTranslate/crashdump folder. You can get there easily by opening the Configuration inside the app and scrolling all the way down in the 'General' tab.

The logs will be improved, and a 'Bug report' window will be added in the future to make all of this much more convenient.

Thank you. I'm heading to sleep and will investigate this in the morning.

G'day,

I believe the cause of this softlocking is that the app is waiting for a response from the server. I should certainly add a timeout, but the real issue here is that the models are massive and slow. I tried this 2GB model https://huggingface.co/lmg-anon/vntl-gemma2-2b-gguf and while it works just fine for one or two text lines, it quickly becomes very slow and sometimes fails to return English output. I only have an old 980 Ti to play with, so self-hosting LLMs is pretty much unrealistic for me.

What you can do in this case is add a max_tokens value, so that you can never select so much text that the server becomes unresponsive; instead it returns before the entire translation is finished. (The key may be called something other than max_tokens depending on your setup.)
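As a rough sketch of the idea, for an OpenAI-style server the request body would carry the cap like this (the model name, prompt text and exact key names are placeholders and vary between setups):

    # Sketch of an OpenAI-compatible request body with a generation cap.
    payload = {
        "model": "your-model-name",  # placeholder
        "messages": [
            {"role": "system", "content": "Translate the following text to English."},
            {"role": "user", "content": "captured text goes here"},
        ],
        "max_tokens": 128,  # hard cap so a stuck generation still returns
    }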

Side note:
I noticed that numbers are currently treated as strings in the JSON body. I have fixed it, but the full 0.4.9 version won't be released for a few more hours. If you want to try it out before then, you'll have to go to user/appdata/roaming/GameTranslate/config/configuration.toml and remove the quotes around the max_tokens number. I'd recommend setting it to 128 just to make sure it works.
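For illustration, the edit in configuration.toml is roughly this (other keys omitted):

    # before: quoted, so the value is sent as a string
    max_tokens = "128"

    # after: unquoted, so the value is sent as a number
    max_tokens = 128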

Fixed!


You were on the right track, but the magic parameter I needed to set was num_predict. Setting that to 128 forces the model to terminate if it gets stuck. Longer translations are cut off, though. I'm going to experiment with some other translation models to see if this is an issue with the model or ollama itself.
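For anyone else who hits this: in ollama's API the cap is the num_predict option, passed in the options object of the request body (the model name here is just a placeholder), roughly like this:

    # num_predict goes inside "options" in the body of a POST to /api/generate,
    # the same request shape as the sanity-check snippet earlier in the thread.
    payload = {
        "model": "vntl-gemma2-2b",  # placeholder model name
        "prompt": "Translate to English: ...",
        "stream": False,
        "options": {"num_predict": 128},  # hard cap on generated tokens
    }

It can also be set per model in a Modelfile with PARAMETER num_predict.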

Thanks again for working through this with me.

Nice!

And thank you for testing it out :)

I've had a bit of a timeout and a holiday - sorry for the slow reply.
I experimented a lot with different local models as well as online models before leaving. If I remember correctly, the main reason the model got stuck was improper prompts. Setting a limit does stop the model from looping forever, but it only really masks the real issue.

I intend to do some more rigorous testing and go through a few examples with specific models soon, so that it is easier to understand where issues may crop up when we use LLMs for translations.