GameTranslate

In-game translator at your disposal · By Godnoken

[Feature Proposal] Send Translation Requests to Local Language Model API

A topic by AmishTechBro created 5 days ago Views: 80 Replies: 6

Problem: The internal Japanese-English translation model provides poor translations. DeepL is limited on the free tier, and very expensive on the Pro tier.

Possible Solution: Allow users to send translation requests to an arbitrary API endpoint to be processed by GameTranslate and overlaid onto the game window in automatic mode. Most people would use this functionality to route requests to a local webserver, but it could in principle be used to connect to a remote endpoint if it had acceptable performance.

I would want to be able to specify:

  • An endpoint URL
  • Metadata (request headers, etc.)
  • A request format, with a special token to represent the text captured by GameTranslate (e.g. %text%)
  • The JSON path of the field in the response object that represents the translated text
  • Optionally, a way to filter unwanted text out of the response

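To make the shape of this concrete, here is a minimal sketch (in Python, with hypothetical template and response formats mimicking an OpenAI-style endpoint) of how the %text% substitution and JSON-path extraction could work:

```python
import json

def build_request(template: str, captured: str) -> str:
    """Substitute captured text into the user-supplied request template."""
    # json.dumps escapes quotes/newlines so the result stays valid JSON;
    # [1:-1] strips the surrounding quotes that dumps adds.
    return template.replace("%text%", json.dumps(captured)[1:-1])

def extract_field(response_body: str, path: str) -> str:
    """Walk a dotted JSON path such as 'choices.0.text' into the response."""
    node = json.loads(response_body)
    for key in path.split("."):
        node = node[int(key)] if key.isdigit() else node[key]
    return node

# Hypothetical template and response for illustration:
template = '{"prompt": "Translate to English: %text%", "max_tokens": 128}'
body = build_request(template, 'She said "hi"')
response = '{"choices": [{"text": "Hello"}]}'
print(extract_field(response, "choices.0.text"))  # prints: Hello
```

The json.dumps round-trip matters because raw captured text can contain quotes or newlines that would otherwise break the request body.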
Example: This is an API call to the hf.co/lmg-anon/vntl-llama3-8b-v2-gguf:Q8_0 language model. I would like to pipe the response field into GameTranslate.


This software is pretty rad so far. I look forward to watching it evolve!

Developer

Hi mate,

This is a brilliant suggestion. I will have a look at it promptly.

Thank you!

Developer

Hey pal,

I have just implemented this! If you have the time to test this out, there is now a 0.49_beta version available for download.

If you have any feedback or suggestions, please shoot! I am not super familiar with using APIs so maybe I have missed something obvious. However, I can confirm that I got it working with the model you linked as well as LibreTranslate.

Cheers!

Thanks for implementing this. I can confirm the basic functionality works: I get valid translations back from the API and they appear in-game. Unfortunately, there's a problem.

After about 10 captures, the capture functionality softlocks. GameTranslate doesn't crash (no crash-report prompt), but no more text is captured. I can select a new capture region, but no text is actually captured when I do. This happens in both Internal and Attached modes. I tried it three times to make sure it's reproducible, and it happened every time. Switching back to the internal translation model fixes the issue.

I did try reproducing the issue after turning on debug mode for the tool, but I don't know how to get the information you need to investigate. I expected debug mode to write a log somewhere in GameTranslate's folder, but I don't see anything obvious.

If you need me to do anything on my side to get a fix going, let me know.

Developer (1 edit)

Thank you for being so quick to test!

That's unfortunate, but no worries, we will get that sorted. Did you use an online API or a self-hosted one? If self-hosted, can you confirm that you can still post to it and get the expected results back?

Yes, about the logs - they are written to the appdata/Roaming/GameTranslate/crashdump folder. You can easily get there by opening the Configuration inside the app and scrolling all the way down in the 'General' tab.

The logs will be improved, and a 'Bug report' window will be added in the future to make all of this much more convenient.

Thank you, I'm heading to sleep, will investigate this in the morning.

Developer

G'day,

I believe the cause of this softlocking is that the app is waiting indefinitely for a response from the server. I should certainly add a timeout, but the real issue here is that the models are massive and slow. I tried this 2GB model https://huggingface.co/lmg-anon/vntl-gemma2-2b-gguf and while it works just fine for one or two text lines, it quickly becomes very slow and sometimes fails to return English output. I only have an old 980Ti to play with, so it is pretty much unrealistic for me to self-host LLMs.

What one can do in this case is add a max_tokens value, so that no matter how much text you select, the server never becomes unresponsive; instead it returns before the entire translation is finished. (The max_tokens key may be called something else in different setups.)
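
For instance, assuming an OpenAI-style completions endpoint (the exact field name varies between servers), the request template could cap the output like this:

```json
{
  "prompt": "Translate to English: %text%",
  "max_tokens": 128
}
```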

Side note:
I noticed that numbers are currently treated as strings in the JSON body. I have fixed it, but the full 0.4.9 version won't be released for a few more hours. If you want to try it out before that, you'll have to go to user/appdata/roaming/GameTranslate/config/configuration.toml and remove the quotes around the max_tokens number. I'd recommend setting it to 128 just to make sure it works.
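
Concretely, the change in configuration.toml looks like this (with the value 128 suggested above):

```toml
# before: the number is stored as a string
max_tokens = "128"

# after: remove the quotes so it is sent as a JSON number
max_tokens = 128
```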

Fixed!


You were on the right track, but the magic parameter I needed to set was num_predict. Setting that to 128 forces the model to terminate if it gets stuck. Longer translations are cut off, though. I'm going to experiment with some other translation models to see whether this is an issue with the model or with Ollama itself.
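
For anyone following along, with Ollama the cap goes in the request's options object, something like this (a sketch; the model name here is just an example):

```json
{
  "model": "vntl-llama3-8b-v2",
  "prompt": "Translate to English: %text%",
  "stream": false,
  "options": { "num_predict": 128 }
}
```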

Thanks again for working through this with me.