norby777

98 Posts · 5 Topics · 1 Following
A member registered Jun 16, 2025

Recent community posts

Hi, I came across a project similar to what you’re looking for: LLPlayer. It’s designed for videos rather than gaming, but it works really well. It’s especially useful if you watch anime, since it generates subtitles using AI.


LLPlayer has many features for language learning that are not available in normal video players.

  • Dual Subtitles: Two subtitles can be displayed simultaneously. Both text subtitles and bitmap subtitles are supported.
  • AI-generated subtitles (ASR): Real-time automatic subtitle generation from any video and audio, powered by OpenAI Whisper. Two engines, whisper.cpp and faster-whisper, are supported.
  • Real-time Translation: Supports many translation engines, such as Google, DeepL, Ollama, LM Studio, and OpenAI.
  • Context-aware Translation: Highly accurate translation by recognizing the context of subtitles using an LLM.
  • Real-time OCR subtitles: Can convert bitmap subtitles to text subtitles in real time, powered by Tesseract OCR and Microsoft OCR.
  • Subtitles Sidebar: Both text and bitmap are supported. Seek and word lookup available. Also supports incremental subtitle search.
  • Instant word lookup: Word lookup and browser searches can be performed on subtitle text.
  • Customizable Browser Search: Browser searches can be performed from the context menu of a word, and the search site can be completely customized.
  • Plays online videos: With yt-dlp integration, any online video can be played back in real time, with AI subtitle generation and word lookups!
  • Flexible Subtitles Size/Placement Settings: The size and position of the dual subtitles can be adjusted very flexibly.
  • Subtitles Seeking for any format: Any subtitle format can be used for subtitle seek.
  • Built-in Subtitles Downloader: Supports opensubtitles.org.
  • Integrates with browser extensions: Can work with any browser extensions, such as Yomitan and 10ten.
  • Customizable Dark Theme: The theme is based on black and can be customized.
  • Fully Customizable Shortcuts: All keyboard shortcuts are fully customizable. The same action can be assigned to multiple keys!
  • Built-in Cheat Sheet: You can find out how to use the application in the application itself.
  • Free, Open Source, Written in C#: Written in C#/WPF, not C++, so customization is super easy!
(1 edit)

You’re welcome! I’m happy to help when I have the time. :)

Okay, I've updated the tags to "paid."

It seems mostly the same as the regular one, maybe a split second slower at times, but they seemed equally fast during testing. The Pro models are definitely "hit and miss." Sometimes they're fine, other times they're slow or just get stuck. I assume it’s a reasoning issue, but even with that off, it doesn't improve much. They’re probably still fine for anyone who really needs the translation, though.

I also found a Qwen CLI recently. I managed to get it working via Cloudflare and a local Python file. Only one model is free for now, but it’s fast, similar to the Gemini Flash models. The current limit is 1,000 free requests per day through their OAuth system.
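
For anyone curious, the local Python part is basically just an OpenAI-compatible client pointed at the relay. This is only a rough sketch of the idea, not my actual setup; the port, endpoint path, and model name below are made-up placeholders, and the relay itself is what holds the real OAuth token, so nothing sensitive ends up in the client script:

    # Rough sketch only: an OpenAI-compatible client talking to a local relay
    # that forwards requests to the Qwen endpoint. URL, port, and model name
    # are placeholders, not the real values from my setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:8787/v1",   # hypothetical local relay address
        api_key="not-needed-locally",          # the relay holds the real OAuth token
    )

    resp = client.chat.completions.create(
        model="qwen3-coder-plus",  # example model id; use whatever the relay exposes
        messages=[
            {"role": "user", "content": "Translate this to English: こんにちは"},
        ],
    )
    print(resp.choices[0].message.content)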

(2 edits)

It’s weird because there are two Azure versions, and the "2025-10-01-preview" one actually works. If you're determined to use Azure, you should switch to "Azure Translator 2025-10-01-preview" in the community presets for now, until the other one is fixed. It worked when I tested it.

(5 edits)

No worries, it happens! :) But like I mentioned earlier, DeepL is enough for me at the moment. I only checked because Plusnet mentioned it wasn't working. Either way, glad it’s good now. :)

Got it, all updated.

Happy to hear that! :)

Also, I have to correct myself on the Gemini Pro models. Somehow my account was on Tier 1 (Pay-as-you-go), which I might have triggered while messing with Google Translate, and that's why the Pro models were working for me. I just switched back to the free Tier 0 and now I'm seeing the limits too. So yeah, the Pro models definitely aren't available for free on the standard API. Sorry about the mix-up! I can delete those presets if you want, since there's no free version for the standard API right now.

*Edit: I actually found a workaround using Gemini CLI + Cloudflare Workers + an OpenAI API proxy, so I got the Pro models to work through that. The RPD is still kind of low, and it's a bit slow, but at least it works. Even Flash was kind of slow in the beginning, but I somehow got it to be fast enough, and it's great now, at least according to my current tests. :) I'd already burned through the Pro limit by that point, so I don't know yet if the Pro model would be fast with the current settings, but I'll definitely check. Also, from what I've gathered, the Flash models have a higher RPD in the CLI version than the basic API ones, and the quotas are separate. I might try to put together a preset for this if I have the time, although it might be a bit complicated for some users.

It’s working for me now. Thanks for the fix! :)

(2 edits)

I haven't had time to check everything yet, but it looks like the issue isn't just with that specific one. I went through a bunch and many are broken. I'm running into the same issues where it either just shows "%text%" or outputs something completely unrelated, which I'm seeing with the Gemini and Gemma models.

(1 edit)

Hi! It looks like there's some kind of bug in the new version. I just ran some tests and it’s not working on my end either. Azure still works fine on Alpha 8, for instance, but it's broken in the latest release.

*Edit: I just double-checked, and it looks like this has been happening since Alpha 12. It was working perfectly in Alpha 11, but Alpha 12 is where the problems started.

(1 edit)

I’ll update it to that then. :)
I don't know, it worked okay for me.

I mean, sure, it'd be awesome if it was that fast, but we've gotta work with what we've got. :)


*Edit: I updated as many as I could; I got the Gemini and Gemma models done, but then I hit a limit. I'll get the rest uploaded as soon as the limit is over.

Before, I was only testing the Pro and Gemma models with random text, and they all worked fine. In my last message, I only tried it with Pro, but since you brought up the Gemma models, I tested them with your text and they really were having issues. Except for gemma-3-12b-it and gemma-3-4b-it, the rest didn't do so well. It’s strange because they were working fine for me with random text and while gaming. The good news is that with your corrected prompt, the Gemma models are working now. If that works for you, I’ll go ahead and update the presets I uploaded with that prompt.
With this: 

"Translate the following %source% text to %target%. Pay attention to accuracy and fluency. You are only to handle translation tasks. Provide only the translation of the text. Do not add any annotations. Do not provide explanations. Do not offer interpretations. Correct any OCR mistakes. Text:\n\n%text%"

About this prompt: Should we stick with this prompt or should we think of something else? The Gemma models didn't have any issues once I switched to this.
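
Just to illustrate what I mean by the placeholders (the app does this substitution itself in the preset; this is only a mock-up of the idea, with example language names):

    # Mock-up only: how %source%, %target% and %text% get filled in before the
    # prompt is sent. The real substitution happens inside the app's preset.
    PROMPT_TEMPLATE = (
        "Translate the following %source% text to %target%. Pay attention to "
        "accuracy and fluency. You are only to handle translation tasks. Provide "
        "only the translation of the text. Do not add any annotations. Do not "
        "provide explanations. Do not offer interpretations. Correct any OCR "
        "mistakes. Text:\n\n%text%"
    )

    def build_prompt(source: str, target: str, text: str) -> str:
        # Plain string replacement, nothing fancy.
        return (
            PROMPT_TEMPLATE
            .replace("%source%", source)
            .replace("%target%", target)
            .replace("%text%", text)
        )

    print(build_prompt("Japanese", "English", "こんにちは、世界"))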

Regarding the speed, gemma-3-12b-it is a bit slow for me, but it’s still acceptable. The other Gemma models are fast. The Pro version is also pretty quick; it doesn't take long to process. It’s only a few seconds slower than DeepL or your average custom API, even though we can't set the thinkingBudget to 0 anymore since they made those changes. At least, that's been my experience so far.

Okay :)

Yeah, I think I only fixed the Azure one, but I’ll look through the rest when I have a chance to see if anything else needs fixing. Like I said, there’s definitely one more coming for Cloudflare, but I haven't uploaded it yet. :(

Okay, and like always, sorry if I get something wrong! :)

(4 edits)

Not at first; back then, and earlier today, I was just testing with random text, but I just tried the text you wrote and it worked for me. I think I used this prompt basically the whole time I was using the custom API, and I don't remember it giving me any trouble. I definitely would've changed it if it did.



Thanks. I haven't had much free time lately either. :)


*Edit: Oh, wait, I forgot to add. I tested it using Gemini 3-pro-preview and I literally didn't get any limits.

I still have a couple left. I used the placeholders where they were needed, like updating Azure for the region and language code, but didn't use them anywhere else. I might’ve missed a spot or just forgot, which definitely happens. There’s the Cloudflare one that needs the account_id, but I couldn't upload it yet. Was I supposed to use them somewhere else too?

That's strange, because it didn't look broken to me. I actually just retested it and didn't have any problems with the translation, but thanks for the heads-up. Structure is definitely the way to go. I haven't had much time lately, so DeepL is plenty for me and I'm not using any custom APIs right now. That was just the last prompt I had set up, so I just rolled with it :)

Hey,

Yeah, it’s actually working now! :)

No worries at all. Gemini's limits got tanked around December, I think, but I just tested all the Gemini and Gemma models and they're literally working perfectly. Pro doesn't really have a big limit anymore, but it’s still going, at least for me. Is it maybe a region thing?

Awesome! :)

(1 edit)

Sounds good to me.

Okay, makes sense. :)

I think I get it and it actually sounds good, but I’m also super sleep-deprived. 😅

I’ll definitely take a look at it soon.

I have a question though. Since my last upload, I can't upload anything anymore. It says either I'm offline or the server is, but I'm definitely not offline. Is this a server thing or is it just me? I still have several presets that I haven't been able to upload yet.


(1 edit)

No problem, makes sense.

About the model URL thing—I meant that I have presets like Gemini 2.5 Flash that are made for that specific model, so you're good to go. But I also have presets where I just made a general template instead of making a million versions for every model, like with OpenRouter. The OpenRouter preset has a model in it, but if users want to change it, they gotta go find another one on their site. Since models change all the time and I was too lazy to make one for every single variation :(, I just went with a basic template.

Yeah, sorry about that—it’s so tiny and the color is so close to the app's theme that I totally missed it. Or maybe it just slipped my mind. Happens to me sometimes! :)

Nope, I think that’s it for what we have right now, but I’ll let you know if I find more. :)

Thank you! This app is just getting better and better. :)

*Edit

Since uploading is working again, I got as much up there as I could. I added Gemini 2.5 Flash and that OpenRouter one I mentioned. Azure is also included, where we'd need the language code in the API URL and the region in the headers.

Sorry, I don't remember. :)

Yeah, it wouldn’t let me edit it for some reason, so I just deleted it. I probably just missed something :) so I took it down, but I’ll upload it again gradually. Sorry about that! :)

(2 edits)

Yeah, the API registration URL is a solid idea. 

The Description box has a character limit, so I couldn't put in all the info I wanted for some APIs. Plus, I hit a limit while uploading, so I couldn't finish adding everything. 

I'm not sure if this is the way to go, but it would be awesome if every important thing had its own URL and Description for the setup. Like, we need it for the API Key, Models, and Pricing/Limits. The model section would be for when a preset is just for the site in general—like my Cloudflare one—instead of a specific model. If it's already for a specific model like Gemini 2.5 Flash-lite, then we don't need that section. This might even keep the info more up-to-date so people don't always have to dig through the guide. Or would that be overcomplicating it? Just an idea! :)


*Edit

I totally blew it. I wasn't paying attention and missed the %api_key% part and the %source% and %target% placeholders, so I just put "YOUR API KEY" in instead. Sorry, I'll get those fixed.


*Edit 2


Maybe we could add %region% to the Request header and something like %country_code% to the API URL? Some presets like Azure Translate actually require it. 

URL: "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=hu
H: "Ocp-Apim-Subscription-Region": "westeurope"

Also, some presets like Cloudflare need an account ID in the URL path. Like, https://api.cloudflare.com/client/v4/accounts/CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/deepseek-ai/deepseek-r1-distill-qwen-32b. It would be great if we had a placeholder for that as well.
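
To make the two cases concrete, here's a rough sketch of where those placeholders would end up. The key, region, and account ID values are just dummies, %account_id% is only my suggested name for the new placeholder, and only the URL shapes come from the actual services:

    # Rough sketch: dummy key/region/account values, real URL shapes.
    import requests

    # Azure Translator: language code in the URL ("to=hu"), region in a header.
    azure_url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=hu"
    azure_headers = {
        "Ocp-Apim-Subscription-Key": "YOUR_API_KEY",    # would be %api_key%
        "Ocp-Apim-Subscription-Region": "westeurope",   # would be %region%
        "Content-Type": "application/json",
    }
    print(requests.post(azure_url, headers=azure_headers,
                        json=[{"Text": "Hello world"}]).json())

    # Cloudflare Workers AI: the account ID sits in the URL path, so a
    # placeholder like %account_id% would have to go there as well.
    cf_url = ("https://api.cloudflare.com/client/v4/accounts/"
              "CLOUDFLARE_ACCOUNT_ID"   # would be %account_id%
              "/ai/run/@cf/deepseek-ai/deepseek-r1-distill-qwen-32b")
    cf_headers = {"Authorization": "Bearer YOUR_API_TOKEN"}
    print(requests.post(cf_url, headers=cf_headers,
                        json={"prompt": "Translate 'hello' to Hungarian."}).json())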

Hey,

that’s awesome! :) Will check it soon.

You're welcome. :)

Yes, I can confirm the keybinding problem is fixed now. (੭˃ᴗ˂)੭

Thanks! 

Thanks.

Nope :) I mostly just use "One Shot" when I need it, but sometimes I accidentally hit the "Pause" button and then quickly press it again to keep the translation going. I guess that habit stuck, and that's why I occasionally try that quick-press thing I mentioned, which is how I noticed this. It's not a big problem for me at all because I hardly ever use it; I just saw it and figured I'd let you know.

(2 edits)

Hi,

The new 0.5.9-alpha 1 update has a weird little keybinding bug. My "Toggle Gametranslate" key was set to "Slash" in the app, but it stopped working with that key. If I go into the Dashboard and set it to the same key, it shows up as "Minus" instead, and then it works perfectly. All the other keys are fine.


Also, with the Stream friendly API, I noticed that hitting the "Pause" key (I use the "Period" button for "Pause") makes the translation reaction time super slow in Automatic mode. Before, when it was paused and new text showed up, quickly double-tapping Period would translate the text instantly. Now, with the Stream friendly API, the double-tap doesn't work. This seems to work fine if I use DXGI.

Just take it easy; no need to rush :) It's not that annoying.

My last comment was about the third bug. That one's fixed now. The second bug is still popping up sometimes, but not as often.

Yeah, the centering is fine now. The text hasn't been pushed to the right at all. Thanks! :) 

(2 edits)

Okay. Thanks for the explanation. (I just saw the first picture again, and those colored words are more orange than yellow, but whatever, I fixed those lol) Keep up the good work :)

Okay, I figured out what's causing the third bug (and maybe the second one too? The line breaks were actually fine in my quick check, but who knows :) ) It's the combo of Automatic font size and Attempt text centering, but this only happens in the very latest version. If I turn on Comic mode with those, it gets a little better because it doesn't shove it so far to the right. I switched back to alpha 3, and those same settings worked fine there. So, this bug is only in the newest release.

(2 edits)

Hey,

I noticed a few bugs in the new GameTranslate_0.5.8-alpha.4 update. They all showed up when I used Automatic mode.

1: If some words in the text are a different color—like orange or red—the app often fails to mask or hide them properly. The picture only shows the orange-word issue (I forgot to screenshot the red ones). It sometimes successfully hides the orange words, but sometimes it doesn't. It never hides the red words. You can see the successful and unsuccessful orange word masking in the images for the third bug down below too.



I'm not sure if this particular bug is new to this update, because I've never tested sentences that have different colored words before, or at least I don't remember doing it.

2: The translation text isn't wrapping right compared to the original text; it just outputs the translated sentences in one long line. The area selected for the translation is wide, but I don't think this was a problem before. It doesn't happen a lot yet, but it does show up sometimes.


3: This is the most annoying one. Very often, the translation just slides way off to the right. Sometimes it moves so far off the screen that I can't even read it. It's okay when the text is short, but it's a huge pain when the text is long.

I don't remember if the second and third bugs ever happened before. (Or maybe I just forgot :) )

Thanks.

(3 edits)

Nope. Everything's good; I didn't encounter any issue related to caching. Thanks :)

This update turned out pretty great. I'm especially thankful for this part: "Added a 'Show automatic border' checkbox to General config, letting users control whether they want the Automatic translation window to have a border or not." That's a super useful little feature for me—the border used to get in the way of my screen, and now I don't have to keep closing the window just to hide it. A huge thanks for this update! :)

(2 edits)

That's weird. The offline translator works well for me, too. I downloaded the en-fi language pack, and that's also working perfectly for me. But hey, at least you got it working with DeepL.

I'm running Windows 10, and it works fine on my machine. The issue might be on your side. Have you tried switching to a different Screenshot capture API, like DXGI?

Alright, thank you. •ᴗ•

No worries at all. It will be ready when it's ready, and I'll patiently wait. Hang in there and keep up the great effort :)

I understand now :) and yes, your assumption is right. Thanks.

It's completely identical to a previous translation, but the "previous" I'm referring to isn't the one right before; it's showing the text from one that was captured, say, 10 or 20 minutes ago.
:)

Hi,

Lately, I've been using the Manual capture mode for games, and I've noticed a recurring issue: occasionally, when I select a new block of text, the app displays an old translation that was captured previously, instead of translating the new text. When this happens, re-selecting the text repeatedly still brings up the incorrect, old translation. My workaround has been to deliberately exclude about half of the first letter when making the selection, which then forces a correct translation. It's not a frequent problem, but it does pop up from time to time. For now, it's not a major inconvenience.

Just wanted to give you a heads-up about this.

I have tested it, and it's working properly now. I think this is a great start and will definitely be a useful feature down the line.