Well, I went ahead and named the key in the “API Key Name” field and pasted the key into the “API Key” field. Then I switched to another DeepL config, and surprisingly the same key showed up there, even though I had erased everything earlier. I kept jumping between the configs, and the pattern held: any change I made in one was reflected in the other. If I deleted the key in one, the other lost it too.

That persisted until I first switched from a DeepL config to a Custom API config, and then went to the other DeepL config. From that point on, they behaved separately. Now, if I delete the key in one and immediately jump to another DeepL config (without going via Custom API), the other one still keeps its key. I hope it’s really working independently now, and not secretly using the same key under the hood, but we’ll see over time.

By the way, the UI change I requested is spot on. It’s exactly what I was looking for. Thank you! :)

Hmm. I'm not able to reproduce this bug.

Just to confirm, are you giving the keys different names in each config? The names have to be unique.

If you are, it'd be great if you could do a quick recording of the process. I've run into enough issues with these config presets that I don't doubt there are still other bugs lingering. I really gotta redo this in another way at some point.

Perfect! Good to hear 🤝🏻

Yes, I confirm that both DeepL configs have unique API keys and names.

I can no longer reproduce the bug, as the configurations now appear to be working independently. The issue only happened initially, when I was switching exclusively between the two DeepL configs. After I switched to a Custom API config just once, the problem seems to have resolved itself, and the presets now function separately. Since the API key is no longer visible and only dots are shown, I'll only be able to verify which key is which later, when I check the character usage on my DeepL account.

Update: Right as I was about to finish writing this, I checked my two DeepL accounts real quick to see if the character counts had updated, and they did! So, fortunately, it looks like the issue has been resolved, as characters were charged separately to each account.

It may have been because the configs had not been properly updated yet with the new config entries. Not sure. Please do let me know if you happen to run into any more bugs like that. I'm glad it's working well now though! :)

0.5.3-alpha.3 is out now.

LLAMA uses parallel requests by default again. This should increase translation speed by around 40% when capturing more than a few different lines of text.

Custom API now has an option to enable parallel processing, with user-configurable rate limits. With this, I managed to get a batch of ~30 texts to translate about 100x faster.
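
For anyone curious how parallel requests with a rate limit can work, here's a rough sketch of the general idea in Python (not the actual implementation; the endpoint, response field, and limit values are placeholders):

```python
import asyncio
import aiohttp

MAX_CONCURRENT = 8       # placeholder for the user-configured parallelism
REQUESTS_PER_SECOND = 5  # placeholder for the user-configured rate limit

async def translate_one(session: aiohttp.ClientSession,
                        sem: asyncio.Semaphore, text: str) -> str:
    # The semaphore caps how many requests are in flight at once.
    async with sem:
        async with session.post(
            "https://api.example.com/translate",  # placeholder endpoint
            json={"text": text},
        ) as resp:
            return (await resp.json())["translation"]  # placeholder field

async def translate_batch(texts: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    async with aiohttp.ClientSession() as session:
        tasks = []
        for text in texts:
            tasks.append(asyncio.create_task(translate_one(session, sem, text)))
            # Space out request starts so we stay under the rate limit.
            await asyncio.sleep(1 / REQUESTS_PER_SECOND)
        return await asyncio.gather(*tasks)

# asyncio.run(translate_batch(["line one", "line two"]))
```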

Both now have a proper way to cancel translation requests mid-inference.

Let me know if you find any issues with it! :)

Thank you ⭐

Well, it works now, so that's what counts :)

I don't really use LLaMA because I haven't found a model that fits my needs or that my machine can handle. However, I just did a quick test and here's what I found: it's fast at translating single words or short sentences, but it gets a "Request timeout" when trying to translate longer, multi-sentence texts. This isn't a huge problem for me, though. If I really need to, I can run the same model through LM Studio on my machine. When I do that, it handles longer, multi-sentence texts just fine and even seems faster. I've never gotten a "Request timeout" with the smaller models I've tried there.

This is really useful, and if it wasn't fast enough before, it definitely is now. Thanks for that!

Keep up the great work (👍🏻ᴗ _ᴗ)👍🏻

Interesting! I would have assumed this was caused by an improper request body, like a missing n_predict or failing end tokens. At least in my experience, not having n_predict set makes some models continue forever until they time out. I've never used LM Studio, but maybe it takes care of such things automatically behind the scenes?
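
For reference, this is roughly the kind of request body I mean, sketched in Python against a llama.cpp-style /completion endpoint (the exact fields and stop strings vary per model and backend):

```python
import requests

# Rough sketch of a llama.cpp-style /completion request. Without
# "n_predict" (a hard cap on generated tokens) or "stop" strings,
# some models just keep generating until the HTTP request times out.
body = {
    "prompt": "Translate to English: こんにちは",
    "n_predict": 256,          # cap on generated tokens
    "stop": ["\n\n", "</s>"],  # end tokens; which ones work is model-dependent
    "temperature": 0.2,
}

resp = requests.post("http://localhost:8080/completion", json=body, timeout=60)
print(resp.json()["content"])
```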

I'm considering adding another LLM translator backend that runs on GPU only and does a better job of it. Not sure which one to go with, though, and I need it to be relatively small.

No worries, glad it's faster for you! I'm wondering if I could somehow add a batching option too. It'd be so dependent on the API, but there may be a good way to implement it.
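
Something like this pack/unpack scheme is the kind of thing I have in mind; just a sketch of one possible approach, and it'd depend entirely on the model keeping the format:

```python
# Hypothetical batching scheme: pack several lines into one prompt with
# numbered markers, then split the model's reply back apart. How reliable
# this is depends entirely on the model following the format.
def pack(texts: list[str]) -> str:
    return "\n".join(f"[{i}] {t}" for i, t in enumerate(texts))

def unpack(reply: str, n: int) -> list[str]:
    out = [""] * n
    for line in reply.splitlines():
        if line.startswith("[") and "]" in line:
            idx, _, rest = line.partition("]")
            try:
                out[int(idx[1:])] = rest.strip()
            except (ValueError, IndexError):
                pass  # the model broke the format on this line; skip it
    return out

# pack(["first line", "second line"]) -> "[0] first line\n[1] second line"
```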

Thank you ༼ つ ◕_◕ ༽つ

Well, I'm not an expert, so I can't say for sure. I'm just playing around with this stuff for fun :)

That sounds interesting! I'm sure it will work out.

Yeah, it's definitely faster now. At least, it seems that way :) To be fair, I didn't really have a problem with the speed of the Custom APIs before; I was pretty happy with them, especially the ones that let you turn off the reasoning feature or don't have it at all.

I'm all for any useful new features, so go for it!

I found two minor bugs. The text fields are oversized relative to the content.

Ahh! I keep forgetting that it's possible to maximize the main app. That's actually not intended, but I've kept it for now since the app's UI is pretty atrocious at this point. I need to spend a day at some point making sure everything scales uniformly on resize.

I also noticed the tooltip bar on the bottom-right looking real bad, lol.

Oh, I completely missed that one. I didn't notice it at all, but you're right, that looks bad too. 😅

You play some visual novels, right? If you play any with fairly still backgrounds, I think you will like the new alpha update I'm dropping now.
It should make things a bit smoother looking, and it will save you translation requests if the text is continuously written out.

To use it, go to the General tab and check the 'Still background' checkbox. There are also two new hotkeys: one that toggles it, and one that requests a new background in case the automatic background detection isn't updating correctly.
I've also added a 'Text removal expansion scale' user config to the General tab. Increasing it helps remove text that is too thick, or handles cases where the OCR struggles to detect the text box correctly. Decreasing it should make the text removal less destructive.
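
(For the technically curious: conceptually, the expansion works like dilating the detected text mask before removal. Here's a rough OpenCV-style sketch of that idea; it's not the app's actual code, and the kernel sizing is made up:)

```python
import cv2
import numpy as np

def expand_text_mask(mask: np.ndarray, scale: float) -> np.ndarray:
    # mask: uint8 image, 255 where OCR detected text, 0 elsewhere.
    # scale: hypothetical stand-in for the 'Text removal expansion scale'
    # config; larger values grow the mask further out from each glyph.
    k = max(1, int(round(3 * scale)))         # kernel size grows with scale
    kernel = np.ones((k, k), dtype=np.uint8)
    return cv2.dilate(mask, kernel, iterations=1)
```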

Let me know if you try it! Thank youuu! :)

I'm trying to make VN and comic/manga reading as smooth sailing as possible. I'd love input from anyone who often plays or reads these. As someone who has only ever read physical comics and books, it's hard for me to know what I can do to improve the user experience here.