
Yes, I confirm that both Deepl configs have unique API keys and names.

I can no longer reproduce the bug, as the configurations now appear to be working independently. The issue only happened initially when I was switching exclusively between the two Deepl configs. However, after I switched to a Custom API config just once, it seems the problem resolved itself, and the presets are now "good" because they are functioning separately. Since the API key is no longer visible and only dots are shown, I'll only be able to verify which key is which in the future, when I check the character usage on my Deepl account.

Update: Right as I was about to finish writing this, I checked my two Deepl accounts real quick to see if the character counts had updated, and they did! So, fortunately, it looks like the issue has been resolved, as characters were charged separately to each account.

It may have been because the configs had not been properly updated yet with the new config entries. Not sure. Please do let me know if you happen to run into any more bugs like that. I'm glad it's working well now though! :)

0.5.3-alpha.3 is out now.

LLAMA again uses parallel requests by default. This should increase translation speed by around 40% when capturing more than a few different lines of text.

Custom API now has an option to enable parallel processing, with user-configurable rate limits. With it, I managed to get a batch of ~30 texts to translate around 100x faster.

Both now have a proper way to cancel translation requests mid-inference.
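
If anyone's curious what the parallel processing with rate limits roughly looks like, here's a minimal sketch of the idea - the function, names, and limits below are made up for illustration, not the app's actual code:

```python
import asyncio

async def translate_one(text: str) -> str:
    # stand-in for a single translation API call (hypothetical)
    await asyncio.sleep(0.2)
    return f"[translated] {text}"

async def translate_all(texts, max_parallel=8, requests_per_second=5):
    sem = asyncio.Semaphore(max_parallel)   # cap how many requests run at once
    interval = 1.0 / requests_per_second    # min spacing between request starts

    async def worker(i, text):
        await asyncio.sleep(i * interval)   # naive stagger to respect the rate limit
        async with sem:
            return await translate_one(text)

    return await asyncio.gather(*(worker(i, t) for i, t in enumerate(texts)))

print(asyncio.run(translate_all([f"line {i}" for i in range(30)])))
```

Instead of 30 sequential calls you roughly pay the latency of the slowest few, which is where the big speedup comes from.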

Let me know if you find any issues with it! :)

Thank you ⭐

Well, it works now, so that's what counts :)

I don't really use LLaMA because I haven't found a model that fits my needs or that my machine can handle. However, I just did a quick test and here's what I found: it's fast at translating single words or short sentences, but it gets a "Request timeout" when trying to translate longer, multi-sentence texts. This isn't a huge problem for me, though. If I really need to, I can run the same model through LM Studio on my machine. When I do that, it handles longer, multi-sentence texts just fine and even seems faster. I've never gotten a "Request timeout" with the smaller models I've tried there.

This is really useful, and if it wasn't fast enough before, it definitely is now. Thanks for that!

Keep up the great work (👍🏻ᴗ _ᴗ)👍🏻

Interesting! I would have assumed that this is an improper request body, like a missing n_predict or wrong end tokens. At least in my experience, not having n_predict on some models makes them continue forever until they time out. I've never used LM Studio, but maybe it takes care of such things automatically behind the scenes?
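
For reference, this is roughly the kind of request body I mean, against a local llama.cpp server (the endpoint, port, and values are assumptions - adjust to your setup):

```python
import requests

resp = requests.post(
    "http://localhost:8080/completion",
    json={
        "prompt": "Translate the following text to English: ...",
        "n_predict": 256,          # hard cap on generated tokens; without it some
                                   # models just keep going until the timeout hits
        "stop": ["</s>", "\n\n"],  # end tokens so generation stops cleanly
        "temperature": 0.2,
    },
    timeout=60,
)
print(resp.json()["content"])
```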

I'm considering adding another LLM translator backend that runs on GPU only and does a better job of it. Not sure which one to go with though and I need it to be relatively small.

No worries, glad it's faster for you! I'm wondering if I could somehow add a batching option too.. it'd be so dependent on the API but there may be a good way to implement it.

Thank you ༼ つ ◕_◕ ༽つ

Well, I'm not an expert, so I can't say for sure. I'm just playing around with this stuff for fun :)

That sounds interesting! I'm sure it will work out.

Yeah, it's definitely faster now. At least, it seems that way :) To be fair, I didn't really have a problem with the speed of the Custom APIs before; I was pretty happy with them, especially the ones that let you turn off reasoning or don't have it at all.

I'm all for any useful new features, so go for it!

I found two minor bugs. The text fields are oversized relative to the content.

Ahh! I forgot that it's possible to maximize the main app. This is actually not intended, but I've kept it for now since the UI for the app is pretty atrocious at this point. I need to spend a day at some point to make sure everything scales uniformly on resize..

I also noticed the tooltip bar on the bottom-right looking real bad, lol.

Oh, I completely missed that one. I didn't notice it at all, but you're right, that looks bad too. 😅

You play some visual novels, right? If you play any with fairly still backgrounds, I think you will like the new alpha update I'm dropping now.
It should make things a bit smoother looking, and it will save you translation requests if the text is continuously written out.

To use it, go to the General tab and check the 'Still background' checkbox. There are also two new hotkeys: one that toggles it, and one that requests a new background in case the automatic background detection isn't updating correctly.
I've also added a 'Text removal expansion scale' user config to the General tab. Increasing it helps remove text that is too thick, or in cases where the OCR is struggling to detect the text box correctly. Decreasing it should make the text removal less destructive.
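
To make the scale a bit more concrete, the idea is roughly this (just an illustration, not the actual implementation): grow the text box the OCR found around its center before removing the text, so thick glyphs or a slightly-off box still get covered.

```python
def expand_box(x, y, w, h, scale=1.2):
    # grow an OCR text box around its center by the given scale factor
    new_w, new_h = w * scale, h * scale
    new_x = x - (new_w - w) / 2
    new_y = y - (new_h - h) / 2
    return new_x, new_y, new_w, new_h

print(expand_box(100, 50, 200, 40, scale=1.5))  # -> (50.0, 40.0, 300.0, 60.0)
```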

Let me know if you try it! Thank youuu! :)

I'm trying to make VN and comic/manga reading as smooth sailing as possible. I'd love input from anyone who often plays or reads these. Having only ever read physical comics and books, it's hard for me to know what I can do to improve the user experience here.

Well, I do play one game that's similar to that, but I think it has more animations than the average Visual Novel, so I'm curious if that will cause any interference. I'll test it out when I play next to see how it performs, but either way, I think this will be a very useful feature.

The Text removal expansion scale looks really promising. That sounds like an excellent feature that will be a big help.

Outside of that one VN-like game, I'm not playing any other Visual Novels right now, but I do read manga frequently. I will certainly test them out sometime :)

Thanks so much for adding these features!


I ended up running a quick test, lol. I tried out the Still background feature, but when I hit the New still background button, the text doesn't translate, and this happens with both manga and games. It successfully updates the background image, but it keeps showing the translation from the previous image instead of the new one.

I do want to point out a positive side effect: the Still background feature actually eliminated the text displacement problem I was having in Stream friendly mode. At least when the feature is active, the text displays correctly. It's just a pity that the new text doesn't get translated.
Obviously, this doesn't fully solve my Stream friendly mode issue since it only works with the Still background function for manga or vn, but I definitely see this as progress in the right direction :)


Love it! This feedback is so important. I never expected feedback to be this rare, and I really can't tell if that's good or bad at this point, lol. I feel like I got way more feedback when I had wayyyy fewer users (and the app was obviously more broken and feature-less back then..).

Yes, this is actually a misunderstanding. Maybe my description of the function in-app is confusing.
So 'new background' literally does just that: it updates the background, not the text itself. The purpose of this hotkey is just to update the background when the automatic check fails. The 'One shot' hotkey should suffice for this, but now that I think of it... pretty sure it's disabled when automatic capture is paused.. Might have to rethink that. Anyways - I have noticed the automatic background check isn't working very well, so I'll have to keep tweaking that.
Also, I believe the reason your text isn't updating at all when new text shows up is that something is moving (even just one pixel) in your capture area. With Still Background activated, it literally has to be still. I will look into how I can improve this so that it works even if there are faint animations in the background.
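
Roughly, the kind of stillness check I mean is something like this (a hypothetical sketch, not the app's actual code) - a strict check breaks the moment a single pixel changes, while a small tolerance would let faint background animation through:

```python
import numpy as np

def is_still(prev_frame: np.ndarray, new_frame: np.ndarray,
             pixel_tol: int = 8, changed_ratio_tol: float = 0.001) -> bool:
    # per-pixel difference between consecutive captures
    diff = np.abs(prev_frame.astype(np.int16) - new_frame.astype(np.int16))
    changed = (diff > pixel_tol).mean()   # fraction of pixels that changed noticeably
    return changed < changed_ratio_tol    # "still" only if almost nothing moved
```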

That is super interesting information... When you tried this, were you running 0.5.4-alpha.1 or 0.5.4-alpha.2?

Edit:
Wait, hmm.. Are you absolutely certain that it starts misaligning like before when you unselect Still Background mode?