Hi,

Unfortunately, a couple of config preset bugs have reappeared with the new update.

The DeepL API key is not being saved into the config. For example, if I paste the API key into one DeepL config and then switch to another DeepL config, both seem to end up sharing the same key: if I delete the key in one so the field is empty and then switch to the other DeepL config, that one is empty too, even though it shouldn't be. Since the key is no longer visible in the app, I can only infer this from the behaviour described above. When I exported the configs, the API key wasn't included either. Interestingly though, right after updating, when I first opened the app and noticed that config presets could now be exported (that wasn't possible before, right?), I exported them straight away. In those first exported files, the API key was still there.

Also, before, you could only pick the languages once the API key was entered, but now you can pick them without one. Not sure if that’s a bug or just a change, but it's not an issue for me. :)

I also noticed a bug with the API encryption setting. It appears to be shared across all DeepL configurations. If I enable encryption in one DeepL config, it incorrectly shows up as enabled when I switch to another DeepL config, even if I never set it there.

I have a small UI request regarding the config preset field. Could you possibly make that field a bit larger? It's currently quite small, and when a config has a long name, the full name gets cut off, making them hard to distinguish, especially when multiple configs start with the same name. Alternatively, showing the full name when the mouse hovers over the field, or displaying the full name near the version number above, would also be a great solution.

Thanks :)

Heya!

The DeepL bug you mention is not a bug at all; it's intentional. All API keys are now stored in the same plain-text file until you encrypt them, and once you do, they are stored safely, protected by your computer and your master password. All configs will use the same DeepL key.
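
If it helps to picture it, the master-password step is conceptually something like this rough Python sketch (illustrative only, not the app's actual code; it derives a key from your password and only ever writes the encrypted token to disk):

```python
# Rough sketch of master-password encryption of stored keys.
# Requires the 'cryptography' package; all values are illustrative.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password into a 32-byte symmetric key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

salt = os.urandom(16)                        # stored next to the config
key = derive_key("my master password", salt)

# What gets written to disk is this opaque token, not the plain key:
token = Fernet(key).encrypt(b"deepl-api-key-here")

# Reading it back requires the same master password and salt:
assert Fernet(derive_key("my master password", salt)).decrypt(token) \
    == b"deepl-api-key-here"
```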

If you are using multiple DeepL keys (sneaky sneaky..!), let me know and I can set it up so it's possible to use more than one.

Yes, exporting is new, as well as importing. The API key was still present in the config on that first launch only because your configuration file hadn't yet been saved and updated before the export.

You can only pick languages if the API key has already been used to authenticate during that session. That's not really intended, however; I do want it to re-authenticate whenever the API key is changed or removed! :)

Yep! I'll fix that UI request; I agree it wasn't optimal. Thank you!! :)

Hi,

Thanks for clearing that up. I honestly thought it was a bug.

Yes, I do use more than one DeepL key, specifically two (ര ‿ ര ). That's why I brought this up, since I was running into issues using two keys. It would be a huge help if you could get it working with multiple keys (˵ •̀ ᴗ •́˵)

Oh, got it.

And thanks so much for considering the UI request! ദ്ദി(˵ •̀ ᴗ - ˵ ) ✧

Eyup,

0.5.3-alpha.2 is now out! DeepL can now use 'custom' API keys too, exactly like the custom API. Fixed the UI request too, assuming I understood it correctly.

Love the smiley haha. Gives me nostalgic feelings.

Well, I went ahead and named the key in the “API Key Name” field and pasted the key into the “API Key” field. Then I switched to another DeepL config, and surprisingly the same key showed up there, even though I had erased everything earlier. I kept jumping between the configs, and the pattern held: any change I made in one reflected in the other. If I deleted the key in one, the other one lost it too.

That persisted until I first switched from a DeepL config to a Custom API config, and then went to the other DeepL config. From that point on, they behaved separately. Now, if I delete the key in one and immediately jump to another DeepL config (without going via Custom API), the other one still keeps its key. I hope it’s really working independently now, and not secretly using the same key under the hood, but we’ll see over time.

By the way, the UI change you made is spot on. It's exactly what I was looking for. Thank you! :)

Hmm. I'm not able to reproduce this bug.

Just to confirm, you are naming the keys to different values in each config? They have to be unique.

If you are, it'd be great if you could do a quick recording of the process. I've run into enough issues with these config presets that I don't doubt there are still other bugs lingering... I really gotta redo this in another way at some point.

Perfect! Good to hear 🤝🏻

Yes, I confirm that both DeepL configs have unique API keys and names.

I can no longer reproduce the bug; the configurations now appear to be working independently. The issue only happened initially, while I was switching exclusively between the two DeepL configs. After I switched to a Custom API config just once, the problem seems to have resolved itself, and the presets are now functioning separately. Since the API key is no longer visible and only dots are shown, the only way I'll be able to verify which key is which is by checking the character usage on my DeepL accounts later.

Update: Right as I was about to finish writing this, I quickly checked my two DeepL accounts to see if the character counts had updated, and they did! So, fortunately, it looks like the issue has been resolved, since characters were charged separately to each account.

It may have been because the configs had not been properly updated yet with the new config entries. Not sure. Please do let me know if you happen to run into any more bugs like that. I'm glad it's working well now though! :)

0.5.3-alpha.3 is out now.

LLaMA again uses parallel requests by default. This should increase translation speed by around 40% when capturing more than a few different lines of text.

Custom API now has an option to enable parallel processing, with user-configurable rate limits. With this, I managed to get a batch of ~30 texts to translate about 100x faster.
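
Roughly, the mechanics look like this hypothetical Python sketch (the localhost endpoint, /translate path, and payload shape are made up; only the semaphore and rate-limit idea matters):

```python
# Hypothetical sketch of rate-limited parallel requests with aiohttp.
import asyncio
import aiohttp

MAX_CONCURRENT = 8          # how many requests may be in flight at once
REQUESTS_PER_SECOND = 10    # upper bound on how fast new requests start

async def translate_all(texts: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    start_lock = asyncio.Lock()

    async def translate_one(session: aiohttp.ClientSession, text: str) -> str:
        async with sem:                    # cap the number of parallel requests
            async with start_lock:         # space out request starts for the rate limit
                await asyncio.sleep(1 / REQUESTS_PER_SECOND)
            async with session.post("http://localhost:8080/translate",
                                    json={"text": text}) as resp:
                return (await resp.json())["translation"]

    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(translate_one(session, t) for t in texts))

# A batch finishes in waves of MAX_CONCURRENT instead of one by one:
# results = asyncio.run(translate_all(["line one", "line two", "line three"]))
```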

Both now have a proper way to cancel translation requests mid-inference.
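
Client-side, cancellation boils down to something like this (again a hypothetical sketch with a made-up endpoint; the backend still has to notice the dropped connection and stop generating):

```python
# Sketch of cancelling an in-flight translation request mid-inference.
import asyncio
import aiohttp

async def long_translation() -> str:
    async with aiohttp.ClientSession() as session:
        async with session.post("http://localhost:8080/completion",
                                json={"prompt": "a very long text"}) as resp:
            return await resp.text()

async def main() -> None:
    task = asyncio.create_task(long_translation())
    await asyncio.sleep(2)     # pretend the user hit "Cancel" here
    task.cancel()              # aborts the in-flight HTTP request
    try:
        await task
    except asyncio.CancelledError:
        print("translation cancelled mid-inference")

asyncio.run(main())
```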

Let me know if you find any issues with it! :)

Thank you ⭐

Well, it works now, so that's what counts :)

I don't really use LLaMA because I haven't found a model that fits my needs or that my machine can handle. However, I just did a quick test and here's what I found: it's fast at translating single words or short sentences, but it gets a "Request timeout" when trying to translate longer, multi-sentence texts. This isn't a huge problem for me, though. If I really need to, I can run the same model through LM Studio on my machine. When I do that, it handles longer, multi-sentence texts just fine and even seems faster. I've never gotten a "Request timeout" with the smaller models I've tried there.

This is really useful, and if it wasn't fast enough before, it definitely is now. Thanks for that!

Keep up the great work (👍🏻ᴗ _ᴗ)👍🏻

Interesting! I would have assumed this was an improper request body, like a missing n_predict or broken end tokens. At least in my experience, leaving n_predict unset makes some models continue generating until they time out. I've never used LM Studio, but maybe it takes care of such things automatically behind the scenes?
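
For reference, here is roughly what I mean by a proper request body, using llama.cpp's /completion format (illustrative values; adjust the prompt, stop strings, and port for your setup):

```python
# Sketch of a llama.cpp-server style request body. Without 'n_predict'
# (a token cap) or 'stop' strings, some models generate until the
# client-side timeout fires.
import requests

payload = {
    "prompt": "Translate to English: Bonjour tout le monde.",
    "n_predict": 128,           # hard cap on generated tokens
    "stop": ["</s>", "\n\n"],   # end strings that halt generation early
    "temperature": 0.2,
}
resp = requests.post("http://localhost:8080/completion",
                     json=payload, timeout=30)
print(resp.json()["content"])
```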

I'm considering adding another LLM translator backend that runs on GPU only and does a better job of it. Not sure which one to go with, though, and I need it to be relatively small.

No worries, glad it's faster for you! I'm wondering if I could somehow add a batching option too... it'd be very dependent on the API, but there may be a good way to implement it.
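
For DeepL at least, batching could be as simple as repeating the text field within one request; a rough sketch against DeepL's v2 REST API (the deepl_batch helper is just for illustration):

```python
# Hedged sketch: DeepL's v2 API accepts the 'text' parameter multiple
# times per request, which is effectively server-side batching. Other
# backends would each need their own approach.
import requests

def deepl_batch(texts: list[str], auth_key: str,
                target_lang: str = "EN") -> list[str]:
    resp = requests.post(
        "https://api-free.deepl.com/v2/translate",
        data=[("auth_key", auth_key), ("target_lang", target_lang)]
             + [("text", t) for t in texts],   # one request, many texts
        timeout=30,
    )
    resp.raise_for_status()
    return [t["text"] for t in resp.json()["translations"]]
```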

Thank you ༼ つ ◕_◕ ༽つ