I won't be home for another few hours. Is this only happening with Azure?
I haven't had time to check everything yet, but it looks like the issue isn't limited to that specific one. I went through a bunch and many are broken. I'm running into the same issues where it either just shows "%text%" or outputs something completely unrelated, which I'm seeing with the Gemini and Gemma models.
Thank you! If I remember correctly, before 0.12 I had forgotten to commit some changes in a shared library, accidentally discarded them, and then had to rewrite them, and I must have broken something in the process. I did this at some point anyway, but it sounds like it must have been before 0.12. I'll fix this and reupload 0.6.0 everywhere in a few hours.
No worries, sorry that I created an unnecessary bug hah.
By the way, I fixed the $.. being reformatted to $. in the JSONPath in 0.6.1-alpha.1, so I'd recommend updating your Azure presets now to use $.. instead of $., since the latter is an invalid path for Azure's translation results. :)
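To illustrate the difference, here's a minimal sketch of why $.. matches where $. doesn't. The response shape below is only illustrative of Azure's array-rooted translation results, and the two helpers are simplified stand-ins for real JSONPath evaluation, not the app's actual code:

```python
import json

# Illustrative shape of an Azure Translator response: the root is a
# JSON array, so the translated strings sit several levels deep.
response = json.loads("""
[
  {
    "translations": [
      { "text": "Hallo Welt", "to": "de" }
    ]
  }
]
""")

def find_recursive(node, key):
    """Simplified stand-in for JSONPath's $..key (recursive descent)."""
    matches = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                matches.append(v)
            matches.extend(find_recursive(v, key))
    elif isinstance(node, list):
        for item in node:
            matches.extend(find_recursive(item, key))
    return matches

def find_direct(node, key):
    """Simplified stand-in for $.key: only checks the root object."""
    return [node[key]] if isinstance(node, dict) and key in node else []

print(find_direct(response, "text"))     # empty: the root is an array, so $.text misses
print(find_recursive(response, "text"))  # $..text descends and finds the translation
```

In short, because the root of the response is an array rather than an object, only the recursive-descent form reaches the "text" fields.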
And oh, I finally managed to create an Azure account. Seems pretty good!
No worries, it happens! :) But like I mentioned earlier, DeepL is enough for me at the moment. I only checked because Plusnet mentioned it wasn't working. Either way, glad it’s good now. :)
Got it, all updated.
Happy to hear that! :)
Also, I have to correct myself on the Gemini Pro models. Somehow my account was on Tier 1 (pay-as-you-go; I might have triggered it while messing with Google Translate), which is why the Pro models were working for me. I just switched back to the free Tier 0 and now I'm seeing the limits too. So yeah, the Pro models definitely aren't available for free on the standard API. Sorry about the mix-up! I can delete those presets if you want, since there's no free version for the standard API right now.
*Edit: I actually found a workaround using Gemini CLI + Cloudflare Workers + an OpenAI API proxy, so I got the Pro models to work through that. The RPD is still kind of low, and it's a bit slow, but at least it works. Even Flash was kind of slow in the beginning, but I somehow got it to be fast enough, and it's great now, at least according to my current tests. :) I'd already burned through the Pro limit by that point, so I don't know yet whether the Pro model would be fast with the current settings, but I'll definitely check. Also, from what I've gathered, the Flash models have a higher RPD through the CLI than through the basic API, and the quotas are separate. I might try to put together a preset for this if I have the time, although it might be a bit complicated for some users.
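For anyone curious what such a proxy actually does: at its core it reshapes OpenAI-format chat requests into Gemini generateContent requests. This is just a rough sketch of that translation step, the field names follow the public API docs, and the proxy's auth, streaming, and error handling are all left out:

```python
def openai_to_gemini(body: dict) -> dict:
    """Reshape an OpenAI-style chat completion request body into a
    Gemini generateContent request body (simplified, text-only)."""
    contents = []
    system_parts = []
    for msg in body.get("messages", []):
        if msg["role"] == "system":
            # Gemini takes system text separately, as systemInstruction.
            system_parts.append({"text": msg["content"]})
        else:
            # OpenAI's "assistant" role is called "model" in Gemini.
            role = "model" if msg["role"] == "assistant" else "user"
            contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    request = {"contents": contents}
    if system_parts:
        request["systemInstruction"] = {"parts": system_parts}
    return request

print(openai_to_gemini({
    "model": "gemini-2.5-pro",
    "messages": [
        {"role": "system", "content": "Translate to English."},
        {"role": "user", "content": "Hallo Welt"},
    ],
}))
```

A real Worker would sit in front of this, forward the reshaped body to the Gemini endpoint, and map the response back into OpenAI's format for the client.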
Much appreciated! Good to see people using the app helping each other out! :)
Thank you! I fixed a bug in the latest version that didn't let users download updates to presets they had already downloaded. I must have broken it when I restructured things a bit after deciding which style I was going for.
Ah, no worries! There's definitely value in leaving them up, just make sure they're not tagged as free and it's perfect! :)
That's interesting, I hadn't heard of that CLI tool. How fast was the Flash model through that method?
Yeah, it sounds like a bit of a setup, probably something the more advanced users could appreciate, though. I wish I had more time to try out all sorts of APIs and LLM models... lots of interesting stuff out there.
You’re welcome! I’m happy to help when I have the time. :)
Okay, I've updated the tags to "paid."
It seems mostly the same as the regular one, maybe a split second slower at times, but they seemed equally fast during testing. The Pro models are definitely "hit and miss." Sometimes they're fine, other times they're slow or just get stuck. I assume it’s a reasoning issue, but even with that off, it doesn't improve much. They’re probably still fine for anyone who really needs the translation, though.
I also found a Qwen CLI recently. I managed to get it working via Cloudflare and a local Python file. Only one model is free for now, but it’s fast, similar to the Gemini Flash models. The current limit is 1,000 free requests per day through their OAuth system.