Nice! Glad the feedback helped. I totally get that it's a lot of work, so take your time. I'll keep using it as it is for now, and I'm looking forward to the updates whenever they're ready. Thanks for being so chill about the suggestions and for the support.
CristianoBR
Recent community posts
Hello again,
Thank you for your previous explanation! I understand how the current workflow works, but I would like to suggest a new feature to make Thaluna even more competitive with other tools.
Many users (including myself) are used to tools like Mort or mobile bubble translators. These tools let us select a specific area of the screen once, and then they automatically detect and overlay the translation in real time, without needing to click "Snapshot" every time we scroll or a new dialogue appears.
I would love to see a "Real-Time Lens Mode" where:
- We define the area (like the current pink frame).
- The software continuously monitors that specific area.
- It automatically replaces/overlays the text in the same position (even if it's on the left or right side of the frame), keeping the UI organized.
Technical Suggestion: This could work using a local OCR (like Windows OCR) to detect changes in the image. The software would only call the expensive Translation API when it detects that the text has actually moved or updated, preventing token waste.
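To make the idea concrete, here is a minimal sketch of that decision logic in Python. All names are illustrative (this is not Thaluna's actual code), and `difflib.SequenceMatcher` stands in for whatever text-similarity check the app uses; the two constants mirror the existing "Similarity threshold" and "Ignore text shorter than" settings:

```python
# Sketch of the proposed "only translate on real change" logic.
# Assumption: a local OCR pass runs on every frame of the watched area,
# and the paid translation API is called only when this returns True.
from typing import Optional
import difflib

SIMILARITY_THRESHOLD = 0.85  # mirrors the in-app "Similarity threshold" setting
MIN_TEXT_LENGTH = 2          # mirrors "Ignore text shorter than"

def should_translate(previous_text: Optional[str], current_text: str) -> bool:
    """Decide whether the OCR'd text changed enough to justify an API call."""
    if len(current_text.strip()) < MIN_TEXT_LENGTH:
        # Too short: likely noise, or drawings misread as text.
        return False
    if previous_text is None:
        # First capture of the area: always translate once.
        return True
    similarity = difflib.SequenceMatcher(None, previous_text, current_text).ratio()
    # Only call the translation API when the text is clearly different.
    return similarity < SIMILARITY_THRESHOLD
```

With a gate like this, scrolling that merely shifts the same dialogue around would be skipped, while turning to a new page would trigger exactly one translation call.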
As a great visual example of this "Overlay" behavior, please look at the end of this video (around 10:45):
One final note: I apologize if anything is written incorrectly or if I sounded arrogant in any way. That was definitely not my intention. I am just very excited about Thaluna and would love to see it become even better!
Thank you for your hard work and for considering this!
Hello,
I purchased Thaluna yesterday and I am trying to use it to read manga (English to Brazilian Portuguese). However, I am facing several technical issues that prevent a smooth experience:
- Auto-Translate is not triggering: I am using a PC with an i5-10400, 28GB RAM, SSD, and GTX 1050 Ti. Even with "Economy Mode" active, "Similarity threshold" set to 0.85, and "Ignore text shorter than" at 2, the program does not detect when I turn the page. I have to manually click the snapshot button every time.
- Missing "Lock/Pin" Icon: I cannot find the padlock icon to lock the OCR area to the manga reader window. Because of this, the program sometimes ignores my defined area and translates the entire screen (including the Windows clock and taskbar), wasting my API tokens.
- OCR Language Issues: The option for "English (Default)" is missing from my OCR list. I am forced to use "Universal (Latin)", which is misinterpreting manga drawings as text.
- Target Language: I am translating from English to Portuguese (Brazil). I would like to know the best model settings for this specific pair.
- UI Behavior: Even when the "Pause" icon (||) is visible, indicating the engine is running, the overlay does not update automatically as I scroll or change pages.
Note: I have spent a lot of time trying to fix this. I even used ChatGPT and Gemini AI to help me troubleshoot every setting, but we couldn't get the automatic mode to work.
I apologize for any mistakes in my English. I am using an AI (Gemini) to translate my thoughts so you can understand my situation clearly.
Thank you for your help!