Hey!
First off — genuinely, thank you. Not just for the idea itself, but for taking the time to explain it properly. That’s not something everyone does, and I really do appreciate it.
And just to be clear — you didn’t sound arrogant at all. You sounded like someone who actually uses the tool and knows what’s missing. That’s exactly the kind of feedback that’s worth something.
What you’re describing is basically a hybrid between Real-Time Mode and Lens Mode — you define the area once, and then the app watches it on its own, detects when something changes, and refreshes the overlay automatically without needing a new manual snapshot each time. I get why that would be useful, especially for manga, webtoons, or UI-heavy games.
Your technical point about local change detection also makes sense — triggering OCR and translation only when the area actually changes is the right way to do it, so you’re not burning API calls for nothing.
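Just to show the shape of what I mean by change-gated OCR, here's a minimal sketch. All the names (`RegionWatcher`, `feed`, `on_change`) are hypothetical, not the app's actual code, and it uses an exact content hash for simplicity; a real version would likely use a perceptual hash or pixel-diff threshold so minor noise (cursor blink, compression artifacts) doesn't trigger a refresh:

```python
import hashlib
from typing import Callable, Optional


class RegionWatcher:
    """Watches a captured region and fires the callback only when the
    content actually changes, so OCR/translation isn't re-run for nothing."""

    def __init__(self, on_change: Callable[[bytes], None]):
        self.on_change = on_change
        self._last_digest: Optional[str] = None

    def feed(self, frame: bytes) -> bool:
        """Feed one captured frame (raw pixel bytes of the remembered area).
        Returns True if the frame differed from the last one and OCR ran."""
        digest = hashlib.sha256(frame).hexdigest()
        if digest == self._last_digest:
            return False  # identical frame: skip OCR, no API call burned
        self._last_digest = digest
        self.on_change(frame)  # hand the changed frame to OCR + translation
        return True


# Usage sketch: only two of the three frames trigger the pipeline.
calls = []
watcher = RegionWatcher(on_change=calls.append)
watcher.feed(b"page-1")  # new content -> OCR runs
watcher.feed(b"page-1")  # identical  -> skipped
watcher.feed(b"page-2")  # changed    -> OCR runs
```

The point is exactly what you said: the polling loop stays cheap and local, and the expensive OCR/translation step only fires on an actual change.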
So yes — I’m taking this seriously. But I’ll be straight with you: it’s not a small change. This would be its own separate workflow, not just a tweak to Lens Mode, so I can’t give you an ETA right now.
For the time being, the two options stay the same:
- Real-Time Mode — continuous OCR inside a defined capture area
- Lens Mode — snapshot-based overlay, great when layout matters
But the “real-time overlay with a remembered area” idea is on the list now. And I’m glad it is.
Thanks again.