I am; I got it the moment it was published.
After some research, it seems the A770 simply isn't well supported by llama.cpp. You could try running Gemma-4-Sparse to see whether a sparse model is faster, if you aren't doing that already.
Hmm, that's a shame. I'm pretty green in this field, but I'll see what I can do. Thanks for the support so far.