Yes and no. On the one hand, when the models do run, they run better; on the other hand, it's still impossible to run a model set as the character model, and I have to use the custom model option instead. The good news is that it does run faster, and at least on custom I haven't hit the issues I had before where it just wouldn't run.
BTW, did you add the Linux AMD ROCm support just for me as your one known Linux user, or is HammerAI actually detecting that I have an AMD AI-capable CPU with an iGPU besides the Nvidia GPU it's using right now? Because if it's the latter, that's actually impressive: ROCm support on Linux is so spotty it might as well not be there. The AMD 780M is a lot weaker than the 4060, so I don't think it will see much usage, but I might try bigger models just to see how it behaves if Hammer can actually use the AMD iGPU natively.
PS. Please add a few newer RP models. Some of your competitors ship finetunes released under open-source licenses: ArliAI, Latitudegames, Dreamgen. Please add a few newer Nemo finetunes. Also, and this is entirely up to you, consider offering IQ quants and not offloading the KV cache to VRAM. IQ quants can make 8GB fully enough to 100% offload most non-Gemma 12B models, as long as one doesn't also try to push the KV cache into VRAM (rough numbers below). That's in case you're not doing this already.
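To back that claim up, here's a minimal sketch of the VRAM math. The quant densities are ballpark figures (IQ4_XS at roughly 4.25 bits/weight, Q4_K_M at roughly 4.85), and the KV cache is sized for a Mistral-Nemo-like config (40 layers, 8 KV heads, head dim 128); treat all of these as approximations, not exact values from any specific GGUF:

```python
# Rough VRAM estimate for a 12B model at different quants, plus an fp16
# KV cache. Quant densities and the Nemo-like layer config below are
# ballpark assumptions, not values read from an actual model file.

PARAMS = 12e9  # 12B parameters

def weights_gib(bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a given quant density."""
    return PARAMS * bits_per_weight / 8 / 1024**3

def kv_cache_gib(context: int, layers: int = 40, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """fp16 K+V cache in GiB for `context` tokens (factor 2 = K and V)."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return context * per_token / 1024**3

print(f"IQ4_XS weights:    {weights_gib(4.25):.2f} GiB")  # ~5.94 GiB
print(f"Q4_K_M weights:    {weights_gib(4.85):.2f} GiB")  # ~6.78 GiB
print(f"KV cache @ 8k ctx: {kv_cache_gib(8192):.2f} GiB") # ~1.25 GiB
```

At roughly 5.9 GiB of weights, an IQ4_XS 12B fits in 8GB with headroom, while an fp16 KV cache at 8k context adds another ~1.25 GiB on top. That's why keeping the cache in system RAM matters; if you're on llama.cpp under the hood, I believe the `--no-kv-offload` (`-nkvo`) flag is the switch for this.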
Anyways, cheers and thanks for the new Ollama update; it did in fact help.