Fixed!
You were on the right track, but the magic parameter I needed to set was `num_predict`. Setting it to 128 caps the number of generated tokens, so the model terminates even when it gets stuck in a loop. The downside is that longer translations get cut off. I'm going to experiment with some other translation models to see whether this is an issue with the model or with Ollama itself.
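For anyone who finds this thread later, here's roughly what the fix looks like as a request to Ollama's `/api/generate` endpoint. This is just a sketch: the model name and prompt are placeholders, and I'm only printing the payload rather than actually sending it.

```python
import json

# num_predict caps the number of tokens the model may generate,
# so a looping model is forced to stop at the limit.
payload = {
    "model": "my-translation-model",  # placeholder; use your actual model
    "prompt": "Translate to French: Hello, world.",
    "stream": False,
    "options": {"num_predict": 128},  # hard cap on generated tokens
}

# Would be sent as: POST http://localhost:11434/api/generate
print(json.dumps(payload, indent=2))
```

The trade-off is exactly what I hit: any translation needing more than 128 tokens gets truncated, so the cap has to be tuned per use case.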
Thanks again for working through this with me.