Could you make it work with local LLMs and support switching between various models, the way Ollama does? Maybe even by using Ollama itself? A rough sketch of what I have in mind is below.
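
For context, here's a minimal sketch of the kind of integration I mean, assuming the tool talks to an OpenAI-compatible chat endpoint. Ollama exposes one at `http://localhost:11434/v1` by default; the model name here is just a placeholder for whatever model has been pulled locally:

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API on localhost:11434 by default,
# so the standard OpenAI client can point at it directly.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # the client requires a key, but Ollama ignores its value
)

response = client.chat.completions.create(
    model="llama3",  # placeholder: any locally pulled model, e.g. `ollama pull llama3`
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```

If the backend base URL and model name were configurable, swapping between hosted APIs and local Ollama models would just be a config change.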