Thanks for sharing your experience — this is really helpful feedback!

You're right that the local LLM setup is much harder than it should be, especially for non-programmers. The sticking/freezing issue with Ollama + Roo Code is a known pain point — local models often struggle with the large number of tool definitions and can time out or get stuck in loops.

A step-by-step video tutorial for local setup is a great idea and I'll put it on my to-do list. In the meantime, a couple of tips if you want to try again:

- Use `--minimal` mode (35 tools instead of 169) — this drastically reduces the context size that chokes local models

- Gemma4 26B should work with `--minimal`, but give it simpler one-step instructions rather than complex multi-part requests

- If Ollama freezes, it's usually the model running out of context window, not the MCP plugin
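On that last point: if you'd rather raise the context window than shrink the prompt, Ollama lets you set `num_ctx` through a Modelfile. A sketch, assuming the standard Modelfile syntax — the model tag and the `8192` value are placeholders, so swap in whichever model you actually pulled and a window your hardware can handle:

```
# Modelfile: build a variant of your local model with a larger context window
FROM your-model:latest      # placeholder — use the tag you pulled with `ollama pull`
PARAMETER num_ctx 8192      # default is small (often 2048), which chokes on MCP tool definitions
```

Then build and run the variant:

```
ollama create your-model-8k -f Modelfile
ollama run your-model-8k
```

Bigger windows use more RAM/VRAM, so if it still freezes, step back down and stick with `--minimal`.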

Glad you joined the Discord — drop a message there anytime and I'll help you get it running!