Thanks for the detailed reply, I appreciate it! ❤️
Just to be clear, I don't think there's a problem with the plugin itself, but for beginners like me it's not easy to get running. Here is my experience:

First, I tried the free version. It was very complicated to install in VS Code because of Roo Code, and Ollama always got stuck after about 5 minutes (even if I didn't do anything). I also used lower-VRAM models just to test, so I only used about 22 GB of VRAM (Gemma4:26b) with everything open (Godot, VS Code, Ollama + the model loaded into VRAM).

At first it almost worked: it created the Player but didn't add the default icon.png, so I tried to explain the issue so it could fix it, but then it would always get stuck in VS Code. On the free version it seemed to work only for the first command that did something in Godot; after about 5 minutes it always got stuck, even after I started over from scratch many times.
I also tried to get some help from Gemini just to make it work, but it couldn't solve the problem. Then I tried again with LM Studio but couldn't make it connect to MCP Pro, so I ran into too many issues just trying to get a feel for how Godot + MCP Pro works.

If you consider making a step-by-step video tutorial in the future for non-programmers (for local use), it would be easier to follow along from installation, through connecting everything, to actually making something simple like my example. That would make a great test before anyone purchases. 🙏

I joined the Discord in case I decide to try again; maybe I can get some help there.

Thanks for sharing your experience — this is really helpful feedback!

You're right that the local LLM setup is much harder than it should be, especially for non-programmers. The sticking/freezing issue with Ollama + Roo Code is a known pain point — local models often struggle with the large number of tool definitions and can timeout or get stuck in loops.

A step-by-step video tutorial for local setup is a great idea and I'll put it on my to-do list. In the meantime, a couple of tips if you want to try again:

- Use `--minimal` mode (35 tools instead of 169) — this drastically reduces the context size that chokes local models

- Gemma4 26B should work with `--minimal`, but give it simpler one-step instructions rather than complex multi-part requests

- If Ollama freezes, it's usually the model running out of context window, not the MCP plugin
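On that last point: Ollama defaults to a fairly small context window, which tool-heavy MCP sessions can exhaust quickly. As a rough sketch (not an official setup guide — the model tag is just the one from your message, and exact flags depend on your Ollama version), you can raise the context window like this:

```shell
# Inspect the model's current parameters (look for num_ctx)
ollama show gemma4:26b

# Option 1: set a larger default context for the server via environment
# variable (supported in recent Ollama releases), then restart the server
export OLLAMA_CONTEXT_LENGTH=16384
ollama serve

# Option 2: bake a larger window into a model variant via a Modelfile
cat > Modelfile <<'EOF'
FROM gemma4:26b
PARAMETER num_ctx 16384
EOF
ollama create gemma4-26b-16k -f Modelfile
```

A bigger `num_ctx` costs more VRAM, so with ~22 GB you may need to balance context size against model size — another reason `--minimal` helps, since fewer tool definitions means less context consumed before the conversation even starts.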

Glad you joined the Discord — drop a message there anytime and I'll help you get it running!