I mean, it's the usual context-and-cost issue with AI: if you can cope with self-hosting, or with paying more for higher-context or instruct LLMs, they're (almost) always better. The problem with constantly reinforcing prompts back into the LLM is that it costs more tokens... and it can create bad RP traps that it'll get stuck in.