Got it, thank you. It would be great if there were an option to use locally running models; that would allow trying some of the newer LLMs that are both powerful and less restrictive. It would also lower your operating costs, provided the integration is not too time-consuming to build.
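The effort might be small, since several local runtimes (e.g. Ollama) expose an OpenAI-compatible HTTP API. Here is a minimal sketch of what the client side could look like, assuming an Ollama-style server on `localhost:11434`; the URL, model name, and function name are just illustrative assumptions, not part of any existing codebase:

```python
import requests

# Assumed endpoint of a local Ollama-style server exposing an
# OpenAI-compatible chat completions API (an assumption, not a given).
LOCAL_URL = "http://localhost:11434/v1/chat/completions"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a single-turn chat request to a locally running model."""
    response = requests.post(
        LOCAL_URL,
        json={
            "model": model,  # example model name; any locally pulled model works
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    # OpenAI-compatible response shape: first choice's message content
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Say hello in one sentence."))
```

Since the request and response shapes match the hosted OpenAI-style APIs, switching between a remote provider and a local model could be as simple as swapping the base URL and model name.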