You do realize they don't need to force you to use OpenAI? They could just open up the API so that you could use locally run LLMs, which could keep the game alive long after the devs stop supporting it.
OpenAI's API format is pretty common nowadays, and many popular local LLM UIs support it, like text-generation-webui. KoboldCpp, which is probably the easiest to use, uses its own API, but that should still be fairly easy to implement. (The devs could easily look at how projects like SillyTavern implemented those integrations.)
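To illustrate why this is cheap to support: the OpenAI-style chat endpoint is just an HTTP POST, so the same client code can target OpenAI or a local server by changing only the base URL. A minimal sketch, assuming a local backend (e.g. text-generation-webui or llama.cpp's server) that exposes the standard `/v1/chat/completions` route — the URLs and model names below are illustrative:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, messages: list,
                       api_key: str = "none") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request.

    Works unchanged against api.openai.com or any local server that
    implements the same /v1/chat/completions route.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {"model": model, "messages": messages}
    headers = {
        "Content-Type": "application/json",
        # Local servers usually ignore the key, but the header must be present.
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(url, data=json.dumps(payload).encode(),
                                  headers=headers)

# Same code, two backends -- only the base URL differs:
cloud = build_chat_request("https://api.openai.com", "gpt-3.5-turbo",
                           [{"role": "user", "content": "Hello"}],
                           api_key="sk-...")
local = build_chat_request("http://localhost:5000", "local-model",
                           [{"role": "user", "content": "Hello"}])
```

Actually sending the request is just `urllib.request.urlopen(local)`; swapping backends requires no other code changes, which is exactly why so many local UIs adopted this format.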
As far as I know, SillyTavern requires API keys that still cost money. From their FAQ directly:
"
Q: So I should use GPT-4. It's a no-brainer, right?
GPT-4 or Claude, yeah.
But not so fast. GPT-4 is the state of the art, but also the most expensive API to use. You pay for each word sent to it and returned (entire Tavern prompt, followed by the chat history up to that point). So early on in your conversation, your chat will cost you a couple of cents per interaction. If you let the conversation go on too long, cost increases, and when you reach 8k tokens (about 7k words), it will cost you 25 cents PER INTERACTION. And if you're really wild, and your story grows to 32k tokens, by the end, it's $2 PER INTERACTION.
If you're the child of a Saudi oil sheik, or a nepo baby paid a fortune to do nothing on the board of a Ukrainian gas company, then you're in luck, you can experience the state of the art right now. For the rest of us however, GPT-4 is too expensive as anything but an occasional treat.
Also note that GPT-4 is still in preview access and you need to go on a waitlist. Most people get approved within a day, but naughty kids can end up waiting for weeks. You can sign up for it here: https://openai.com/waitlist/gpt-4-api . I'm not sure why some people are approved quickly while others are kept waiting. Try to sign up using an academic-sounding name instead of sktrboi99, it might help.
GPT-3.5 is a more cost effective model while still outperforming most models.
BE SURE TO SETUP A MONTHLY USAGE CAP ON OPENAPI IF YOU USE A CHATGPT MODEL. THIS WILL KEEP YOU FROM OVERSPENDING"
That means it still costs money, which was LITERALLY the whole reason the team moved to having their own API: some of the biggest feedback they got when they released the previous version was that people didn't have the kinds of cards needed to set up accounts on OpenAI. So they decided to find a monetization path that allowed players to play the game using in-game currency.
EDIT: Okay, so I found some that are free. However, each requires directly downloading and installing the LLMs, is severely restricted compared to online resources, and would be a pain to integrate and support. Not to mention that any technical issues with the LLMs may wind up as false bug reports to the devs. It would be nice to add, but it would certainly pose its own problems, and you'd have to trust that users are both tech savvy enough to install them and have machines beefy enough to run not only the game but the LLM as well.
Sure, if you aren't tech savvy you can always just pay for the ease of use, but if you don't mind figuring things out to do it for free, then you should still be allowed to do that.
(Btw, you don't need a particularly beefy computer to run LLMs, especially with llama.cpp; it can run on your phone if you have 6-8 GB of memory.)
I work with the public somewhat often in my career. I once had to walk a person through the complicated job of... entering coordinates correctly... four times. They could not understand that they had to convert their hours-minutes-seconds coordinates to decimal, despite me telling them three times. I was barely keeping my cool the third time they entered the coordinates wrong. People are dumb, and then they expect you to clean up after their dumb mistakes.
I would never want to imagine the hell that would be unleashed trying to offer people a "free" version of a product that requires setting up an LLM on their machine. I know you might say something to the effect of "But that isn't on the devs if someone doesn't know how to do it," but people will still blame them for not supporting them enough. Not to mention that, unless all locally run LLMs share a specific protocol for integration, it would be a bunch of work. Once you move from supporting one and only one method to supporting multiple, people will keep pushing for more and more to be added. I totally get why they wouldn't want to do all that.
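For what it's worth, the usual way to keep that support burden bounded is to speak one internal format and treat each backend as a thin adapter behind it, rather than special-casing every LLM. A hypothetical sketch (the backend names, paths, and payload shapes here are illustrative assumptions, not anything the devs have committed to):

```python
# Hypothetical adapter layer: the game makes one internal call and each
# registered backend maps it to its own wire protocol.

BACKENDS = {
    # OpenAI-compatible servers (OpenAI itself, text-generation-webui, etc.)
    "openai": {
        "path": "/v1/chat/completions",
        "wrap": lambda prompt: {"messages": [{"role": "user", "content": prompt}]},
    },
    # KoboldCpp-style native endpoint, which takes a flat prompt string
    # (path/shape assumed for illustration).
    "koboldcpp": {
        "path": "/api/v1/generate",
        "wrap": lambda prompt: {"prompt": prompt},
    },
}

def make_request(backend: str, base_url: str, prompt: str):
    """Map the game's single internal call onto a backend-specific request."""
    spec = BACKENDS[backend]
    return base_url.rstrip("/") + spec["path"], spec["wrap"](prompt)

url, body = make_request("koboldcpp", "http://localhost:5001", "Hello")
```

Adding a new backend then means adding one dictionary entry, not touching the game logic — which is roughly how multi-backend frontends like SillyTavern keep the problem manageable.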