I can definitely look into it. When I first started the project, Gemini's API was a little more convoluted than OpenAI's and it wasn't clear to me how to navigate Google's ecosystem, so I was drawn to OpenAI's standard and feature set and just ran with it.
But one extra consideration: every time I add a feature, I have to implement it separately for each API I support, so each new provider multiplies the maintenance work.
Hmm... but maybe that's an issue I could solve by going the MCP server route. You've got me thinking now. I'll see what I can do to expand the API support, including Grok, without making it too cumbersome to add new features.
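Concretely, the idea would be a thin adapter layer so each feature is written once against a common interface instead of once per API. A rough sketch of that shape (all names here are illustrative, not the project's actual code):

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Common surface every backend has to implement (hypothetical)."""

    @abstractmethod
    def chat(self, messages: list[dict], **options) -> str:
        """Send a list of {role, content} messages and return the reply text."""


class OpenAIProvider(ChatProvider):
    def chat(self, messages, **options):
        # Translate to an OpenAI-style chat completion request here.
        raise NotImplementedError


class GeminiProvider(ChatProvider):
    def chat(self, messages, **options):
        # Translate to Gemini's request/response shapes here.
        raise NotImplementedError


class GrokProvider(ChatProvider):
    def chat(self, messages, **options):
        # xAI's API is OpenAI-compatible, so this could likely reuse the
        # OpenAI adapter with a different base URL.
        raise NotImplementedError


# Features are then written once against the interface, e.g.:
def summarize(provider: ChatProvider, text: str) -> str:
    return provider.chat([{"role": "user", "content": f"Summarize:\n{text}"}])
```

Going the MCP server route would be a similar move, just pushing that boundary out to a protocol instead of an in-code interface.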