I can look into adding those APIs, especially Grok.
Robot Punch (Creator)
Recent community posts
I can definitely look into it. When I first started the project, Gemini's API was a little more convoluted and it wasn't too clear to me how to navigate Google's ecosystem. I got drawn to OpenAI's standard and features and just ran with it.
But one extra consideration: for each API I support, every time I add a feature, I have to implement that feature separately for each API.
Hmm... but maybe that's an issue I could solve by going the MCP server route. You've got me thinking now. I'll see what I can do to expand the API support, Grok included, without making it too cumbersome to add more features.
That's actually a bit of a design issue at the moment. I don't have a good way to use a keyboard when in VR and have to use the push-to-talk button when chatting to the companion. I also have to keep the voice messages short, or long audio clips run into processing issues for some reason. Lots of issues!
However, it's a design oversight on my part that this doesn't help you with inputting the main-menu details. I need to add the ability to use a local LLM and speech-to-text more cleanly, for sure.
Thanks for the kind words and the feedback!
The camera appearing in the floor sounds like when the game boots to the HMD but you’re viewing on desktop. It might be launching in VR if the Quest HMD is in link mode.
In VR, currently, you are intended to hit the pause button and open your wrist menu, which enables your VR pointer to click in menus and stuff.
On the Quest, hitting B should open the interaction menu once you're inside, and if you're using GPT or the Robot Punch API, holding Y will handle short PTT recordings.
I might have got those buttons mixed up.
I should make a video explaining some things, I think.
No worries, the API key is the toughest part here, so it makes sense.
I'll make a detailed setup video soon, but the gist is you request an API key from OpenAI or run LM Studio somewhere on your main PC or home network.
If you request an API key, you'll also have to add some amount of funds to the account; that's the money consumed when you make API calls, and it's something like a penny per message, or now that I'm adding Vision, maybe $0.03 per image-analysis request. But talking to GPT like this is far cheaper than a $15/month subscription, and I'm trying to get it to match the functionality you can get through your usual GPT interaction.

The API key needs to be saved in a specific spot on your PC that my software reads from. I will add a mechanism to set up this key at the required location for you, but I need to be sure not to break an existing key that's already properly set up all the same. So I'll come along soon to help more users with the API issue.
If you're using LM Studio or want private offline conversations, you need a local PC powerful enough to run LM Studio + whatever model you choose from the software (you'll find various models that specialize in different tasks, like coding or roleplaying to choose from). With LM Studio running a local server, Ami needs an IP address to connect to and you can inference with the local LLM through Ami.
The ElevenLabs API key is optional, but for people who have ElevenLabs, you add the API key the same way you do the OpenAI key. As long as the environment variable holding the API key is named what OpenAI and ElevenLabs tell you to name it, my software will find the key all the same.
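For anyone curious what that lookup amounts to, here's a minimal Python sketch of reading keys from environment variables. `OPENAI_API_KEY` is the name OpenAI documents; the ElevenLabs variable name shown here is an assumption, so check their docs for the exact name.

```python
import os

def load_api_key(var_name):
    """Read an API key from an environment variable; None if unset."""
    key = os.environ.get(var_name)
    if key:
        return key.strip()
    return None

# OPENAI_API_KEY is OpenAI's documented name.
# ELEVENLABS_API_KEY is an assumed name here; verify against ElevenLabs' docs.
openai_key = load_api_key("OPENAI_API_KEY")
eleven_key = load_api_key("ELEVENLABS_API_KEY")
```

On Windows, these live under "Environment Variables" in system settings, and you usually need to restart the app (or log out and in) for a newly added variable to be visible.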
But if you don't have OpenAI or ElevenLabs keys, you can still use LM Studio + offline TTS.
After Ami makes enough money, I can set up a server and establish a connection on behalf of the user in a fashion that requires no API key. But until I can afford the servers, this seems the best I can do to make it as cheap and accessible as possible.
So it'll get easier eventually, but for now it's a little technical and I'll try to help with that fact.
It should be nearly instant, or only take a few seconds, to send a request and receive a response, depending on the API you've selected; no longer than 30 seconds at most. I should add some sort of timeout error handling for this situation, it seems.
What API are you using, and are you sure your API key is valid and named correctly in your system settings? If there's an issue establishing the connection or the request is rejected, it will spin forever. The underlying code waits for a successful API response before continuing, so to me this implies the request either isn't being sent or is never getting a successful response.
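The timeout handling mentioned above could look something like this rough Python sketch; the URL, payload shape, and function names are placeholders for illustration, not the game's actual code.

```python
import json
import urllib.error
import urllib.request

def send_chat_request(url, payload, api_key, timeout_s=30):
    """POST a chat request; return the parsed reply, or None on error/timeout."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        # Bad key, refused connection, or timeout: return None so the UI
        # can show an error instead of spinning forever.
        return None
```

The idea is just to cap the wait and turn every failure mode into something the UI can react to.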
Screenshot is a little confusing. It looks like the POV is through your VR headset, but because it is at chest holster height, I'm assuming it's not on your head?
If you want to play PC, make sure the headset isn't in PCVR mode, and maybe even make sure the Meta Desktop app is closed. It will launch the PC player if no headset connection is detected.
You shouldn't have any issues once I push v0.4.
I've cleaned up the VR menus some and they appear fully functional in my working build.
In the current release, I did see an issue where you couldn't start the game by grabbing a companion cube, which is fixed in 0.4. This was due to case sensitivity issues under the hood.
I'm not aware of any issues in the main menu with selecting a title option.
I'm aiming to release 0.4 within the next 24 hours, so let me know if you see any more issues after that for sure.
Hey, thanks for the feedback. I haven't heard of those last two issues before, so I'll check that out and get back to you on that.
Regarding the UI option resetting back to default, one other way you can see if the selection "took" is the lower right corner of the companion select screen.
I will investigate if the selection really didn't take, or if the UI just shows the default connection first, no matter what.
At the moment I'm setting up an uncensored model for Patreon users to connect to, so that even more folks can get access- but I'll investigate whatever's going on here and add the fix to the next update.
Cheers!
I don't know if this is the issue, but if you're running the game and LM Studio on the same machine, try using "localhost" instead of an IP address. No port, I believe.
Maybe the issue you're seeing is because it's failing to connect and some weirdness happens from that. I will investigate- thanks for letting me know.
But for anyone else looking: when using them both on the same machine, LM Studio expects the HTTP requests to be sent to "localhost" instead of an IP address.
You can see this in LM Studio too: in the tab for hosting a model locally, there's a Python example in the upper right corner that shows localhost as the address for the HTTP requests. That's basically how it's structured under the hood.
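For reference, a request to a local LM Studio server is shaped roughly like the snippet below. LM Studio serves an OpenAI-compatible `/v1/chat/completions` endpoint, and `localhost:1234` is the default address its examples commonly show; adjust the address to match your own setup, and note the model name here is just a placeholder (LM Studio answers with whichever model you've loaded).

```python
import json
import urllib.request

def build_chat_payload(prompt, model="local-model"):
    """Body for an OpenAI-style /v1/chat/completions request."""
    return {
        "model": model,  # LM Studio uses whichever model is currently loaded
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_llm(prompt, base_url="http://localhost:1234"):
    """Send one chat message to a local LM Studio server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]
```

If the game and LM Studio are on different machines on the same network, you'd swap `localhost` for the serving PC's LAN IP instead.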
I know what you're talking about here. I could handle this better, for sure. The name and description boxes have a UI requirement to be edited before they're marked as "ready"; as the dev, that's a bug I kept working around and forgetting to resolve.
You just have to edit the text in each field and then you're able to make a companion, but currently any selection will always give you Lexi.
The next update I push will correct these two issues- so you are "ready" as soon as you click a companion, as well as the other avatars being appropriately spawned in the ship. I'm also adding one more female avatar, to make 3 female and 1 male avatars in total.
Thanks for checking it out, by the way.
The build checks if you have a connected VR headset and will launch in VR mode if so. Maybe that's what you're seeing.
I know it launches to VR, but it's not really ready yet, so if that's the case you can fix it by closing SteamVR or Meta's Desktop App so it doesn't think a VR headset is the way to go.
Or toss on your HMD and see if video is streaming through while the game is running.
Hopefully that helps.
Hello! I played your game and thought it was very impressive. I thought I was going to lose for a bit there, but I either got lucky, or the game tipped the scales for the sake of excitement. Either way, I had a good time.
While playing, I did find some minor UI issues. In the main menu (web version), going full screen and quitting the game would make it freeze in place. Clicking the Itch.io button would open a new tab and leave the store button highlighted. And during gameplay, when the character selection screen would pop up, I'd get confused about what was going on.
Absolutely loved everything about this game. Thanks for making this!
Hi everyone! I'm a solo developer who normally lives in VR land, but I wanted to take a short break from that to work on something non-VR, with a team, for a change. I suppose my current work and development activity is visible from my profile, so I won't go too into it here, but my main skillset is as a programmer in Unreal Engine. I'm already quite experienced with Git and communicating remotely with a team.
Just looking to work with some other folks on a game jam and I happened upon this one! Let's team up! I promise to do my best!
Hi! I saw your reddit post and thought I'd try your game. I thought I'd give some feedback on my initial reactions and impressions, hopefully that's alright.
I watched the trailer first and thought the vivid colors and music were cool. It showed me what the game was about and got right to the point.
Main menu looked good and it was cool having the graphics options.
I thought it was odd that there was a pendulum floating in the air on the main menu, and that the menu was also silent. I noticed a volume slider, and normally adjusting it would give some indication of the game's sound level, but it stayed quiet. Not a big deal, but you know.
Tutorial stage was helpful, but I didn't really fully understand the concept of the red and blue barriers until the first level. In the tutorial I could just go right past the horizontal hazards without even noticing the blue barrier, but in level 1 I was surprised when they caught me. It was then I realized what the tutorial text was trying to tell me.
I found it a little difficult to get the timing right and didn't have much information to judge the timing with. When you're still against the barrier, it's hard to tell how quickly you'll be moving when you emerge, and I never made it past the fire. I think I gave it about 6 or 7 runs, but never got past that fire. Which is maybe a little too tough for the first level?
The skating-on-ice feeling was a little off-putting at first, but I got used to it and started to understand how it added to the difficulty of the game. It just felt bad to see the gap, move toward it, and end up a little off because you were sliding, but that's the designed challenge, it seems.
During the game, I wanted to pause it to speak to someone who walked into my room but couldn't.
All in all, I think it's a fun game with a lot of potential. I liked the game's atmosphere the most and the difficulty in timing the horizontal hazards, the least. It abruptly put an end to things and each run back to try again became drier and drier. I think the foundations of what makes a fun game are there, it just might benefit from some balancing and extra design implementations to help keep it feeling fresh or to keep you engaged, even if you're failing to progress.
Here's a link to the raw video file (.MP4). We've also uploaded a copy to YouTube:
https://drive.google.com/file/d/1GQex5Z9t8fqUHARjJIBHSzQMh2PYvbtR/view?usp=sharing