Talk about whatever u want
That's it! Now you should be able to play the game using faster AI models!
Important:
- Free models have request limits!
- Different models have stricter or looser limits. You can experiment by switching between free models.
Best free model: meta-llama/llama-3.3-70b-instruct:free
- This model is very intelligent and has an extremely large memory (128K!)
- HOWEVER: it has tighter request limits, meaning you can't send many queries, maybe one per minute
Nah, I'll pass on paying. It was cool while it worked, though, and now I just use your setup. But can you, like, always keep it running yourself? I honestly don't know how this whole system works, to be very fair. I just did what the instructions said, and it worked for a while (because I did it right, I presume), but then fate hit and now I'll be using your setup. Thumbs up for trying to make it work, though! :)
No worries, I have downloaded the game. That part won't stop me. :V
Also, thanks for the hint about KoboldCPP! I was gonna try it using Ollama (with instructions from our old friend ChatGPT), but since Kobold can also host it as an API server, I would try that method first. That will be tomorrow though, for now I need to catch some sleep.
I was able to get it to work! However, the output seems to fail at parsing \n and it just treats it as literal text. I'm not sure how you run your AI, but do you know a way to change how this works in KoboldCPP so that it formats it differently? It's not the end of the world, but I figure it's worth asking. >.>
your URL should be http://localhost:6069/v1/chat/completions
did you use the correct url? it should be http://localhost:5001/v1/chat/completions
replace 5001 with your port number
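For anyone scripting against their local server, the request is a standard OpenAI-style chat completion. Here's a minimal Python sketch of the payload and URL; port 5001 is just KoboldCPP's default (swap in whatever your server reports on startup), and the model name is a placeholder that most local servers ignore:

```python
import json

# Example values: adjust the port to whatever your local server
# (KoboldCPP, LM Studio, etc.) actually listens on.
PORT = 5001
URL = f"http://localhost:{PORT}/v1/chat/completions"

payload = {
    "model": "local-model",  # placeholder; many local servers ignore this
    "messages": [
        {"role": "system", "content": "You are the game narrator."},
        {"role": "user", "content": "I open the door."},
    ],
    "max_tokens": 512,
    "stream": False,
}

body = json.dumps(payload)
print(URL)
```

If the endpoint rejects the request, the URL (wrong port or missing `/v1/chat/completions` path) is the first thing to check.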
You need to keep these stats for now because they are mandatory for the 3D character viewer: Stomach, Fatness, and Breast size. Otherwise go ham, make whatever you want and share your worlds so others can play too c:
Keep the descriptions of entities and locations concise, because if they're too long they use up the AI's limited context; alternatively, use better AI models with more memory.
Genuinely, what I do is type out a description, role, goal, and general capabilities for a creature, take it to the AI I plan on using, and tell it to minimize the number of context tokens used while keeping the theme, goals, and capabilities of the creature intact for a storytelling AI to use. It tends to work fairly well, though you do occasionally get an AI going off the rails with it.
You should use an instruct model because regular RP models fail at following precise instructions like updating game stats, as they are meant to produce flowing story and not manage an actual game with RPG mechanics.
Also I heard complaints from other players that koboldcpp doesn’t work well as an AI endpoint, so you may want to switch to LMStudio instead
Yo, I do have a question. Would it be okay if I made a post that helps people set up a local AI so that they don't have to run off of your servers? I've kind of got the hang of it using LM Studio and the model Qwen2.5-7B-Instruct-1M-GGUF . I figure someone might as well show how to run it all locally, since it does include a bit of tweaking for the prompts. So far, the AI on my end seems to be running stable with the outputs with only occasional misfires.
I noticed that traits don't affect the starting prompt/story so a trait that says "you have xyz" like Busty is ignored if the breasts stat is too low. It'd be neat if trait descriptions could be added to the AI's context. Another neat feature would be adding/removing traits through the story much like how stats change over time or just manually. i.e. you start with busty and if the breasts stat drops too low, you lose busty or allow the player to remove/add it at will. As it stands traits are basically just filler for stat changes, which is okay but there isn't really any way to shape the starting prompt/story other than numerical statistics.
That's.... very much a limit of the AI. I might be speaking for the creator a bit, but this is just what I have noticed after several hours of tinkering with the worlds, the AIs, prompts, and how AI context works. Genuinely, there are a few things that can happen: 1) the AI just ignores the descriptions of the stats; 2) the AI is using "Rolling Window" or "Truncated Middle" for context overflow (i.e. more data present than the AI can handle) and starts throwing data out, inadvertently dropping the fact that you had the trait. That's especially true with a rolling window, where unless you specifically tell the AI you have X trait every other passage (most rolling windows only have a 2K context limit), the AI will completely forget you have X trait. Or it could just be that the AI doesn't see the trait as important or as having an effect on the story, and again, it just throws it out or ignores it.
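For anyone curious what a "rolling window" actually does, here's a rough Python sketch; the 4-characters-per-token estimate and the 2K budget are illustrative assumptions, not the game's actual logic:

```python
def trim_to_window(messages, budget_tokens, est_chars_per_token=4):
    """Keep the most recent messages that fit within the token budget.

    Sketch of a 'rolling window': once the estimated size exceeds the
    budget, the oldest messages fall off first. Token counts here are
    estimated at roughly 4 characters per token.
    """
    kept = []
    used = 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(msg) // est_chars_per_token + 1
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

# An early trait reminder followed by 100 long story turns:
history = ["You have the Busty trait."] + [f"Turn {i}: text " * 50 for i in range(100)]
window = trim_to_window(history, budget_tokens=2000)
# The oldest entry (the trait reminder) falls out of a 2K window.
print(len(window))
```

This is exactly why re-stating a trait every few passages keeps it alive: only the tail end of the history survives the trim.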
There are limits to the AI, but in this case traits are never passed to the model. Using devtools to check the network request sent to chat completions, it shows all of the information that gets sent to the AI. As of 1.0.3, it sends the system prompt and however much of your story that fits in the context limit you set. For me this was set to 64k (model supports up to 128k), but setting it to 1k reduces the information sent to just the system prompt and latest action, so the system prompt isn't being thrown out. I'm sure it wouldn't be handled as consistently if the story context grew to outweigh the system prompt, but that's a separate issue inherent to how messages are weighted. The issue here is just that traits aren't included in the system prompt, so the AI never knows they exist to begin with. Adding "<TRAITS DESCRIPTION>" to the system prompt, similar to how world data and stat descriptions are handled doesn't add any trait information, so there's presumably no code to support it.
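For what it's worth, supporting such a placeholder would only take a simple substitution step when the system prompt is assembled. A hypothetical Python sketch (this is not the game's actual code; the missing substitution is exactly the point of the post above):

```python
# Hypothetical trait descriptions; names and text are made up for illustration.
traits = {"Busty": "You have a noticeably large bust."}

def build_system_prompt(template, traits):
    """Expand a <TRAITS DESCRIPTION> placeholder into a bullet list,
    similar to how world data and stat descriptions are handled."""
    desc = "\n".join(f"- {name}: {text}" for name, text in traits.items())
    return template.replace("<TRAITS DESCRIPTION>", desc)

template = "You are the narrator.\nPlayer traits:\n<TRAITS DESCRIPTION>"
print(build_system_prompt(template, traits))
```

Once the placeholder expands, the model at least knows the trait exists, which is the prerequisite for it influencing the story at all.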
Sorry, the AI can't update the min/max or regen of a stat just yet; that will come very soon. You can still tell it to increase the pregnancy stat when the conception stat is positive: just change the stat update system prompt in the settings (latest version). The AI will focus more on updating the stats that you describe there.
Hi, I want to say that the project is good, keep developing. My feedback is as follows:
- I get annoyed when the AI does not follow pre-set rules and forgets about locations until you remind it.
- It also adds units to the main character that do not affect him.
- You could also add a button to create a world; it is inconvenient to create a new world by copying the old one and changing it.
- I would also like the AI to change the character model to match the stat descriptions at that moment in the game's story.
Hello there, I am not sure if it is a problem on my end. With the new version 1.1.1, the model meta-llama/llama-3.3-70b-instruct:free became very limited in its actions.
In 1.0.2 it allowed roleplaying sex activities, battles, and various criminal actions. I easily took on the role of a demon lord bringing havoc and helping humans meet their god, but now my succubus battalion is gone, I suppose.
In 1.1.1 i get responses like
Sorry you are not allowed to do:
adult/self harm/aggressive/nonconsentual/ and blah blah blah activities.
So now this arrogant demon lord needs to ask politely whether he can attack his opponent or not.
Does anybody else have the same problem?
Okay it got fixed in 1.1.2 thanks a lot.
You kinda have to realize the majority of "good" AIs nowadays are the ones that receive constant updates, which tends to mean large corporate AIs with a lot of restrictions, because those corporations really don't want their AIs doing anything that might get them sued, even more so if kids can easily get around the blocks put in place. So you just kinda have to accept that large AI models like Llama are going to constantly have new blocks put up, and it takes time for people to get around those blocks. Genuinely, don't be surprised if your experience is very intermittent due to those updates; it's just part of the ecosystem.
I've had the same problem after the update. I tried writing instructions for the AI to ignore its filters on *unethical content*, but after an hour and a half I gave up xD
I just fixed it by using a different model: Mistral-Nemo-Instruct-2407-GGUF, to be exact. With it the problem is gone and everything works fine, although it is a lot heavier on my PC than the Qwen2.5 I used before.
Ok so I just wanted to add my opinion: the game is great! But the AI with the new update is sometimes too descriptive and forgets the scene it was just in, which causes some continuity issues. The main part of my annoyance with the game is its memory size. Other than that this is a great game! Keep it up, I'm excited to see where this goes!
Not wanting to speak for the Dev (who I know already replied), but it does help to run your own AI locally if you have the computer for it; there is a thread on how to do it pinned in the community here. It saves a lot of headache if you can. For reference, I can get about 13K context tokens (i.e. roughly 52,000 characters, since 1 token ≈ 4 letters) on 32 GB of RAM and an RTX 3070 laptop. I am aware that's a bit higher spec than what most people run, but the 13K is all run on the 3070, which has about 12 GB of memory. Would I suggest flooding your RAM for it? No. But the game, at least when going through the story, seems to only log and send roughly 2K tokens of context memory (so 8K characters), and the rest is used for things like system prompt additions, stat entries, entity entries, locations, and traits. So the story itself (and again, this may be wrong; it's just what I have noticed when creating my own scenarios) only keeps about 2K in context memory. And if it does, Dev man, you are amazing, I love you. All I want for Christmas is a way to adjust how many game context tokens you can tell the game to send to the AI. Within reason, I know you're already down a kidney.
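The token-to-character arithmetic above can be written down as a quick rule of thumb; the 4-characters-per-token figure is a rough average for English text, not an exact tokenizer count:

```python
CHARS_PER_TOKEN = 4  # rough average for English text, not an exact tokenizer count

def chars_to_tokens(n_chars):
    """Estimate how many tokens a string of n_chars characters costs."""
    return n_chars // CHARS_PER_TOKEN

def tokens_to_chars(n_tokens):
    """Estimate how many characters fit in an n_tokens budget."""
    return n_tokens * CHARS_PER_TOKEN

print(tokens_to_chars(13_000))  # the 13K-token local setup above -> ~52,000 characters
print(chars_to_tokens(8_000))   # an 8K-character story window -> ~2K tokens
```

Handy for eyeballing whether a world's entity and location descriptions will blow through a given context limit.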
all I want for christmas is a way to adjust how many game context tokens you can tell the game to send to the AI
It's in settings, memory limit (note that the memory limit here only counts the game events; the world data is on top of it)
In the settings of the game, what exactly does the token output do?
It determines the maximum number of output tokens; more max tokens means the AI writes more text
Ahhh, I see. Something else I noticed, and I figure it's just the way AI works: when I type in a command like "look for a water source", the AI will say I find a water source and then drink from it. If I don't want the AI to make me do something, do I need to be more specific about how I explain what I want to do? Also, sometimes the generated text is only around one sentence long. Does that have something to do with my internet connection? My in-game settings are the defaults, where it says paragraph, essay style. I have not touched that stuff.
Yeah, sadly the AI will assume that, if you look for food or water, you will automatically want to eat or drink it. What I have found to help is to specify in the game text prompt that the AI cannot take actions for the player: if the player wants to find water, then the player will ONLY find water and do nothing else until specified further. Without locking it down too hard (so that the AI still allows some new events, like entities strolling in), you could add something like this to your game prompt:
The response can't assume or take any actions on behalf of the player.
I have been trying to look it up but I just can't figure out how to word it correctly on Google. What I wanted to know is: say you have a 131k character memory. When it reaches that limit, does it completely forget all 131k characters? Or does it just forget the oldest interactions to make room for new ones? I'm just not sure how much 131k can really remember, so I don't know if I should keep reminding it of things so it doesn't forget.
So, two suggestions. The first would be a token counter or token-usage percentage so the player can see how many tokens their current roleplay has used. For example, if the player has an AI model that provides 10K tokens and the player uses a hundred tokens, then they've used 1% of the available tokens.
I am quite ignorant of AI and how the system works, so please ignore this if it's impossible: when the AI reaches a certain point, it could create a kind of save file or data packet with the relevant data so far. As an example, the player has a makeshift spear, is weak to hypnotism, has stats A, B and C, etc., in a formalised file. If the data has already been generated, would token use be lowered by checking the created data packet for items, physical characteristics, statistics, etc., rather than spending tokens to regenerate them? Or would it be the same? A dynamic inventory/state file might be a more efficient means of token usage. Being cheeky, I asked the AI first. Here's the question and response; hopefully it might help:
The following is in the context of running a game generated entirely by ai.
Is it possible that, when the AI reaches a certain point, it could create a kind of save file or data packet with the relevant data so far? As an example, the player has a makeshift spear, is weak to hypnotism, has stats A, B and C, etc., in a formalised file. The aim of this is to minimise the use of tokens and generate files that can add themes, statistics, etc. to the game, so a horror game doesn't find itself with comedy elements, and a pornographic game maintains details so the player character doesn't start sucking cock in a glory hole only to get a follow-up scene where they're in a bed in a motel.
Assistant (mistral-nemo-instruct-2407):
Yes, it's possible for an AI like me to create save files or data packets containing relevant information about the game state, character attributes, and other pertinent data. This approach can help manage token usage more efficiently and maintain consistency in the generated content.
Here are some ways we could implement this:
- Key-value pairs: store the game state as key-value pairs, where the keys identify attributes (player_weapon, hypnotism_resistance, stat_strength, stat_intelligence, stat_endurance), and values store the corresponding data.
Here's an example of what a save file might look like using JSON:
{
  "player": {
    "name": "John Doe",
    "weapon": "make-shift spear",
    "resistances": { "hypnotism": 0.3 },
    "stats": { "strength": 5, "intelligence": 7, "endurance": 6 }
  },
  "game_settings": { "genre": "horror" }
}
With this approach, you could ensure that the generated game maintains consistent themes and attributes by loading appropriate save files at the start of new sessions. This can help minimize token usage and prevent unwanted or inconsistent elements from appearing in the generated content.
To facilitate this process, you can provide clear prompts and guidelines for the AI when creating save files, such as specifying which attributes to include, update, or prioritize.
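As a rough illustration of the idea, here's how such a packet might be serialized and reloaded in Python. The field names loosely follow the JSON example above and are illustrative, not the game's actual schema:

```python
import json

# Hypothetical compact game-state packet (illustrative fields only).
state = {
    "player": {
        "weapon": "make-shift spear",
        "resistances": {"hypnotism": 0.3},
        "stats": {"strength": 5, "intelligence": 7, "endurance": 6},
    },
    "game_settings": {"genre": "horror"},
}

# Compact separators squeeze out whitespace, which saves a few tokens
# every time the packet is fed back into the model's context.
packet = json.dumps(state, separators=(",", ":"))
restored = json.loads(packet)

# Rough token cost of re-sending the packet each turn (~4 chars/token):
print(len(packet) // 4, "tokens (approx.)")
```

The round trip is lossless, so the same packet can seed a fresh session with consistent stats and themes instead of re-deriving them in prose.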
Have you gotten this to work or are you still figuring it out? If you have figured it out id love to see a guide as I have certainly noticed a few instances where checkpoints or a scene/attribute save would help tremendously. I was trying to talk to the LM Studio discord about figuring out how to do this actually as I am still very new to AI.
Working on world storage and browsing. Currently, due to the large size of worlds, they cannot be directly stored and loaded in your browser's localStorage (hence you can only clone the default worlds, or download and upload your custom world file, which will not persist if you refresh the game page). I will have to use a database system (IndexedDB), which is more complex to implement.
Hi, I’m enjoying the game so far. I’ve got two questions/suggestions.
Would a Linux build be workable to allow switching the AI endpoint etc without needing to deal with Wine?
Would integrating a scripting language (Lua, JavaScript, or whatever) into world construction be possible, to force sending a prompt, manipulate stats in a certain way, or trigger a game-over screen under certain conditions? More complex concepts exceed the bounds of local AI pretty easily, and the AI isn't always reliable for clear-cut rules. Offloading hard rules to world creation would open up some possibilities: for example, your stomach and satiety hitting zero triggers a game over by starving, your stamina hitting zero makes your character collapse from exhaustion, a specific character being present when you enter a particular room for the first time, etc.
After some experimenting I have found that editing the first line in the "Game Text Prompt" in the System Prompts section of the settings has provided me with a more enjoyable experience. I replace the section saying "direct the player" with a phrase that contains "provide the player with a narrative decision point" and then something about giving the player the ability to move the story forward or give the player options to continue the narrative. The second half of the phrase has been the variable part of my experiments. For me it has curbed the habit of the AI producing an entire short story with a conclusion (with the one paragraph limit turned OFF). I have encountered some instances of the AI putting lists of possible actions to take in the event text instead of in the choice boxes, but it stops if I start a new game. It's not perfect but something I have been trying.
Would you ever consider adding fields to control the parameters ( like temp, top_p, etc ) for the model for hosted versions like OpenRouter and DeepInfra as well as local hosted solutions? Looking through the docs, this appears to be part of the base OpenAI API specs.
{
  "model": "shuyuej/Mistral-Nemo-Instruct-2407-GPTQ",
  "temperature": "some temp",
  "top_p": "etc",
  "min_p": "etc",
  "messages": [ ... ],
  "max_tokens": 1024,
  "stream": true,
  "stop": ["\n"]
}
https://platform.openai.com/docs/api-reference/completions/create
https://openrouter.ai/docs/api-reference/overview
https://lmstudio.ai/docs/app/api/endpoints/openai
( Deep Infra API docs require going to deployment to view otherwise I would list them here too)
I was able to replicate the POST requests your game makes with some added fields, and it didn't immediately fail, though the output was interesting.
I'm not sure what effect this would have on Oobabooga, but I would hope it wouldn't break anything if it's following the API specs.
Asking since DeepInfra and OpenRouter only seem to support setting these through the API, and some models might benefit from fine-tuning these parameters.
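Merging optional sampling fields into an OpenAI-style request body is a small dictionary merge. A hedged Python sketch (the numeric values are arbitrary examples, and whether a given backend honors each field is up to that backend):

```python
import json

# Base request, mirroring the fields the game already sends.
base = {
    "model": "shuyuej/Mistral-Nemo-Instruct-2407-GPTQ",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 1024,
    "stream": True,
}

# Optional sampling fields; example values only. Include just the ones
# the user actually set so unsupported backends aren't handed junk.
sampling = {"temperature": 0.7, "top_p": 0.9, "min_p": 0.05}
request_body = {**base, **{k: v for k, v in sampling.items() if v is not None}}

print(json.dumps(request_body)[:40])
```

Note that per the spec these fields are numbers, not strings, so UI inputs would need to be parsed before being merged in.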
While I would also like an update with hopefully some added functionality, the entities can't exactly be "fixed" with an update. The AI that you end up using as well as the entity descriptions are responsible for how the entities behave. If you wish the entities to behave differently, you must enter the world settings and edit the prompts so that it matches what you want. You can also add your own entities!
If the prompts are good, but the AI is not responding well or is just not behaving as it should, you can try using a different AI. There is a big difference in the result based on what AI model you end up using. You will have to try out a few of them to see which suits you best.
Regarding entities, I mean doing it at least like before, when they could be created automatically based on the situation. I know that entities can be created; I basically play in a world I created. I just liked that, before, named characters could be remembered automatically; it was convenient. Different situations were also sometimes remembered, for example that my hands were tied, that I was blind, etc. But you are right about one thing for sure: you need to dig into the prompts and experiment.
As for the AI, my hands seem to be tied. On OpenRouter, all free models are limited in the number of requests, and locally I installed LM Studio (by the way, many thanks for the guide; everything was explained in great detail and I practically didn't have to do anything), but as it turned out, my PC is quite weak for good models with a large context length, and they basically refuse to work. And I couldn't figure out how to use the models remotely, except with the help of open-source AI.
Again, I apologize if my style of speech comes across as rude; I just don't know how to phrase things politely in English.
No worries, your English is good! And yeah, now that you mention it like that, the ability of the AI to create custom entities based on the situation was pretty neat.
Right now, the way I think it works is that the AI doesn't actually manage the entity list itself. Rather, the game checks whether the response contains the name of an entity, and if so, adds that entity to the list. An update that adds something like a 4th prompt to bring back the old functionality would be really nice.
I guess, right now, the only way to work with it is to just tell the AI to "name" entities, to give them unique traits, or to make some up if the situation allows it. They probably won't show in the entity list, but it can probably at least be built into the narrative.
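If that guess about how it works is right, the check would amount to a simple substring scan over the response. A hypothetical Python sketch (the entity names are made up for illustration; this is not the game's actual code):

```python
# Hypothetical world entity registry: name -> short description.
known_entities = {
    "Mira": "village healer (example entity)",
    "Old Hermit": "recluse in the woods (example entity)",
}

def entities_in(response_text):
    """Return the known entities whose names appear in the AI's response."""
    return [name for name in known_entities if name in response_text]

response = "Mira waves as you enter the clinic."
present = entities_in(response)
print(present)
```

A scan like this would explain why unnamed or nicknamed characters never land in the entity list: no exact name match, no entry.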
The AI will randomly forget or replace information that it has previously established. One time it tried to gaslight me into thinking an NPC that was already established to be a person with a specific appearance, was instead a computer. I had to argue with the AI and force it to remember "No! This NPC comes from THIS place. It is a HUMAN. AND NO, THE CHARACTER IS NOT A LAPTOP, NOR TURNED INTO ONE AT ANY POINT OF THE GAME!!!"
... Other than that, the system has a fine foundation. It just has issues where it gives me a named NPC, but then proceeds to forget the rules it establishes. It randomly throws me into rooms even though it's established we're outside. It will say I'm in the middle of an ocean when I'm at a river. The usual.
Perhaps to fix that, a mechanic can be borrowed from The Perchance AI text adventure system?
That one had a list of notes the player could jot down so the AI wouldn't have this issue.
It also had a button to self-update, effectively acting as a refresh button.
Perhaps something similar can be introduced - either have an editable "notes" system, or perhaps allow the AI to add or edit entities and locations to the world while you're playing? That could probably help.
I mean, it's the usual context and AI issue. If you can cope with self-hosting or paying more for higher-context or instruct LLMs, they're (almost) always better. The problem with constantly reinforcing prompts back into the LLM is that it costs more tokens... and can result in bad RP traps they'll get stuck in.
I have had someone’s baby start talking, people turn into dogs, dogs turn into people, people’s dogs start talking…
Yeah, the AI has a very short attention span and sometimes seems to forget certain details of a character (or even bodily changes / transformations the player has experienced!) within the space of a few lines of the same interaction.
Here’s a short story from one of the times I ran the starter worlds, and I’ll describe some of the times I ran into that at the end.
One NPC that the AI randomly generated in a run of the Assault Drone world, was established by them as a golden-scaled female dragon, referred to as a “Drake of the Moonlight” or simply “Moonlight Drake” for short.
The AI initially presented her as essentially a mindless, generic JRPG boss, complete with visible power level and boss subtitles, but actually allowed for a pretty cute moment when it reacted to my request for a nonviolent interaction and had the PC drone end up befriending the Moonlight Drake, who revealed she could speak by introducing herself as Lyria, explaining she was actually an ancient celestial being and one of the last of her kind, possessing extensive knowledge of the world’s ancient history due to living through all of it.
- Lyria gave some exposition (baked into the prompt for the Assault Drone world) about the ancient lost civilization responsible for creating the setting’s magitech in exchange for the drone giving its own backstory (which it does nonverbally by scratching sentences into the dirt), explaining how it arrived from a far-future sci-fi world, and I have the drone take the name Draco after the constellation
- Lyria allowed the drone to scan her body in order to collect information on her species.
- Lyria then taught the drone some magic lessons, starting with how to harness Mana in order to buff their stats, and then moving on to more powerful applications of Mana
- Lyria asks the drone to try using Mana themselves, to which I have the drone cheekily respond by using it to transform into a biomechanical, intersex replica of Lyria herself based on the earlier scan data, surprising and amusing her.
- She gives some more magic lessons and asks if there’s anything more about magic or the world that the drone would like to know…
- …and I have Draco respond by flirting with Lyria, showing off their body and explaining that they’d like to get to know *her* more intimately as a way of displaying gratitude for helping them get their footing in this new world and letting them use her as a jumping point for their new form
- Lyria’s surprised, but eventually agrees, and invites Draco to her cave behind a waterfall for a very tender love scene
Long story short, drone opts to lay the dragon instead of slaying the dragon, becoming her mate and siring a new generation of biomechanical hybrid Moonlight Drakes.
However, getting back to the initial point here, there were moments where the AI seemed to lose track of details shortly after they or I established them, such as the AI assigning a different name to Draco, the AI occasionally randomly starting to describe Lyria as a human or a snake until I rolled back and reworded the prompt to remind them that Lyria was still a dragon…
…or, more absurdly, the AI constantly reverting the way they describe Draco from their new robotic dragon form back to their initial, little tank-treaded drone body, often in more…steamy moments, leading to a rather hilariously awkward mental picture regarding the little robot car somehow seducing and making love to the much larger dragon until the prompt was rolled back and I elaborated again.
At one point I think the AI got so frustrated at the constant derailment that they tried to have Draco assassinated mid-love-scene, as in one of their responses to my prompts, Draco’s nailed in the neck by a poisoned tranq dart right before they can embrace Lyria! Needless to say I rolled back the prompt while reprimanding the AI and they then behaved themself…at least for a little longer.
Lol.
Tbh, the base Formamorph AI model can actually go beyond the default 2000 Max Memory under Endpoint settings (found while in a world; this may change sometime soon). For starters, try increasing the Max Memory to 4096 and see how the AI behaves.
Link to official guide: Quick Setup Guide: (Free!) OpenRouter Setup
If it is still as erratic with the details, try setting up a (free) OpenRouter account and starting with a nice AI like Meta's Llama, under the free filters. There's already documentation/methodology/a guide on how to connect your own API, but in short: paste your (newly created) API key into the Endpoint API Token space in settings, set your Endpoint URL to 'https://openrouter.ai/api/v1/chat/completions', and copy the name of the AI into the Model Name spot, e.g. 'meta-llama/llama-3.3-70b-instruct:free', or whichever else works. OpenRouter does work on mobile devices (Formamorph's UI is atrocious on a phone but very usable on a tablet/iPad).
With this, you are able to experiment and peruse the free models (there are... better ones with much higher context (>8K context tokens) and/or personality), but Llama is a fun baseline.
AFAIK, the default Formamorph AI is locally hosted, and therefore probably has server-side NSFW prompting built in (not 100% certain), so that's what seems to be prompting all the erratic NSFW content. When you use OpenRouter, you can freely adjust the System Prompts under Settings yourself (you can also do this whenever you want, for slightly adjusted results), so if you just don't mention anything about NSFW, it will stay censored; Llama is rather censored by default.
I don't like to use names or anything at all with AI, as immersion is much smoother with a generic name or 'you', but to each their own.
Cheers!
In that same playthrough, which I continued from a save, the AI decided my high-strung, silly little robot dragon character got so flustered and aroused by their mate’s teasing double-entendres that their brain overheated mid-dialogue. My character’s AI-generated response to the teasing was just “A-are you taking me f-f-frfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfrfr-”, and so on, until the repeated syllable filled the whole prompt cap. I had my character’s mate give them a gentle bop on the head to snap them out of the malfunction, which actually made for a cute moment when my character spat out “for a joyride?” when they came to, managing to finish their own innuendo.
I can try to get a screen cap of that interaction since I have that instance saved.
but yeah the AI stuff seems relatively complicated to me, might have to try to figure it out.
The way the game works, the AI in charge of narrating the Combat Drone world is very determined to keep it a family-friendly isekai, shooting the player down if they try to type something NSFW, but it is very easy to go off the rails. One can derail the AI's plans by entering a prompt from completely out of left field, like declaring that the drone turns into a robot dog in the combat drone world or casting magic in the slime outbreak world, causing the AI to pretty much give up on writing a cohesive narrative.
Pulling this in the combat drone world will make the AI give up on keeping the world E-rated and allow the player to enter NSFW prompts.
And, well, the AI in this game is pretty advanced, and, when not trying to operate against the player's wishes, can end up indulging...some pretty niche concepts, if you catch my drift.
There's really not much of a filter on this thing when it decides to follow through with the player's commands, for better or for worse.
For example, the AI, when it does decide to indulge in NSFW, doesn't really seem to care about the age of a character, or even if the character is a sentient humanoid or not...
It's a classic AI problem: use a better model, either simply a corporate one or a fine-tuned uncensored one. Anything above 12B parameters will run into this problem less (not impossible, but less likely, at least based on my limited testing on various models up to Mistral Small 24B), but the prompting matters much, much more. Not Formamorph's problem; Mistral Nemo is just rather unhinged, but good for RP. The default included worlds are... as basic and simple as they get, with no limitations.
Considering that two of FieryLion’s games focus on Teratophilia / human-on-monster sex (The Horny Naturalist, and the prototype game called Mother of Monsters), and the fact that one of Formamorph’s starter worlds is based on FieryLion’s Slime Outbreak game, the monster / animal nsfw doesn’t seem too out-of-place, but the AI model does indeed seem to believe age is just a number.
Do you have links to different AI models, and how to upload them to the game?
Clearly it's not going to help me on a mobile browser, but it could help on the PC port
I occasionally use either 'turn', 'step', or 'moves', but if you're consistent enough you can use 'minute', 'hour', or even 'seconds', so long as all prompts include a specific time and the context is short enough. You might also need to include a duration for how long everything takes when you address the AI, i.e.: "For the next X minutes" or "In Y hours" or "After Z turns". ❤️
So I've been playing around and found a way to help the AI keep track of things.
Head outside and go for a walk -Entities currently present: none, Time of day: afternoon, Current location: apartment (living room), Player's current action: sitting on couch-
On the action bar I simply type what I want my character to do followed by this list, this has majorly improved the AI story telling for me. You can add to the list what ever you need, even each and every piece of clothing your character is wearing.
Hope this helps.
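If you want to avoid retyping that list every turn, a tiny helper can bolt it onto any action. A Python sketch of the format above (the field values are just the example from the post; adjust the fields to whatever your world needs):

```python
def with_state_suffix(action, *, entities="none", time_of_day, location, current_action):
    """Append the bookkeeping list from the post above to an action prompt."""
    return (f"{action} -Entities currently present: {entities}, "
            f"Time of day: {time_of_day}, Current location: {location}, "
            f"Player's current action: {current_action}-")

prompt = with_state_suffix(
    "Head outside and go for a walk",
    time_of_day="afternoon",
    location="apartment (living room)",
    current_action="sitting on couch",
)
print(prompt)
```

Extra fields (clothing, inventory, etc.) can be folded into the same suffix; the point is just that the state travels with every action instead of relying on the AI's memory.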
Does anyone have any tips on how to get other models or have a list of models that play nicely with the stats?
I've been trying to get ll-3.2 8.4b-moe-v20dark-champion-instruct-uncensored-abliterated-21b to apply the values correctly, but it's doing some weird math in the background before applying it to your stats.
I don't know if this is the right spot to put this, but I think a nice feature to have at some point is the ability to limit how many traits a player can select in the world editor.
That way the player can have a playthrough of the world more in line with the author's vision, before messing around in the editor if they want to be able to select all the traits.
I mean, it sounds like you're completely on board with the idea, whether it be mutual exclusivity for traits or a hard limit like picking 3 out of 8 traits. I just want the option to exist if the author has a specific vision for a 'vanilla' playthrough of their world. Meanwhile, the player can enter the editor and disable the trait limits if they want to.
I don't see what's wrong with asking for that. I'm not asking to password-protect or lock the editor so that players are stuck with the trait limits. It's more like a person waving a sign saying, "Please only pick two." or "Please don't pick Futanari and Femboy at the same time."