
Formamorph

Every choice transforms your body and shapes your adventure · By FieryLion

The best AI model you think you have found.

A topic by pickes created May 13, 2025 Views: 5,681 Replies: 11

That's it. Just want to find some cool models to use.

Clarification: it needs to work in KoboldAI.


I've used Qwen2.5 7B Instruct Uncensored. Many others have recommended this in various community posts. It's a fast, capable all-rounder that does an excellent job and can be very creative with the right prompts.

I've also used

  • Llama 3.1 8B Lexi Uncensored V2: A close contender, but it's a lot slower, especially as history tokens increase. It also has a tendency to convert huge parts of the game text into choice lists because of formatting.
  • DarkIdol Llama 3.1 8B Instruct 1.2 Uncensored: Performance was a lot slower, and it had a heavy bias towards suspense/horror genres even when instructed otherwise by the system prompt.
  • L3.2-rogue-creative-instruct-uncensored-7b: Performance was just bad, even standalone in LM Studio (64GB RAM, 16-core processor, 8GB VRAM, loaded from SSD, and this thing runs at typewriter speed). But it was hands down the best at long-form story... so good, in fact, that it regularly ignored my attempts to be concise (the text prompt asked for 2-5 paragraphs and it routinely shot for 30).
  • llama-3.2-8x3b-moe-dark-champion-instruct-uncensored-abliterated-18.4b: This one ran like the DarkIdol Llama model (decent performance), but was even more biased towards suspense/horror.

And a handful of others whose performance was just not good, or that were only recommended as draft models.
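For anyone wiring one of these up locally, here is a minimal sketch of sending a prompt to a running KoboldCpp/KoboldAI instance through its generate endpoint. The port, path, and field names below are assumptions based on KoboldCpp's usual defaults, so check them against your own setup before relying on them.

import requests

# Minimal sketch: send a prompt to a locally running KoboldCpp instance.
# The default port (5001), endpoint path, and field names are assumptions
# based on KoboldCpp's usual KoboldAI-compatible API; adjust for your setup.
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "You step into the clearing. What do you do next?",
    "max_length": 200,            # tokens to generate per reply
    "max_context_length": 4096,   # keep within the loaded model's context window
    "temperature": 0.8,
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])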

The top one I have found, though it is a bit slow, is "Nemomix Unleashed 12B" by Bartowski. It does a good job of catching any specific prompts you use and acting on them within the limits of the game world. Again, the main issue is that it is slower.

A close second would be either "Starcannon-Unleashed" or "Magnum V4 12B".

I just love gemma-3-27b-it-abliterated.i1-Q3_K_S.gguf


I find the DavidAU (a very prolific 'tuner' of LLMs) stuff interesting, and am distraught that the model that produces amazing output for my flavor of game isn't an instruct model, so it just barfs out a nicely paced tale and then creates another complete narrative when fed a prompt. I've put it out of my mind for the moment until I figure out how to train it to respond to instruction. I think it's the Dark Forest or Dark Universe flavor.

NEW FAVORITE: deepseek/deepseek-chat-v3-0324 on OpenRouter. It's a paid model; I've run $2 to $4 an evening putting a few hundred prompts through it as I tune my scenarios. I have it set to 100k/10k context/output and it very consistently makes it 40 or 50 prompts deep into scenarios while staying consistent with the scenario's outline and carrying early improvisations deep into the ongoing narrative without too many "LLM: Please remember x, y, z" reminders. It tends to break its possible-choices section into single lines, but that is very likely my prompt customization coming back to bite me; it's doing better after I specifically told it to avoid line breaks while presenting its, often elaborate, choices. Deep into a scenario it will start presenting choices as if the scenario had just launched, but a custom prompt, even a simple "continue scene," will often get it back on track for a while. It can be erotic without making the scenario participants immediately rush to humping, though it does tend to shift my "arousal" settings up to the max within three or four prompts for all sorts of reasons (temperature plays a factor in one scenario, and sweating has now joined Adam's apples and forearm hair as one of the most erotic things on the planet. 10 out of 10 LLMs concur).
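If anyone wants to reproduce that setup, here is a rough sketch of calling that model through OpenRouter's OpenAI-compatible chat completions endpoint. The API key, system prompt, and max_tokens value are placeholders standing in for the settings described above, so double-check them against your own account.

import requests

# Rough sketch of an OpenRouter call to deepseek-chat-v3-0324 via the
# OpenAI-compatible chat completions endpoint. The key, system prompt,
# and max_tokens value are placeholders/assumptions.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = "sk-or-..."  # your OpenRouter key (placeholder)

payload = {
    "model": "deepseek/deepseek-chat-v3-0324",
    "messages": [
        {"role": "system", "content": "You are the narrator of a branching adventure scenario."},
        {"role": "user", "content": "Continue the scene."},
    ],
    "max_tokens": 10000,  # stand-in for the ~10k output setting mentioned above
}

resp = requests.post(
    OPENROUTER_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])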

I'm currently beating a 12.??gb version of DavidAU/LLama-3.1-128k-Darkest-Planet-Uncensored-16.5B-GGUF into shape via the system prompt box. It starts off amazing, customizing the skeletons of the character prompts I give it, fleshing out the basic environment in logical ways, and, if fed detailed multi-sentence prompts, it can keep the narrative flowing right up to the point where my characters start interacting physically. Then it has a tendency to get into a deep rut of samey responses repeated endlessly. At this point I'm not certain whether it's a flaw with this particular model or, more likely, I haven't found the magic balance of prompts to keep its context window open to variety.

Dark Champion is my fallback model, but it has several 'experts' in it that quickly start lecturing about explicit content. They can be overridden by refreshing the prompt or "consenting" to dubious content via a direct message to the LLM, but it's still a bit of an immersion breaker, even though it outputs wonderful stuff. Additionally, it has a tendency to have characters start falling in love in the most cliched way possible. I mean, in my real life, sure, give me intimacy and trust and consent and a fascination with my Adam's apple (the training data must be made up of 7gb of soft-core romance porn focused on body hair and Adam's apples), but, dammit, can't these people just be horny for a second?!

I’ve tried a mess of other models and they do ok, and the smaller instruct models often adhere to the game output better, but the DavidAU stuff seems to flesh out my worlds the best.

tldr: DavidAU makes good stuff. Dark Champion is creative in fleshing out a scene, seems to take longer before falling into an intractable narrative rut but wants the characters to stare into each other’s eyes from inches away and build ever increasing levels of “intimacy.” Darkest Planet is initially amazingly creative, horny as hell, but seems prone to locking into narrative ruts the minute characters start to rut.


No matter what I try, there is nothing better than these three free models via OpenRouter: "qwen/qwen3-235b-a22b:free", "deepseek/deepseek-chat:free", and "microsoft/mai-ds-r1:free". No censorship or restrictions, or at least I didn't notice any. There is a setup guide on the game's main page if anyone doesn't know how.

In my opinion, all the models you guys listed above are very small and simply cannot provide a well-developed answer or keep a long narrative going. If you have extra money, try the paid models "sao10k/l3.1-euryale-70b" or "x-ai/grok-3-mini-beta". They respond quickly, are more diverse, and hold the narrative well.

On OpenRouter there is a daily limit on free models per account, 50 requests or something. 10 bucks will upgrade your account and solve this problem by increasing the limit to 1000 requests per day, which is more than enough.
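If you want to see what other free options exist, a quick sketch like this can list the ":free" variants from OpenRouter's public model listing; the response shape is assumed from the documented /api/v1/models endpoint, so adjust the field names if they differ.

import requests

# List the ":free" model variants from OpenRouter's public model listing.
# Assumes the documented GET /api/v1/models response with a "data" array
# of objects carrying an "id" field; no API key is needed for this call.
models = requests.get("https://openrouter.ai/api/v1/models", timeout=60).json()["data"]
free_ids = sorted(m["id"] for m in models if m["id"].endswith(":free"))
for model_id in free_ids:
    print(model_id)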


I'll give that a try. I've managed to kick one of my scenarios into OK shape; 50% of the time it'll go thirty or forty iterations out before it decides on its three favorite paragraphs and writes them forever. I've got an upgraded account already, so I might as well burn some requests. Thank you.

Tell us what you think is better when you try it


Good lord, that's a difference. I tried your suggestions and they are SO fast, but they were lacking some of the initial pick-up-the-ball-and-run-with-it that the larger models seem to have, so I just plugged in Roleplaying and 128k as my model prompts and then fluffed my account up to see what it costs for an evening of debugging my models.

mistralai/mistral-nemo did a fine job. It didn't lecture too much about spicy content and was endlessly redirectable when it started to stray into wanting to restart the scenario or turn it into a PG hand-holding-and-imagining-the-future event. It did seem to like to do odd things with dialogue during explicit scenes ("I-I-I-I-. . . oh-oh-oh-oh-oh" x20 at times), but was willing to go back to a more creative approach if kicked.

anthropic/claude-3.7-sonnet: EDIT: It was Claude 3.7 Sonnet that worked. One of the Sonnets, I think the 4.0, just flat-out refused to play around with a relatively tame isolated-characters-fall-in-lust scenario. 3.7 did work and did an amazing job of being creative, including one long stretch that I'd have had to work on for days to get right, and I like to think that I'm an OK writer with caffeine and my ADD meds on board.

EDIT: Google: Gemini 2.0 Flash initially did almost as nicely as Claude 3.7, but even with a pretty clearly defined scenario it would crash into non sequiturs along with a disdain for English syntax and start creating its own sentence structure.

So, thank you for the suggestion to go back and try the models hosted elsewhere. A 3-second pause while it queues up, then twenty paragraphs of good content, much, much reduced context-locking, and not nearly as much lecturing on sensitive topics as I'd expected. All three models I worked with did need a little prompting to get out of extensive thought bubbles on the topic of safe, sane, and consensual, but once they knew that both characters were OK, they'd run for many narrative cycles before trying to "check in" again.

I likely ran my problematic (as far as overly convoluted prompt problems on my end) scenario for two hundred prompts and my more streamlined scenario for three hundred prompt cycles and it cost me. . . $3.50? I’ll still play with the locally hosted stuff just to see how far I can push it, but the cost/benefit for even the most capable role-play models online is well worth it to me.


Glad to help. I also played around with hosting locally for a long time, but my weak PC can't even come close to the models that OpenRouter offers. Now you just need to find a model that suits your taste, configure a prompt for it, and that's it. I tried mistralai/mistral-nemo and anthropic/claude-3.7-sonnet too, but I didn't like them. You can also go to the "Rankings" tab on OpenRouter, filter the models by the roleplay tag, and try some of them.

I am running my models locally, so I am very much constricted by the specs on my system, but the models still do well.


I've used the following AI models locally. They're great, but they do start to slow down a lot after a few hours. Within the first few prompts, up to maybe ten, you can get a response within a couple of seconds, but once you reach ten or more, they can take a minute to respond.

angelslayer-12b-unslop-mell-rpmax-darkness-v2 - Goes whole hog into it and doesn't guilt you for your fetishes or try preaching feminism at you, but if you tick the one-paragraph setting, you get almost nothing. Untick it and you get a massive page of text, with the second half being possible prompt options and then actual prompts. You also get a lot of code popping in, like the following:

<|im_end|> <|im_start|>

If it weren't for the technical issues, I would absolutely recommend AngelSlayer. Or if someone knows a specific version that doesn't have the code popping up and that works well with prompts, please let me know.
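In the meantime, a crude workaround, assuming the leakage is just stray ChatML markers like the ones above, is to strip them from the generated text before displaying it. A sketch:

import re

# Strip leaked ChatML-style special tokens (e.g. <|im_start|>, <|im_end|>)
# from generated text before showing it. This only cleans up the display;
# it does not stop the model from emitting the tokens in the first place.
CHATML_TOKENS = re.compile(r"<\|im_(start|end)\|>")

def clean_output(text: str) -> str:
    return CHATML_TOKENS.sub("", text).strip()

print(clean_output("The hallway narrows ahead.<|im_end|> <|im_start|>"))
# prints: The hallway narrows ahead.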


[Edit]


I found this version of AngelSlayer:

AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3.i1-Q4_K_S.gguf

It doesn't have the same code-leak issue as above, but it's not as responsive as the v2 version.