Thank you so much! This is incredibly useful. I will implement these suggestions. Curious, any chance you're also an engineer and would like to get paid to come help work on HammerAI image gen? Would love someone with your expertise to help out directly!

While I appreciate the offer, I'm not an engineer, and I'd hate to scam someone who's giving us this wonderful program for free. As for how I became so knowledgeable: at the start of the year I decided that AI wasn't going anywhere, so I might as well start experimenting with it, and since I didn't have a 4090 to play with but didn't want to be dependent on someone else's server, I had to learn a few things.


It took some time before I discovered that flash attention doesn't work with all models, but when it does it won't degrade the response at all while reducing the VRAM/RAM needed, or that if you can't fit both the KV cache and the model into VRAM, it's better to keep the KV cache in RAM and fit just the model into VRAM. It took a while to learn that ComfyUI was the best and fastest way to generate images on consumer-grade hardware, and a while to find the limits of diffusion models, checkpoints, etc. And none of this info is in any one place. I'm sure that for people who work at Google or Meta it's all very clear, but if you're just now getting into AI, the learning curve is steep, especially on consumer-grade hardware, and especially if, like me, you are NOT an engineer and have to rely on programs like LMStudio, Stability Matrix, or HammerAI. But I'm more than willing to share the little I have learned with you. Cloud models will always be superior to locally run models, but local offers privacy and, in the case of RP, spiciness. Plus, I want to see how hard I can push my hardware.
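For what it's worth, here's roughly what that flash attention plus KV-cache-in-RAM setup looks like if you drive llama.cpp from Python instead of a GUI. This is only a sketch: the model path is a placeholder, and I'm assuming a recent llama-cpp-python build that exposes the flash_attn and offload_kqv options (names may differ between versions).

```python
# Rough sketch: flash attention on, KV cache kept in system RAM, weights in VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",   # placeholder path to a GGUF model
    n_gpu_layers=-1,             # offload all layers the VRAM can hold
    n_ctx=8192,                  # context window; the KV cache grows with this
    flash_attn=True,             # only helps on models/backends that support it
    offload_kqv=False,           # keep the KV cache in RAM instead of VRAM
)

out = llm("Describe the scene in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```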


That being said, if you want a killer feature given the limited context you offer online, see if you can get a model to generate new intros for existing characters. If you go to Chub you'll see that most intros aren't alternatives but continuations of chats from the previous intro. If you can get HammerAI to take the character sheet and the base intro and generate a new intro, you will have sidestepped the core context-window limitation most of your competitors currently face and made HammerAI's characters second only to the large-context behemoths like OpenAI or Grok. A rough sketch of what I mean follows below.
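If it helps, here's the rough shape of the idea as a prompt-building sketch. It isn't tied to HammerAI's actual internals; the field names (name, personality, scenario) and the wording are just placeholders.

```python
# Hypothetical prompt template for regenerating an intro from a character sheet
# and the existing base intro; field names are placeholders, not a real schema.
def build_intro_prompt(character_sheet: dict, base_intro: str) -> str:
    return (
        "You are writing an alternative opening message for a roleplay character.\n"
        f"Character name: {character_sheet['name']}\n"
        f"Personality: {character_sheet['personality']}\n"
        f"Scenario: {character_sheet['scenario']}\n\n"
        "Here is the character's existing opening message, for tone and format only:\n"
        f"{base_intro}\n\n"
        "Write a brand-new opening message set at a different starting point. "
        "Do not continue the message above; start a fresh scene."
    )

# Example usage with made-up values
print(build_intro_prompt(
    {"name": "Mira", "personality": "dry-witted archivist", "scenario": "a rain-soaked library"},
    "Mira looks up from her ledger as the door creaks open...",
))
```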


Thank you so much! This is really so incredibly useful. I just changed the default samplers, and will look at the rest. 

If you ever do learn to code I'd be happy to hire you! Alternatively, I'd happily pay you to help improve the prompts in the app and how I set up different parts of image generation? You wouldn't need to code, we could just be on a call and look through different parts of the app? You have so much more expertise than me here, and I think it could really help out all the HammerAI users! Feel free to DM on Discord (hammer_ai) if you're interested?