> honestly not seeing any changes between running default ollama, and ollama with the following environment variables:
Hmm, I wonder if maybe the env vars are not being picked up? Tbh I wasn't quite sure how to test it. If you have a good idea then lmk!

> Also, SD WebUI forge is kinda dead. 

Yeah, that's true. Working on adding ComfyUI next! Any other local image gen tools you think I should add?

Sorry, not really an Ollama user. It's just worse than LM Studio if you're not using it as a server. That said, I just compared the speed and the RAM and VRAM footprint, and they seem to be the same regardless of which KV cache quant I chose. That's not really how these things are supposed to work, but I don't know any other way to test it. I mainly use GUIs and stay as far away from the terminal as I can. Actually, installing and uninstalling Hammer AI's Flatpak is the most usage my terminal has seen in months. :)
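For what it's worth: if the KV cache quant were actually taking effect, the cache's memory footprint should shrink roughly in proportion to the element size, so identical footprints are a red flag. A back-of-the-envelope sketch, using made-up 8B-class model shapes and ignoring per-block quantization overhead:

```python
def kv_cache_bytes(n_layers: int, n_ctx: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: float) -> float:
    """Approximate KV cache size: keys + values (hence the 2) stored for
    every layer, context position, KV head, and head dimension."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# Hypothetical model: 32 layers, 8 KV heads, head dim 128, 8k context.
for name, nbytes in [("f16", 2), ("q8_0", 1), ("q4_0", 0.5)]:
    gib = kv_cache_bytes(32, 8192, 8, 128, nbytes) / 2**30
    print(f"{name}: {gib:.2f} GiB")  # 1.00 / 0.50 / 0.25 GiB
```

So going from f16 to q8_0 should roughly halve the cache's share of RAM/VRAM; if nothing moves, the setting probably isn't being applied.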

As for other inference engines for text-to-image models, Automatic1111's SD WebUI is still in active development, but it's not the best. Forge has a fork, but it's far from the same quality as the original, since the original Forge was made by a literal genius with a PhD and all. ComfyUI is the preferred inference engine for most desktop and laptop image generation since it's the most efficient. The easiest way to install any of them on a PC would be Stability Matrix, if you want to do some testing.

So, I've been playing around with the 230 betas (I think I'm at 233 now), and I really think you should be made aware that diffusion t2i models really don't like generating images smaller than their native resolution: 512x512 for SD 1.5 models and 1024x1024 for SDXL and FLUX. That doesn't mean you can't diverge a fair bit, but 256x256 really is not a good image size, and that seems to be what the in-chat image generation defaults to.
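To make the suggestion concrete, a minimal sketch of the kind of clamping I mean: scale any request up to at least the model's native base and round to a multiple of 64, which diffusion backbones handle well. The function name and defaults are made up for illustration, not Hammer AI's actual code:

```python
def clamp_resolution(width: int, height: int,
                     base: int = 512, multiple: int = 64) -> tuple[int, int]:
    """Scale a requested size up so its shorter side reaches the model's
    native base (512 for SD 1.5, 1024 for SDXL/FLUX), preserving aspect
    ratio, then round each side to a multiple of 64."""
    scale = max(1.0, base / min(width, height))
    w = max(multiple, round(width * scale / multiple) * multiple)
    h = max(multiple, round(height * scale / multiple) * multiple)
    return w, h

print(clamp_resolution(256, 256))             # (512, 512): upscaled for SD 1.5
print(clamp_resolution(512, 768))             # (512, 768): already fine, kept
print(clamp_resolution(640, 512, base=1024))  # (1280, 1024): SDXL/FLUX base
```

The point is just that a 256x256 chat request should be generated at the model's native size (and downscaled afterwards if the UI wants a small thumbnail) rather than sampled directly at 256x256.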