
GRisk

341 posts · 1 topic · 1,826 followers
A member registered Jan 14, 2020

Recent community posts

Thanks for understanding, I hope the key helps a little.

Other languages as targets are not supported by Whisper, but I plan to add another model for that.

Yeah, once I mess around with the code a little more.

Not right now, perhaps in a future update.

Not sure why other GUIs use that much RAM, but you're welcome.

Patreon only charges you at the end of the month.

If you are having this problem, it's a hardware bug. You can use the float32 option as a workaround, but it will require more VRAM.
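
As a rough illustration only (this is not the GUI's actual code, just a sketch with the Hugging Face diffusers library and a placeholder model name), the difference is simply which precision the weights are loaded in:

    import torch
    from diffusers import StableDiffusionPipeline

    # Half precision (float16) roughly halves VRAM use, but some cards
    # (notably the GTX 1660 series) produce black or garbled images with it.
    # Loading the weights in full precision (float32) works around that
    # hardware issue at the cost of roughly double the memory.
    model_id = "CompVis/stable-diffusion-v1-4"  # placeholder model name

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.float32,  # float32 workaround; use float16 on cards that handle it
    ).to("cuda")

    image = pipe("a castle on a hill, detailed painting").images[0]
    image.save("output.png")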

I just did a test and you are right. I was under the impression that anyone could message me on Patreon, but it seems that only patrons can. The worst part is that there is no option for me to open private messages to everyone, which sucks.
Well, there is also my email in any case: griskai.yt@gmail.com

I think your question was when 0.5 would appear here? For now I don't have an answer; I will keep developing it on Patreon and at some point it may appear here.

DALL-E 2 doesn't have its source code or model released as open source, so it's impossible for now.

Dreambooth models are still not supported. I still need some time to make them work.

It really shouldn't use that much memory for a 512x512 image. Is there anything else using the computer's VRAM?

This bug is more of a hardware problem than a software one. The fix is pretty much a workaround for the hardware bug.

Yeah, it will be incorporated into the GUI eventually.

Please, no fighting in the comments. If anyone has any questions, you can always send me a private message on Patreon.

There are still too many problems and missing options for it to be a full product. Possibly in the future.

Hi there, can you post this again without changing the font size? It really pollutes the comments.

Hi there. You will be able to select the graphics card in 0.51, but only on Patreon for now.

You must have a card that doesn't like half precision models. 0.51 on Patreon should fix this.

The Patreon version uses less memory. It should run with 4GB of VRAM at 512x512.

There is a tutorial on Patreon. If you have any questions you can send me a PM via Patreon.

You can get new downloads as long as you stay subscribed, and you can keep using the application forever if you cancel the sub. The Patreon version already has a lot of new stuff and uses less memory to run the model.

It's possible to run PyTorch scripts on AMD; my rife-app runs on AMD as well. But it takes some extra work.
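
For what it's worth, a minimal check on a ROCm build of PyTorch (assuming the ROCm wheel is installed on Linux, which is the extra work) looks the same as on NVIDIA, since AMD cards are exposed through the regular CUDA device API:

    import torch

    # On a ROCm build of PyTorch, AMD GPUs show up through the normal
    # torch.cuda API, so the same script runs unchanged on AMD or NVIDIA.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print("running on:", torch.cuda.get_device_name(0) if device == "cuda" else "CPU")

    model = torch.nn.Linear(8, 8).to(device)
    x = torch.randn(1, 8, device=device)
    print(model(x).shape)  # torch.Size([1, 8])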

I never experimented with Linux; I would first need a machine running Linux as well.

Patreon only charges you at the end of the month, so you are free to test the tiers if you like without paying anything. In the future they may become free updates, but not for now.
There are already some updates in the Patreon version, like better memory usage and img2img.

It's a little hard since it requires some code changes and I need a machine running Linux, but not impossible I think.

Hi there, in the Patreon version it is already possible. It may take some time to become public.

This license is a copy from the Stable Diffusion team; this line is just about the AI model.
Images generated by the model are explained in the second line of the license. You can use the generated images as you like, no credit needed, but if you have more questions about the license, you need to talk to the Stable Diffusion team.

For now it's only one copy per prompt; you can repeat the prompt on multiple lines if you want.

Glad you are enjoying it.

Actually, it's a miracle that a model like that can run on a consumer card. Even so, memory usage will probably improve a little in the future.

Perhaps there is an empty line in your prompts window?
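
Purely as a hypothetical example (not the GUI's actual code), the idea is that a blank line in the prompt box would otherwise be submitted as an empty prompt:

    # Hypothetical illustration: drop blank lines from the prompt box text
    # so an empty line is not submitted as an empty prompt.
    prompt_text = "a red fox\n\na blue bird\n"
    prompts = [line.strip() for line in prompt_text.splitlines() if line.strip()]
    print(prompts)  # ['a red fox', 'a blue bird']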

You can already use it, if your card supports it.

It's a hardware problem with half precision, I think. The fix is to use the full precision model, but it will take even more memory to use the app.

Not for now, probably in the future.

Improvements to resolution will be available in a few updates, thanks for the compliment =)

I didn't think anyone would think of me for this, thanks.
Yeah, img2img will be available in a few updates.

Are you using a 1660 card? There seems to be a bug in it with half precision models.

Yep

Can you open CMD and then run the .exe from it? That way the window will not close and you can send me the error.

In the future, and only to speed up generation when samples >= 3; it would be very hard to use 3 cards to generate a single high resolution image.
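
As a rough sketch of what I mean (hypothetical scheduling code, nothing that exists in the GUI yet): independent samples can simply be handed out one per card, while a single high-resolution image would need its whole latent on one device during denoising.

    import torch

    # Hypothetical sketch: round-robin independent samples over the available
    # cards. Each image is generated separately, so this split is easy;
    # splitting one high-resolution image across cards is much harder.
    devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())] or ["cpu"]
    samples = ["seed 1", "seed 2", "seed 3"]

    for i, sample in enumerate(samples):
        device = devices[i % len(devices)]
        print(f"would generate {sample} on {device}")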

There have been a few reports like this. It seems that the 1660 needs to run at full precision, but I'm not sure if 6GB of VRAM will be able to run a good resolution at full precision.