Your card might not be able to handle flan-t5-xl, which is what the chatbot is using. You can either wait for a future build or install the open source version and configure the chatbot to use a smaller model.
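If you go the open source route, a minimal sketch of what "use a smaller model" means with the Hugging Face transformers library is below. It swaps flan-t5-xl for flan-t5-small, which fits comfortably on low-VRAM cards (or even CPU); the prompt text is just an illustration.

```python
# Sketch: load a smaller flan-t5 variant instead of flan-t5-xl.
# flan-t5-small is ~80M parameters vs ~3B for flan-t5-xl,
# so it runs on far less memory (CPU works too).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-small"  # smaller drop-in for "google/flan-t5-xl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("Answer the question: what color is the sky?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Quality drops noticeably at the small sizes, but it's a quick way to check whether the out-of-memory problem goes away on your card.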
Without more details it’s hard to say what is going wrong on your end.
I suspect you might be able to run airunner but not Chat AI, since the memory management for diffusers models is better than it is for the flan model.
I will be releasing an updated chatbot (including the demo) soon that allows switching to smaller models, and I'm looking into LLaMA models, which can run in 4-bit mode. That should help lower-powered cards, so keep an eye on this page.