
HammerAI

110 Posts · 294 Followers
A member registered Sep 02, 2023

Recent community posts

Not yet, sorry

Once it can run locally on your computer, I will! But the cloud-hosted versions will likely stay paid for a while because they are expensive to run.

Nothing has changed on our end... so maybe other programs are taking up space? I'd suggest closing everything and restarting your computer. Otherwise you'll need to use a smaller LLM.

Will do and update you when ready!

No ETA, but we have a plan and some progress has been made!

Thank you so much! This is really so incredibly useful. I just changed the default samplers, and will look at the rest. 

If you ever do learn to code I'd be happy to hire you! Alternatively, I'd happily pay you to help improve the prompts in the app and how I set up different parts of image generation. You wouldn't need to code; we could just hop on a call and look through different parts of the app. You have so much more expertise than me here, and I think it could really help out all the HammerAI users! Feel free to DM me on Discord (hammer_ai) if you're interested.

Fair! But we do have Proxy LLM mode, so you can use any LLM you want. E.g. you can run llama.cpp or LM Studio as the LLM provider, and then use HammerAI just for the UI and character chat. (And that's all 100% free; you only pay for access to cloud models.)
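
If it helps, here's roughly what that setup looks like under the hood (just a sketch, not HammerAI's actual code): both LM Studio and llama.cpp's llama-server expose an OpenAI-compatible API locally (LM Studio on http://localhost:1234/v1 by default, llama-server on http://localhost:8080/v1), so a frontend can talk to either one like this:

```ts
// Sketch: chat completion against a local OpenAI-compatible server.
// LM Studio's local server defaults to http://localhost:1234/v1;
// llama.cpp's llama-server defaults to http://localhost:8080/v1.
const BASE_URL = "http://localhost:1234/v1"; // swap for your provider

async function chat(userMessage: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder: use the model id your server reports
      messages: [
        { role: "system", content: "You are a roleplay character." },
        { role: "user", content: userMessage },
      ],
      temperature: 0.8,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Hello!").then(console.log);
```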

Mmm, perhaps, though I'm not sure. I'm already about break-even / losing some money with HammerAI, and because I'm just a solo dev building this, I can't afford to lose as much money as I think I would if I made image gen free :( Sorry about that. But we are working on local image gen (i.e. on your computer), so that might be free. Also, you can currently run ComfyUI locally and then have HammerAI call that server, and that is free.
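
For anyone who wants to go the ComfyUI route: ComfyUI listens on http://127.0.0.1:8188 by default, and you queue a job by POSTing a workflow graph to /prompt. A rough sketch (not HammerAI's actual code; you'd paste in a graph exported from ComfyUI in API format):

```ts
// Sketch: queue a text-to-image job on a locally running ComfyUI instance.
// The /prompt endpoint accepts { prompt: <workflow graph in API format> }
// and responds with a prompt_id you can use to poll for results.
const COMFY_URL = "http://127.0.0.1:8188";

// Placeholder: export a workflow from the ComfyUI UI in API format
// and paste the resulting JSON object here.
const workflow: Record<string, unknown> = { /* ...exported node graph... */ };

async function queuePrompt(): Promise<string> {
  const res = await fetch(`${COMFY_URL}/prompt`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: workflow }),
  });
  const data = await res.json();
  return data.prompt_id;
}

queuePrompt().then((id) => console.log("Queued ComfyUI job:", id));
```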

Thank you so much! This is incredibly useful. I will implement these suggestions. Curious, any chance you're also an engineer and would like to get paid to come help work on HammerAI image gen? Would love someone with your expertise to help out directly!

Oo, thank you so much! I bumped the values all up to 1024. Does it work better for you now? Or should I intelligently choose 512 or 1024 based on the type of image generation model?
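
For context on the 512 vs. 1024 question: SD 1.5-family checkpoints are trained around 512x512, while SDXL-family checkpoints are trained around 1024x1024, so the default could just branch on the model family. Something like this sketch (the names here are made up for illustration, not the app's real code):

```ts
// Sketch: pick a sensible default resolution per image-model family.
// SD 1.5-era checkpoints are trained around 512x512; SDXL checkpoints
// are trained around 1024x1024.
type ImageModelFamily = "sd15" | "sdxl";

interface Resolution {
  width: number;
  height: number;
}

function defaultResolution(family: ImageModelFamily): Resolution {
  switch (family) {
    case "sd15":
      return { width: 512, height: 512 };
    case "sdxl":
      return { width: 1024, height: 1024 };
  }
}

console.log(defaultResolution("sdxl")); // { width: 1024, height: 1024 }
```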

> honestly not seeing any changes between runing default ollama, and ollama with the following environmental parameters:

Hmm, I wonder if maybe the env vars are not being picked up? Tbh I wasn't quite sure how to test it. If you have a good idea then lmk!

> Also, SD WebUI forge is kinda dead. 

Yeah, that's true. Working on adding ComfyUI next! Any other local image gen tools you think I should add?

Okay interesting. I think the new update (which is in beta right now at https://github.com/hammer-ai/hammerai/releases/tag/v0.0.206) might help. 

The website runs on Linux servers + Runpod. But it's a different code path than the desktop app, so working on the website doesn't really help with the Electron app.

Glad to hear it's better! I really need to get this update out to users. But there is one bug I know about that I need to fix before I can launch.

Linux AMD ROCm support was added just in case I ever had any users there; I wanted to make sure it was awesome for them! Glad to hear that day came faster than I expected.

Will definitely add some more models, I'm pretty behind. Any specific suggestions? Would love to hear what the best stuff is nowadays.

I will learn more about IQ quants and the KV cache offloading. Is that suggestion for the local LLMs, or the cloud-hosted ones?

Anyways, happy it's better. If you want to chat more, I'm hammer_ai on Discord - would be fun to chat more about finetunes to add / any other suggestions you have.

Nice! I do have this if you'd like, but no need, your nice words are enough! https://www.patreon.com/HammerAI

Thanks for the feedback! So right now our Ollama version is actually really old. Does it work better if you use this beta version? It updates Ollama and should be MUCH faster: https://github.com/hammer-ai/hammerai/releases/tag/v0.0.206

Oh, so I just use the default Electron Forge makers: https://www.electronforge.io/config/makers/flatpak

I can look into putting it on Flathub, but I don't have a Linux machine, so I just haven't actually tested any of the Linux apps myself. Sorry about that. Anything I need to fix with them?
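
For reference, the relevant bit of the Forge config is roughly this shape (a minimal sketch using the stock @electron-forge/maker-flatpak maker; the option values are illustrative, not the app's real config):

```ts
// Sketch: forge.config.ts with the stock flatpak maker enabled.
// Maker options are passed through to the underlying flatpak installer;
// the values below are illustrative only.
const config = {
  makers: [
    {
      name: "@electron-forge/maker-flatpak",
      config: {
        options: {
          // assumption: desktop-entry category for the generated .desktop file
          categories: ["Game"],
        },
      },
    },
  ],
};

export default config;
```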

Can you try closing it and trying again? Maybe turning the computer off and on? Sorry about that.

Thanks for the kind words! Okay, I'll add a way to use an image gen proxy API. We already have it for LLMs so that you can use OpenRouter / Featherless / etc., so this fits the pattern well.

Yes, I want to! But interesting, so you'd be willing to set up AUTOMATIC1111 locally, and then use HammerAI just as the frontend UI? I can definitely do that; it's not that hard.
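
For anyone who wants that setup now: AUTOMATIC1111's WebUI exposes an HTTP API when it's launched with the --api flag (it listens on http://127.0.0.1:7860 by default), so a frontend basically just needs to do something like this (sketch only; parameter values are illustrative):

```ts
// Sketch: text-to-image via a locally running AUTOMATIC1111 WebUI.
// Requires the WebUI to be started with --api; images come back
// base64-encoded in the response.
const A1111_URL = "http://127.0.0.1:7860";

async function txt2img(prompt: string): Promise<string> {
  const res = await fetch(`${A1111_URL}/sdapi/v1/txt2img`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt,
      negative_prompt: "blurry, low quality",
      steps: 25,
      width: 512,
      height: 512,
    }),
  });
  const data = await res.json();
  return data.images[0]; // base64-encoded PNG
}

txt2img("a portrait of a fantasy knight").then((b64) =>
  console.log(`got image, ${b64.length} base64 chars`)
);
```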

Oh, sorry! Can you come chat in the Discord? 

PS. I offer a 100% refund policy, no questions asked. So just DM me on Discord or send me an email if you want a refund. Sorry again.

Could you join the Discord to chat more? We can help you in there!

Would love if you can join the Discord and post there! Then we can chat more about the feature.

You probably don't have a powerful enough computer for the model you chose. Can you try a smaller one?

Uncensored! You can see the content policy here: https://www.hammerai.com/terms

Sure, DM me on Discord, hammer_ai is my username. Share your resume + GitHub + some projects you've worked on, please!

So I have it mostly working already, but not yet polished up. I do want it really badly, and will update you when it's ready. Sorry for the long delay.

Sorry about that! It's a weird issue people get into related to Discord interfering with our update. The solution is to either restart your computer or kill all "HammerAI" and "Ollama" processes under Task Manager. Sorry again.

No specific timeframe, sorry! As a one-person project, I can only do so much 😭 

PS. If I could find someone to work with me, I'd definitely go faster, so if anyone reading here is a dev, please reach out!

Okay, things are working much better now! Still not perfect, but back when this was posted the success rate was only ~85%.

Thank you, that feels nice to hear. I'm just a solo dev building this, trying my best. And I do feel like unlimited free messages to any character with no login required is pretty good; most other sites make you log in at least.

Hi! Sorry about that, usually that's because the character you're chatting with wasn't written very well. If you try with one of these is it any better? https://www.hammerai.com/characters?tag=Featured

In terms of the paywall: it is 100% free to chat with the cloud-hosted LLM Smart Lemon Cookie 7B, or with any local LLM! But I am a solo dev building this, so I made saving chats a paid feature, sorry about that. If it makes you feel better, I spend all the money I make to pay other contractors to help build it with me. And I have a 100% no-questions-asked refund policy if you end up not being happy with it. Again, sorry for the issues.

Sorry, yes, there are paid options! But it is 100% free for unlimited chats with the Smart Lemon Cookie 7B cloud-hosted LLM, or with any locally hosted LLM. You do have to pay for saving chats, better cloud-hosted models, or image generation.

For some context, I'm just one person building this as a side project, and I use all the money I make to pay for renting GPU servers and to pay coders to help me out.

Sorry if the app's not for you though, I totally get it. I will say though, if you want to try out the paid features of the app, I offer a 100% refund policy, no questions asked.

https://www.hammerai.com/plans

Not as an app yet, but you can use it just as a website!

It depends! Lower-parameter models (e.g. 7B and 8B) forget more, and higher-parameter ones (20B and 70B) forget less.

Hi, so sorry about that. I think maybe there is an Itch.io bug? It shows me that the sensitive content warning is on...

Yes, it is safe! But you don't need to take my word for it, you can also ask in the Discord, or maybe read this review from someone on Reddit? They said:

> All in all it is one of the best options for a locally installed AI chatbot to use privately. Using wireshark, iftop, and other tools I didn't notice any unnecessary calls or shady traffic. Which is awesome. However, please be aware you lose some of that privacy as you need to log to discord to access basic docs for the app.

https://www.reddit.com/r/HammerAI/comments/1i2a9tp/60ish_day_review/

PS. I'm working on adding docs to the site to help address their privacy concerns (they don't like Discord).

Hi, image generation is now live! Right now it's only with cloud models (and is paid), but local is coming (and will be free).

Hi! Is this for the desktop app? If so, on Windows the data is under these two folders. Just delete them both and HammerAI will be gone:

  • "C:\Users\*your-username*\AppData\Local\HammerAI\"
  • "C:\Users\*your-username*\AppData\Roaming\HammerAI\"