Pinned Reply (Admin)

Unfortunately we can’t respond to rating reports in real time. It looks like the ratings had only been created hours before you posted your request. Please continue to report any ratings you suspect; that will allow us to process them when we get the opportunity. We tend to process account suspensions in bulk.

I understand you are frustrated, but I just want to clear up a couple of misconceptions you are spreading in this thread, since they can often do more harm than good.

Itch.io doesn’t even require a captcha to make a new account.

The details of our risk mitigation systems are much more complicated than you can tell at a cursory glance. Generally speaking, “human scale” activities are not blocked by captcha; captcha is only suitable for limiting automated activity. Given the relatively small number of negative reviews you have received, it’s likely just a person clicking through and doing it manually. A captcha will have no impact here: an individual motivated to leave a negative review will simply complete all the steps anyway.

I’m quite sure this could be fixed with a minimal amount of work, but at this point I’d just be happy getting these malicious ratings taken down and some sort of answer from itch.io at all.

The nature of threat mitigation is very complicated. There is unfortunately no switch we can turn on that will solve this problem for you. Especially for a free game, if someone has a conflict with you and decides to create accounts to negatively review your game, our automated systems are not going to prevent that activity unless they are using automated systems themselves. You will need to report the pages so a human can review the activity. If you aren’t willing to accept the potential risks that a public rating system entails then I recommend disabling ratings and reviews on your page.

Hope that explains things.


Hi Leafo,

Thank you for responding. 

You're right that most of this stems from frustration, but I think it's an avoidable frustration.

That said, I do want to thank you for taking the time to respond, and with that, I consider my part in this issue concluded. All I wanted was confirmation that a human was handling these issues, and that's obviously the case now.

I do have a few comments in response, but they are aimed at being purely constructive. I'm satisfied with the conclusion as-is.

The details of our risk mitigation systems are much more complicated than you can tell at a cursory glance. Generally speaking “human scale” activities are not blocked by captcha. Captcha is generally only suitable for limiting automated activities.

The point about captcha isn't that captcha itself would be a viable solution for the problem we were facing; it's that essentially no measures are taken at all during account creation, or at the beginning of an account's life, to verify its legitimacy. This is not industry standard by any means.

However, you are correct that this would not stop a malicious actor from doing it manually. That said, it's still a glaring vulnerability that isn't present on the majority of your competitors' platforms.
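To make that point concrete, here's a minimal Python sketch of the kind of lightweight legitimacy check many platforms run at signup. The domain list and field names are purely hypothetical illustrations, not a description of anything itch.io actually does:

```python
# Hypothetical signup gate: require a verified email address and reject
# known disposable-email domains before the account can post publicly.
# The domain list below is illustrative only.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}

def signup_allowed(email: str, email_verified: bool) -> bool:
    """Return True if this new account may interact publicly."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        return False  # disposable address: block until manually reviewed
    return email_verified  # unverified addresses cannot post yet
```

Even a check this small raises the cost of mass account creation without inconveniencing legitimate users.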

The nature of threat mitigation is very complicated. There is unfortunately no switch we can turn on that will solve this problem for you.

There are certainly smaller steps that can be taken to further mitigate this, as seen in the review systems of large forums like F95, which also have to deal with millions of free accounts. Small requirements for leaving reviews, such as a certain amount of activity, a download, or even just making a purchase, could raise the barrier to entry for abuse enough to dissuade bad actors, or at least limit its scope.
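A gate like that could be as simple as the following sketch. The thresholds and field names are made up for illustration; real values would need tuning against how legitimate users actually behave:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int   # account age in days
    downloads: int  # downloads of the game being reviewed
    purchases: int  # paid purchases anywhere on the platform

# Hypothetical thresholds, chosen only for illustration.
MIN_ACCOUNT_AGE_DAYS = 3
MIN_DOWNLOADS = 1

def may_leave_review(account: Account) -> bool:
    """Allow reviews only from accounts showing minimal genuine activity."""
    if account.purchases > 0:  # a purchase is a strong legitimacy signal
        return True
    return (account.age_days >= MIN_ACCOUNT_AGE_DAYS
            and account.downloads >= MIN_DOWNLOADS)
```

The idea isn't to make abuse impossible, just expensive enough that a throwaway account created minutes ago can't immediately leave a rating.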

Moreover, having reviews be almost entirely hidden puts the onus solely on the developer to self-police their own review boards. This does work to an extent against malicious reviews left on other pages, but what about bad actors who use the same idea to boost their own pages? Nobody would ever catch that if they weren't doing anything to trigger the automated bot response.

That being said, even doing something like disabling review submissions from VPNs would be massive for both botting and manual abuse.
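For what it's worth, a VPN filter is mechanically simple; the hard part is maintaining the address list. A rough Python sketch, where the ranges are reserved documentation networks standing in for a real VPN/datacenter feed:

```python
import ipaddress

# Stand-in denylist; a real deployment would consume a maintained feed of
# VPN/datacenter exit ranges from an IP-reputation provider.
VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3 (placeholder)
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2 (placeholder)
]

def is_probable_vpn(ip: str) -> bool:
    """Check whether an address falls inside any listed range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in VPN_RANGES)

def accept_review_submission(ip: str) -> bool:
    """Reject review submissions originating from listed VPN ranges."""
    return not is_probable_vpn(ip)
```

Commercial IP-reputation feeds keep such lists current, so the ongoing cost is mostly in subscribing to one rather than building the check itself.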

Especially for a free game, if someone has a conflict with you and decides to create accounts to negatively review your game, our automated systems are not going to prevent that activity unless they are using automated systems themselves.

Now of course, I may be placing far more importance on this than is warranted, because I frankly have no idea how much ratings even matter as far as itch.io analytics go (I understand that you're probably not at liberty to reveal anything about that, and I won't ask you to).

However, even for a free game, itch.io provides an enormous amount of exposure, and this is my full-time job, as evidenced by my non-trivial Patreon, so it's just as vital as any other platform I host my game on. I don't think it's unreasonable to be concerned about something that could affect my livelihood, and I don't want these concerns written off as an egotistical developer having a hissy fit over nothing.

If you aren’t willing to accept the potential risks that a public rating system entails then I recommend disabling ratings and reviews on your page.

Accepting risk is only one part; mitigating risk exposure is the other side of that same coin. As a platform that facilitates the livelihoods of the same creators who keep your site alive, you do bear some responsibility for helping to prevent abuse of the systems you own. That's coding ethics 101.

But again, I do consider this matter closed with your involvement, whatever the overall outcome may be. I hope you'll consider my feedback, and I thank you for your time.