Recent community posts

Thank you for your response (and, uh, for other responses so far too ^^).

I understand the point about the design goal of the system, and that it exists primarily to compare entries against one another. My point about agency likely stems from the emphasized suggestion to "participate in rating games" and the general encouragement to participate. It's a very valid message, but it also made it sound like getting one's game above the median is primarily the participant's responsibility, rather than (closer to reality) a combination of the participant's involvement and factors outside their control.

I'd like to point out there are two key aspects to the Jam experience:

  • "global" aspect - what kind of games were created and whether top-ranked entries deserve their spots (since it's the top places people are mostly excited about)
  • "individual" aspect - how one's entry performed, both in terms of feedback and ranking

The median measure seems focused on improving the global aspect - making sure that ratings are fairer.
Except it's a finicky measure, because:

  1. You mention a 6-votes entry ranking above a 200-votes entry, which I presume refers to my 80% (rounded-up) measure; but this implies the median is 7, and a 7-votes entry ranking above a 200-votes entry doesn't seem like a massive improvement.
  2. In a recent (non-itch) Jam I participated in, there were 25 entries with votes from 19 entrants + 4 more people. Most of them ranked nearly all entries (people couldn't rank their own entry). The median-and-above entries got 20-22 votes, the below-median entries got mostly 18-19 votes (two entries got 14 and 16 votes). Also, one of the 19-voted entries was 5th out of 25, making it a relevant contender.
    With the strict median measure, an entry getting 19 votes would have its score adjusted while the 22-votes (most-voted) entry would not. It means that, depending on the situation, 19 is deemed too unreliable vs 22, while in another Jam 7 seems reliable enough vs 200. Now, even with my proposal it would be a 16-22 vs 6-200 spread, but it goes to show that the median system adds extra noise - potentially near top-ranking entries, too - when all entries are voted on almost evenly. The difference is that the raw median semi-randomly punishes 11 out of 25 entries, while with my adjustment only 1 of 25 entries qualifies for score adjustment - that's 10 fewer! (See the sketch after this list.)
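
To make the difference concrete, here's a quick sketch in Python. The vote counts are reconstructed to roughly match the Jam described above (the exact distribution is my guess; only the medians and extremes are from memory):

```python
import math
import statistics

# Vote counts reconstructed to roughly match the 25-entry Jam above:
# most entries got 18-22 votes, two outliers got 14 and 16.
votes = [14, 16] + [18] * 4 + [19] * 5 + [20] * 6 + [21] * 4 + [22] * 4

median = statistics.median(votes)        # 20
threshold = math.ceil(0.8 * median)      # 16

below_median = sum(v < median for v in votes)        # 11 of 25 adjusted
below_threshold = sum(v < threshold for v in votes)  # 1 of 25 adjusted

print(f"median={median}, 80% threshold={threshold}")
print(f"strict median adjusts {below_median} of {len(votes)} entries")
print(f"80% threshold adjusts {below_threshold} of {len(votes)} entries")
```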

I guess the problem of extreme-voted entries can be tackled two-fold (maybe even with both measures at once):

  • Promote the high-ranking (e.g. top 5%) low-voted entries, so that more people will see them and either prove their worth or knock them off their high horse (a rough selection sketch follows this list). People don't even need to be specifically aware that these are near-top entries (especially since the temporary score isn't revealed); what matters is that they'll play, rate and verify.
    It's sort of "unfair" for poorer-quality entries, but the odds are already stacked against them, and it can improve the quality of the top rankings by whittling down the number of undeserved all-5-star outliers. And let's face it - who really minds if a 6-voted entry with all 2s ranks above a 200-voted entry with mostly 2s and some 1s?
  • More work in this one, but with great potential to improve the jam experience - streamline the voting process.
    In that Jam I mentioned, we have a tool called "Jam Player". It's packaged with the ZIP of all games, and from there you can browse the entries, run their executables, write comments, sort entries, etc. As the creator of the Jam Player I might be blowing my own horn, but before it existed, lots of voters played only a fraction of the games. Ever since its introduction, the vast majority of voters play all or nearly all entries, even when the number of entries reaches 50 or so (with 80 entries, the split between complete-played and partially-played votes was more even, but still in favour of complete-played).
    I imagine a similar tool for an integrated voting process could work for itch.io - obviously there are lots of technical challenges between a ZIP-embedded app for a local jam and a tool handling potentially very large Jams, but with itch.io hosting all the Jam games it might be feasible (compare that with Ludum Dare and its free links). With such a player app, the same people would play more entries, making the vote distributions more even and thus more reliable (say, something like 16/20 vs 220 instead of 6/7 vs 200).
    Perhaps I should write up a thread on the itch.io Jam Player proposal...
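
For illustration, here's roughly what the promotion idea from the first bullet could look like. This is a sketch of hypothetical selection logic, not anything itch.io actually does; `Entry`, the 5% cutoff and the vote threshold are all placeholders of mine:

```python
import math
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    raw_score: float  # current (hidden) average rating
    votes: int

def entries_to_promote(entries, vote_threshold, top_fraction=0.05):
    """Pick low-voted entries whose provisional score lands in the top
    `top_fraction` of the jam, so the rating queue can surface them more often."""
    ranked = sorted(entries, key=lambda e: e.raw_score, reverse=True)
    cutoff = max(1, math.ceil(len(ranked) * top_fraction))
    return [e for e in ranked[:cutoff] if e.votes < vote_threshold]

# Hypothetical usage: a 6-vote all-5-star outlier gets queued for more eyes,
# so its rating either survives more scrutiny or comes back down to earth.
pool = [Entry("outlier", 5.0, 6), Entry("solid", 4.2, 180), Entry("mid", 3.1, 150)]
print([e.name for e in entries_to_promote(pool, vote_threshold=20)])  # ['outlier']
```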

The 80% median seeks to improve the individual aspect - making sure it's easier to avoid the disappointment of getting one's own entry score-adjusted despite one's efforts.

If someone cares about not getting their score adjusted and isn't a self-entitled buffoon, they'll do their best to participate and make their entry known. If someone doesn't care, then they won't really mind whether their entry gets score adjusted or not. The question is, how many people care and how many don't.
If fewer than 50% of people care, they'll likely end up in the higher-voted half of entries. Thus, no score adjustment for them, the lower-voted half doesn't care, everything is good.
However, if more than 50% of people care, some will inevitably land in the score-adjusted lower half. E.g. if 70% of people cared about score adjustment, then - since only the top 50% avoid adjustment - roughly 20% would get score-adjusted despite their efforts not to. The score adjustment might not even be that much numerically, but it can still have a psychological impact like "I failed to participate enough" or "I was wronged by bad luck". I'm pretty sure it would sour the Jam experience, which goes against the notion of "the jam is the best experience possible for as many people as possible". Given that 60-70% of Ludum Dare entries end up above the 20-votes threshold, and that 19 of 25 entrants voted in the Jam I mentioned, I'd expect at least half of the participants in a typical jam to care.

Do note that in the example Jam from earlier, 9 of the 19 voting entrants would get score-adjusted under the 100% median system despite playing and ranking all or nearly all entries - most of that with quality feedback, too; you can hardly participate more than that. Now, I don't know about you, but if I lost a rank or several to the score adjustment despite playing, ranking and reviewing all entries - just because someone didn't have time to play my game and its vote count fell below the median - I'd be quite salty indeed.
With the 80% median system, all voting entrants would pass, at the cost of a 16 vs 22 variance, which isn't that much worse than the 20 vs 22 variance (the least-voted entry belongs to an entrant who didn't vote).

To sum it up:

  • if the vote count variance is outrageous in the first place (like 6/7 vs 200), then sticking to the strict median won't help much
  • if the vote count variance is relatively tame (like 18 vs 22), then using the strict median adds more noise than it reduces
  • provided that someone cares about score adjustment and actively participates to avoid it, the very fact of score adjustment can be souring/discouraging, even if the adjustment amount isn't all that much
  • rather than adhering to the strict median, the vote variance problem may be better solved by promoting high-ranked low-voted entries (so that they won't be so low-voted anymore) and by increasing the number of votes per person through a smoother voting process (like the Jam Player app; this one is ambitious, though)
  • with more votes per person and thus a more even distribution of votes, we should be able to afford some leeway in the form of the 80% median system

Also, thanks for the links to the historical Jams. Is there some JSON-like API that could fetch past Jam results (entry, score, adjusted score, number of votes per entry) for easier computer processing? Scraping all this information from webpages might be quite time-consuming and transfer-inefficient.
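
In case no such API exists, a scraping fallback might look something like the sketch below. To be clear, the CSS selectors are pure guesses on my part, not a documented itch.io page structure - they'd need to be checked against the actual results page markup:

```python
# Dependencies: the common `requests` and `beautifulsoup4` packages.
import requests
from bs4 import BeautifulSoup

def fetch_results(results_url: str):
    html = requests.get(results_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for row in soup.select(".game_rank"):  # hypothetical selector
        results.append({
            "title": row.select_one(".title").get_text(strip=True),
            "score": float(row.select_one(".score").get_text(strip=True)),
            "votes": int(row.select_one(".votes").get_text(strip=True)),
        })
    return results
```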

> I want to emphasize that avoiding the score adjustment is not a design goal of this system. The point of the adjustment is to allow entries to be relatively ranked in the bottom half by minimizing the randomness factor by scaling down scores with lower levels of confidence.

It is not, but I believe enforcing the score adjustment isn't a design goal of this system, either.

The problem with the current system is that even if everyone puts in at least close-to-median-sized effort, they still might get their score adjusted semi-randomly, with some entries landing one or two ratings below a median of, say, 20 (just like rolling a 6-sided die 60 times doesn't mean every number will appear exactly 10 times). It can lead to a somewhat ironic situation, where the system designed to minimise the randomness factor introduces another randomness factor (i.e. which entries end up with an adjusted score and which don't). After all, using the median means that - excluding entries with exactly the median number of votes - the lower-voted half of entries will get its scores lowered no matter what.
Also, while the median increasing by 1 might not be significant with a median of 100 votes, the score adjustment might be more significant with a median of 20. And considering the median depends on how many games people can play within the voting time (as opposed to the number of entries), I'd wager a median of something like 10-20 across 200 entries wouldn't be all that unusual. With medians this low, the randomness factor of score adjustment becomes particularly prominent - possibly even more so than the few-votes variance it's designed to minimise.
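
To put a number on that semi-randomness, here's a minimal simulation. It assumes every entry gets identical exposure - each of 200 potential voters rates any given entry with probability 0.1, so ~20 expected votes; the numbers are made up for illustration:

```python
import random
import statistics

random.seed(1)
N_ENTRIES, N_VOTERS, TRIALS = 200, 200, 200

adjusted = []
for _ in range(TRIALS):
    # Identical exposure for every entry; the realised vote counts still
    # scatter, just like the die rolls above.
    votes = [sum(random.random() < 0.1 for _ in range(N_VOTERS))
             for _ in range(N_ENTRIES)]
    med = statistics.median(votes)
    adjusted.append(sum(v < med for v in votes))

print(f"on average {statistics.mean(adjusted):.0f} of {N_ENTRIES} "
      f"equally-exposed entries fall below the median")
```

Even with perfectly equal effort and exposure, close to half of the entries end up below the median every time, purely by chance.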

Another randomness factor comes from the indirect relationship between giving and receiving - some people might get lucky and receive 100% reciprocal votes, while others might often give their feedback to people who aren't interested in voting at all. I'm not sure whether itch.io more prominently displays entries whose authors have a higher "coolness rating" (i.e. how much feedback the author gave vs how many votes their entry received); it would definitely add a stronger cause-effect link to the giving-receiving relationship.
On the other hand, I imagine public voting adds some extra randomness to giving-receiving, because there's no way to vote on a public voter's entry in hopes of receiving a reciprocal vote. I suppose public voting shifts relevance away from feedback-giving towards self-promotion (the higher the proportion of public voters, the more self-promotion matters compared to feedback-giving). I'm not really calling to remove public voting altogether, rather pointing out another reason why voting on other entries might not always be the most effective or reliable method of getting past the median threshold.

I do not advocate for 100% of entries avoiding score adjustment most of the time. I do, however, believe that if I take the time to cast a median number of votes, I should reliably be able to avoid score adjustment (say, 95%+ of the time). Thus, when checking the numbers for previous Jams, it might be worth finding out how votes given correlate with votes received - in particular, what fraction of the median I'd be guaranteed to receive 95%+ of the time if I voted on a median number of entries. This could give a more fitting median multiplier than the feeling-in-the-gut 80% I initially proposed (assuming my proposal would be implemented in the first place).
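
Something like this could be computed from the historical data, assuming one can get (votes given, votes received) pairs per entrant - the function below is just my sketch of the analysis, not an existing tool:

```python
import statistics

def reliable_fraction(rows, coverage=0.95):
    """rows: (votes_given, votes_received) pairs, one per entrant, for a past jam.
    Among entrants who cast at least a median number of votes, find how many
    votes the unluckiest 5% still *received*, as a fraction of the median."""
    med_given = statistics.median(g for g, _ in rows)
    med_received = statistics.median(r for _, r in rows)
    diligent = sorted(r for g, r in rows if g >= med_given)
    # crude (1 - coverage) quantile of votes received by diligent entrants
    floor_votes = diligent[int(len(diligent) * (1 - coverage))]
    return floor_votes / med_received

# If this comes out near 0.8 across several past jams, it would support the
# 80% multiplier; if it's consistently higher, the threshold could be stricter.
```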


My fix isn't so much meant to ensure that no one gets their entry punished, but rather that everyone has a reasonable chance of avoiding the punishment with proper effort.

The current system is somewhat volatile in that increasing the median by 1 means previously median-ranked games become punished. So people are sort of encouraged to get their game ranked significantly above the median, which itself eventually drives the median up. Also, voting/rating other entries isn't 100% efficient - the score depends on the game's own ratings, not the author's votes, and not everyone returns the voting favour. So it further compels people to push the median up. It might help the number of votes, but it makes things more stressful/frustrating for participants (and perhaps makes them seek shortcuts by leaving lower-quality votes).

My fix is meant to stabilise the system. People might still aim for just the 80% threshold (or maybe 90%), but then they end up on shaky ground. However, those who get their votes to median level are in a comfortable position - their entries can still absorb the median growing by an extra 25% (e.g. increasing from 16 to 20), and they don't need to rev up the median (potentially punishing other entries in the process) to make sure they're on the safe side.

If we add to that:

  • clear information on the game's page about the current median, the threshold and what it means for the entry
  • a search mode listing entries voted below the median (not the threshold, because the median is the safer target), sorted by the author's coolness rating (to add extra cause-and-effect between voting on other entries and getting one's own entry voted on)

then everyone should be able to safely avoid punishment by putting in roughly median-sized effort. And given how active some voters can get - note that a single voter can increase the median by 1 by voting on all entries - median-sized effort is by no means insignificant.

How about keeping the median as a point of reference, but making the no-penalty threshold slightly lower?
For example, 80% of the median (rounded up) as opposed to 100% of the median. A game with 8 votes shouldn't have much higher variance than a game with 10 votes, likewise a game with 80 votes rather than 100 (compare that to, e.g., 2 votes vs 10 votes). Getting a vote count below that threshold decreases the rating like now, but with 80% of the median as the point of reference (a rough sketch of what I mean follows).
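
To spell out the mechanics: the thread doesn't give the exact current penalty formula, so in this sketch a simple linear scale-down stands in as a placeholder for "decreases the rating like now"; the only thing my proposal changes is the reference point.

```python
import math

def adjusted_score(raw_score, votes, median, fraction=0.8):
    """Sketch of the proposed rule. The linear scale-down below is a
    placeholder for the current penalty; my proposal only moves the
    reference point from the median to 80% of it."""
    threshold = math.ceil(fraction * median)  # 80% of median, rounded up
    if votes >= threshold:
        return raw_score                      # no penalty
    return raw_score * votes / threshold      # placeholder penalty curve

# With a median of 10: 8 votes clears ceil(0.8 * 10) = 8; 2 votes is scaled hard.
print(adjusted_score(4.5, 8, 10))   # 4.5
print(adjusted_score(4.5, 2, 10))   # 1.125
```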

With that approach, it's perfectly feasible for 100% of entries to avoid the penalty, as long as the lower half of most-voted entries stays in the 80%-100% of median range. As it is now, an entry must either be in the top half of most-voted entries or get exactly the median number of votes in order to avoid the penalty.
Also, showing the median and the no-penalty threshold during voting - along with which entries have how many votes - would let participants and voters make an informed decision about how many votes need to be gathered/cast to keep oneself/others out of the penalty zone without overcommitting to the jam.

By still using the median as a point of reference, the system keeps its scalability. By lowering the threshold to 80%, the system doesn't penalise nearly all entries in the lower-voted half and isn't so sensitive to the median changing by just 1 vote. Finally, by keeping the threshold around 80% rather than 50% or 20% of the median, we still keep the rating variance comparable between the least-voted non-penalised entry and the median-voted entry.

Thoughts?

I liked the concept and the overall presentation, though I feel there could be a few more graphical effects sprinkled here and there. Also, maybe go for higher-res graphics for the llama and sparkles, considering the squares are smoothly rotated anyway (note: I prefer hi-res graphics over pixel art in general, given similar quality).

Going through one tunnel after another was pretty fun for a while, but the game never seemed to end while not offering any extra variation compared to, say, wave 5 (just going faster and with sharper "turns"). I eventually stopped at wave 17. If you were to expand upon the idea, I recommend adding extra gameplay elements to keep things fresh.

Like others, I strongly recommend adding at least some form of audio; background music doesn't take long to add if you know where to search, and it can improve the experience by leaps and bounds.

Other than that, there's a decent platforming prototype going on, but very little content (at least the game wasn't padded with lots of levels featuring way too little variety). I think there's some room for improvement as far as game feel goes; maybe some knockback and/or the player actually moving during attacks, so that the combat doesn't feel "stiff"?

So, the core mechanic is just a basic paint-the-largest-area thing. However, there are loads and loads of amusing events to keep the game interesting, though I'd prefer if large-scale events occurred only once at the beginning of a point, or at least not just before its end - as it is now, a nicely painted area can go down the drain seconds before the point is scored, and then whoever is lucky enough to paint the most within those seconds gets the point.

Other than that, I really enjoyed the general aesthetic and the variety of powerups. It took me 3 attempts to win on the default settings.

Tricky to learn, but I eventually figured out how to catch creatures on my own. Interesting concept, though there's room for polish in the gameplay (e.g. allowing the same symbol to be used in quick succession) and in general communication (the fish-catching animation after going back to the overworld was quite inconsistent, so I wasn't even sure whether I had caught the fish or not).

I liked the general atmosphere and the variety of environments, even those not directly relevant to the gameplay - it really added life to the game. One potentially nice addition - have the background change between zones gradually (e.g. using merge_color) rather than abruptly. Overall, I managed to find 3 clues and all 4 fish and completed the game. Nicely done. ^^


Nice little arcade game, fun to play for a few sessions. I hadn't quite seen this exact gameplay premise anywhere else, and all the powerup types were a nice and helpful addition (with the Super Stacker powerup being particularly satisfying). It might not have as much *depth* as, say, "Neon of the dark Realm", but it easily makes up for it with a *high* amount of polish. Well done. ^^

Oh, and I strongly recommend learning how to set up an online hi-score system. It really enhances games like these.


Yeah, the neons were supposed to be much more prominent, but the heat wave turned my brain into mush during the final weekend of the Jam (and we know the final weekend is usually the most productive). After that, I had no time to properly incorporate the neons into the graphics and the story. >.<

Thank you for your detailed report; I could narrow down the problem quickly thanks to it.
I uploaded the Neomon.fixed version, which should prevent this problem (and another potential crash).

Hello, thanks for the report.
Was it just entering the barn, or did you try to perform an action there (examine/talk/use item)? If so, which action was it, and what was the error message (if any)?

Generally, the source code is only required for verifying that the game was actually made in GM:S 2, and it doesn't need to be made public. Quoting this post:
> QUESTION: Can you clarify the situation regarding providing source code please @rmanthorp? You said you were sure it wouldn't be public, but haven't actually confirmed this, nor - if it's NOT public - what we'll have to do to let YYG see it.
> ANSWER: Yes. Not public. Anything you don't want to share don't share and when it comes to judging/confirmation we will reach out privately if we require proof of GMS2.

Since the main purpose of the source is verifying that the game was made in GM:S 2 - and considering that it doesn't need to be made public in the first place - I think you should be fine.

To be on the safe side, you might want to replace all the paid sound assets with silence (removing the sound assets altogether could break things) and explain the situation to YoYo Games (while keeping the originals to yourself). I'm sure they'll understand.

I suggest checking out the topic on GameMaker Community forums, especially this post: https://forum.yoyogames.com/index.php?threads/amaze-me-game-jam.86376/post-51671...

Relevant bit:
> I had brought this up but I think we ultimately wanted the rules light as a bit of testing ground for how we are going to be running future events. [...] With the theme in mind you are welcome to toy away with ideas or even get started. Likewise if you can re-use previous assets to fit the theme you are welcome to do so! Obviously don't get stealing things - maybe we should make that clear...

While it doesn't directly state that Marketplace assets are allowed, it does mention "previous assets", which are roughly in the same category (resources made before the Jam, whether by the participant or someone else).

Also, elsewhere in that thread I summarised a few points: https://forum.yoyogames.com/index.php?threads/amaze-me-game-jam.86376/post-51673...
In particular this:
> 2. Using pre-existing assets (be it graphics, audio or code frameworks) is perfectly acceptable, as long as there's no stealing (no unlicensed use).

Ross Manthorp (representing the organizers) liked this post and didn't reply with any disagreement, so it's reasonable to think he agreed with what I wrote.

Hope this helps. ^^