At time of writing, the scoring/ranking system used by Escape The Review has just had a major overhaul for the first time since the site was opened up to user-submitted ratings. For the vast majority of games this will make only small changes to how they appear on the site; in a few cases the impact will be bigger.
Handling of anonymous ratings
Ratings from anonymous users and ‘drive-by’ raters are welcome and appreciated, but they also tend to be less reliable than other ratings, and the site has always given them less weight when calculating a game’s overall score.
However, some games receive such ratings in far greater quantities than others, usually all strongly positive. That typically reflects a large, enthusiastic player base, which is itself a sign of quality; but it can unfairly advantage games that are popular or well marketed over ones that are less well known yet just as good or better. The algorithm now adjusts for that: every rating still makes a difference, but large quantities of ratings have diminishing returns.
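The post doesn’t describe the exact formula, but the diminishing-returns idea can be sketched in a few lines. This is a hypothetical illustration, not the site’s actual implementation: the function name, the per-rating weight, and the square-root growth curve are all assumptions chosen purely to show the shape of the effect.

```python
import math

def anonymous_weight(n_ratings: int, per_rating_weight: float = 0.25) -> float:
    """Total weight contributed by n anonymous ratings (illustrative only).

    Each anonymous rating counts for less than a full rating, and the
    combined weight grows like the square root of the count, so large
    volumes of drive-by ratings have diminishing returns.
    """
    return per_rating_weight * math.sqrt(n_ratings)

# Every rating helps, but the 101st adds far less than the 2nd:
print(anonymous_weight(1))    # 0.25
print(anonymous_weight(100))  # 2.5
```

With a curve like this, a game can’t buy its way up the rankings purely on volume: the hundredth anonymous 5* moves the needle far less than the first.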
Some players (and bloggers) are much harsher than others. Experienced players are often more reluctant to give a top rating. Blog sites that use different rating scales, with scores such as 8.3 / 10.0, may never give a maximum score at all.
Which is fine in itself; but suppose a game has only received 5* ratings, and then a blog gives it its highest-ever rating, 9.8 / 10.0. Most ways of combining those ratings would have the blog’s score drag the game down in the rankings, even though the blog has just given it the strongest endorsement it ever gives — which doesn’t make sense.
The algorithm now applies several adjustments so that different rating styles can be combined more fairly. This also improves the handling of fine-grained rating scales versus less fine-grained ones.
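One simple way to reconcile a harsh rater’s scores with everyone else’s is to rescale each score relative to that rater’s own history rather than the scale’s nominal maximum. The sketch below is a guess at the general idea only — the real adjustments are described as “several” and are surely more involved; the function and its parameters are hypothetical.

```python
def normalize(score: float, rater_scores: list[float], out_max: float = 5.0) -> float:
    """Rescale a score relative to the rater's own rating history
    (illustrative sketch, not the site's actual adjustment).

    A blogger whose highest-ever score is 9.8/10 has that 9.8 treated
    as comparable to another rater's 5*: we divide by the rater's
    personal maximum instead of the scale's nominal maximum of 10.
    """
    personal_max = max(rater_scores)
    return out_max * score / personal_max

# The harsh blogger's best-ever 9.8 maps to a full 5*:
print(normalize(9.8, [6.0, 7.5, 9.8]))  # 5.0
```

Under a scheme like this, the blog’s 9.8 no longer dilutes a run of 5* ratings, because it is recognised as that rater’s top mark. Finer-grained scales also blend in naturally, since everything is mapped onto the same output range.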
Games’ scores could already be influenced by ratings of other versions of the game (such as copies in other locations, or ‘normal’ versus play-at-home versions). The way that works has been tweaked in several ways, mainly to make the algorithm treat ‘inherited’ ratings with more scepticism.
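Treating inherited ratings “with more scepticism” plausibly means counting them at a fraction of the weight of a direct rating. A minimal sketch of that idea, with an entirely hypothetical discount factor:

```python
def weighted_score(direct: list[float], inherited: list[float],
                   inherited_weight: float = 0.3) -> float:
    """Combine a game's own ratings with ratings inherited from other
    versions of the same game (illustrative sketch only).

    Inherited ratings are included, but each counts for only a
    fraction of a direct rating's weight.
    """
    total = sum(direct) + inherited_weight * sum(inherited)
    weight = len(direct) + inherited_weight * len(inherited)
    return total / weight
```

So a 3* rating inherited from another location still nudges a game’s score, but much less than the same rating given to the game directly.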
Where a game has very few ratings, its score can now also take into account ratings for other games of the same type at the same venue. Obviously, quality can vary hugely between different games at the same venue, so this is only a small effect, and it no longer applies once more direct ratings are received. But the upshot is that the algorithm is a little quicker to accept strongly positive ratings for a game if the venue’s other games have high ratings.
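This is the classic shape of a Bayesian-average prior: the venue’s other games act as a handful of pseudo-ratings whose influence fades as real ratings arrive. Again a minimal sketch under assumed names and weights, not the site’s actual formula:

```python
def score_with_venue_prior(ratings: list[float], venue_avg: float,
                           prior_weight: float = 1.0) -> float:
    """Blend a game's own ratings with the average rating of the venue's
    other games of the same type (illustrative sketch only).

    The venue average counts as `prior_weight` pseudo-ratings, so its
    influence shrinks as direct ratings accumulate.
    """
    n = len(ratings)
    return (sum(ratings) + prior_weight * venue_avg) / (n + prior_weight)

# One 5* at a well-rated venue is trusted more readily...
print(score_with_venue_prior([5.0], venue_avg=4.8))        # 4.9
# ...and with fifty direct ratings the prior barely matters.
print(score_with_venue_prior([5.0] * 50, venue_avg=4.8))   # ~4.996
```

The design choice here is that the prior never overrides real data: it only fills the gap while a game has too few ratings of its own to stand on.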