The Seven-Point Rating System


This article describes the first public draft of The Seven-Point Rating System (7PRS, name subject to change), my attempt to address the sentiment that in most review rating scales, 60% of the possible scores are effectively identical (the so-called four-point scale) and the remaining 40% aren't well defined aside from the extremes. This system replaces the traditional 0-100 scale with one where half of the scores are negative. It also provides a third category, 0, sitting between the positive and negative scores, but it is not a middle ground: 0 is the worst possible score. Although 7PRS is a response to perceived faults in video game review systems, it is intended as a generic system for all media. The intent is to quickly and unambiguously answer two questions: Is it good or is it bad? And how good or bad is it?

The Scoring System

The Seven-Point Rating System is a hybrid of the binary (recommended/not recommended) rating and numbered scales. Scores range from -3 to +3, with 0 in the center; no decimals or fractions are allowed. Scores can be thought of as vectors, with a direction (the sign) and a magnitude (the number), which stand in for the binary and numbered systems respectively. These features are designed to directly address the greatest shortcomings of both systems by separating approval and disapproval in the numerical component.
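For readers who think in code, here is a minimal sketch of that decomposition in Python. It is purely illustrative: the names Score, recommended, and strength are my own and not part of the system itself.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Score:
      """A 7PRS score: a whole number from -3 to +3; no decimals or fractions."""
      value: int

      def __post_init__(self):
          # Only the seven whole numbers -3..+3 are valid scores.
          if self.value not in range(-3, 4):
              raise ValueError("a 7PRS score must be a whole number from -3 to +3")

      @property
      def recommended(self) -> bool:
          # The sign is the binary verdict; 0 does not count as a recommendation.
          return self.value > 0

      @property
      def strength(self) -> int:
          # The magnitude is how strongly the reviewer felt, in either direction.
          return abs(self.value)

For example, Score(+2).recommended is True and Score(-1).strength is 1, mirroring the sign-and-magnitude reading described above.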

The Sign

This is the simplest component: Did you like the work or not?

If in doubt, ask yourself whether you would want to engage with the work again in the near future. This question matters most when you liked a work overall but some heavy flaw soured you on the experience of engaging with it. If you would come back in spite of the part you didn't like, give it a +. If you hated it and can articulate why, or if you wanted to like the work but decide the problems aren't worth sitting through it to the end a second time, it's a -.

The Number

The absolute value of the score is the extent to which the reviewer felt positively or negatively about the work. There are three possible values in each direction, which I have found to be the smallest number that works as a proper rating system. Early drafts of this system had two; a third was added to signify the works that define one's tastes.

For positive scores, the number component is relatively straightforward. The following guidelines are one example of how to keep the three clearly separated:

  • +1 is something you liked. You might not want to read/play/watch the thing again any time soon, but you don't regret your time with it and you came away from the experience slightly better than before.
  • +2 is something you loved. These are the works you expect to come back to regularly, the ones you find yourself thinking about long after finishing them.
  • +3 is for your all-time favorites, the works you recommend first and above all else. Use this sparingly, as the things you put in this category define your tastes.

For negative scores, this works differently. Unlike the positive scores, everything that scores below 0 has one thing in common: you have no desire to engage with the work ever again. The magnitude of the negative score reflects how strongly you're able to criticize it.

  • -1 is something that you found annoying, where whatever good points the work had are outweighed by the negative aspects, to the point that you don't want to replay or rewatch it.
  • -2 is something that angered you, with very little if anything positive about it.
  • -3 is so terrible to you that you could use it as a case study in how not to create a work in its genre. This is the score that inspires hyperbolic criticisms, e.g. wishing to imprison everyone involved in its creation. Like its polar opposite, the works at -3 collectively describe everything you dislike in creative works, which is just as important as what you seek out most in the same medium.

This range of 7 points was chosen because I've found it's the smallest number of scores where each one on either side stays distinct, and the low maximum absolute value of 3 will hopefully discourage readers from reading a linear relationship into the numbers: a +3 is not three times as good as a +1, nor is a +1 half as good as a +2. Future versions of 7PRS may have a different name or some other means of labeling each score.

Every work that shares a score gets the same general reaction from you. You should have no trouble distinguishing the works you think are okay from the works you urge others to try for themselves. Conversely, the things that make you mad just by thinking about them are not the same as the things that make you laugh at their ineptitude. Anything that does none of these is given a different score altogether.

Zero

Mediocre is worse than bad. If something is offensive, incomprehensible, irritating to watch, or just plain sucks, there's value in examining why it's that way. Such an analysis might just be for comedy, or it could be a study in how a creator's intent failed to come through in the final product. 0 sits in the middle of the scale because it describes the worth of middle-of-the-road, average works.

If something is boring, unmemorable, or fails to inspire any emotion, it's wasting your time just by existing. 0 is also a mark of disapproval, but unlike the negative scores, there's nothing to be gained in trying to analyze it. If you have to think hard about what was even in a work, and still can't decide whether it was good or bad, then it failed to make an impression either way and is likely a 0. In that case, the review is complete; no further deliberation is needed.
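With zero defined, the whole scale can be summarized in one place. The following lookup table (again Python, again purely illustrative; the name REACTIONS is my own) condenses the longer descriptions above into one-line reactions:

  # Shorthand only; the full descriptions above are the real definitions.
  REACTIONS = {
      +3: "an all-time favorite; one of the works that defines your tastes",
      +2: "loved it; you expect to come back to it regularly",
      +1: "liked it; no regrets, even if you won't revisit it soon",
       0: "unmemorable; it made no impression either way",
      -1: "annoying; the flaws outweigh the good points",
      -2: "it angered you; very little, if anything, positive about it",
      -3: "a case study in how not to create a work in its genre",
  }

A reviewer could, for instance, print REACTIONS[score] alongside the number itself, so the one-line reaction travels with the score.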

Rationale

There's been a lot of talk over the years about the importance of review scores, particularly for video games: debates over how many points to use, the difference between 9.5 and 10, whether 6/10 is "passing" or "failing" (subconsciously equating the scale to an objective rubric for grading school assignments rather than artistic qualities), and whether having scores at all does more harm than good. Ultimately, the goal of any production is for someone to spend some amount of their finite time on this Earth engaging with it. The amount of time someone chooses to spend expressing why you should or should not read that book, watch that movie, or play that video game reflects how strongly they hold their opinions on it. Reading or watching these reviews is itself a time investment, and so you start to form a profile of that reviewer's tastes and subconsciously get a feel for how much you can trust their views to align with yours. When you read multiple reviews from the same author, any two disparate titles with the same score are implied to be of the same quality, and attempts to justify that equivalence usually fail.

It is my belief that these rating systems are neither reliable nor consistent. The more points there are in a given rating system, the more difficult it becomes to justify any given score, and the more it feels like an objective rubric instead of a subjective assessment of appeal. There is very little real difference between 7.0 and 7.5, or even between 6.0 and 7.0, two middling scores that indicate the latter is superior to the former but not by a meaningful margin. There is no difference at all between 2.0 and 4.0, even if the reviewer insists otherwise, because the message of both is that the work is not worth the reader's time. At the highest ranges, the difference between 8.5 and 9.0 is hotly debated, and even more so between 9.5 and the perfect 10. And 100-point scales introduce an absurd amount of granularity that is wholly unnecessary. Conversely, having fewer scores makes it easier to express a recommendation or lack thereof, and so some outlets use only a binary system (e.g. Steam user reviews). A common criticism of the pass/fail system is that it leaves no room for half-recommendations due to some major reservation such as narrow appeal, or a severe caveat external to the game itself such as isolated technical problems or marketing practices.

Several major video game review outlets have since come to similar conclusions and introduced systems that clearly define the meaning of each score (e.g. Polygon with 20 points, Destructoid with 10, Eurogamer with 5), making clear that the middle of these scales, not somewhere around three-fourths, is "average".

7PRS goes further than these efforts: since only a small number of review scores are meaningful to most readers, 7PRS makes them the only possible scores. Five points/stars didn't leave any room for the highest, "perfect" works to stand alone, and ten points left too many possible places for works that really are average. The unusual numbers at the caps prevent readers from subconsciously treating the scores as a linear scale. By using a small number of scores, the reviewer is forced to separate the works they liked but wouldn't rewatch or replay immediately from those they look forward to revisiting as soon as possible. The meaning of these scores is highly visual thanks to the unambiguous negative range: for many readers, the presence of a negative score is all they need, while others will be curious about just how negative the review is. Most importantly, there is only a single "average" point, and it and everything the reviewer considers below it are exposed with no room for interpretation. It may sound like I'm treating the use of negative scores as some major innovation, but I'm not aware of any reviewers or news outlets who use negative numbers for anything other than hyperbole.

This first public version of my rating scale is subject to change. One of its biggest flaws is that it still uses numbers even though I'm trying to avoid the image of a linear scale, but I wasn't able to come up with different labels I was satisfied with at the time of writing. The only real reason I went through the trouble of publishing this system is that I wanted to write reviews differently from what I've seen on other sites, in the hope that I can avoid struggling to justify any score I give, and to see whether grouping the works I engage with this way is more effective than giving them grades. Hopefully it's a solid enough system that I won't have to revise it too much in the coming years.