The proverbial "good old days" are usually characterized by a less hectic pace and fewer responsibilities, when all you cared about was getting out of school and carousing with your friends. For me, those times were the 60s. Upon reflection, though, the offerings in the boardgame department back then were pretty dismal. Yes, the original innovators were there with Monopoly, Summit, Clue, etc., but these also bred a bazillion property acquisition, world conquest, and detective game clones, along with all the mindless roll-the-dice-and-move-along-the-track games. The conspicuous revelation is that now is the best of times. Boardgame design has never been more creative and varied than it is today, due mainly, of course, to the "German revolution". Although the same themes of bygone years are still being applied, step back and look at today's efforts: the innovation and quality of their implementation boggle my mind. Aladdin's Dragons, El Grande, Settlers, Battle Cry, 18xx, Druidenwalzer, Carolus Magnus, Unexploded Cow, ad infinitum, all show the greatness of the present in their own way. For most of us fanatics, it is now not a choice of what to buy, but of what not to buy. So, within this cornucopia, how do we prioritize and distinguish between the "must haves" and the "I'll pass"?
Since gaming preferences are as numerous as dice rolls in Titan, I would never tell anyone that a game is good or bad. As in any decent review, along with a game's description, an explanation of the reasons why I do or do not particularly like it is much more relevant and informative. Which leads to the crux of this writing, and the seed stuck in my craw: with all of these varying tastes, how do reviewers give games a single numerical rating?
The sundry halls of gamedom, webwide and hardcopy, have produced a fabulous wealth of information to assist in our evaluations and purchases. But along with the wonderful previews and reviews, how insightful is a game's single overall rating? Can we improve or standardize the current systems of evaluating subjectivity? My first exposure to game ratings harkens back to the halcyon days of the magazines Strategy & Tactics (SPI) and The General (Avalon Hill) - back when rules lawyers were born, and being anal retentive was a prerequisite. These magazines ranked games solely from subscriber feedback, with ratings compiled on a ten point decimalized scale. "Gee, this game is rated 7.1, so it's gotta be much better than that crap rated 6.8." Taking this system to the Nth degree was an otherwise masterful publication by John Kisner, called Zone Of Control. Utilizing a five point decimalized scale, it rated games in eleven categories: Luck, Utility, Rules, Game, Simulation, Innovation, Solitaire, Number of votes, Complexity, Comparison to review, and Overall! This overkill had some merit in comparing the categories, but said nothing about whether the game was playable or enjoyable. Each of these efforts was also heavily skewed by the subscribing hardcore grognards, who favored the higher complexity games. "If it has fifty pages of rules, three maps, and two thousand counters, it must be great! Who cares if I ever play it!"
The most typical overall rating systems used nowadays are the five icon-of-choice method, the ten point scale, and the d6 method.
Whether a game is rated with five stars, smiley faces, rockets, whatever, five is a common and easy system to assimilate. One or two stars, it stinks. Four or five stars, it's good. But does a five star rating mean that it is perfect in every way? There's also the ambiguous three star rating, and what if half stars are used? If you are going to give three and a half stars, why not use a more straightforward ten point scale and call it a seven?
Well, I'll tell you why. Even though widening a scale's extremes will delineate the really good from the really bad more accurately: (1) you will still very rarely see games rated a one, two, or ten; (2) distinguishing between two consecutive numbers on that fine a scale is really splitting hairs; (3) could you ever honestly say that a game is perfect to you in every way, justifying a ten; and (4) there is still the ambiguity of the five, six, or seven ratings.
Resorting to the pictorial die face (a six point scale) to rate a game has much the same problems as the five star method, but with two major improvements: it eliminates a middle-ground selection - now a three is on the low side, and a four is on the high side - and it avoids the multitude of choices within the ten point scale.
Regardless of the method used, condensing a review into a single rating is still problematic, and not too insightful. So, what can be done to provide something concise but informative? The capsule comments and categorized ratings used by Bruno Faidutti and by Hall 9000 are both exceptional. Mr. Faidutti uses a five point scale in three categories. The Hall uses a six point scale in five categories. I propose a simplified synthesis of both.
As usual, the first thing most aficionados and lay folk notice when spying a new game is how it looks. "Wow, cool! What's this!" Then, through more intense scrutiny, the quality of the bits, board, box, completeness, and rules clarity is assessed. These I lump into the category of Components. Ingenuity, interaction, mechanics, complexity, and value (or fun, if you will) fall under my category of Play. Utilizing a six point gauge for each of these two classifications provides a simple but more enlightening rating. So you can evaluate this process, below are some of my suggested ratings for a few diverse games. Is this really any less subjective and more informative? Are two ratings significantly better than one? It would work for me.
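For the programmatically inclined, the two-category scheme above can be sketched in a few lines of code. This is only an illustration of the idea, assuming each category uses the same 1-6 die-face scale described earlier (no middle ground: three leans low, four leans high); the class name, fields, and the example scores are my own inventions, not Ray Smith's actual ratings.

```python
from dataclasses import dataclass

@dataclass
class GameRating:
    """A hypothetical two-category rating: Components and Play, each 1-6."""
    title: str
    components: int  # bits, board, box, completeness, rules clarity
    play: int        # ingenuity, interaction, mechanics, complexity, fun

    def __post_init__(self) -> None:
        # Enforce the six point scale for both categories.
        for name, score in (("components", self.components), ("play", self.play)):
            if not 1 <= score <= 6:
                raise ValueError(f"{name} must be on the 1-6 scale, got {score}")

    def summary(self) -> str:
        # Two numbers side by side, never collapsed into one overall score.
        return f"{self.title}: Components {self.components}/6, Play {self.play}/6"

# Illustrative scores only, not the author's published ratings:
print(GameRating("El Grande", components=5, play=6).summary())
```

Keeping the two scores separate, rather than averaging them into one number, preserves exactly the distinction the article argues for: a gorgeous game that plays poorly and a plain one that plays brilliantly should not land on the same overall rating.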
- Ray Smith