In the last post, I mentioned the problem of putting a value on unquantifiable things. But obviously, people do find ways to value the unquantifiable: every time I write a theatre review, I have to boil down the experience to a number of stars at the end. And like Leon, I know that the number probably has more meaning to the reader than the 250 preceding words that I’ve delicately prodded into shape. (People really, really love numbers.)
Personally, I don’t find that hugely painful. Skipping down the scale from 5 (amazing), via 4 (enjoyable), 3 (pleasant) and 2 (boring), to a miserable 1 (agonising), most things can be comfortably shoved into one of the categories. But these reviews are pretty low stakes: I’ve never had to face down an angry PR or deal with being blacklisted from an arts centre. Some industries take numbers very seriously indeed, and there’s an article in the current Edge (available online here) on Metacritic, the site which collates review scores, weights them by publication and averages them out into a meta-score.
For developers, this creates what must sometimes be excruciating pressure: not only are there people boiling down your hard work to an inflexible figure, there’s also some bastard sampling all these figures and presenting a final count of your creation’s merit. Ouch. And the industry is racing to work out how to use these numbers, tying royalties to Metacritic scores and calculating the relationship between sales and average reviews. Marc Doyle, Metacritic founder and editor, explains to Edge:
“I know that certain publishers have done very comprehensive studies and they’ve been able to highlight certain types of games and certain types of genres for which predictability will be much higher – racing, sports and certain types of action games, certain types of franchises. Others you just don’t know, like why did the Ben 10 game sell through the roof? I don’t know. It’s not so predictable, it’s not scientific or perfect.”
Although, if humungous kiddie-bait franchise Ben 10 is the best example of ‘inexplicable’ success you can come up with, that suggests that the Metacritic system might not be so flawed. The numbers actually tell quite a lot, and possibly more than the original reviewers would like them to:
… for every five points above 80, on average, sales double. But […] many games buck this trend, and […] the largest publishers have found that the greatest sales growth tends to occur in games scoring in the region of 70 compared to those scoring 80 or more. [Of 18 products achieving scores of 90 or more in 2008 and 2007] only two were projected to sell over seven million copies, while seven sold less than a million. Overall, 12 out of the 18 sold less than two million, a figure that marks a rough break-even point for a triple-A game. In other words, there is a correlation but quality does not assure success.
Or more brutally: there’s a noticeable – not universal, but statistically interesting – point at which reviewers’ affections diverge from public interest. The Edge piece comes down fairly comfortably: in the end, sales are still king, and if Metacritic pushes more emphasis onto quality, then that could be a good thing. But there’s another option in the numbers too: that publishers identify that 70–80 band as the area that makes money, and squeeze out the developers aspiring to 80+, so choking innovation out of the industry. In that case, the combined voice of every reviewer would have killed off the games they love best. It would be a self-crippling, short-termist strategy for the industry to adopt. But in a time of financial uncertainty, it might be a tempting one.