Regular blog-column reader Kent Benson has a keen eye for detail, often being the first to point out my errors, so when I wrote recently that I had taken price into account in my scoring of a selection from the Pays d’Oc Trophy Collection, he was quickly on the case. “I’m surprised you would factor price into your rating scale. Does this mean that these scores would have been lower had the prices been higher? How is the reader to know just how much the price influences your score? Or, have I misunderstood?” It’s an excellent question and my answers follow. But if you’ve had enough of reading about scores (I can’t blame you) or feel the very principle of scoring wine is preposterous (you’re right – but it’s useful), hit the back button now.
First, let me point out that Decanter officially sits on both sides of this particular fence. The rubric for Decanter magazine tastings reads “Prices are not revealed, and thus not taken into consideration when scoring.”
When it comes to the Decanter World Wine Awards, however, judges are told which of five UK retail price bands a particular wine falls into, or would fall into were that wine sold in the UK based on the indicative price provided by the producer (A = up to £7.99, B = £8–£14.99, C = £15–£29.99, D = £30–£59.99, E = £60+). “Take into account the retail price of each wine,” read the judging instructions. “A Gold under £15 might not receive a Gold if it were in the over £15 category.” It doesn’t happen often, but I can certainly remember Gold Medal wines from price category A which would not have received that medal at higher price categories.
Now over to you: the consumer. It’s unlikely that the price of a wine is a matter of complete indifference to you. No one reading this column will be limitlessly wealthy. You wouldn’t lay into a bottle of 2010 Latour with the same abandon that you’d lay into a bottle of 2015 Côtes du Rhône. A wine, in other words, is always a wine at a price. You’ll judge a £10, $10 or €10 bottle in a very different way to a £100, $100 or €100 bottle. You may or may not go to the trouble of scoring wines for yourself, but whatever verdict you reach will always relate to price.
One of the fundamental misunderstandings about point-scoring systems is that they are in some way universally calibrated. It’s particularly tempting to assume this now that almost every issuer of scores has switched over to the 100-point scale, since these scores give an impression of homogeneity and consistency. Some tasters, too, may work under the impression that they are using their scoring system in a universal manner, or actually seek to give this impression as a way to create “authority”.
It’s not possible – because the quality potential of wine regions varies so enormously. If you are rigorously honest about the level of attainment of the world’s finest, then the wines of “up-and-coming” regions, even the most successful, would be condemned to scores of less than 70 points, since they are comprehensively adrift of the quality summits. By any universal scale, few wines of regions of ordinary attainment could hope to score much more than 80 points. It would be hard for any wine from a non-classic region (or a ‘non-noble’ grape variety) to obtain a perfect score, or even a score in the high 90s. Would all of this be fair to the drinking experience of consumers? No, it wouldn’t, because consumers are always buying and judging a wine at a price. The unfairness of universality is why (I assume) Robert Parker’s tasting rubric always insisted that “The numerical rating given is a guide to what I think of the wine vis-à-vis its peer group”.
Peer groups differ by quality potential – and they differ by price, too. I recently took part in a Decanter magazine tasting of 2014 Médoc cru bourgeois wines where the highest noted price was £35 and where most wines cost less than £30. We didn’t have price information as we scored – but we did score in a manner appropriate to this particular peer group, which means in a different way to the manner in which we would have scored had the tasting included the wines of the entire region, First Growths and Right Bank stars included.
So I would suggest to Kent Benson that all scores from all scorers already take price into account to some extent. They do this to a greater extent when the peer group is closely prescribed, and to a lesser extent when it is loosely prescribed. (The maximum level of prescription would come when you know exactly what the retail price is of a wine you are scoring.) If you taste and score by peer group, then de facto you taste and score by price. Consumers understand this in a kind of rough-and-ready way, and are relaxed about it.
I’m still not happy, though. Why not? I think that ‘scoring for value’ should go much further than it already does in order to do the best possible job for the consuming public. In other words, I feel that every peer group potentially has the right to have ‘a perfect wine’ or one that approaches perfection – a wine that could not, by the lights of that peer group, be any better than it already is.
It’s not hard to imagine a Sancerre, or a Picpoul de Pinet, or an Eden Valley Riesling, or a Torrontès from Salta, or a Marlborough Sauvignon, or a Côtes de Provence rosé, or a Morgon Côte de Py, or a Côtes du Rhône red, or a wine from one of hundreds of other peer groups smelling and tasting “just perfect” – for what it’s meant to be (and cost). We’ve all had wines that left us feeling like that. The experience is different in kind to that offered by ‘a fine wine’, but equal in excitement and pleasure. So if a taster comes across a wine like that, why shouldn’t he or she give it 97 or 98 points for exceptional peer-group attainment? This would be readily understood as such. If a taster gives a Picpoul de Pinet 98 points, in other words, no one would seriously expect it to be as good as a Montrachet with an identical score sold at 20 or 30 times the price. They would simply understand that it was a truly outstanding example of what is a fundamentally simpler wine.
We have not, though, yet got to that advanced level of wine-tasting consciousness – or scoring courage. In most cases, tasters seem to choose where to ‘top out’ with scores for particular peer groups and vintages, and just leave consumers to guess that for a certain peer group, a certain score roughly equates with the best imaginable wine for that peer group. It’s hard for any Beaujolais cru wine to come away with more than 93 or 94 points; for New Zealand Sauvignon to get more than 90 or 91; or for Picpoul de Pinet to get into the 90s at all. In big peer groups, this is often deeply unfair. When an entire high-quality Bordeaux vintage is being tasted (as the 2016s shortly will be), even the most outstanding cru bourgeois will be lucky to get 90 points. That makes perfect sense when you consider the volume and attainment of its regional peers in that vintage – but it would look ridiculous if, ten years later, such wines were calibrated against rival Cabernet-Merlot blends from ‘up and coming’ regions which were initially awarded identical point scores.
The value perspective is always there, in sum — but blurrily. It would be good to see more clarity about this, so that every outstanding wine from every peer group could enjoy its day in the sun.
Translated by Oliver Zhou / 周维
All rights reserved by Future plc. No part of this publication may be reproduced, distributed or transmitted in any form or by any means without the prior written permission of Decanter.