In a recent article in Sommelier India titled ‘Are Wine Competitions Any Good?’, Rosemary George concedes that tasting is an “inexact science” and that “personal taste and preference still come into play”. At the same time, she argues that when competitions are well organised, the right wines will shine through and win the gold medals.
Wine competitions, however well organised, can be swayed by random factors: the order in which the wines are tasted, the time of day, even the weather on the day of judging. But her line of argument raises at least a couple of important questions. Are wine competitions generally well organised? Is there a body of evidence that shows the results they produce are consistent and dependable?
A recent study of more than 4,000 wines entered in 13 U.S. wine competitions in 2003 turned up results that have rattled both wineries and consumers. Published in the Journal of Wine Economics, the study found that of the wines entered in three or more competitions, almost half — or 47 per cent — were awarded gold medals. At first glance, this might seem like merely a problem of plenty, of distributive largesse, but consider it alongside the study’s other findings. Eighty-four per cent of gold winners received no gold medals in other competitions; 98 per cent of gold medal winners were ranked as only slightly above average or below average in other competitions; and in just 132 cases did wines receive the same score in all the competitions they were entered in.
The findings suggest that bagging a gold medal is almost a matter of chance. Even if this conclusion is exaggerated, the survey draws attention to an inescapable fact — that wine competitions have proliferated recklessly over the years because they serve the interests of wineries and of those who organise such events. The latter charge money for every bottle entered, while the former love to flash their medals on press releases and slap them on their bottles. It is a cosy relationship — a synergy of interests that has led to more competitions and more medals.
While this may work in influencing the gullible customer, the sheer volume of medals being handed out has rendered them somewhat meaningless where it really counts. Ever heard of a sommelier at a restaurant recommending a wine because it won a gold medal at a competition? The chances are that he or she is more likely to mention a Wine Spectator or a Robert Parker rating, which raises another question. How is it that rating systems devised by a magazine and a wine critic are more influential where it really matters than a commendation by a group of experienced palates at many wine competitions?
There are many reasons for this, but the answer is not that someone such as Parker is more objective than others. If anything, Parker’s “tastes and personal preferences” come heavily into play. His bias in favour of fruit-forward, high-alcohol reds has influenced a whole generation of winemakers to make a “Parkerised” style of wine. But once this bias is factored in, there is a logic to the ranking, a method in the grading.
Which brings us back to Rosemary George’s point. What we need are a few well-organised wine competitions that achieve an “element of consistency”. Not an endless procession of them that hands out medals by the bucketful.