How to Judge Wine Without Tasting It
(Bloomberg) -- Before there was moneyball for baseball, there was moneyball for wine. And yet, unlike in baseball, the use of statistical analyses in the wine industry remains relatively rare. What explains the difference?
In the late 1980s, the Princeton economist Orley Ashenfelter found that he could predict the quality of Bordeaux red wine vintages based on characteristics such as the temperature and rainfall during the harvest year.
In particular, Ashenfelter was able to explain the price of a bottle of Bordeaux in terms of its age, the average temperature during the growing season from April through September, the rainfall from the previous October through March, and the average temperature in September, when the grapes are usually harvested.
Using just these variables, he was able to account for more than 80 percent of the price variation for vintages in the 1950s, 1960s and 1970s.
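A regression of this kind can be sketched in a few lines of code. The numbers below are invented for illustration — these are not Ashenfelter's actual data or coefficients — but the structure mirrors his approach: regress (log) price on age, growing-season temperature, winter rainfall, and harvest-month temperature, then ask how much of the price variation the fit explains.

```python
# Toy sketch in the spirit of Ashenfelter's Bordeaux model.
# All data and coefficients are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical vintages

# Hypothetical predictors for each vintage
age = rng.uniform(5, 40, n)            # years since the vintage
gs_temp = rng.normal(17.5, 1.0, n)     # Apr-Sep mean temperature, deg C
winter_rain = rng.normal(550, 80, n)   # Oct-Mar rainfall, mm
sep_temp = rng.normal(19.0, 1.2, n)    # September mean temperature, deg C

# Generate log prices from an assumed linear model plus noise
true_beta = np.array([0.03, 0.6, 0.001, 0.1])  # made-up coefficients
X = np.column_stack([age, gs_temp, winter_rain, sep_temp])
log_price = -10.0 + X @ true_beta + rng.normal(0.0, 0.2, n)

# Fit by ordinary least squares (intercept plus four predictors)
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, log_price, rcond=None)

# R^2: the share of price variation the model accounts for --
# the analogue of Ashenfelter's "more than 80 percent" figure
resid = log_price - A @ beta_hat
r2 = 1.0 - resid.var() / log_price.var()
print(f"R^2 = {r2:.2f}")
```

With synthetic data generated from a linear model, the fitted R² lands well above 0.8; the striking part of Ashenfelter's finding was that real Bordeaux prices behaved almost as cleanly.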
Just as statistical analysis threatened baseball scouts, this analysis threatened the status and livelihood of professional wine tasters and evaluators. Not surprisingly, they were offended and harshly critical. Since then, Ashenfelter’s predictions (for example, that the 1989 and 1990 vintages would be exceptionally good) have held up quite well. And his analysis relied on simple regressions; more advanced statistical tools are now also being applied to evaluating wine quality, with notable predictive success.
And yet, unlike with baseball, data analytics remain mostly a sideshow in the wine industry today. If anything, the role of subjective quality ratings has become more, not less, dominant, as a recent Wall Street Journal article highlights. Even relatively unknown raters are cited by wine stores and drive significant changes in sales; a rating of 98 instead of 94 triggers a massive uplift in demand.
One big reason analytics evolved differently in baseball and in wine is the feedback loop between the expert's subjective judgment and the outcome being judged. In baseball, it doesn’t really matter whether a scout says a player is highly skilled; all that matters is the player’s performance on the field. The algorithm can therefore replace, or supplement, the scout, as long as the result is a better prediction of the player’s quality. In wine, however, the “scouts” seem able to drive sales themselves, as the Wall Street Journal story underscores. That makes the prophecies of the wine raters effectively self-fulfilling.
What should we conclude? First, in domains in which quality is difficult to judge, moneyball approaches will have less traction in toppling subjective evaluations. In political analysis, for example, quantitative approaches have become more popular because there’s a clear outcome that can be used to judge results: Either you correctly predicted the outcome of an election or you didn’t. By contrast, algorithmic approaches to evaluating the quality of art would seem less likely to displace art experts, though even for art there are attempts to use machine learning.
Second, even in somewhat subjective fields such as wine or art, an ultimate indicator of quality is price. That would seem to provide a pathway for analytics to become dominant. Human evaluators will be displaced, however, only if the price consumers are willing to pay depends not on the evaluators’ views but only on the inherent characteristics of, say, the wine (or art). The issue is that most buyers are influenced by the views of people they believe to be experts. That is why blind taste tests of wine often yield such different results from non-blind ones, even if the blind tests do suggest some degree of differentiation across types and qualities of wine.
The bottom line? As long as people are influenced by the quality ratings pronounced by others, as taste-test evidence suggests, the wine industry is likely to remain dominated by connoisseurs rather than computers.
©2018 Bloomberg L.P.