
Elliott Morss | October 31, 2014


Judgments of Paris, Princeton, and Lenox, Part 3
© Elliott R. Morss, Ph.D.

Introduction

It was striking in 1976 when Californian wines “sort of” beat French wines in the Paris tastings. That tasting was memorialized in George Taber’s great book Judgment of Paris. I say “sort of” because Orley Ashenfelter and Richard E. Quandt found that while Californian wines got the highest ranking in both the red and white categories, the judges found very little difference between the French and US reds and whites overall. Taber, Ashenfelter, and Quandt arranged a re-enactment of the Paris tastings last summer at the annual meeting of the American Association of Wine Economists in Princeton. But in Princeton, the French wines were compared to New Jersey wines rather than Californian ones. The New Jersey wines did quite well, but again, the judges’ rankings differed significantly.

The Lenox Wine Club

In November 2012, the Lenox Wine Club (LWC) was created. Consisting of 14 “veteran” wine drinkers, it decided to start with four tastings: “heavy whites”, “heavy reds”, “light whites”, and “light reds”.  All tastings address the following questions:

  1. Among comparably-priced wines, are the judgments of the veteran drinkers similar enough to identify a significant preference among the wines, and
  2. Does price matter?

Blind tastings were done at a restaurant with very light hors d’oeuvres. Tasters were asked to score the wines on a scale of 1-5, with the best wine getting a score of 5. Ties were given the average rating of the wines that tied. As I have reported earlier, 3-liter box wines got the best scores in both the “Heavy Reds” and “Heavy Whites” tastings. In both cases, the boxes (priced at about $4 per 750 ml) beat out wines costing as much as $80+. But just as at Paris and Princeton, the results were hardly definitive because of the scoring differences among the judges.
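For readers who want to see the scoring rule in action, here is a minimal sketch of tie-averaged 1-5 scoring in Python; the preference values below are hypothetical, not actual club scores.

    # Minimal sketch of the tie-averaged 1-5 scoring rule (hypothetical data).
    from scipy.stats import rankdata

    # One taster's raw preferences for five wines (higher = better); two wines tie.
    preferences = [3.0, 4.5, 4.5, 2.0, 1.0]

    # method='average' assigns ranks 1..5 with tied wines sharing the average
    # of the ranks they would occupy, so the best wine gets a 5.
    scores = rankdata(preferences, method='average')
    print(scores)  # [3.  4.5 4.5 2.  1. ]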

The “Light Reds” Tasting

On February 27th, the Lenox Wine Club tasted “Light Reds”. Two Pinot Noirs, a Red Burgundy (Pinot Noir), a Beaujolais, and a Mourvèdre were tasted, first with light hors d’oeuvres (unsalted crackers and celery sticks) and then with a meal.

The results for the first tasting (with light hors d’oeuvres) are presented in Table 1. It is again notable that the box wine effectively tied with the Meiomi for the best score. Just as significant: the most expensive wine – the Remoissenet – was a distant last.

Table 1. – Lenox Wine Club Scores, “Light Reds” (5 – best, 1 – worst)

 

There was a fairly strong inverse relationship between price and rating, with a correlation coefficient of .616 in magnitude. That is, the tasters tended to prefer the less expensive wines to the more expensive ones.
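For anyone who wants to reproduce this kind of calculation, a short sketch in Python follows; the prices and average scores are illustrative placeholders, not the club’s actual numbers.

    # Sketch of the price-vs-rating correlation; all numbers are hypothetical.
    import numpy as np

    prices = np.array([4.0, 22.0, 25.0, 35.0, 80.0])   # $ per 750 ml (hypothetical)
    avg_scores = np.array([3.9, 4.0, 2.8, 2.6, 1.7])   # average taster scores (hypothetical)

    # Pearson correlation coefficient between price and average score;
    # a negative value indicates an inverse price-rating relationship.
    r = np.corrcoef(prices, avg_scores)[0, 1]
    print(round(r, 3))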

Table 2 presents the correlation between each taster’s scores and the average scores for the “Heavy Whites”, “Heavy Reds”, and “Light Reds” tastings. A high positive number indicates a taster is close to the overall average. For example, KM’s correlation of 1.00 in the “Heavy Reds” tasting means KM ranked the wines in the same order as the overall average. Low or negative numbers indicate the opposite.

Table 2. – How Tasters’ Scores Correlated to Average Scores

While KM and LS scored very close to the average, there do not appear to be any continuing “rogues”.
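The Table 2 figures can be computed the same way: correlate each taster’s scores against the group average for each wine. A small sketch, again with made-up scores, follows.

    # Sketch of the Table 2 computation with a hypothetical score matrix.
    import numpy as np

    # Rows = tasters, columns = wines (1-5 scale scores).
    scores = np.array([
        [5.0, 4.0, 3.0, 2.0, 1.0],   # taster A
        [4.0, 5.0, 3.0, 1.0, 2.0],   # taster B
        [2.0, 3.0, 5.0, 4.0, 1.0],   # taster C
    ])

    group_avg = scores.mean(axis=0)
    for name, row in zip("ABC", scores):
        r = np.corrcoef(row, group_avg)[0, 1]
        print(f"taster {name}: {r:.2f}")  # 1.00 would mean perfect agreement with the group average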

One other measure is worth mentioning. The Kendall W statistic indicates how much agreement there was among the tasters’ ratings. The Kendall W for the “Light Reds” tasting was only 0.181, indicating there should be very little confidence in the judges’ overall ratings.
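For those who want to check a concordance figure like this, here is a minimal sketch of Kendall’s W for m judges ranking n wines, without the tie correction; the rank matrix is made up for illustration.

    # Sketch of Kendall's W (coefficient of concordance), no tie correction.
    import numpy as np

    # Rows = judges, columns = wines; entries are ranks 1..n (hypothetical).
    ranks = np.array([
        [1, 2, 3, 4, 5],
        [2, 1, 4, 3, 5],
        [5, 4, 1, 2, 3],
    ])
    m, n = ranks.shape

    rank_sums = ranks.sum(axis=0)
    S = ((rank_sums - rank_sums.mean()) ** 2).sum()   # squared deviations of the rank sums
    W = 12 * S / (m ** 2 * (n ** 3 - n))              # 0 = no agreement, 1 = perfect agreement
    print(round(W, 3))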

Wine with Food

Since so much wine is consumed at meals, it is odd that most wine tastings are done without food. To remedy this, we wanted to see if wine scores changed with food. The tasters were given their score sheets back and were served a meal. Seven tasters had a main course of cod, while one taster had duck. And without yet knowing the names of the wines, tasters were asked to indicate whether their wine scores improved, got worse, or remained the same as in the earlier tasting.

The results are given in Table 3, where a “+” indicates the wine tasted better with food, a “0” indicates no change, and a “–” means it tasted worse.

Table 3. – Change in Wine Scores When Wine Consumed with Food

The most significant change was for the Almaden box: its score improved while the Meiomi’s remained the same, making the Almaden box the highest-scoring wine with food. Scores for the Red Burgundy also improved, while tasters did not like the Mourvèdre or the Beaujolais as much with their meals. Of course, these findings are hardly definitive, and we will experiment with better ways to score wines with and without food in the future.

Conclusions

While the “robust” performance of the box wines in our tastings is amusing, even somewhat remarkable, our results reflect the pattern common to most tastings: the judges’ ratings are all over the map. As noted in my earlier reports, this could be either because the tasters could not pick up the taste differences among the wines or because the scores are dominated by the judges’ differing taste preferences.

One other observation on judges: what is the basic difference between our wine-drinking veterans and the “experts” who work for Wine Spectator, Robert Parker, et al? The experts have developed the ability to distinguish between wines from a specific grape or blend. Beyond that, they (or others) have developed criteria for ranking them, and these criteria are not necessarily the same as their own personal wine-tasting preferences. The Lenox Wine Club judges are also asked to distinguish between the wines, but they are only asked to use those distinctions to report their personal taste preferences – the wines they like to drink.

 
