


Judgment of Princeton – Its Real Significance

© Elliott R. Morss, Ph.D.

by Elliott R. Morss

 June 2012

Introduction

Anyone seriously interested in wine should attend The American Association of Wine Economists’ annual meetings. The analytical work on all aspects of wine tops anything done elsewhere. This year, the annual meeting was held in Princeton. In addition to the 80+ papers presented, there was a wine tasting. And as was done in Paris 36 years ago, the tasting included French and American wines. The results? Just as in 1976, 1978 (San Francisco), 1986 (French Culinary Institute), and again in 1986 (Wine Spectator), the American wines did quite well.

But in Princeton, it was a bit different. Instead of wines from California, wines from New Jersey were pitted against some of France’s finest. The New Jersey wines’ performance? For whites, the average New Jersey ranking was better than the average French ranking. And for reds, New Jersey wines ranked 3rd and 5th.

Old News

But except for the fact that the US wines were from New Jersey, the results are old news: this is 2012, not 1976. George Taber, in reporting on the ’76 Paris tasting, concluded:

“The Paris Tasting shattered two foundations of conventional wisdom in the world of wine. First, it demonstrated that outstanding wine can be made in many places beyond the hallowed terroir of France. Second, the Paris Tasting showed that winemakers did not need a long heritage of passing the wisdom of the ages down from one generation to the next to master the techniques for producing great wine.”

In 2012, as I and many others have been reporting for some time, the readily available wines from the US, Australia, South Africa, New Zealand, Chile and Argentina are excellent. There are also excellent and inexpensive wines from Europe. My own sense is that today, unless you have very unusual tastes, you don’t have to spend more than $10 for an excellent bottle of wine.

Judges’ Rankings

The judges’ rankings are presented in Tables 1 and 2. Consider first the white wine rankings. Most striking to me is how often one judge ranked a wine best (or tied for best) while another ranked it worst or tied for worst. It happened for 5 of the 10 wines tasted! The Clos des Mouches was ranked 1st or tied for 1st by 4 judges. But one judge ranked it worst. Two tasters gave the Ventimiglia a tie for worst rating while one judge gave it a tie for best rating.


Table 1. – Judges’ Rankings of Princeton Whites

Copyright (c) 1995-2012 Richard E. Quandt, V. 1.65

 

Consider next the judges’ rankings of the red wines. Here again, we see tremendous differences. There were 6 wines ranked best by one or more judges and worst by one or more judges! Perhaps the most striking case here was the Mouton Rothschild. Four judges ranked it the best or tied for the best. But one judge ranked it tied for worst.

 

Table 2. – Judges’ Rankings of Princeton Reds

 Copyright (c) 1995-2012 Richard E. Quandt, V. 1.65
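 
One common way to put a number on how much a panel of judges agrees is Kendall’s coefficient of concordance (W), which runs from 0 (no agreement) to 1 (perfect agreement). The Python sketch below is purely illustrative: the rank matrix is hypothetical, not the actual Princeton rankings shown in Tables 1 and 2.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m judges x n wines)
    matrix of ranks (no ties). W = 1 means perfect agreement among the
    judges; W near 0 means essentially no agreement."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                    # total rank each wine received
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of the rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings (4 judges x 5 wines, 1 = best) -- NOT the Princeton data.
hypothetical = [
    [1, 3, 2, 5, 4],
    [5, 1, 2, 3, 4],
    [2, 4, 1, 5, 3],
    [4, 2, 5, 1, 3],
]
print(f"Kendall's W = {kendalls_w(hypothetical):.2f}")  # about 0.10: very little agreement
```

When one judge’s best is another judge’s worst, as happened repeatedly in Princeton, W comes out low: the rank sums all land near their average, so there is little evidence of a shared ordering.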

 

Interpretation – The Real Significance of the Princeton Tastings

 

Let’s start with the statistical findings. Richard Quandt and Orley Ashenfelter have been analyzing wine tastings for more than 30 years. Quandt did the statistical analysis for the Princeton judgment. He concluded:

 

“…the rank order of the wines was mostly insignificant. That is, if the wine judges repeated the tasting, the results would most likely be different. From a statistical viewpoint, most wines were indistinguishable. Only the best white and the lowest ranked red were significantly different from the other wines.”

 

So Quandt is saying that given the tremendous differences in the judges’ rankings, the only thing you could be pretty sure of was that the Clos des Mouches Drouhin was better than the other whites and the Four JG’s Cab Franc was worse than the other reds. So what could account for the large differences in the judges’ ratings? Was it just that the wines all tasted the same? I doubt it. Read on.
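 
Quandt’s published analysis uses his own procedures, so the following is not his method; it is just a rough sketch of the underlying idea. A simple permutation test shuffles each judge’s ranking many times and asks how often a wine’s total rank would look as good as the observed one purely by chance. The data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_sum_pvalues(ranks, n_perm=10_000):
    """For each wine, estimate how often randomly shuffled rankings by the
    same judges would produce a rank sum at least as good (as low) as the
    one observed. A small p-value suggests the wine genuinely stands out;
    a large one means it is statistically indistinguishable from the pack.
    (To flag a consistently worst wine, flip the comparison to >=.)"""
    ranks = np.asarray(ranks, dtype=float)
    observed = ranks.sum(axis=0)                      # total rank per wine
    hits = np.zeros(ranks.shape[1])
    for _ in range(n_perm):
        shuffled = np.array([rng.permutation(row) for row in ranks])
        hits += shuffled.sum(axis=0) <= observed
    return hits / n_perm

# Hypothetical rankings (4 judges x 5 wines, 1 = best) -- NOT the Princeton data.
hypothetical = [
    [1, 3, 2, 5, 4],
    [1, 2, 3, 5, 4],
    [2, 1, 4, 5, 3],
    [1, 4, 2, 5, 3],
]
print(rank_sum_pvalues(hypothetical))
```

In this toy example, only the consistently top-ranked first wine earns a small p-value; the wines in the middle do not separate from one another, which is the same flavor of result Quandt reports for Princeton.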

 

Judges and Judgments

 

The judges for the Princeton tastings were well qualified in the sense that they know about wine. Four were economics professors who write about wine, three were wine journalists, and one owned a restaurant. But what does “well qualified” really mean? Studies of consistency in wine judging do not instill much confidence. Neal Hulkower is a mathematician, a wine lover, and an expert on how to award medals at wine tastings. At the conference, he told me that a good approach to selecting judges is to have each candidate taste six glasses of wine, with 3 of the 6 glasses holding the same wine. A candidate should be rejected if s/he finds differences among the 3 glasses holding the same wine.
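 
As a rough illustration only, here is what that screening rule might look like in code. The six scores, the 20-point scale, and the tolerance parameter are all assumptions made up for the example; Hulkower’s rule, as he described it, is simply that a candidate who reports differences among the three identical glasses is rejected.

```python
def passes_screening(scores_by_glass, same_wine_glasses, tolerance=0):
    """Hypothetical version of the triplicate check described above: the
    candidate scores six glasses, three of which secretly hold the same
    wine. The candidate passes only if those three scores agree (within
    an assumed tolerance; 0 means they must be identical)."""
    same = [scores_by_glass[g] for g in same_wine_glasses]
    return max(same) - min(same) <= tolerance

# Made-up scores on a 20-point scale; glasses 1, 3, and 5 hold the same wine.
candidate = {0: 14, 1: 16, 2: 12, 3: 16, 4: 15, 5: 13}
print(passes_screening(candidate, same_wine_glasses=[1, 3, 5]))  # False: 16 vs 13
```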

 

I believe the Princeton judges were as qualified and as competent as any judges you could find. I also believe there were significant taste differences between the wines. What, then, was going on?

 

Idiosyncratic Differences in Taste

 

I have been drinking wines for 40 years, and in that time I have developed distinct wine preferences. Right now, I prefer heavy reds: Cabs, Malbecs, Shirazes, and Barolos. While I was growing up, I drank Mateus, Lancer’s Sparkling Rosé, and Chianti in the straw bottle. But for special occasions, my parents would buy White Burgundies. I still like them, mostly because of those happy memories from when I was younger. They are part of my personal, idiosyncratic wine preferences. Would it be surprising if the Princeton judges also had their own wine preferences? And is that not – their taste preferences – what they should be using in making their wine judgments?

 

Concluding Thoughts

So the Princeton tastings told us the judges had their own wine preferences. Is this so remarkable? Judge TC (see Table 2), for example, told us he did not like the Ch. Mouton Rothschild 2004 even though Judges JF, RH, DM, and TR thought it was the best red wine tasted.

 

Times have changed. In earlier days, there were some pretty bad wines along with excellent French wines. Today, it is hard to find a bad wine. That leaves most judgments to the taste preferences of individuals.

 

Postscript – Back to Paris in 1976

 

How much have times really changed? Ashenfelter and Quandt analyzed the results of that tasting. They concluded that only the first two wines were statistically better, while the others could not be differentiated. Table 3 provides the judges’ rankings for the 1976 Paris tasting.

 

Table 3. – Judges’ Rankings of Paris Reds

Source: Wikipedia

 

Nothing has changed. Of the 10 wines tasted, 5 were ranked best by one or more judges and worst by one or more judges! Sure, there were bad wines back then, but not at the Paris tasting. They were all good. And the judges’ rankings reflected their own idiosyncratic preferences.
