
Elliott Morss | October 23, 2014


Chiantis – The Box Wine Wins Again

© Elliott R. Morss, Ph.D.

Introduction

In the six previous blind tastings of the Lenox Wine Club, a box wine has come in either first or second. Chianti was the focus of our 7th blind tasting. And once again, a 3-liter box wine got the highest score.

Tasting Details

There are three basic goals for our tasting dinners:

  1. Taste wines similar enough to make comparisons meaningful;
  2. See if price matters, and
  3. Have a good time.

We taste five wines. In past tastings, we have ranked wines 1 to 5, with ties allowed. But rankings do not let a taster register differences in the intensity of likes and dislikes. Note, however, that scores have problems of their own.[1] So for this tasting we used scores, with tasters given the following instructions: 60–70 = Poor/Unacceptable, 71–79 = Fair/Mediocre, 80–89 = Good/Above Average, 90–100 = Excellent. Scores can always be converted into rankings, and it turned out that for this tasting the scores and the rankings gave the same result.

The Chiantis Tasted

1. Piccini Rosso 3L Box 2010

While not called a Chianti, this box wine is pretty close: Sangiovese is the primary grape, supplemented by Malvasia Nera and Ciliegiolo.

2. Melini Chianti DOCG Straw 2009

Melini is an old but “okay” wine producer. What does that mean? None of its wines has ever received a rating above 90 from Wine Spectator. However, it is one of the very few wineries that still wraps some of its Chianti bottles in straw. And what would a Chianti tasting be without one bottle in straw?

3. Folonari Chianti 2012

Folonari is part of Gruppo Italiano Vini. GIV is the number one wine grower-producer in Italy. It owns 15 brands, including Melini. Overall, Italian vineyards have been able to keep their prices quite high. But GIV offers Folonari wines at very reasonable prices. The last time Wine Spectator rated a Folonari Chianti (2010), it got a 90 rating.

4. Castello dei Rampolla Chianti Classico 2009

Okay, enough of the inexpensive stuff. This and the following wine are more expensive, with high Wine Spectator ratings. Castello dei Rampolla is one of the most highly regarded wineries in the Chianti region. It has been around since the 13th century, and the wine we are drinking got a 92 rating from Wine Spectator.

5. Carpineto Chianti Classico Riserva 2006

Founded in 1967, Carpineto is a relatively new winery by Italian standards. While the Castello wine (just above) is a Chianti Classico, this one is a Chianti Classico Riserva. This wine also received a 92 rating from Wine Spectator.

The Results

The results of our blind tasting are presented in Table 1. Both glasses of the Piccini beat the others. There was no correlation between scores and prices. The most expensive Chianti came in third from last; the next most expensive Chianti got the lowest score. For the explanation of why there are two glasses of Piccini, read on.

Table 1. – Chiantis – Blind Tasting Scores

Testing Tasters’ Competency

While there were 5 wines, we actually tasted one wine in two separate glasses. What is this all about? As I have described in an earlier posting, Robert Hodgson has his own winery and has been troubled by the erratic ratings his wines received from judges at tastings. So he came up with a way to rate potential judges. The key to his method? Have the candidates do blind tastings that include more than one glass of the same wine. If a candidate does not score glasses of the same wine nearly the same, he or she is not competent to judge wines. Hodgson’s suggested overall scheme is quite rigorous: candidates must do four blind tastings of ten glasses each. We used his methodology in a less rigorous way, with just two glasses poured from the same bottle. We cannot be as sure this will single out incompetent tasters, but the results are “indicative.”

If you want to be rated high enough to be a judge, there is a way to “beat” the Hodgson test. For example, consider GR, one of our tasters. The “Piccini Spread,” the right-hand column in Table 1, measures the difference in scores between the two glasses of Piccini, and GR did well – a zero spread. But look again at GR: he gave a score of 80 to four of the six glasses he tasted. The way you beat the Hodgson test? Score all wines very close together. Now, in defense of GR, I doubt he was trying to beat any system: my guess is that he found all four wines he rated 80 to be about the same – mediocre.
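Hodgson’s duplicate-glass check can be sketched in a few lines of Python. The taster names and scores below are hypothetical, not the actual Table 1 data; the only assumption carried over from the text is that each taster scores two glasses poured from the same bottle, and that a large spread between those two scores signals an unreliable taster.

```python
# Hodgson-style duplicate-glass check (hypothetical scores, not Table 1 data).
# Each taster scores two glasses poured from the same bottle; a large spread
# between the two scores suggests the taster cannot reproduce their own judgment.

scores = {
    "Taster A": (85, 84),   # (glass 1, glass 2) of the same wine
    "Taster B": (90, 72),
    "Taster C": (80, 80),
}

def piccini_spread(glass1, glass2):
    """Absolute difference between the two scores of the duplicated wine."""
    return abs(glass1 - glass2)

for taster, (g1, g2) in scores.items():
    spread = piccini_spread(g1, g2)
    # The <= 4 threshold here is arbitrary, chosen only for illustration;
    # Hodgson's full scheme uses four tastings of ten glasses each.
    verdict = "consistent" if spread <= 4 else "erratic"
    print(f"{taster}: spread = {spread} ({verdict})")
```

Note that a taster who scores every wine nearly the same, as GR did, will always show a small spread, which is exactly the loophole described above.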

Look at the “Tails”

But GR’s scoring reminds me of another point worth noting. At the Stellenbosch meetings referenced earlier, Robin Goldstein gave a paper titled “Do Negative Ratings Mean More Than Positive Ratings.” And it got me thinking. It often happens that wines come out on top not because of some outstanding characteristic, but because of a lot of middle scores/rankings. And in fact, most wines today are “okay/mediocre.” I am not interested in more of these. Instead, I am looking for wines that either are “really special” or “should be avoided.” To get data on these good and bad wines, we should look not at average scores/rankings but at the “tails” – the very good and very bad scores.

So in Table 2, we count the number of times each wine got an excellent (90+), a poor (<71), and a mediocre (71–89) score. The “tails” results are not impressive. All wines received mostly mediocre scores, with the exception of the second glass of Piccini. But that glass got fewer mediocre scores only because two tasters scored it “poor.”
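The “tails” count behind Table 2 is simple bucketing. The score list below is hypothetical; the thresholds are the ones defined in the tasting instructions above (90 and up is excellent, below 71 is poor, everything in between is mediocre).

```python
# Bucket a wine's scores into the "tails" (excellent / poor) and the
# mediocre middle. The scores below are hypothetical, not the tasting data.

def tails(scores):
    excellent = sum(1 for s in scores if s >= 90)
    poor = sum(1 for s in scores if s < 71)
    mediocre = len(scores) - excellent - poor   # 71-89 inclusive
    return excellent, mediocre, poor

print(tails([92, 85, 70, 88, 65, 90]))   # -> (2, 2, 2)
```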

 Table 2. – The “Tails” Test

Conclusions

What do we make of these findings, and how definitive are they? Findings are more definitive if all tasters agree. Perhaps the best statistic for measuring tasters’ agreement is Kendall’s tau: a higher number indicates greater uniformity among tasters. The tau for our tasting was only 0.079, suggesting very little agreement among tasters.

But there is another way to look at this: what is the probability that, purely by chance, the same type of wine would come in either first or second at all 7 tastings? With five wines per tasting, the chance of a given wine placing first or second in any one tasting is 2/5; across seven independent tastings, that is (2/5)^7 ≈ 0.16%. So if you think of box wine as a type of wine, the odds of it finishing first or second in seven consecutive tastings by chance are extremely low.
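The arithmetic behind that figure, under the assumption that each tasting is an independent draw in which every one of the five wines is equally likely to take any rank:

```python
# Probability that one particular wine, by pure chance, places first or second
# in each of seven independent five-wine tastings.
p_single = 2 / 5           # first or second out of five equally likely ranks
p_streak = p_single ** 7   # seven independent tastings
print(f"{p_streak:.4%}")   # -> 0.1638%
```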

These probabilities suggest that if you want to buy a good wine, get a box. It has scored better in blind tastings than wines costing $60+ with the highest Wine Spectator ratings. And that is on taste alone. If you add in price considerations, a 3-liter box selling for, say, $17 works out to $4.25 per standard 750 mL bottle.
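The per-bottle equivalence is straightforward: a 3-liter box holds four standard 750 mL bottles, so the box price divides by four. The $17 box price is the example figure from the text:

```python
# Price per standard 750 mL bottle implied by a 3-liter box wine.
box_price_usd = 17.0     # example box price from the text
box_ml = 3000
bottle_ml = 750
price_per_bottle = box_price_usd / (box_ml / bottle_ml)
print(f"${price_per_bottle:.2f} per 750 mL")   # -> $4.25 per 750 mL
```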

In summary, two findings from the seven Lenox Wine Club tastings are quite striking:

  • The consistently excellent performance of the box wines, and
  • The consistently poor performance of expensive wines.



[1] In judging wines, is it better just to rank them (1, 2, 3, and so on) or to score them? This matter was discussed at some length at the American Association of Wine Economists annual meeting in Stellenbosch last June. Neal Hulkower argued for rankings while Dom Cicchetti made the case for scoring. In essence, Hulkower argues that scoring introduces too many arbitrary taster decisions into the judgments, while Cicchetti argues that scoring allows tasters to register different intensities of likes and dislikes.
