If you score wines... read on!!!

The following article makes an interesting point: that the opinions of CellarTracker users are just as valid as those of the recognised so-called "experts" many of us refer to.

Why amateur wine scores are every bit as good as professionals’

I should add that I am not sure how many folks in Europe refer to or use CT. Over here, posts on this site seem to be prolific, but I do wonder where many of these so-called "amateurs" live!!
 
One advantage the CT amateurs have is that their notes are often taken on semi-mature or mature wines, whilst critics generally write up new-release wines, because that is what they are sent or invited to taste... and because their write-ups on new releases (if credited by wine merchants) help publicise their own brand.

To his credit, Michael Broadbent filled an important role in showcasing the joy of mature wines. There isn't a replacement, and I fear we've seen the last of his vintage wine books. The last book published - a pocket-book update - was a decade ago now.
 
The scores on Vivino also generally seem pretty reliable - at least for wines with 100+ ratings; some of those with only a small number of ratings can be a bit wild. I've certainly been happy with the stuff I've bought untasted based on Vivino ratings before.
 
Notes and scores, no matter who wrote them, are merely snapshots and reflect how popular or unpopular wines are - not how good they are. For me, the notes of amateurs are more useful and credible because I assume that in general the amateurs are more independent than the professionals who want, or need, to make money from their knowledge and their opinions...
 
If people find CT scores and notes useful, then fine, but I am not sure about the argument in that article: that CT scores are as good as professional scores because they correlate.

As a quick glance at the graphs shows, the correlation is not that strong, and the claim raises all sorts of questions about what it means for a score to be good, or indeed meaningful at all. My guess is that most of those scores were for wines tasted non-blind, and that the broad trend in all the scores is mainly determined by how prestigious each wine is.
 
Isn't it fairly likely that critic scores are a major influencing factor on CT scores?

I get the impression, from reading the notes and reading their forum, that most CT users are based in the US - which would also go some way to explaining the closer correlation to Parker than to Jancis.
 
I've read lots of these articles. The point (no pun intended) that they miss is that there are still many people who like to bounce their palate off another critic. That's not to say they follow them religiously, but it gives an insight from a single source with (presumably) a great deal of context and experience.

Secondly, these articles assume that the existence of publications such as the WA, Burghound and Jancis is based solely on points. However, I would argue that nearly all of those publications changed radically a few years ago - for the better. They now offer a wealth of information that you cannot get from sites like CellarTracker etc. If I stripped out the notes and scores from a Burgundy report I've just filed, you'd still have a 70,000-word report on the vintage just from the background information, winemaker interviews etc.

Thirdly, most pro sites will include verticals that give greater context - comparing vintages vis-à-vis growing seasons vis-à-vis winemaking techniques etc. Personally, I always find that interesting to write and interesting to read.

Fourthly, such tastings may well have been conducted blind, which often raises a whole host of other facets that you might not have seen.

Five, Rudi suggests that amateurs are more independent. But I could flip that around and say that because in many instances I (lucky me) have not paid a vast sum of money for many of the wines I taste (although some I do), you could argue there is in fact less prejudice. Generally, subscription models have much more independence because we are lucky in being able to finance the cost of hotels, travel etc. instead of relying on PR trips. We are indebted to subscribers, who appreciate and often demand that.

Six, I agree with Steve that the prestige of a wine heavily influences a great many reviews - you can read it in the notes, the emotion attached to a wine. There's nothing wrong with that, but it skews judgement, if some kind of objective judgement is what you are looking for.

Seven, in reference to Ian's point: whilst nobody will approach Broadbent for a library of tastings of old wines, many publications now offer far more retrospectives (FWIW I introduced an "Up From the Cellar" column in the WA, have a 1986 horizontal in the next issue and about 30 verticals of Bordeaux châteaux back to 1893 all lined up).

Lastly, there is the issue of accountability. There is nothing to prevent me writing fictitious reviews and loading them onto any socially generated site. True, I could do that in my professional job, but I can guarantee I would quickly lose my reputation. People aren't stupid and they see through that. Like most writers, I am accountable to those who subscribe to or use my opinions, and that keeps you as objective as possible, even when you buck public opinion.
Anyway, I thought I would give a quick viewpoint from an "expert"... I wonder when I'll be allowed to take off those speech marks?
I think socially generated reviews have a place, but it is a different place to professional reviews and/or commentary.
Rgds
Neal
 
I think there's a fair bit of "herd following" on CT, especially for already highly rated wines. The post may be a better study of crowd behaviour than of wine quality.

I use CT a lot, often as a portal to RP or JR etc. Knowing the reviewer is important for me.

The user reviews are unreliable in my experience, but they can be a general pointer as to quality and drinkability.
 
I find both the professional and amateur reviews and tasting notes valuable. But I apply my belief that you cannot always believe what you read, so I always take notes as a general guide to whether or not I will like something and what it will taste like. After all, we are all individuals, and we can each only experience our own senses.
 
For me, this is a topic that is almost as hackneyed as the debate about the 100-point system itself, and I really don't have a strong opinion either way as to whether amateur or professional is 'best'.

But the current thinking seems to be that the 'wisdom of crowds' (CellarTracker, TripAdvisor, Amazon reviews) is the most reliable way to judge something, and that thinking is deeply flawed. It's not just the anonymity of those making the judgements, and the fact that they may or may not have experience, independence, or an agenda; it's that crowd behaviour itself is far from impartial. We've all experienced people 'tasting the label' (like my oft-quoted example of a bottle of terribly corked Yquem that a whole room full of wine club members were oohing and aahing over before the flaw was pointed out), but there is also no doubt that members of the crowd allow their judgement to be skewed by others: if wine X has six scores of 97/100, it takes an unusually confident person to then declare it as 77/100. That's why in big groups scores always - always - converge toward the safe middle ground. That person in my example will end up scoring the wine 87/100, just to play safe.

In short, whatever the merits or demerits of any critical exercise, the wisdom of crowds may be trending, but I think it is vastly overrated.
 
There's a sniff of the old "Wine snobs get it wrong again" doggerel which surfaces quarterly and prevents yer average punter from understanding wine and making better choices.

I've rarely got involved in the ratings argument, and there'll be people far more qualified than me to point out how ratings can work, but the whole (current) idea seems fundamentally flawed, in that the inherent quality of a wine (which I think is what raters try to rate) is much better judged by the inputs to the wine and the genuinely objective features of the finished wine than by a subjective impression of the end result.

Enjoyment of wine depends on so many subjective factors, none of which can be expressed in a rating - a simple example would be a brilliant 95% Châteauneuf-du-Pape given all the care and love possible, for which my rating is 60(%) as I just don't get on with Grenache.

But it IS a quality wine, based on its genuine inherent qualities. So suppose we awarded up to 10% for the level of green harvest, 5% for barrel quality, 10% for selection of quality clones, 10% for yield/ha, 10% for extract level in the finished wine, 15% for acid/tannin/alcohol/sugar balance etc. etc. You'd then see Yellowtail et al. at 65% and top CdPs at 95%, irrespective of personal taste. Appellation systems attempt this I suppose, but crudely.
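Purely as a sketch of the arithmetic being proposed here - the category names, weights and fractions below are the hypothetical figures from the paragraph above, not any established scheme - such a rubric is just a weighted sum:

```python
# Minimal sketch of the hypothetical "inherent quality" rubric described above.
# The categories and weights are illustrative only; "everything_else" stands in
# for the categories hidden behind the "etc. etc." so the weights sum to 100.
WEIGHTS = {
    "green_harvest": 10,
    "barrel_quality": 5,
    "clone_selection": 10,
    "yield_per_ha": 10,
    "extract_level": 10,
    "balance": 15,           # acid / tannin / alcohol / sugar
    "everything_else": 40,   # placeholder for the remaining categories
}

def inherent_quality(fractions: dict) -> float:
    """Weighted sum: each category contributes its weight times the 0-1
    fraction of that category's maximum the wine is judged to earn."""
    return sum(weight * fractions.get(name, 0.0) for name, weight in WEIGHTS.items())

# A top Chateauneuf-du-Pape earning ~95% of every category vs. a mass-market
# wine earning ~65%, regardless of whether the taster happens to like Grenache.
print(inherent_quality({name: 0.95 for name in WEIGHTS}))  # 95.0
print(inherent_quality({name: 0.65 for name in WEIGHTS}))  # 65.0
```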

Impression scores are of course useful, but only if you know the similarities and differences between your palate and the rater's, and, as Tom says, if their level of independence from the grower/retailer is known to be 95-100 points! Inherent quality scores would always be useful and would correlate more closely with price/value, as they'd point out why a wine had to retail at £20 and not £10. And hopefully shut the tabloids up (or better, down).
 
I thought it was a pretty well-researched article. My first observations were that, as Neal says, many amateurs on CT follow critics, especially when it comes to Californian reds and that most of those sources, CT included, have an American palate while Jancis has more of a European palate, which explains the correlations and non-correlations.

Following on from what Neal said, which I agree with for the most part, perhaps crowds do give a better idea of how a particular wine is drinking at a particular time. That's the strength of CT.

I also noted that they didn't try to correlate Vivino with critical scores. That would have been interesting because Vivino has a different membership to CT. It tends to be younger and the reviews are more about impulse buys than wines collected.
 
I log my tasting notes on CT more for my own use and reference (with no scores).
I then use CT as a quick sanity check when buying wines I may not know anything about, but generally most wines seem to cluster across a very narrow 89-93 point range on CT - I am more interested in users I know and trust and what their comments are.

What I do find useful with CT, which I don't get so much with some professional notes, is knowing when the note was written - I can see a lot of notes about wines in shops from the usual suspects, but often those tasting notes were written 10 years ago, on release, for example.

Either way - nothing beats drinking shed loads of booze and making up your own mind.
 
Without being able to taste everything yourself, I think CT and professional reviews can be useful, especially if you don't want to just stick with the same regions and producers. Expanding horizons and finding new producers before they gain mainstream recognition and thus become expensive is a worthwhile endeavour IMO.

I think it's unfair to say that most CTers have a herd mentality. There is usually a good range of points. Wines with enough reviews tend to have a pretty normal distribution of scores, which is what it should be.

I think saying CTers follow critics scores is a huge assumption, too!

A big plus for me, mentioned by previous posters, is the fact that there is a constant stream of reviews over time on CT, as opposed to professional reviews, which are largely based on en primeur barrel samples and newly released bottles, where there is an element of prediction in terms of how the wine will develop. CT reviews are based on how wines actually taste over a longer timeframe.

Of course everything has its pros and cons. There are some strange notes on CT for sure. There are also plenty of professional, trade and informed amateur reviewers on there (PJaines!). It's not difficult to spot them.

There's definitely a big 'us and them' mentality that finds it easy to look down on anything where the masses take part, but if one is careful, CT reviews are, for me, a useful data point that I'm just as likely to disagree with as HRH or any other individual regardless of their credentials. If they all agree, bingo! We might actually have something. How many people are going to give a good bottle of Petrus a low score?

Vivino is a step too far for me, though, as there are too many scores without any description and I can't be sure any of them are 'valid.'

But of course, as Paul said, it's much better to taste before you buy, making all the above arguments academic!
 
Whether they like it or not, professional writers have to live with the fact that the influence of social media, the 'wisdom of crowds', or whatever you call it, will increase, while the influence of professionals will dwindle - not only in the field of wine writing; the whole traditional media landscape is in a deep credibility crisis today. It lies in the nature of the subject that the professionals are not excited about this unexpected competition from social media, or about the unpleasant prospects. But speaking disparagingly of the new competition harms the credibility of the 'experts' themselves and will only accelerate this irreversible process and widen the gap between dedicated 'amateurs' and 'experts'.
 
I notice HRH's scores are now only available to wine-searcher pro users. And I think you can't read her articles on the FT site without being a subscriber. Her site still has free articles.

Incidentally, in a recent review of a 2011 1er cru Santenay she says, "Quite a treat to come across a red burgundy with this much age on it." To me, and I suspect to some on here, that suggests she doesn't understand the burgundy ageing curve. Please forgive me if I've misunderstood!
 
If one is quite specialised in one's focus (as I am these days with Champagne and the Mosel), one finds amateurs who are much more reliable than generalist professionals. Indeed, with the Mosel there aren't any professional writers I rely on, but there are about a dozen amateurs whose suggestions would trigger purchases.
 
Just stumbled across this comment by Geoff Kelly on his website:
My appraisal of this wine today raises the intriguing thought that there are basically two kinds of wine writers: those who in publishing a rating for a wine, first check their previous ranking, and make sure the numbers correlate – these predominate. In contrast there are the rigorous writers, who call it as it falls, admit to being imperfect, and publish wildly contrasting marks. Jancis Robinson and Julia Harding (at jancisrobinson.com) are the leading exponents of this approach. The whole debate recalls the ultimate wisdom of Harry Waugh, a much experienced wineman of the previous generation, who noted frequently in his reports, that how he rated a wine on the day depended totally on the other wines it was tasted with. What this boils down to is this: it is hard to conceive of 'perfect pitch' in wine evaluation, though many think of themselves that way. The corollary of that is, naturally enough, that much of the numbers game in wine writing is relative at best (depending on the calibre of the winewriter), and in the hands of the majority, can be simply a delusion. Unfortunately for the customer / wine buyer, the more mendacious merchants exploit that delusion to the nth degree. GK 08/16

It is a very interesting view in the context of this discussion, and I find myself very much agreeing with him on this occasion.
 
Points are meaningless, whether they come from professionals or amateurs, if you don't know the reviewer's palate, aren't they? Here's a wine with 89 points. Oh yeah? Compared to what?

Well, it could be other similar wines at a similar stage of development at around the same UK retail price? Feels a bit better than the 87 red he's just tasted, or than last year's vintage of this wine? Or does the reviewer knock off points if the weight/RS/flavour profile doesn't conform to a presupposed idea of what "White Burgundy" is? Or is it built up according to positive qualities - balance = tick, weight of fruit = tick? Is there some inclusion of value - if the winemaker has somehow pulled out all the (major) stops for a UK retail price of £4.99, is that reflected in the rating? That's what most people are interested in, not the very tip of the tip-top-toppermost.

Anyway, everyone on here knows this! And I think I'd respond differently to an 80-point review compared with a 90, despite there being no rational reason for doing so. TNs are of course better, but I was looking at some New World chardonnay reviews earlier and was struck by praise for "restraint" or "Burgundy", which is absolutely not what I'm looking for in New World Chardonnay; I assume this makes its way into the rating, and that I should perhaps be looking to buy wines with scores of 75, ignoring anything with 90-95 points of carefully curated "leanness"!

As many note above, the only opinion which counts is your own, once you understand enough about the particular style: what it can be, and some of the winemaking difficulties and decisions (until then you should be sucking all the info in you can, good & bad!). After that, others whose palates are similar to yours or for whom you can read the differences may speak to you, IF they're not barking for a firm.

As for "Aldi! 89 points!" - gibberish.
 
It’s one of thousands and thousands of points, each point more climactic than the last! Constant, dizzying, twenty-four hour, yearlong, endless points! Every point massively mattering to someone, presumably. Watch the points, all the time, forever, it will never stop, the points are officially going on forever! It will never be finally decided who has won the highest point! There are points for every wine… that’s the point! Watch it! Watch the points! Watch it! Watch it!
 
Thanks for pointing that out ;)
 