The UK’s Wine magazine is responsible for organising one of Europe’s biggest wine competitions, the International Wine Challenge (IWC). Wineries, shippers, distributors and retailers submit around 7,000 wines to the scrutiny of a panel of judges. The tasting is done in categories (“Spicy Reds”, “Aromatic Whites”, “Sparkling Wines”, etc) and within each category wines may be awarded gold, silver or bronze medals. In addition, a number of wines are awarded the “Seal of Approval” – just missing out on medal quality. Over half of all wines submitted receive no award whatsoever. All tastings are double blind, and no pricing information is known to the judges. The results of the competition are published around October each year, and are hugely influential on the British and other European markets. The pool of judges for the Challenge is drawn from the wine trade and wine journalism. It includes many distinguished Masters of Wine and respected authorities on particular subjects. Membership of the judging panel is by invitation only.
Training and Assessment Day
For the past couple of years the IWC has organised “Training and Assessment Days” (TADs). The TADs are a chance for wine professionals and keen amateurs to test their own tasting prowess in a mock run of the Challenge, and for the organisers to recruit some new, suitably qualified judges for this massive event. Basically, the contestants match their assessment of various wines against that of some top IWC judges. If they do well enough – judging consistently and writing good notes – they may be invited to judge at the IWC proper. This year I decided to attend a TAD to get a glimpse of how this massive event is organised and to see how my own wine tasting and assessment skills bore up under competition conditions.
One sunny morning in March around 100 of us gathered at 9.00 am sharp. After registration we moved through to the tasting hall. Long tables were set with 10 tasting glasses at each place, spittoons, lots of bottled water and stacks of tasting note sheets. A small army of volunteers stood by, ready to begin pouring the 8,000 or so measures that would be swirled, sniffed and tasted over the course of the morning. The day was led by Robert Joseph and Charles Metcalfe, two well-known British wine writers, who introduced the event, giving us the rules of engagement as well as some hints and tips based on past experience: beware of overlooking wines just because they are subtle, beware of palate fatigue, don’t “play safe” and mark everything in the middle ground, don’t swallow!
Apart from Messrs. Joseph and Metcalfe, the other members of the panel of judges were trade professionals including 3 Masters of Wine. They had tasted all the wines earlier in the week, had formed their judgement and had awarded scores to each. Sneakily (but in the interests of education) they had included an unspecified number of wines that were flawed, or had been chemically altered to exhibit characteristic faults – excess sulphur, oxidation, volatility, etc. These rogues had to be spotted and identified as part of our task.
The serious business then commenced: tasting 73 wines in 10 flights. This was carried out under strict IWC conditions: double-blind, with around 2½ minutes to taste, assess, write notes and award scores to each wine.
Here is the top portion of the tasting sheets used in the Challenge:
As you can see there is a section for each wine tasted with space to write comments, to mark any faults, to estimate maturity and to award a score.
The forms are carbonised, so contestants can keep a copy for their own reference. At the end of the day all the forms were taken away for scrutiny so that performances could be assessed. This is done by charting each person’s scores against those of the panel.
I thoroughly enjoyed the day and found it tremendously educational. I also found it very hard work however: tasting and forming an opinion of over 70 wines in 10 diverse categories in the space of 3½ hours is tough enough, let alone having to write up intelligible notes! It definitely took me a while just to find the pace. In the first 2 or 3 flights I resorted to a blind stab at the last couple of wines, with only seconds remaining on the clock. As the day proceeded, I relaxed and got into the rhythm. My judging also became more consistent.
After lunch we moved on to the second part of the event. The panel of IWC judges took us back through each wine. The identity was revealed and the comments and scores of the panel read out. A full explanation was given of why wines were judged and scored as they were. It is here that you can compare your own opinions and scores with those of the panel, congratulating yourself if you got it “right” and working out where you went “wrong” if your assessment was wildly different.
Read my tasting notes for all 73 wines in the Training and Assessment Day then use your browser’s Back button to return here.
International Wine Challenge, 1998
I was delighted to be successful in all 10 categories of the competition and to be invited to judge at the IWC proper, held in London in May 1998. The format of the “real” Challenge is very similar to the TAD, except here the judging is done in teams. I did feel an added responsibility knowing that thousands of Wine readers might base their purchases on my opinion, and that some wines may or may not get to display the coveted IWC Medal symbol, depending on how we assessed them.
The whole tasting event was spread over 2 weeks to get through all 7,000 wines. Each morning, at around 9.30 am, the pool of judges assembled to be sorted into teams of 5 by the organisers. Specific tasting skills were matched with the flights to be tasted wherever possible. One particularly experienced judge from each team was appointed Chairperson. Each day there were up to 12 teams in operation simultaneously, each working their way through around 100 different wines. My team included the boss of a well-known UK wine and spirit distributor, a wine buyer for the top UK supermarket chain, an English wine-maker and a professional wine tour-guide. Wines were served in flights of between 8 and 16 bottles. Identities and pricing information were concealed: we knew only a coded reference number and a broad category. Included amongst those I tasted were unoaked whites, Germanics, Rhone style, Bordeaux style, light cabernets and spicy reds. Again this was a hugely interesting and educational experience – condensing so much concentrated tasting and serious discussion of wine into such a short space of time. Likewise, it was again pretty exhausting, both mentally and physically.
The format followed by our team was to taste, assess and mark each flight in silence and without collaboration. At the end of the flight, we would huddle together to read out our comments and scores in turn. Where we had broad agreement on the quality and score for a wine, the Chairperson simply recorded the scores on the master form and arrived at an average “team” score. If there was serious disagreement we re-tasted the wine in question, this time discussing it as we did so. Almost always this resulted in a consensus – one of us realising that we had misjudged some aspect. In the rare event that we could not agree, one of the day’s “super-jurors” was called in. The super-juror tasted the wine and pronounced a final verdict.
It will be fascinating to see the final results of the Challenge, once all 35,000 plus tasting notes are compiled and scores recorded. Having been a small cog in the process I will certainly view the published results in a different light. I’ll also be scouring the pages to see if I can recognise any of my tasting notes – until the results are published we judges have no idea what we were tasting.
What did I learn from the experience?
Of course, judging wines for quality – maybe even for unacceptable levels of “faultiness” – is an entirely subjective activity. I teach a wine appreciation course, and on this I constantly stress to the students that all opinions are valid and that nobody should feel their opinion is worth less than that of the so-called “experts”.
The wisdom and fairness of judging so many wines in such a brief time is a contentious issue that is frequently debated in wine circles. Assessing 100 wines in one sitting is not uncommon in major shows, competitions and trade tastings. On a practical level, the most obvious limitation of such an approach is the lack of opportunity to monitor how a wine evolves and develops in the glass. I have drunk many wines that offer little when first poured, yet blossom over the course of an hour. On the other hand, some wines can be stunning when poured, but collapse very quickly with exposure to oxygen. The competition format affords little opportunity for double checking and comparison – the relentless pace sees to that.
On a more philosophical level, it is an extremely harsh and unforgiving environment in which to judge such complex and personal creations as bottles of wine. Even with the best will in the world, wines of charm and subtlety must surely risk annihilation by the “blockbusters”. Not only do judges suffer the mental and physical fatigue of olfactory overload, but it is a constant effort to avoid adopting too competitive a mindset: an obsession with finding fault and nit-picking, for example. Judging the quality of a wine is not a mathematics exam: you cannot simply award objective marks. Nature and nurture have worked long and hard to create these bottles – even wines made on an industrial scale are small miracles of nature. Part of me wonders whether they deserve a little more than a couple of minutes of sniffing and slurping in the cold light of a massive competition.
On the other hand, my final conclusion, having taken part in a few mega-tasting events, is that the merits of the system do at least balance the flaws. Certainly, there are many positive aspects to the International Wine Challenge:
- It is organised with scrupulous fairness and the judging is taken very seriously
- It is inclusive but expensive – anyone can submit a wine, but at a price
- Team tasting ensures that most scoring quirks are ironed out
- All recommended wines go before a “super-judge” who has the power to re-assess
It is the great middle-ground of wines that often needs most careful assessment. Many wines take care of themselves: poor wines are obviously poor, exceptional wines will always sing out (this is true even if they are tasted at the “wrong” stage of their development – as long as the tasters have enough experience). One great advantage of a competition like the IWC is that the tasting faculties of the judges become quite finely tuned as the palate is exposed to so many wines, and the pressure to concentrate is on. Quick, accurate, consistent assessment is possible and judging the relative merits of one wine against another is not too difficult. Judging absolute merit is a tougher task: “is this wine better than the last?” is a lot easier to answer than “is this a gold medal wine?”. The middle-ground is where most wine consumers will be making their purchases, and where most will look for advice. I believe it is also where there is most potential for error.
The experience confirmed a belief I’ve held firmly for some time: that first impressions are invariably the most valuable. I think this is especially true for those of us who are fairly experienced tasters. That concentrated first sniff and taste of a wine usually homes in directly on its essential qualities – subsequent sniffs and sips can, in fact, cloud the otherwise spot-on picture of your initial opinion. It is quite easy to convince yourself that characteristics and nuances are (or are not) present based on an intellectual process rather than a gut reaction, and that intellectual process can occasionally defy the evidence of your own taste-buds.
Read all 73 Tasting Notes from the Training and Assessment Day.