cc: "Bamzai, Anjuli" , "Tim Barnett" , "Nathan" , "Phil Jones" , "David Karoly" , , "Tom Knutson" , "Toru Nozawa" , "Doug Nychka" , "Claudia Tebaldi" , "Ben Santer" , "Richard Smith" , "Daithi Stone" , "Stott, Peter" , "Michael Wehner" , "Xuebin Zhang" , "Francis Zwiers" , , "Amthor, Jeff" , "Chris Miller" date: Mon, 17 Sep 2007 00:58:30 +0100 from: "Myles Allen" subject: RE: near term climate change to: "Gabi Hegerl" , "JKenyon" Hi Gabi, I know this is what the modelers want to do, but I'm still not clear what relevance these experiments have either for attribution or for predicting trends in extremes. The ensembles are too small, the resolution is too low and there is no provision for systematically removing the impact of different forcings (apart from a nod towards GHGs, but it is unclear to me how uncertainty in current attributable warming is to be dealt with, particularly in the runs initialized from observations). I totally appreciate the community interest in the design you describe, and I don't blame you in the slightest. There is a lot of inertia here. But if we want to attribute current risks or predict trends in risk over the next couple of decades, then the best design still looks to me like large ensemble time-slice experiments with SSTs either prescribed or relaxed with a time-constant of only a few weeks. This way you get the signal-to-noise up, you benchmark against a good simulation of the present day and you can run high enough resolution and large enough ensembles actually to simulate the events people care about. What can they say about 100-year return-time events with 10-member ensembles? Wouldn't the best strategy just to say straight out that, while we support these runs being done, they aren't particularly interesting for attribution and certainly not for attribution of changes in extremes, nor for the very closely related problem of near-term prediction of trends in extremes. For that we need a different set of experiments. If people care about understanding and predicting changes in extremes, they can allocate time accordingly. My concern is that if we let people think of these runs (which will be very expensive) as "the attribution experiments", they will (a) expect us to generate results from them and (b) object to us asking for other experiments. Myles -----Original Message----- From: Gabi Hegerl [mailto:gabi.hegerl@ed.ac.uk] Sent: Friday, September 14, 2007 5:09 PM To: JKenyon Cc: Bamzai, Anjuli; Myles Allen; Tim Barnett; Nathan; Phil Jones; David Karoly; knutti@ucar.edu; Tom Knutson; Toru Nozawa; Doug Nychka; Claudia Tebaldi; Ben Santer; Richard Smith; Daithi Stone; Stott, Peter; Michael Wehner; Xuebin Zhang; Francis Zwiers; hvonstorch@web.de; Amthor, Jeff; Chris Miller Subject: near term climate change Hi all, I was at the WGCM meeting last week, and the issue of saving 20th century runs and high resolution runs was only discussed marginally among the big worry about scenarios and carbon cycle. However, there seems to be a lot of momentum to do initial value forced predictions. I think it would be very good to get for AR5 predictions based on various techniques including attributable ghg and initial values. So TIm Stockdale and I hammered out this proposal (with some suggetsed edits by me but those are still subject to Tims ok) - this may sound like its going down way to far the initial value trail for our interests, but it tries to serve all kinds of communities able to do some form of prediction. 
-----Original Message-----
From: Gabi Hegerl [mailto:gabi.hegerl@ed.ac.uk]
Sent: Friday, September 14, 2007 5:09 PM
To: JKenyon
Cc: Bamzai, Anjuli; Myles Allen; Tim Barnett; Nathan; Phil Jones; David Karoly; knutti@ucar.edu; Tom Knutson; Toru Nozawa; Doug Nychka; Claudia Tebaldi; Ben Santer; Richard Smith; Daithi Stone; Stott, Peter; Michael Wehner; Xuebin Zhang; Francis Zwiers; hvonstorch@web.de; Amthor, Jeff; Chris Miller
Subject: near term climate change

Hi all,

I was at the WGCM meeting last week, and the issue of saving 20th-century runs and high-resolution runs was only discussed marginally amid the big worry about scenarios and the carbon cycle. However, there seems to be a lot of momentum to do initial-value forced predictions. I think it would be very good to get predictions for AR5 based on various techniques, including attributable GHG and initial values. So Tim Stockdale and I hammered out this proposal (with some suggested edits by me, but those are still subject to Tim's OK). This may sound like it's going way too far down the initial-value trail for our interests, but it tries to serve all kinds of communities able to do some form of prediction.

Comments welcome. Peter has a colleague going to a meeting in the Netherlands next week where this issue will be discussed further, so having a view by, say, this weekend or Monday would be particularly good.

Gabi

--
Dr Gabriele Hegerl
School of GeoSciences
The University of Edinburgh
Grant Institute, The King's Buildings
West Mains Road
EDINBURGH EH9 3JW
Phone: +44 (0) 131 6519092, FAX: +44 (0) 131 668 3184
Email: Gabi.Hegerl@ed.ac.uk