cc: Juerg Luterbacher, Keith Briffa, Phil Jones, "Michael E. Mann", Scott Rutherford
date: Thu, 17 May 2001 15:12:10 -0400
from: Ed Cook
subject: Re: Comments on "Extending NAO Reconstructions ..."
to: "Michael E. Mann"

Hi Mike,

Points well taken. I have heard some people (critics) look at the error bars on proxy reconstructions and disparage their quality in what I feel is an unfair, and indeed ignorant, way simply because the uncertainty estimates are large. So I am interested in refining how the errors are calculated so that frequency-dependent aspects are taken into account. I am convinced that, while the current way of estimating errors based on classical regression theory is indeed a useful start (I certainly agree with you here), it is almost certainly not correct, for the reasons that I described in my previous email. Of course, I may be totally "out to lunch" here. Maybe the "correct" errors will be larger as well!

My main point is that we are not working with random samples when developing reconstructions from proxies. Rather, we are working with selected series of observations (the proxies) that have defined histories (their changes through time), which cannot be changed much through random sampling (there is only one realization per proxy record, sampled from a population that is also affected by the same history of environmental change). This being the case, how do we interpret the regression errors when the individual estimates in the reconstruction are based on such time series? I don't know. Maybe I am more concerned about the interpretation of the errors than their actual magnitudes, which is also critically important in comparing current and past changes. Again, maybe I'm nuts here. I can think of a number of more important things to do than worry about error bars, so I am happy to move on if you can convince me that this is all a waste of time.
Cheers,

Ed

>Hi Ed,
>
>On the road, but just had to chime into this debate briefly.
>
>What you say is of course true, but we have to start somewhere. Step #1 is producing a reconstruction. Step #2 is producing some reasonable estimate of uncertainty; without one, a reconstruction isn't very useful in my opinion. In my mind, this is based on looking at the calibration residuals, seeing if they pass some basic tests for whiteness, normality, etc., looking at the verification statistics, and seeing if this continues to hold up in an independent sample. It is important to use the longest instrumental records we have for independent verification where possible. Of course, there may be additional biases in the predictors that are difficult to identify even in a relatively long verification interval (e.g., ultra-low-frequency problems w/ fidelity). Step #3 is trying to evaluate this as best we can (looking at the frequency-domain structure of the predictors themselves, seeing if there is loss of variance at very long timescales, looking at the robustness of long-term trends to standardization issues, etc.). I see this as a successive series of diagnostics and self-consistency checks that iterate towards getting a reasonable handle on the uncertainties. This is the approach that we have taken, and I think it is the most appropriate...
>
>I firmly believe that a reconstruction w/out some reasonable estimate of uncertainty is almost useless! If the community wants to use paleodata for signal detection, model validation, etc., I believe that this is absolutely essential to do, whether or not we can do a perfect job.
>
>I would be very surprised if Hans would disagree w/ my statement above!
>
>anyways, my two cents on the matter...
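The whiteness and normality checks Mike describes in Step #2 can be made concrete. Below is a minimal sketch of my own (not code from any published reconstruction; the function names and synthetic data are illustrative): a lag-1 autocorrelation and a Ljung-Box Q statistic applied to two fake "calibration residual" series, one white and one persistent.

```python
# Illustrative residual diagnostics: whiteness checks of the kind Mike
# describes for calibration residuals. All names/data are my own.
import numpy as np

def lag1_autocorr(r):
    """Lag-1 autocorrelation of a residual series (0 for white noise)."""
    r = r - r.mean()
    return (r[:-1] @ r[1:]) / (r @ r)

def ljung_box_q(r, max_lag=10):
    """Ljung-Box Q statistic; large values flag non-white residuals."""
    n = len(r)
    r = r - r.mean()
    denom = r @ r
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = (r[:-k] @ r[k:]) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(1)
white = rng.normal(size=300)                            # iid "residuals"
red = np.convolve(white, np.ones(5) / 5, mode="same")   # smoothed -> persistent

print("white:", round(lag1_autocorr(white), 2), round(ljung_box_q(white), 1))
print("red:  ", round(lag1_autocorr(red), 2), round(ljung_box_q(red), 1))
```

For the white series Q stays near its chi-square expectation; for the persistent series it blows up, which is the signal that iid-based error bars are suspect.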
>
>mike
>
>At 09:50 AM 5/17/01 -0400, Ed Cook wrote:
>>Hi Juerg,
>>
>>I've done an admittedly quick read of your paper "Extending NAO Reconstructions Back to AD 1500" and find it to be fine overall. One slight correction on pg. 3 concerning the Cook et al. (1998) recon: the tree-ring records used also included some from England, as well as the eastern US and northern Fennoscandia. On pg. 10, sentences 8-9 in the Conclusions, the wording is a little confusing. You say "Including station pressure of Gibraltar and Reykjavik as predictors in 1821 lead to a decrease of the confidence estimates". This almost sounds like you are doing worse when adding in Gibraltar and Reykjavik, when I know you mean the opposite. So, a change in wording to something like "... lead to increased confidence in the estimates of monthly NAO". Also, in Table 1, is the Cullen R4 NAO reconstruction the one with instrumental data in it? I don't recall whether R4 is the one with instrumental data, but if it is, it has used some of the same data as yours, and you ought to mention that.
>>
>>On a thematic note that doesn't have much direct bearing on the paper as it stands now (but which may be of interest to Keith, Phil, and Mike as well), I have growing doubts about the validity and use of the error estimates being applied to reconstructions, such as those you have applied in Fig. 3. First, as you say at the end of the paper, there is a clear frequency dependence in the strength of the relationship between the actual and proxy-estimated data that is not being considered, i.e. "the SE ... become smaller when considering low-pass filtered time series" (pg. 10). The assumption behind the error estimates as now calculated and applied is that the error variance is truly white noise, i.e. equally distributed across all frequencies. That is surely not the case.
>>This is different from questions about autocorrelated residuals, which tell you nothing about the frequency dependence of the quality of the estimates. This is where classical regression theory falls down: it is based on the notion that each observation is a random sample with no time history or frequency-domain structure. When we use long time series of observations (climate or proxy) to reconstruct some climate variable, we are using predictors that have time-series structure and history that cannot vary in a completely random fashion even if the data could be completely resampled, because they represent a series of prior "observations" of climatic/environmental conditions. This lack of randomness in the observations used for reconstructing past climate again causes me to doubt the validity of the error estimates being applied. The degree to which the reconstruction can actually vary from year to year within the prescribed error limits is itself constrained by the time history of the observations used for reconstruction. Yet the 2SE limits in your Fig. 3 prior to 1821 contain almost all of the estimates. This result could be used to claim that there is effectively no useful time history of variation in the NAO reconstruction prior to 1821, because each estimate may fall with equal probability anywhere in the error envelope. I would regard this interpretation as completely wrong. Thus, I would say that the decadal period of above-average winter NAO in your reconstruction around AD 1700 is real, assuming that the predictors used are providing unbiased estimates, even though it is fully enclosed by the 2SE limits that intersect zero. This is getting towards the debate with Von Storch over "most probable" estimates. I am probably not explaining myself well here and undoubtedly need to think more about it.
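Ed's frequency-dependence point, that "the SE ... become smaller when considering low-pass filtered time series," is easy to demonstrate on synthetic data. A sketch of mine (not from the paper; all numbers invented): when reconstruction error is mostly high-frequency, the verification RMSE shrinks once both series are low-pass filtered, so a single white-noise error bar overstates the uncertainty of the smoothed history.

```python
# Synthetic demonstration that verification error can be strongly
# frequency-dependent. All data here are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 500
t = np.arange(n)

# "True" index: a slow oscillation; "reconstruction": truth + white noise.
truth = np.sin(2 * np.pi * t / 100)
recon = truth + rng.normal(scale=0.7, size=n)

def lowpass(x, width=21):
    """Crude running-mean low-pass filter."""
    return np.convolve(x, np.ones(width) / width, mode="valid")

rmse_raw = np.sqrt(np.mean((recon - truth) ** 2))
rmse_smooth = np.sqrt(np.mean((lowpass(recon) - lowpass(truth)) ** 2))

print(f"RMSE raw: {rmse_raw:.2f}, RMSE after 21-pt smoothing: {rmse_smooth:.2f}")
```

Because the added noise is white, smoothing over 21 points cuts its RMSE by roughly a factor of sqrt(21), while the slow signal passes through largely intact.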
>>But I really think that error bars, as often presented, may distort and unfairly degrade the interpreted quality of reconstructions. So, are the error bars better than nothing? I'm not so sure.
>>
>>Cheers,
>>
>>Ed
>>
>> >Hello Ed
>> >
>> >thanks very much for your nice mail. I hope these little comments were useful for you and yes of course we hope too that we can merge the data base sometime later on. This would be great.
>> >
>> >Do you think that you could send me some comments on our paper by tomorrow? Is your paper for the Orense book?
>> >
>> >Many greetings and till later
>> >
>> >Juerg
>>
>>==================================
>>Dr. Edward R. Cook
>>Doherty Senior Scholar
>>Tree-Ring Laboratory
>>Lamont-Doherty Earth Observatory
>>Palisades, New York 10964 USA
>>Email: drdendro@ldeo.columbia.edu
>>Phone: 845-365-8618
>>Fax: 845-365-8152
>>==================================
>
>_______________________________________________________________________
> Professor Michael E. Mann
> Department of Environmental Sciences, Clark Hall
> University of Virginia
> Charlottesville, VA 22903
>_______________________________________________________________________
>e-mail: mann@virginia.edu  Phone: (804) 924-7770  FAX: (804) 982-2137
> http://www.evsc.virginia.edu/faculty/people/mann.shtml
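Ed's deeper worry, that proxies are not random samples but series with a time history, so iid-based regression standard errors may not mean what they appear to, can also be probed numerically. A sketch of mine (not from the thread; all data synthetic): compare the textbook OLS standard error of a regression slope with a moving-block bootstrap, which resamples contiguous blocks so that serial structure is partly preserved.

```python
# Classical (iid-theory) slope SE vs. a moving-block bootstrap SE on
# synthetic autocorrelated data. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Persistent "proxy" predictor and a target with strongly AR(1) noise.
x = np.cumsum(rng.normal(size=n)) * 0.1
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.8 * eps[i - 1] + rng.normal(scale=0.5)
y = 1.5 * x + eps

# Classical OLS slope and its textbook standard error (assumes iid noise).
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
cov = s2 * np.linalg.inv(X.T @ X)
se_classical = float(np.sqrt(cov[1, 1]))

# Moving-block bootstrap: glue random contiguous blocks together, refit.
block = 20
slopes = []
for _ in range(500):
    starts = rng.integers(0, n - block + 1, size=n // block)
    idx = np.concatenate([np.arange(s, s + block) for s in starts])
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    slopes.append(b[1])
se_block = float(np.std(slopes))

print(f"classical SE: {se_classical:.3f}, block-bootstrap SE: {se_block:.3f}")
```

With strongly persistent residuals the block-bootstrap spread typically exceeds the iid-theory value, which is one concrete way the doubt about classical regression errors on time series could be quantified.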