cc: k.briffa@uea
date: Wed Apr 19 15:31:17 2000
from: Tim Osborn
subject: 2 SE confidence limits
to: ewatson@julian.uwo.ca

>>Date: Thu, 13 Apr 2000 10:49:41 -0400 (EDT)
>>To: k.briffa@uea.ac.uk
>>From: Emma Watson
>>Subject: 2 SE confidence limits
>>
>>Dear Dr. Briffa,
>>
>>I brought home one of the single-page copies of your poster from the Mendoza
>>conference and noticed that your 2 SE confidence limits (Figure 7) appear to
>>increase back in time. In a recent paper you reviewed and accepted for The
>>Holocene (Watson and Luckman: Dendroclimatic Reconstruction of Precipitation
>>for Sites in the Southern Canadian Rockies), we also had the error bars
>>increase back in time (Figure 6 of that paper). The increase is not really
>>noticeable because we truncated the reconstruction when chronology SSS fell
>>below 0.85. I calculated the SE using the formula SEy = Sy * SQRT(1 - r2yx),
>>where Sy is the standard deviation of y and 1 - r2yx is the unexplained
>>proportion of variance in y. I increased the error back in time by altering
>>the r2 value based on the reduction in explained variance (decreasing sample
>>depth) estimated using the SSS.
>>
>>My query is twofold: is this the way you did it (is this presented in a
>>paper I have missed?), and if not, does this method seem reasonable?
>>
>>Thanks in advance for any comments you may have.
>>Best Regards,
>>Emma

Dear Emma,

Keith has asked me to reply to this. I think your approach is reasonable, although a little different to ours. Our approach combines the uncertainty due to the residual temperature variance not captured by the calibration with the uncertainty due to the standard errors of the regression coefficients (the intercept and slope, for a simple linear regression). Your formula matches the first of these, but it omits the second source of error.
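For concreteness, your SSS-based inflation could be sketched as follows. This is only my illustrative reading of your description, not code from either paper: the function name and the assumption that explained variance scales down in proportion to SSS are mine, and all numbers are hypothetical.

```python
import math

def se_reconstruction(s_y, r2, sss):
    """SE_y = S_y * sqrt(1 - r2_adj), where the calibration r2 is
    reduced as subsample signal strength (SSS) declines back in time.
    The proportional scaling r2 * SSS is an illustrative assumption."""
    r2_adj = r2 * sss
    return s_y * math.sqrt(1.0 - r2_adj)

s_y = 25.0   # std dev of observed precipitation (hypothetical units)
r2 = 0.45    # calibration r^2 (hypothetical)
# Errors widen as SSS falls toward the 0.85 truncation threshold:
for sss in (1.0, 0.95, 0.85):
    print(f"SSS={sss:.2f}: SE = {se_reconstruction(s_y, r2, sss):.2f}")
```

The point of the sketch is simply that lowering the effective explained variance widens the error bars monotonically back in time.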
This isn't a major failing, since for simple linear regression and a reasonable calibration period, the standard errors of the regression coefficients are usually quite small. They can become larger when using multiple regression, particularly for the more minor predictors.

Our errors are also timescale-dependent, because the variance of the residuals is lower if, say, a 10-yr running mean is taken. The size of the reduction depends upon whether the residuals are autocorrelated (some authors assume they are not, but we explicitly compute and use the lag-1 autocorrelation of the residuals when calculating error ranges). This is all explained in the appendix of a paper that we are about to submit to The Holocene. The appendix is attached to this e-mail as an MS Word file; I hope you can read it, including the embedded equations.

The main part of your question is, of course, how we get the errors to be time-dependent (as opposed to timescale-dependent). The answer is that we recompute our regional-mean tree-ring time series using only a subset of the tree-ring chronologies, selected by the requirement that they have data back to a specified year (e.g., 1700, 1600, 1500, 1400). For each subset, we redo the calibration to obtain new standard errors of the regression coefficients, and the variance and autocorrelation of the residuals. Then we use these new sets of statistical parameters to compute the uncertainty ranges for the years 1700, 1600, 1500, etc. (interpolating the parameters for intervening years to obtain intervening uncertainties). This is explained in the main body of the paper mentioned above, which we are not quite ready to release. The paper reference will be:

Briffa KR, Osborn TJ, Schweingruber FH, Jones PD, Shiyatov SG & Vaganov EA (2000) Tree-ring width and density data around the Northern Hemisphere: part 1, local and regional climate signals. To be submitted to The Holocene.

Let me know if you have further questions/comments.
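To make the two combined sources of uncertainty and the timescale dependence concrete, here is a minimal sketch with synthetic data. It assumes simple linear regression and AR(1) residuals; it is not the code we actually use, just the standard textbook forms of these quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)                        # proxy series (synthetic)
y = 0.8 * x + rng.normal(scale=0.6, size=n)   # "temperature" (synthetic)

# Simple linear regression calibration.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s2 = resid.var(ddof=2)                        # residual variance

def se_prediction(x0):
    """Prediction SE combining residual variance with the uncertainty
    of the regression coefficients (the second source of error)."""
    sxx = ((x - x.mean()) ** 2).sum()
    return np.sqrt(s2 * (1.0 + 1.0 / n + (x0 - x.mean()) ** 2 / sxx))

# Timescale dependence: variance of an m-yr mean of AR(1) residuals
# with lag-1 autocorrelation rho (rho = 0 recovers s2/m).
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
def var_smoothed(m, rho=rho):
    k = np.arange(1, m)
    return (s2 / m) * (1.0 + 2.0 * ((1.0 - k / m) * rho ** k).sum())

print(f"annual prediction SE at mean proxy: {se_prediction(x.mean()):.3f}")
print(f"10-yr-mean residual SE: {np.sqrt(var_smoothed(10)):.3f}")
```

Note that the prediction SE is always slightly larger than the bare residual SE (the coefficient-uncertainty terms add to it), while smoothing shrinks the residual variance, less so when the residuals are positively autocorrelated.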
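The time-dependent step could be sketched like this. The threshold years match the example above, but the SE values are hypothetical, and plain linear interpolation (`np.interp`) stands in for however the parameters are interpolated in practice.

```python
import numpy as np

# One SE per subset calibration (hypothetical values): errors grow
# further back in time as fewer chronologies remain in the subset.
threshold_years = np.array([1400, 1500, 1600, 1700])
subset_se = np.array([0.52, 0.44, 0.38, 0.31])  # hypothetical

# Interpolate the parameter between threshold years to obtain
# intervening uncertainties.
years = np.arange(1400, 1701)
se_by_year = np.interp(years, threshold_years, subset_se)

print(f"SE in 1650: {se_by_year[years == 1650][0]:.3f}")
```

In the real procedure it is the full set of statistical parameters (coefficient SEs, residual variance and autocorrelation) that is recomputed per subset and interpolated, not the final SE alone.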
Best regards,
Tim