Daily HUME: Daily Homogenization, Uncertainty Measures and Extremes (Homogenisierung täglicher Daten, Fehlermaße und Extreme)
Summary of the project results
The proposal aimed to develop a homogenization method for daily data that would work with global datasets. At the outset we found that our expectations for statistical homogenization had been too optimistic, because our experience came from homogenizing European data and from benchmarking statistical homogenization on correspondingly dense observational networks. This report therefore describes our work on understanding statistical homogenization when the signal-to-noise ratio is low, and on alternative ways to estimate how much the climate has changed.

Theoretical work on statistical homogenization has traditionally considered the detection of a single breakpoint, where the null hypothesis is identically (typically normally) distributed noise. With multiple breakpoints the situation is more complicated: the series consists of a noise signal and a break signal. Somewhat oversimplified, the problem is as follows. If the noise variance is larger than the break variance, statistical homogenization will place breakpoints largely based on the noise, but this segmentation will also explain part of the break signal. The explained variance is thus larger than expected for a pure noise signal (the null hypothesis). The algorithm will therefore rightly judge the breaks to be statistically significant (there are breaks in the series), but estimate the break positions wrongly.

Such errors in the detected breaks lead to errors in the correction of any trend biases. Estimating the correction constants is a regression problem, so any error in the predictors (the break positions) reduces the explained variance and results in only a partial correction of the trend bias; as in an errors-in-variables regression, imperfect predictors attenuate the estimated corrections. Weather and measurement noise merely make the trend estimates noisier, but missing and spurious breaks, as well as errors in the dates of breaks, lead to a systematic under-correction of trend errors. Interestingly, when the break positions are known perfectly, the large-scale trend uncertainty is determined by the noise variance, not by the break variance. Taken together, this means that if the signal-to-noise ratio is below one, we cannot correct large-scale trend biases sufficiently, or at all. We have not found a solution for this fundamental problem of statistical homogenization.
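The combined effect of detection and correction errors can be illustrated with a small simulation. The sketch below (in Python) is a minimal illustration under simplifying assumptions, not the algorithms used in the project (for those, see the Lindau and Venema papers listed below); the function names, series length and detection threshold are ad hoc choices for this illustration. It builds station-like series as a piecewise-constant break signal plus white noise, detects breaks with a crude binary segmentation, and corrects by aligning every segment to the level of the most recent one.

```python
"""Sketch: why a low signal-to-noise ratio leads to under-correction of
trend biases in statistical homogenization (illustrative, not project code)."""
import numpy as np

rng = np.random.default_rng(42)

def make_series(n=100, n_breaks=5, break_sd=1.0, noise_sd=1.0, shift_mean=0.3):
    """Series = piecewise-constant break signal + white noise.
    The positive mean shift gives the raw series a built-in trend bias."""
    positions = np.sort(rng.choice(np.arange(1, n), size=n_breaks, replace=False))
    break_signal = np.zeros(n)
    for p in positions:
        break_signal[p:] += rng.normal(shift_mean, break_sd)
    return break_signal + rng.normal(0.0, noise_sd, n), positions

def best_split(x):
    """Split index that maximizes the explained variance of a two-segment fit."""
    sse0 = np.sum((x - x.mean()) ** 2)
    best_i, best_gain = None, 0.0
    for i in range(2, len(x) - 1):
        sse = np.sum((x[:i] - x[:i].mean()) ** 2) + np.sum((x[i:] - x[i:].mean()) ** 2)
        if sse0 - sse > best_gain:
            best_i, best_gain = i, sse0 - sse
    return best_i, best_gain

def detect_breaks(x, min_gain):
    """Crude binary segmentation; a stand-in for a real detection method."""
    breaks, todo = [], [(0, len(x))]
    while todo:
        a, b = todo.pop()
        i, gain = best_split(x[a:b])
        if i is not None and gain > min_gain:
            breaks.append(a + i)
            todo += [(a, a + i), (a + i, b)]
    return sorted(breaks)

def correct(x, breaks):
    """Align all segments to the level of the most recent segment."""
    edges = [0] + list(breaks) + [len(x)]
    y, last_mean = x.copy(), x[edges[-2]:].mean()
    for a, b in zip(edges[:-1], edges[1:]):
        y[a:b] += last_mean - x[a:b].mean()
    return y

def total_trend(x):
    """Fitted linear change over the whole series (the true climate trend is 0)."""
    return np.polyfit(np.arange(len(x)), x, 1)[0] * len(x)

for noise_sd in (0.5, 2.0):  # SNR (break sd / noise sd) of 2 vs. 0.5
    raw, corrected, known = [], [], []
    for _ in range(200):
        x, true_breaks = make_series(noise_sd=noise_sd)
        raw.append(total_trend(x))
        # Ad hoc detection threshold; assumes the noise level is known.
        corrected.append(total_trend(correct(x, detect_breaks(x, 9 * noise_sd**2))))
        known.append(total_trend(correct(x, true_breaks)))
    print(f"noise sd {noise_sd}: raw bias {np.mean(raw):+.2f}, "
          f"corrected {np.mean(corrected):+.2f}, known breaks {np.mean(known):+.2f}")
```

With five breaks of mean shift +0.3, the raw series carries a mean trend bias of about +1.5. At the high signal-to-noise ratio the correction should remove most of this bias, while at the low ratio breaks are missed or placed on noise and the corrected trend stays much closer to the raw bias; with perfectly known break positions the mean bias is removed in both cases and only noise-driven scatter remains, consistent with the argument above.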
Consequently, we started looking at alternative ways to study trend biases in climatological datasets. A comparison of high-quality national and regional datasets with global and continental datasets finds that the long national datasets show more warming over the instrumental period. Unfortunately, this conclusion is mostly limited to European datasets. A larger number of shorter series, also from outside Europe, does not show such a bias over the last decades, which may be due to the use of less powerful homogenization methods.

Global temperature datasets do not show warming in the period around 1900, although there are many other indications of warming in this period, such as sea-level rise, melting glaciers, the freezing and break-up dates of lakes and rivers, and temperature reconstructions. That this warming is not seen in global instrumental temperatures could be related to the introduction of Stevenson screens, which seems to be insufficiently corrected for in NOAA's GHCNv3.

A direct, but labor-intensive, way to study biases is the collection and analysis of parallel measurements from all over the world, where the old and new measurement set-ups measure side by side. Recent parallel measurements (Austria, Switzerland and Spain) have found considerable biases. We have few measurements in the tropics and no parallel measurements in continental climates, but this transition (to Stevenson screens) seems to have produced a cooling of a few tenths of a degree Celsius. Similar preliminary studies for the transition to automatic weather stations show large biases for individual networks, but no significant global bias in the mean temperature. These results were one of the reasons to start discussions on creating a global climate reference station network.
Project-related publications (selection)
- Lindau, R. and V. Venema, 2016: The uncertainty of break positions detected by homogenization algorithms in climate records. International Journal of Climatology, 36, 576–589
- Kent, E., J. Kennedy, T. Smith, S. Hirahara, B. Huang, A. Kaplan, D. Parker, C. Atkinson, D. Berry, G. Carella, Y. Fukuda, M. Ishii, P. Jones, F. Lindgren, C. Merchant, S. Morak-Bozzo, N. Rayner, V. Venema, S. Yasui and H. Zhang, 2017: A call for new approaches to quantifying biases in observations of sea-surface temperature. Bulletin of the American Meteorological Society, 98, 1601–1616
- Chimani, B., V. Venema, A. Lexer, K. Andre, I. Auer and J. Nemec, 2018: Inter-comparison of methods to homogenize daily relative humidity. International Journal of Climatology, 38, 3106–3122
- Lindau, R. and V. Venema, 2018: On the reduction of trend errors by the ANOVA joint correction scheme used in homogenization of climate station records. International Journal of Climatology, 38, 5255–5271
- Lindau, R. and V. Venema, 2018: The joint influence of break and noise variance on the break detection capability in time series homogenization. Advances in Statistical Climatology, Meteorology and Oceanography, 4, 1–18
- Thorne, P.W., H.J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N.B. Ingleby, P.D. Jones, J.H. Lawrimore, D.H. Lister, A. Merlone, T. Oakley, M. Palecki, T.C. Peterson, M. de Podesta, C. Tassone, V. Venema and K.M. Willett, 2018: Towards a global land surface climate fiducial reference measurements network. International Journal of Climatology, 38, 2760–2774