Project Details

Daily HUME: Daily Homogenization, Uncertainty Measures and Extremes

Applicant Dr. Victor Venema
Subject Area Atmospheric Science
Term from 2014 to 2017
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 259061279
 
Global change affects not only the long-term mean temperature, but may also lead to changes elsewhere in the frequency distribution, especially in its tails. Studying the whole frequency distribution is important because, for example, heat and cold waves are responsible for a considerable part of the morbidity and mortality due to meteorological events. Daily datasets are essential for studying such extremes of weather and climate and are therefore the basis for political decisions with enormous socio-economic consequences. Reliably assessing such changes requires homogeneous observational data of high quality. Unfortunately, the measurement record contains many non-climatic changes, e.g. inhomogeneities due to relocations, new weather screens or new instruments. Such changes affect not only the means, but the whole frequency distribution.

To increase the quality and reliability of global daily temperature records, we propose to develop an automatic homogenisation method for daily temperature data that corrects the frequency distribution. We propose to describe homogenisation as an optimisation problem and to solve it using a genetic algorithm. In this way, entire temperature networks can be homogenised simultaneously, which increases sensitivity while avoiding the setting of false (spurious) breaks. By homogenising not the daily data directly, but monthly indices (probably the monthly moments), the full power and understanding of monthly homogenisation methods can be carried over to the homogenisation of daily data. Furthermore, in an optimisation framework the optimal temporal correction scale can be determined objectively and straightforwardly, i.e. whether the corrections are best applied annually (all twelve months receive the same correction), semi-annually, seasonally or monthly. All three aspects are new: the simultaneous homogenisation of an entire network, the objective selection of the degrees of freedom of the adjustments, and the objective selection of the temporal averaging scale of the correction model.

This new method will be applied to homogenise the temperature datasets of the International Surface Temperature Initiative; the size of this dataset necessitates an automatic homogenisation method. To validate the method, we will generate an artificial climate dataset with known inhomogeneities. Generating such a validation dataset with realistic inhomogeneities requires a much better understanding of the nature of inhomogeneities in daily data. We therefore intend to collect and study parallel measurements (two set-ups at one location), which allow us to study the changes in the frequency distribution when one set-up is replaced by the other.

Finally, we will study and quantify the uncertainties due to persistent errors remaining in the dataset after homogenisation, and use this to improve the accuracy of the homogenisation algorithm. Knowledge of these uncertainties is also indispensable for climatologists using the homogenised data.
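To make the optimisation framing concrete, the following is a minimal sketch, not the project's actual algorithm: a toy genetic algorithm that searches for break positions in a candidate-minus-reference monthly difference series, with a BIC-like penalty per break to discourage setting spurious breaks. The encoding (binary break masks), the cost function, and all names and parameters (cost, evolve, p_mut, the penalty form) are illustrative assumptions, not details taken from the proposal.

# Minimal sketch of homogenisation posed as an optimisation problem
# solved with a genetic algorithm. Assumption: `series` is a monthly
# difference series (candidate minus reference composite), so non-climatic
# breaks appear as mean shifts.
import numpy as np

rng = np.random.default_rng(0)

def cost(series, mask):
    """Residual variance after segment-wise mean adjustment, plus a
    BIC-like penalty per break that guards against spurious breaks."""
    n = len(series)
    # Positions where mask == 1 split the series into segments.
    bounds = np.concatenate(([0], np.flatnonzero(mask), [n]))
    resid = series.copy()
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b > a:
            resid[a:b] -= resid[a:b].mean()
    penalty = mask.sum() * np.log(n) * series.var() / n
    return resid.var() + penalty

def evolve(series, pop_size=50, generations=200, p_mut=0.01):
    """Toy genetic algorithm: tournament selection, uniform crossover,
    bit-flip mutation over binary break masks."""
    n = len(series)
    pop = (rng.random((pop_size, n)) < 0.02).astype(int)
    for _ in range(generations):
        fitness = np.array([cost(series, ind) for ind in pop])
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if fitness[i] < fitness[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fitness[i] < fitness[j] else pop[j]
            # Uniform crossover followed by bit-flip mutation.
            child = np.where(rng.random(n) < 0.5, p1, p2)
            child ^= (rng.random(n) < p_mut).astype(int)
            new_pop.append(child)
        pop = np.array(new_pop)
    best = min(pop, key=lambda ind: cost(series, ind))
    return np.flatnonzero(best)

# Usage: a synthetic difference series with one inserted break at month 120.
series = rng.normal(0.0, 0.5, 240)
series[120:] += 1.0  # non-climatic shift, e.g. a relocation
print("detected breaks at months:", evolve(series))

In the proposal's framing, the same optimisation would also select the temporal correction scale objectively, e.g. by comparing penalised costs for annual, semi-annual, seasonal and monthly correction models; the penalty term above plays the role of avoiding false breaks.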
DFG Programme Research Grants
 
 
