Title: Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative
Author(s): Katherine Willett, Victor Venema, Claude Williams, Enric Aguilar, Ian Jolliffe, Lisa Alexander, Lucie Vincent, Robert Lund, Matt Menne, Peter Thorne, Renate Auchmann, Rachel Warren, Stefan Brönnimann, Thordis Thorarinsdottir, Steve Easterbrook, Colin Gallagher, Giuseppina Lopardo, Zeke Hausfather, David Berry
Conference: EGU General Assembly 2015
Media type: Article
Language: English
Digital document: PDF
Published: In: GRA - Volume 17 (2015)
Record number: 250114648
Publication (No.): Full-text document available: EGU/EGU2015-15445.pdf
 
Abstract
Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect: we cannot be precise about the amount of warming for the globe, and especially not for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While methods for detecting and adjusting for inhomogeneities continue to advance, monitoring their effectiveness on large networks and gauging the resulting improvements in climate data quality is non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data, and thus the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves creating synthetic worlds of surface temperature data, deliberately contaminating them with known errors, and then assessing the ability of homogenisation algorithms to detect and remove these errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. The work has five components:
1. Create 30,000 synthetic benchmark stations that look and feel like the real global temperature network but do not contain any inhomogeneities: the analog clean-worlds.
2. Design a set of error models that mimic the main types of inhomogeneities found in practice, and combine them with the analog clean-worlds to give the analog error-worlds.
3. Engage with dataset creators to run their homogenisation algorithms blind on the analog error-world stations, as they have done with the real data.
4. Design an assessment framework to gauge the degree to which the analog error-worlds are returned to the original analog clean-worlds by homogenisation, and the detection/adjustment skill of the homogenisation algorithms.
5. Present to the dataset creators an assessment of their method's skill and of the uncertainty estimated to remain in the data due to inhomogeneity.
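To make the benchmarking idea concrete, below is a minimal sketch in Python (using only numpy) of components 2 and 4: a known step-change inhomogeneity is injected into a clean synthetic monthly series to form an error-world, and a detection result is scored against the known break location. This is not the ISTI benchmarking code; all function names, parameters, and numbers are hypothetical illustrations of the approach described above.

# Minimal sketch (hypothetical, not the ISTI code): inject a known step change
# into a clean synthetic series, then score detections against the known break.
import numpy as np

rng = np.random.default_rng(42)

def make_clean_world(n_months=1200):
    """Synthetic 'analog clean-world': seasonal cycle plus weather noise."""
    t = np.arange(n_months)
    seasonal = 10.0 * np.sin(2.0 * np.pi * t / 12.0)
    noise = rng.normal(0.0, 1.5, n_months)
    return seasonal + noise

def make_error_world(clean, break_month, shift_degC):
    """'Analog error-world': the same series with one known step change applied."""
    dirty = clean.copy()
    dirty[break_month:] += shift_degC
    return dirty

def score_detection(true_breaks, detected_breaks, tolerance=12):
    """Hits: true breaks detected within +/- tolerance months; false alarms: the rest."""
    hits = sum(
        any(abs(d - t) <= tolerance for d in detected_breaks) for t in true_breaks
    )
    false_alarms = sum(
        all(abs(d - t) > tolerance for t in true_breaks) for d in detected_breaks
    )
    return hits, false_alarms

clean = make_clean_world()
true_break = 600                                   # known only to the benchmark creators
dirty = make_error_world(clean, true_break, shift_degC=0.8)

# Placeholder for a homogenisation algorithm run "blind" on the error-world;
# here we pretend it returned one candidate break near the true one.
detected = [595]
print(score_detection([true_break], detected))     # prints (1, 0)

In the real benchmarks the error models are far richer (multiple breaks, trends, seasonally varying biases across a 30,000-station network), and the assessment also measures how closely the homogenised series are returned to the clean-worlds, but the skill-scoring logic follows the same hit/false-alarm pattern sketched here.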