Title |
The application of Bayes probability theory for uncertainty assessments of Antarctic ice sheet predictions |
Authors |
Andreas Wernecke, Tamsin Edwards, Neil Edwards, Philip Holden |
Conference |
EGU General Assembly 2017 |
Media type |
Article |
Language |
en |
Digital document |
PDF |
Published |
In: GRA - Volume 19 (2017) |
Record number |
250152688 |
Publication (no.) |
EGU/EGU2017-17556.pdf |
Abstract |
Ice sheet models (ISMs) require a variety of inputs which are known with different levels of certainty. Our current knowledge of ISM sensitivities is mainly based on single- or multi-parameter perturbation studies, which cover only a small subset of all model inputs due to the high dimensionality of ISMs and computational constraints. Here we present a framework that extends this approach to a systematic statistical investigation of all major sensitivities, based on Bayesian probability theory.
We demonstrate that a principal component decomposition can be used to drastically reduce the dimensionality of field-type inputs while retaining their structure. However, a systematic perturbation of all inputs is still not computationally feasible with grounding-line-resolving ISMs. We therefore propose a Gaussian process (GP) model, trained on a set of ISM runs, to emulate ISM behaviour and with it the sensitivities to input parameters. Among the advantages of a GP model is that it provides probability distributions rather than only “best” estimates, which enables an iterative emulation strategy: an initial set of ISM runs is used to train a GP model as emulator. This emulator is used to identify new ISM setups that are of high interest for improving the emulation (i.e. have wide probability distributions). Running those setups leads to an updated emulator, and so forth.
This framework is a cost-effective tool not only for ice sheet model analytics but also for predictive purposes. Applications may include model calibrations, updates from revised input datasets, and setup adjustments for model intercomparisons, at virtually no additional computational cost. |
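The workflow the abstract describes — reduce field-type inputs with a principal component decomposition, then iteratively refine a GP emulator by rerunning the simulator where the emulator's predictive distribution is widest — can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' code: the toy "simulator", the RBF kernel, its length-scale, and the synthetic field data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Principal component decomposition of field-type inputs ---
# 50 synthetic "fields" of length 200 with rapidly decaying variance;
# a handful of leading modes then captures almost all structure.
fields = rng.normal(size=(50, 200)) * (2.0 * 0.5 ** np.arange(200))
mean = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 5                                   # number of retained modes
scores = U[:, :k] * s[:k]               # low-dimensional coordinates
recon = scores @ Vt[:k] + mean          # approximate field reconstruction

# --- Gaussian process emulator with an RBF kernel (illustrative choice) ---
def rbf(a, b, ell=0.3, sig=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sig ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_test, x_test).diagonal() - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Stand-in for an expensive ISM run over one (normalised) input parameter.
simulator = lambda x: np.sin(6 * x)

# Iterative design: rerun the "ISM" where the emulator is least certain.
x_train = np.array([0.1, 0.5, 0.9])
y_train = simulator(x_train)
grid = np.linspace(0.0, 1.0, 101)
for _ in range(5):
    mu, sd = gp_predict(x_train, y_train, grid)
    x_new = grid[np.argmax(sd)]         # widest predictive distribution
    x_train = np.append(x_train, x_new)
    y_train = np.append(y_train, simulator(x_new))

mu, sd = gp_predict(x_train, y_train, grid)
```

Each pass through the loop adds one simulator run at the point of maximum predictive uncertainty, so the emulator improves fastest where it is least trusted — the same rationale as the iterative emulation in the abstract.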