Title
From climate predictability to end user applications: on the route to more reliable seasonal ensemble forecasts (Outstanding Young Scientist Lecture)
Author(s)
A. P. Weigel, M. A. Liniger, C. Appenzeller
Conference
EGU General Assembly 2009
Media type
Article
Language
English
Digital document
PDF
Published
In: GRA - Volume 11 (2009)
Record number
250020073
Abstract
The use of ensemble prediction systems has become a matter of routine in the context of short-term climate forecasting. However, while such ensembles can in principle quantify the forecast uncertainties arising from the uncertainties in the model initialization, they fail to capture the uncertainties arising from errors and simplifications in the model itself. In fact, seasonal ensemble forecasts typically underestimate the true forecast uncertainty and tend to be overconfident, i.e. they are too sharp while being centered at the wrong value. This poses a serious problem for many applications in climate risk management, as forecast signals may get over-interpreted and decisions may be taken too early.
In this presentation, two different routes to improve the reliability of seasonal ensemble forecasts will be presented and compared in detail: on the one hand "multi-model ensemble combination", i.e. the idea of combining information from different prediction systems in an optimal way (Weigel et al. 2008); on the other hand "recalibration", i.e. the idea of correcting ensemble predictions a posteriori on the basis of the error statistics of past forecasts (Weigel et al. 2009). The mechanisms, prospects and limitations of either approach will be evaluated systematically for different attributes of prediction skill, on the basis of both conceptual considerations and real seasonal ensemble forecasts. It turns out that recalibration inevitably "dilutes" the potentially predictable signal, while multi-model combination (at least in the "ideal" case, i.e. if infinitely many independent models of comparable skill are combined) retains the signal and improves the forecast sharpness. Therefore, multi-model combination is conceptually to be preferred. In reality, however, multi-models are not "ideal": only a finite number of models are available, and the model errors are usually not independent, which reduces the value of multi-models with respect to recalibrated single models. These findings lead to a discussion of the following questions: Can either of the two techniques be considered more valuable than the other from a user perspective? Or should they be applied in combination? And if so, in which order?
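The contrast drawn above between the two routes can be illustrated with a minimal toy simulation. This is a sketch of my own, not part of the presentation: the Gaussian signal-plus-error forecast model, the linear shrinkage used to represent recalibration, and all parameter values are assumptions chosen only to make the conceptual point visible.

```python
import random
import statistics

random.seed(42)

def compare_routes(n_cases=2000, n_models=10, signal_sd=1.0, err_sd=0.8):
    """Toy comparison: raw single model vs. multi-model mean vs. recalibration."""
    # Recalibration is represented here as linear shrinkage of the forecast
    # toward climatology (zero anomaly), using the regression factor implied
    # by the assumed error statistics. This damps ("dilutes") the signal.
    shrink = signal_sd**2 / (signal_sd**2 + err_sd**2)
    sq_single, sq_multi, sq_recal = [], [], []
    for _ in range(n_cases):
        s = random.gauss(0.0, signal_sd)            # predictable signal
        # Each model sees the signal plus an independent model error, which
        # an overconfident single-model ensemble does not sample.
        means = [s + random.gauss(0.0, err_sd) for _ in range(n_models)]
        single = means[0]                           # raw single model
        multi = statistics.fmean(means)             # equal-weight multi-model
        recal = shrink * single                     # recalibrated single model
        sq_single.append((single - s) ** 2)
        sq_multi.append((multi - s) ** 2)
        sq_recal.append((recal - s) ** 2)
    rmse = lambda sq: statistics.fmean(sq) ** 0.5
    return rmse(sq_single), rmse(sq_multi), rmse(sq_recal)

rmse_single, rmse_multi, rmse_recal = compare_routes()
# Independent model errors average out in the multi-model mean, so in this
# idealized setting it beats recalibration, which beats the raw single model.
print(f"single={rmse_single:.2f}  multi={rmse_multi:.2f}  recal={rmse_recal:.2f}")
```

In this idealized setup the multi-model mean has the smallest error because the independent model errors cancel; if the model errors were correlated, or only two or three models were available, the gap to the recalibrated single model would shrink, which is exactly the non-ideal situation discussed above.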
References:
Weigel, A. P., M. A. Liniger and C. Appenzeller, 2008: Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts? Quart. J. Roy. Met. Soc., 134, 241-260.
Weigel, A. P., M. A. Liniger and C. Appenzeller, 2009: Probabilistic ensemble forecasts: Are recalibrated single models as good as multi-models? Accepted.