Title Model output: fact or artefact?
Author Lieke Melsen
Conference EGU General Assembly 2015
Media type Article
Language English
Digital document PDF
Published in: GRA - Volume 17 (2015)
Record number 250114789
Publication (no.) Full-text document available: EGU/EGU2015-15605.pdf
 
Abstract
As a third-year PhD student, I relatively recently entered the wonderful world of scientific hydrology: a science with many pillars that directly impact society, for example the prediction of hydrological extremes (both floods and droughts), climate change, applications in agriculture, nature conservation, drinking water supply, et cetera. Despite its demonstrable societal relevance, hydrology is often seen as a science between two stools. As Klemeš (1986) stated: “By their academic background, hydrologists are foresters, geographers, electrical engineers, geologists, system analysts, physicists, mathematicians, botanists, and most often civil engineers.” Sometimes it seems that these engineering genes are still present in the current hydrological sciences, and this results in pragmatic rather than scientific approaches to some of the current problems and challenges in hydrology. Here, I refer to the uncertainty in hydrological modelling that is often neglected.

For over thirty years, uncertainty in hydrological models has been extensively discussed and studied. Yet it is not difficult to find peer-reviewed articles in which it is implicitly assumed that model simulations represent the truth rather than a conceptualization of reality. Consider trend studies, for instance, in which data are extrapolated 100 years ahead. Of course one can use different forcing datasets to estimate the uncertainty of the input data, but how can we ensure that the output is not a model artefact caused by the model structure? Or consider impact studies, e.g. of a dam affecting river flow. Measurements are often available only for the period after dam construction, so models are used to simulate river flow before dam construction, and the two are compared to quantify the effect of the dam. But on what basis can we claim that the model tells us the truth?

Model validation is common nowadays, but validation alone (comparing observations with model output) is not sufficient to conclude that a model reflects reality. This is partly due to non-uniqueness, or so-called equifinality: different model constructions can lead to the same output (Oreskes et al., 1994; Beven, 2006). It is also because validation alone does not tell us whether we are ‘right for the wrong reasons’ (Kirchner, 2006; Oreskes et al., 1994). We can never know how right or wrong our models are, because we do not fully understand reality, but we can estimate the uncertainty arising from the model and the input data themselves. Many techniques have been developed to help estimate model uncertainty: model structural uncertainty can be studied with the FUSE framework (Clark et al., 2008), parameter uncertainty with GLUE (Beven and Binley, 1992) and DREAM (Vrugt et al., 2008), and input data uncertainty with BATEA (Kavetski et al., 2006); a minimal illustrative sketch of such a GLUE-style screening is given after the references below. These are just some examples that pop up in a first search. But somehow these techniques are used and applied almost exclusively in studies that focus on model uncertainty itself, and hardly ever appear in studies whose research question lies outside the uncertainty domain.

We know that models do not tell us the truth, yet we tend to claim that they do, based on validation alone. A model is always a simplification of reality, which by definition leads to uncertainty when model output and observations of reality are compared. The least we could do is estimate the uncertainty of the model and the data themselves. My question therefore is: as scientists, can we accept believing things that we know might not be true?
And secondly: how should we deal with this? How should model uncertainty change the way we communicate scientific results?

References
Beven, K., and A. Binley, The future of distributed models: Model calibration and uncertainty prediction, HP 6 (1992).
Beven, K., A manifesto for the equifinality thesis, JoH 320 (2006).
Clark, M.P., A.G. Slater, D.E. Rupp, R.A. Woods, J.A. Vrugt, H.V. Gupta, T. Wagener and L.E. Hay, Framework for Understanding Structural Errors (FUSE): A modular framework to diagnose differences between hydrological models, WRR 44 (2008).
Kavetski, D., G. Kuczera and S.W. Franks, Bayesian analysis of input uncertainty in hydrological modeling: 1. Theory, WRR 42 (2006).
Kirchner, J.W., Getting the right answers for the right reasons: Linking measurements, analyses, and models to advance the science of hydrology, WRR 42 (2006).
Klemeš, V., Dilettantism in Hydrology: Transition or Destiny?, WRR 22-9 (1986).
Oreskes, N., K. Shrader-Frechette, and K. Belitz, Verification, Validation and Confirmation of Numerical Models in Earth Sciences, Science 263 (1994).
Vrugt, J.A., C.J.F. ter Braak, M.P. Clark, J.M. Hyman, and B.A. Robinson, Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation, WRR 44 (2008).
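To make the GLUE reference above concrete, the following is a minimal, purely illustrative Python sketch of GLUE-style parameter screening in the spirit of Beven and Binley (1992). The toy linear-reservoir model, the synthetic observations, the parameter range and the behavioural threshold are all assumptions introduced here for illustration; none of them come from the abstract or from the cited studies.

# Minimal GLUE-style (Beven and Binley, 1992) uncertainty sketch.
# Everything here is illustrative: the toy linear-reservoir model, the
# synthetic "observations", the parameter range and the behavioural
# threshold are assumptions, not the setup of any published study.
import numpy as np

rng = np.random.default_rng(42)

def linear_reservoir(rain, k, s0=0.0):
    """Toy rainfall-runoff model: storage S drains as Q = S / k."""
    s, q = s0, np.empty_like(rain)
    for t, p in enumerate(rain):
        s += p
        q[t] = s / k
        s -= q[t]
    return q

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency, used here as an informal likelihood measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic forcing and "observations" generated with a known parameter
# plus noise, so the example is self-contained.
rain = rng.gamma(shape=0.5, scale=4.0, size=200)
obs = linear_reservoir(rain, k=12.0) + rng.normal(0.0, 0.05, size=200)

# 1. Monte Carlo sampling of the parameter from a uniform prior range.
k_samples = rng.uniform(2.0, 50.0, size=5000)
sims = np.array([linear_reservoir(rain, k) for k in k_samples])
scores = np.array([nash_sutcliffe(s, obs) for s in sims])

# 2. Keep only "behavioural" parameter sets above a subjective threshold.
behavioural = scores > 0.7
print(f"behavioural sets: {behavioural.sum()} of {k_samples.size}")

# 3. Prediction bounds: 5th-95th percentile of the behavioural simulations
#    at every time step; equifinality shows up as a spread of accepted k.
bounds = np.percentile(sims[behavioural], [5, 95], axis=0)
print("accepted k range:", k_samples[behavioural].min(), "-",
      k_samples[behavioural].max())
print("width of 90% band at t=100:", bounds[1, 100] - bounds[0, 100])

The spread of accepted values of k is a small-scale illustration of equifinality: many different parameter sets reproduce the observations about equally well, so a single validated simulation cannot be taken as the truth, and the 5th-95th percentile band gives at least a first estimate of the resulting predictive uncertainty.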