Title |
A framework for benchmarking land models |
Author(s) |
Y. Q. Luo, J. T. Randerson, G. Abramowitz, C. Bacour, E. Blyth, N. Carvalhais, P. Ciais, D. Dalmonech, J. B. Fisher, R. Fisher, P. Friedlingstein, K. Hibbard, F. Hoffman, D. Huntzinger, C. D. Jones, C. Koven, D. Lawrence, D. J. Li, M. Mahecha, S. L. Niu, R. Norby, S. L. Piao, X. Qi, P. Peylin, I. C. Prentice, W. Riley, M. Reichstein, C. Schwalm, Y. P. Wang, J. Y. Xia, S. Zaehle, X. H. Zhou |
Media type |
Article |
Language |
English |
ISSN |
1726-4170 |
Digital document |
URL |
Published |
In: Biogeosciences, Vol. 9, No. 10 (2012-10-09), pp. 3857-3874 |
Record number |
250007324 |
Publication (no.) |
copernicus.org/bg-9-3857-2012.pdf |
Abstract |
Land models, developed by the modeling community over the past few decades to predict future states of ecosystems and climate, must be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure for measuring model performance against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references against which to test model performance, (3) metrics to measure and compare performance skill among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchanges of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks that effectively evaluate land model performance. A second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system that combines data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify where model performance is weak in order to guide future development, thus enabling improved predictions of future states of ecosystems and climate. Near-future research effort should focus on developing a set of widely accepted benchmarks that can be used to objectively, effectively, and reliably evaluate the fundamental properties of land models and thereby improve their predictive skill. |
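The scoring idea mentioned in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the paper's method; it simply assumes each benchmark variable provides an observed series, normalizes the model's RMSE by the observations' variability, and combines the per-variable scores with optional weights. All names (e.g. overall_score, GPP, latent_heat) and the example numbers are hypothetical.

```python
import numpy as np

# Illustrative sketch only (not taken from the paper): combine data-model
# mismatches for several variables into one benchmark score.

def variable_score(model, benchmark):
    """Score one variable in [0, 1]: 1 = perfect match, 0 = error at least
    as large as the benchmark's own variability."""
    model = np.asarray(model, dtype=float)
    benchmark = np.asarray(benchmark, dtype=float)
    rmse = np.sqrt(np.mean((model - benchmark) ** 2))
    spread = np.std(benchmark)  # normalize by observed variability
    if spread == 0:
        return 1.0 if rmse == 0 else 0.0
    return float(max(0.0, 1.0 - rmse / spread))

def overall_score(pairs, weights=None):
    """Combine per-variable scores into one overall score.
    `pairs` maps variable name -> (model_series, benchmark_series);
    `weights` optionally emphasizes some processes over others."""
    names = list(pairs)
    scores = np.array([variable_score(*pairs[n]) for n in names])
    w = np.ones(len(names)) if weights is None else np.array([weights[n] for n in names])
    return float(np.average(scores, weights=w)), dict(zip(names, scores))

if __name__ == "__main__":
    # Hypothetical usage with made-up numbers.
    pairs = {
        "GPP": (np.array([6.1, 7.0, 5.2]), np.array([6.0, 7.4, 5.0])),
        "latent_heat": (np.array([80.0, 95.0, 60.0]), np.array([85.0, 90.0, 65.0])),
    }
    total, per_var = overall_score(pairs, weights={"GPP": 2.0, "latent_heat": 1.0})
    print(per_var, total)
```

In such a scheme, an a priori threshold on the overall or per-variable score would mark the boundary of acceptable model performance; the choice of normalization and weights is a design decision, not something prescribed here.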
Part of |