Title Generalisation for neural networks through data sampling and training procedures, with applications to streamflow predictions
Author(s) F. Anctil, N. Lauzon
Media type Article
Language English
ISSN 1027-5606
Published In: Hydrology and Earth System Sciences, 8, no. 5, pp. 940-958
Record number 250005861
Full-text document copernicus.org/hess-8-940-2004.pdf
 
Abstract
Since the 1990s, neural networks have been applied in many studies in hydrology and water resources. Extensive reviews of neural network modelling have identified the major issues affecting modelling performance; one of the most important is generalisation, which refers to building models that can infer the behaviour of the system under study not only for conditions represented in the data employed for training and testing, but also for conditions absent from those data sets yet inherent to the system. This work compares five generalisation approaches: stop training, Bayesian regularisation, stacking, bagging and boosting. All have been tested with neural networks in various scientific domains; stop training and stacking have been applied regularly in hydrology and water resources for some years, while Bayesian regularisation, bagging and boosting have been less common. The comparison is applied to streamflow modelling with multi-layer perceptron neural networks and the Levenberg-Marquardt algorithm as the training procedure. Six catchments with diverse hydrological behaviours are employed as test cases to draw general conclusions and guidelines on the use of generalisation techniques for practitioners in hydrology and water resources. All generalisation approaches improve performance compared with standard neural networks without generalisation. Stacking, bagging and boosting, which affect the construction of training sets, provide a larger improvement over standard models than stop training and Bayesian regularisation, which regulate the training algorithm. Stacking performs better than the others, although its advantage over bagging and boosting is slight and is not consistent from one catchment to another. For a good combination of improvement and stability in modelling performance, the joint use of stop training or Bayesian regularisation with either bagging or boosting is recommended.
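The recommended combination (stop training plus bagging) can be illustrated with a minimal sketch. This is not the authors' code: it assumes scikit-learn (version >= 1.2 for the "estimator" argument of BaggingRegressor), uses synthetic placeholder data rather than the paper's six catchment data sets, and substitutes the "adam" solver for Levenberg-Marquardt, which scikit-learn does not provide; the early_stopping option realises the stop-training idea on an internal validation split.

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Placeholder data: synthetic predictors standing in for the paper's
# catchment records (e.g. lagged rainfall and flow values).
n = 2000
X = rng.normal(size=(n, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=n)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One multi-layer perceptron with stop training: training halts when the
# score on a held-out validation fraction stops improving. The 'adam'
# solver replaces Levenberg-Marquardt, which scikit-learn lacks.
mlp = MLPRegressor(hidden_layer_sizes=(10,), solver="adam",
                   early_stopping=True, validation_fraction=0.2,
                   max_iter=2000, random_state=0)

# Bagging: fit an ensemble of such networks on bootstrap resamples of the
# training set and average their predictions.
ensemble = BaggingRegressor(estimator=mlp, n_estimators=20, random_state=0)
ensemble.fit(X_train, y_train)

print("Test RMSE:", mean_squared_error(y_test, ensemble.predict(X_test)) ** 0.5)

Averaging bootstrap-trained networks reduces the variance of any single network, which is why the paper finds the data-sampling methods (stacking, bagging, boosting) more effective than approaches that only regulate the training algorithm.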

Keywords: neural networks, generalisation, stacking, bagging, boosting, stop training, Bayesian regularisation, streamflow modelling

 