Title: Bayesian Attractor Learning
Author(s): Wim Wiegerinck, Christiaan Schoenaker, Gregory Duane
Conference: EGU General Assembly 2016
Media type: Article
Language: en
Digital document: PDF
Published: In: GRA - Volume 18 (2016)
Record number: 250137280
Publication (no.): EGU/EGU2016-18498.pdf
Abstract
Recently, methods for model fusion that dynamically combine model
components in an interactive ensemble have been proposed. In these
proposals, fusion parameters have to be learned from data, so the
combined systems can be viewed as parametrized dynamical systems.
We address the question of learnability of dynamical systems with
respect to both short-term (vector field) and long-term (attractor)
behavior. In particular, we are interested in learning in the imperfect
model class setting, in which the ground truth has a higher complexity
than the models, e.g. due to unresolved scales. We take a Bayesian point
of view and define a joint log-likelihood consisting of two terms: the
vector field error, and the attractor error, for which we take the L1
distance between the stationary distributions of the model and the
assumed ground truth.
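One way to write the two-term objective described above is the following illustrative formalization (not taken from the abstract: $\theta$ denotes the fusion parameters, $F_\theta$ the model vector field, $\rho_\theta$ and $\rho^{*}$ the stationary distributions of the model and the assumed ground truth, and $\lambda$ and $\sigma$ are weighting assumptions introduced here):

$$
\log L(\theta) \;=\; -\frac{1}{2\sigma^{2}} \sum_{t} \bigl\| \dot{x}_t - F_\theta(x_t) \bigr\|^{2} \;-\; \lambda\, d_{L_1}\!\left( \rho_\theta, \rho^{*} \right) \;+\; \mathrm{const},
$$

where the first term is the (Gaussian) vector field error and the second is the attractor error.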
In the context of linear models (such as so-called weighted supermodels),
and assuming a Gaussian error model in the vector fields, vector field
learning leads to a tractable Gaussian solution. This solution can then
be used as a prior for the next step, Bayesian attractor learning, in
which the attractor error serves as a log-likelihood term. Bayesian
attractor learning is implemented by elliptical slice sampling, a
sampling method for systems with a Gaussian prior and a non-Gaussian
likelihood. Simulations with a partially observed, driven Lorenz 63
system illustrate the approach.
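Elliptical slice sampling itself is a standard, tuning-free MCMC update for models with a Gaussian prior (Murray, Adams & MacKay, 2010). A minimal sketch in Python, with a toy one-dimensional Gaussian likelihood standing in for the attractor-error term (the abstract's actual likelihood, prior covariance, and dimensions are assumptions here):

```python
import numpy as np

def elliptical_slice_step(f, log_lik, rng, chol=None):
    """One elliptical slice sampling update for a zero-mean Gaussian
    prior and an arbitrary log-likelihood. `chol` is an optional
    Cholesky factor of the prior covariance (identity if omitted)."""
    n = f.shape[0]
    # Auxiliary draw from the prior defines the ellipse through f.
    nu = rng.standard_normal(n) if chol is None else chol @ rng.standard_normal(n)
    # Log-likelihood threshold for the slice.
    log_y = log_lik(f) + np.log(rng.uniform())
    # Initial proposal angle and shrinking bracket.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_prop) > log_y:
            return f_prop  # accepted point on the ellipse
        # Shrink the bracket towards the current state and retry;
        # theta -> 0 recovers f itself, so the loop always terminates.
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)

# Toy check: N(0, 1) prior, Gaussian likelihood centred on y_obs = 1.0,
# so the analytic posterior is N(0.5, 0.5).
rng = np.random.default_rng(0)
y_obs = 1.0
log_lik = lambda f: -0.5 * (f[0] - y_obs) ** 2
f = np.zeros(1)
samples = []
for _ in range(6000):
    f = elliptical_slice_step(f, log_lik, rng)
    samples.append(f[0])
post_mean = np.mean(samples[1000:])  # should be close to 0.5
```

The appeal for attractor learning is that the update needs only pointwise evaluations of the (here non-Gaussian) likelihood, so an attractor-error term such as an L1 distance between stationary distributions can be plugged in without gradients or step-size tuning.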